Sample records for joint probability distribution

  1. Metocean design parameter estimation for fixed platform based on copula functions

    NASA Astrophysics Data System (ADS)

    Zhai, Jinjin; Yin, Qilin; Dong, Sheng

    2017-08-01

    Considering the dependence among wave height, wind speed, and current velocity, we construct novel trivariate joint probability distributions via Archimedean copula functions. Thirty years of hindcast wave height, wind speed, and current velocity data in the Bohai Sea are sampled for a case study. Four candidate models are considered for the marginal distributions of wave height, wind speed, and current velocity, namely the Gumbel, lognormal, Weibull, and Pearson Type III distributions; the Pearson Type III distribution is selected as the optimal model. Bivariate and trivariate probability distributions of these environmental conditions are established from four Archimedean copulas, namely the Clayton, Frank, Gumbel-Hougaard, and Ali-Mikhail-Haq copulas. These joint probability models make full use of the marginal information and the dependence among the three variables. The design return values of the three variables are obtained by three methods: univariate probability, conditional probability, and joint probability. The joint return periods of different load combinations are estimated with the proposed models, and platform responses (base shear, overturning moment, and deck displacement) are then calculated. For the same return period, the design values of wave height, wind speed, and current velocity obtained from the conditional and joint probability models are much smaller than those from univariate probability. By accounting for the dependence among the variables, the multivariate probability distributions provide design parameters closer to the actual sea state for ocean platform design.
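    As an illustration of the copula construction described above, the following minimal Python sketch evaluates a bivariate Gumbel-Hougaard copula with Pearson Type III marginals and the corresponding joint ("AND") return period. All parameter values are hypothetical placeholders, not the fitted values from the paper.

    ```python
    import numpy as np
    from scipy import stats

    def gumbel_hougaard_cdf(u, v, theta):
        """Bivariate Gumbel-Hougaard copula C(u, v), theta >= 1."""
        return np.exp(-(((-np.log(u)) ** theta + (-np.log(v)) ** theta) ** (1.0 / theta)))

    theta = 2.0                                            # assumed dependence parameter
    F_h = stats.pearson3(skew=0.8, loc=3.0, scale=1.0)     # hypothetical Pearson III fit, wave height (m)
    F_w = stats.pearson3(skew=0.5, loc=15.0, scale=4.0)    # hypothetical Pearson III fit, wind speed (m/s)

    h, w = 5.0, 22.0                                       # candidate design combination
    u, v = F_h.cdf(h), F_w.cdf(w)                          # marginal non-exceedance probabilities

    # Joint exceedance probability and the "AND" joint return period for annual maxima.
    p_and = 1.0 - u - v + gumbel_hougaard_cdf(u, v, theta)
    print(f"P(H > h, W > w) = {p_and:.4f}, joint return period = {1.0 / p_and:.1f} years")
    ```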

  2. Joint probabilities and quantum cognition

    NASA Astrophysics Data System (ADS)

    de Barros, J. Acacio

    2012-12-01

    In this paper we discuss the existence of joint probability distributions for quantumlike response computations in the brain. We do so by focusing on a contextual neural-oscillator model shown to reproduce the main features of behavioral stimulus-response theory. We then exhibit a simple example of contextual random variables not having a joint probability distribution, and describe how such variables can be obtained from neural oscillators, but not from a quantum observable algebra.

  3. Bivariate Rainfall and Runoff Analysis Using Shannon Entropy Theory

    NASA Astrophysics Data System (ADS)

    Rahimi, A.; Zhang, L.

    2012-12-01

    Rainfall-runoff analysis is a key component of many hydrological and hydraulic designs in which the dependence between rainfall and runoff needs to be studied. Conventional bivariate distributions are often unable to model rainfall-runoff variables because they either constrain the range of dependence or impose a fixed form on the marginal distributions. This paper therefore presents an approach for deriving an entropy-based joint rainfall-runoff distribution using Shannon entropy theory. The derived distribution can model the full range of dependence and allows different specified marginals. The modeling and estimation proceed as follows: (i) univariate analysis of the marginal distributions, in two steps, (a) using nonparametric statistics to detect modes and the underlying probability density and (b) fitting appropriate parametric probability density functions; (ii) definition of the constraints based on the univariate analysis and the dependence structure; and (iii) derivation and validation of the entropy-based joint distribution. To validate the method, rainfall-runoff data were collected from small agricultural experimental watersheds in the semi-arid region near Riesel (Waco), Texas, maintained by the USDA. The univariate analysis shows that the rainfall variables follow the gamma distribution, whereas the runoff variables have a mixed structure and follow a mixed-gamma distribution. With this information, the entropy-based joint distribution is derived using the first moments, the first moments of the log-transformed rainfall and runoff, and the covariance between rainfall and runoff. The results indicate that (1) the derived joint distribution successfully preserves the dependence between rainfall and runoff, and (2) K-S goodness-of-fit tests confirm that the re-derived marginal distributions recover the underlying univariate probability densities, which further assures that the entropy-based joint rainfall-runoff distribution is satisfactorily derived. Overall, the study shows that Shannon entropy theory can be satisfactorily applied to model the dependence between rainfall and runoff, and that the entropy-based joint distribution can capture dependence structures that conventional bivariate joint distributions cannot. (Figure: joint rainfall-runoff entropy-based PDF with corresponding marginal PDFs and histograms for the W12 watershed; Table: K-S test results and RMSE for the univariate distributions derived from the maximum-entropy joint probability distribution.)
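    The maximum-entropy construction summarized above can be sketched numerically: choose the constraint features (first moments, log-moments, and the cross moment), and solve the convex dual problem for the Lagrange multipliers on a grid. The grid, feature choice, and target moments below are illustrative assumptions, not the study's values.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import logsumexp

    # Discretize the (rainfall, runoff) plane; avoid zero so that log features stay finite.
    x = np.linspace(0.1, 50.0, 120)
    y = np.linspace(0.1, 20.0, 120)
    X, Y = np.meshgrid(x, y, indexing="ij")
    dA = (x[1] - x[0]) * (y[1] - y[0])

    # Constraint features as in the abstract: first moments, log-moments, and the cross moment.
    feats = np.stack([X, Y, np.log(X), np.log(Y), X * Y])
    targets = np.array([12.0, 3.0, 2.2, 0.8, 45.0])        # hypothetical sample moments

    def dual(lmbda):
        """Convex dual of the maximum-entropy problem: log Z(lambda) + lambda . targets."""
        logw = -np.tensordot(lmbda, feats, axes=1)
        return logsumexp(logw) + np.log(dA) + lmbda @ targets

    lmbda = minimize(dual, x0=np.zeros(5)).x               # BFGS with numerical gradients

    logpdf = -np.tensordot(lmbda, feats, axes=1)
    logpdf -= logsumexp(logpdf) + np.log(dA)               # normalize the joint density on the grid
    pdf = np.exp(logpdf)
    achieved = (feats * pdf).sum(axis=(1, 2)) * dA         # should approximate the target moments
    print("target moments:  ", targets)
    print("achieved moments:", np.round(achieved, 2))
    ```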

  4. Positive phase space distributions and uncertainty relations

    NASA Technical Reports Server (NTRS)

    Kruger, Jan

    1993-01-01

    In contrast to a widespread belief, Wigner's theorem allows the construction of true joint probabilities in phase space for distributions describing the object system as well as for distributions depending on the measurement apparatus. The fundamental role of Heisenberg's uncertainty relations in Schroedinger form (including correlations) is pointed out for these two possible interpretations of joint probability distributions. Hence, in order that a multivariate normal probability distribution in phase space may correspond to a Wigner distribution of a pure or a mixed state, it is necessary and sufficient that Heisenberg's uncertainty relation in Schroedinger form should be satisfied.

  5. Classic maximum entropy recovery of the average joint distribution of apparent FRET efficiency and fluorescence photons for single-molecule burst measurements.

    PubMed

    DeVore, Matthew S; Gull, Stephen F; Johnson, Carey K

    2012-04-05

    We describe a method for analysis of single-molecule Förster resonance energy transfer (FRET) burst measurements using classic maximum entropy. Classic maximum entropy determines the Bayesian inference for the joint probability describing the total fluorescence photons and the apparent FRET efficiency. The method was tested with simulated data and then with DNA labeled with fluorescent dyes. The most probable joint distribution can be marginalized to obtain both the overall distribution of fluorescence photons and the apparent FRET efficiency distribution. This method proves to be ideal for determining the distance distribution of FRET-labeled biomolecules, and it successfully predicts the shape of the recovered distributions.

  6. Encounter risk analysis of rainfall and reference crop evapotranspiration in the irrigation district

    NASA Astrophysics Data System (ADS)

    Zhang, Jinping; Lin, Xiaomin; Zhao, Yong; Hong, Yang

    2017-09-01

    Rainfall and reference crop evapotranspiration are random but mutually dependent variables in an irrigation district, and their encounter situation determines water shortage risk in the context of natural water supply and demand. In reality, however, rainfall and reference crop evapotranspiration may have different marginal distributions and a nonlinear relationship. In this study, based on annual rainfall and reference crop evapotranspiration series from 1970 to 2013 in the Luhun irrigation district of China, the joint probability distribution of rainfall and reference crop evapotranspiration is developed with the Frank copula function. Using this joint distribution, the synchronous-asynchronous encounter risk, conditional joint probability, and conditional return period of different combinations of rainfall and reference crop evapotranspiration are analyzed. The results show that the copula-based joint probability distributions of rainfall and reference crop evapotranspiration are reasonable. The asynchronous encounter probability of rainfall and reference crop evapotranspiration is greater than their synchronous encounter probability, so the water shortage risk associated with meteorological drought (i.e. rainfall variability) is more likely to appear. Compared with other states, the conditional joint probability is higher and the conditional return period lower under either low rainfall or high reference crop evapotranspiration. For a given high reference crop evapotranspiration of a certain frequency, the encounter risk of low rainfall and high reference crop evapotranspiration increases as that frequency decreases; for a given low rainfall of a certain frequency, the encounter risk decreases as that frequency decreases. When the high reference crop evapotranspiration exceeds a certain frequency or the low rainfall does not exceed a certain frequency, the higher conditional joint probability and lower conditional return period of the various combinations are likely to cause a water shortage, although not a severe one.
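    A minimal sketch of the conditional encounter probability computed from a bivariate Frank copula, as in the study above. The dependence parameter and threshold quantiles are assumed for illustration, not the fitted values for the Luhun irrigation district.

    ```python
    import numpy as np

    def frank_cdf(u, v, theta):
        """Bivariate Frank copula C(u, v); theta < 0 gives negative dependence."""
        num = (np.exp(-theta * u) - 1.0) * (np.exp(-theta * v) - 1.0)
        return -np.log(1.0 + num / (np.exp(-theta) - 1.0)) / theta

    theta = -3.0   # hypothetical Frank parameter (rainfall and ET0 negatively dependent)
    u = 0.25       # P(R <= r): "low rainfall" threshold at the 25th percentile
    v = 0.75       # P(ET0 <= e): "high evapotranspiration" threshold at the 75th percentile

    # Probability that ET0 exceeds its threshold given that rainfall does not exceed its threshold.
    p_cond = 1.0 - frank_cdf(u, v, theta) / u
    print(f"P(ET0 > e | R <= r) = {p_cond:.3f}, conditional return period = {1.0 / p_cond:.1f} years")
    ```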

  7. Classic Maximum Entropy Recovery of the Average Joint Distribution of Apparent FRET Efficiency and Fluorescence Photons for Single-molecule Burst Measurements

    PubMed Central

    DeVore, Matthew S.; Gull, Stephen F.; Johnson, Carey K.

    2012-01-01

    We describe a method for analysis of single-molecule Förster resonance energy transfer (FRET) burst measurements using classic maximum entropy. Classic maximum entropy determines the Bayesian inference for the joint probability describing the total fluorescence photons and the apparent FRET efficiency. The method was tested with simulated data and then with DNA labeled with fluorescent dyes. The most probable joint distribution can be marginalized to obtain both the overall distribution of fluorescence photons and the apparent FRET efficiency distribution. This method proves to be ideal for determining the distance distribution of FRET-labeled biomolecules, and it successfully predicts the shape of the recovered distributions. PMID:22338694

  8. Copula Models for Sociology: Measures of Dependence and Probabilities for Joint Distributions

    ERIC Educational Resources Information Center

    Vuolo, Mike

    2017-01-01

    Often in sociology, researchers are confronted with nonnormal variables whose joint distribution they wish to explore. Yet, assumptions of common measures of dependence can fail or estimating such dependence is computationally intensive. This article presents the copula method for modeling the joint distribution of two random variables, including…

  9. Computer simulation of random variables and vectors with arbitrary probability distribution laws

    NASA Technical Reports Server (NTRS)

    Bogdan, V. M.

    1981-01-01

    Assume that there is given an arbitrary n-dimensional probability distribution F. A recursive construction is found for a sequence of functions x_1 = f_1(U_1, ..., U_n), ..., x_n = f_n(U_1, ..., U_n) such that if U_1, ..., U_n are independent random variables having a uniform distribution over the open interval (0,1), then the joint distribution of the variables x_1, ..., x_n coincides with the distribution F. Since uniform independent random variables can be well simulated by means of a computer, this result allows one to simulate arbitrary n random variables if their joint probability distribution is known.
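    The recursive construction described above is, in modern terms, the conditional-inverse-CDF (Rosenblatt-type) method. A minimal bivariate sketch, with an illustrative target joint law rather than anything from the report:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 100_000
    U1, U2 = rng.uniform(size=(2, n))

    # Illustrative target joint law: X1 ~ Exponential(1), X2 | X1 ~ Normal(0.5 * X1, 1).
    X1 = stats.expon.ppf(U1)               # x1 = f1(U1): marginal inverse CDF
    X2 = stats.norm.ppf(U2, loc=0.5 * X1)  # x2 = f2(U1, U2): conditional inverse CDF given X1

    # (X1, X2) now has the prescribed joint distribution F.
    print("E[X1] =", round(X1.mean(), 3), " corr(X1, X2) =", round(np.corrcoef(X1, X2)[0, 1], 3))
    ```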

  10. Bivariate normal, conditional and rectangular probabilities: A computer program with applications

    NASA Technical Reports Server (NTRS)

    Swaroop, R.; Brownlow, J. D.; Ashwworth, G. R.; Winter, W. R.

    1980-01-01

    Some results for bivariate normal distribution analysis are presented. Computer programs for conditional normal probabilities, marginal probabilities, and joint probabilities over rectangular regions are given; routines for computing fractile points and distribution functions are also presented. Some examples from a closed-circuit television experiment are included.
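    The rectangular and conditional probabilities that the report's programs compute can be reproduced today with standard libraries; a brief sketch with illustrative parameters:

    ```python
    import numpy as np
    from scipy import stats

    rho = 0.6
    bvn = stats.multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])

    # Joint probability of the rectangle a1 < X <= b1, a2 < Y <= b2 by inclusion-exclusion.
    a1, b1, a2, b2 = -1.0, 1.0, -0.5, 2.0
    p_rect = (bvn.cdf([b1, b2]) - bvn.cdf([a1, b2])
              - bvn.cdf([b1, a2]) + bvn.cdf([a1, a2]))

    # Conditional law of Y given X = x for the standardized pair: Normal(rho * x, 1 - rho**2).
    x = 0.8
    p_cond = stats.norm.cdf(b2, loc=rho * x, scale=np.sqrt(1.0 - rho**2))
    print(f"P(rectangle) = {p_rect:.4f}, P(Y <= {b2} | X = {x}) = {p_cond:.4f}")
    ```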

  11. Quasi-probabilities in conditioned quantum measurement and a geometric/statistical interpretation of Aharonov's weak value

    NASA Astrophysics Data System (ADS)

    Lee, Jaeha; Tsutsui, Izumi

    2017-05-01

    We show that the joint behavior of an arbitrary pair of (generally noncommuting) quantum observables can be described by quasi-probabilities, which are an extended version of the standard probabilities used for describing the outcome of measurement for a single observable. The physical situations that require these quasi-probabilities arise when one considers quantum measurement of an observable conditioned by some other variable, with the notable example being the weak measurement employed to obtain Aharonov's weak value. Specifically, we present a general prescription for the construction of quasi-joint probability (QJP) distributions associated with a given combination of observables. These QJP distributions are introduced in two complementary approaches: one from a bottom-up, strictly operational construction realized by examining the mathematical framework of the conditioned measurement scheme, and the other from a top-down viewpoint realized by applying the results of the spectral theorem for normal operators and their Fourier transforms. It is then revealed that, for a pair of simultaneously measurable observables, the QJP distribution reduces to the unique standard joint probability distribution of the pair, whereas for a noncommuting pair there exists an inherent indefiniteness in the choice of such QJP distributions, admitting a multitude of candidates that may equally be used for describing the joint behavior of the pair. In the course of our argument, we find that the QJP distributions furnish the space of operators in the underlying Hilbert space with their characteristic geometric structures such that the orthogonal projections and inner products of observables can be given statistical interpretations as, respectively, “conditionings” and “correlations”. The weak value A_w for an observable A is then given a geometric/statistical interpretation as either the orthogonal projection of A onto the subspace generated by another observable B, or equivalently, as the conditioning of A given B with respect to the QJP distribution under consideration.

  12. Climate Change Impact Assessment in Pacific North West Using Copula based Coupling of Temperature and Precipitation variables

    NASA Astrophysics Data System (ADS)

    Qin, Y.; Rana, A.; Moradkhani, H.

    2014-12-01

    Multiple downscaled-scenario products allow us to better assess the uncertainty of changes in precipitation and temperature in current and future periods. Joint probability distribution functions (PDFs) of the two climatic variables can help us better understand their interdependence and thus, in turn, help in assessing the future with confidence. Using the joint distribution of temperature and precipitation is also of significant importance in hydrological applications and climate change studies. In the present study, we used a multi-model, statistically downscaled ensemble of precipitation and temperature built from two statistically downscaled climate datasets: 10 Global Climate Model (GCM) products from the CMIP5 daily dataset downscaled with the Bias Correction and Spatial Downscaling (BCSD) technique at Portland State University, and the same 10 GCMs downscaled with the Multivariate Adaptive Constructed Analogs (MACA) technique at the University of Idaho, giving two ensemble time series from 20 GCM products. The ensemble PDFs of both precipitation and temperature are then evaluated for summer, winter, and annual periods for the 10 sub-basins of the Columbia River Basin (CRB). A copula is then applied to establish the joint distribution of the two variables, enabling the joint behavior of the variables to be modeled at any level of correlation and dependence; the copula approach also removes restrictions on the marginal distributions of the variables in question. The joint distribution is used to estimate change trends of joint precipitation and temperature in the current and future periods, along with the probabilities of a given change. The results indicate varied change trends of the joint distribution at the summer, winter, and annual time scales in all 10 sub-basins. The probabilities of change estimated from the joint precipitation and temperature distribution provide useful insights for hydrological and climate change predictions.

  13. Idealized models of the joint probability distribution of wind speeds

    NASA Astrophysics Data System (ADS)

    Monahan, Adam H.

    2018-05-01

    The joint probability distribution of wind speeds at two separate locations in space or points in time completely characterizes the statistical dependence of these two quantities, providing more information than linear measures such as correlation. In this study, we consider two models of the joint distribution of wind speeds obtained from idealized models of the dependence structure of the horizontal wind velocity components. The bivariate Rice distribution follows from assuming that the wind components have Gaussian and isotropic fluctuations. The bivariate Weibull distribution arises from power law transformations of wind speeds corresponding to vector components with Gaussian, isotropic, mean-zero variability. Maximum likelihood estimates of these distributions are compared using wind speed data from the mid-troposphere, from different altitudes at the Cabauw tower in the Netherlands, and from scatterometer observations over the sea surface. While the bivariate Rice distribution is more flexible and can represent a broader class of dependence structures, the bivariate Weibull distribution is mathematically simpler and may be more convenient in many applications. The complexity of the mathematical expressions obtained for the joint distributions suggests that the development of explicit functional forms for multivariate speed distributions from distributions of the components will not be practical for more complicated dependence structure or more than two speed variables.
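    A minimal Monte Carlo sketch of the construction behind the bivariate Weibull model above: correlated, isotropic, mean-zero Gaussian wind components at two sites, speeds computed from the components, and a power-law transform applied to give Weibull marginals. The parameter values are assumptions for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n, sigma, r, k = 200_000, 5.0, 0.7, 1.6      # r: between-site component correlation, k: Weibull shape

    # Isotropic, mean-zero components: corr(u1, u2) = corr(v1, v2) = r, zonal/meridional independent.
    cov = sigma**2 * np.array([[1.0, r], [r, 1.0]])
    u1, u2 = rng.multivariate_normal([0.0, 0.0], cov, n).T
    v1, v2 = rng.multivariate_normal([0.0, 0.0], cov, n).T

    s1, s2 = np.hypot(u1, v1), np.hypot(u2, v2)  # Rayleigh speeds (zero-mean limit of the Rice model)
    w1, w2 = s1 ** (2.0 / k), s2 ** (2.0 / k)    # power transform gives Weibull(shape k) marginals

    print("speed correlation, Rayleigh pair:", round(np.corrcoef(s1, s2)[0, 1], 3))
    print("speed correlation, Weibull pair: ", round(np.corrcoef(w1, w2)[0, 1], 3))
    ```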

  14. On probability-possibility transformations

    NASA Technical Reports Server (NTRS)

    Klir, George J.; Parviz, Behzad

    1992-01-01

    Several probability-possibility transformations are compared in terms of the closeness of preserving second-order properties. The comparison is based on experimental results obtained by computer simulation. Two second-order properties are involved in this study: noninteraction of two distributions and projections of a joint distribution.

  15. Experimental investigation of the intensity fluctuation joint probability and conditional distributions of the twin-beam quantum state.

    PubMed

    Zhang, Yun; Kasai, Katsuyuki; Watanabe, Masayoshi

    2003-01-13

    We give the intensity fluctuation joint probability of the twin-beam quantum state, which was generated with an optical parametric oscillator operating above threshold. Then we present what to our knowledge is the first measurement of the intensity fluctuation conditional probability distributions of twin beams. The measured inference variance of twin beams 0.62+/-0.02, which is less than the standard quantum limit of unity, indicates inference with a precision better than that of separable states. The measured photocurrent variance exhibits a quantum correlation of as much as -4.9+/-0.2 dB between the signal and the idler.

  16. Multi-hazard Assessment and Scenario Toolbox (MhAST): A Framework for Analyzing Compounding Effects of Multiple Hazards

    NASA Astrophysics Data System (ADS)

    Sadegh, M.; Moftakhari, H.; AghaKouchak, A.

    2017-12-01

    Many natural hazards are driven by multiple forcing variables, and concurrent or consecutive extreme events significantly increase the risk of infrastructure/system failure. It is common practice to use univariate analysis, based on a perceived ruling driver, to estimate design quantiles and/or return periods of extreme events. A multivariate analysis, however, permits modeling the simultaneous occurrence of multiple forcing variables. In this presentation, we introduce the Multi-hazard Assessment and Scenario Toolbox (MhAST), which comprehensively analyzes the marginal and joint probability distributions of natural hazards. MhAST also offers a wide range of return period and design level scenarios and their likelihoods. The contribution of this study is four-fold: 1. comprehensive analysis of the marginal and joint probability of multiple drivers through 17 continuous distributions and 26 copulas, 2. multiple scenario analysis of concurrent extremes based on the most likely joint occurrence, one ruling variable, and weighted random sampling of joint occurrences with similar exceedance probabilities, 3. weighted-average scenario analysis based on an expected event, and 4. uncertainty analysis of the most likely joint occurrence scenario using a Bayesian framework.

  17. Tomographic measurement of joint photon statistics of the twin-beam quantum state

    PubMed

    Vasilyev; Choi; Kumar; D'Ariano

    2000-03-13

    We report the first measurement of the joint photon-number probability distribution for a two-mode quantum state created by a nondegenerate optical parametric amplifier. The measured distributions exhibit up to 1.9 dB of quantum correlation between the signal and idler photon numbers, whereas the marginal distributions are thermal as expected for parametric fluorescence.

  18. New approach in bivariate drought duration and severity analysis

    NASA Astrophysics Data System (ADS)

    Montaseri, Majid; Amirataee, Babak; Rezaie, Hossein

    2018-04-01

    Copula functions have been widely applied as an advanced technique for constructing the joint probability distribution of drought duration and severity. The approach to data collection, as well as the amount and dispersion of the data series, can have a significant impact on the joint probability distribution constructed with copulas. Traditional analyses have usually adopted an Unconnected Drought Runs (UDR) approach, in which droughts of different durations are treated as independent of each other. This data collection method omits the actual potential of short-term extreme droughts located within a long-term UDR, and the traditional method often faces significant gaps in the drought data series. A long-term UDR can, however, be treated as a combination of short-term Connected Drought Runs (CDR). This study therefore aims to systematically evaluate the UDR and CDR procedures in joint probability analyses of drought duration and severity. For this purpose, rainfall data (1971-2013) from 24 rain gauges in the Lake Urmia basin, Iran, were used. Seven common univariate marginal distributions and seven types of bivariate copulas were examined. Compared with the traditional approach, the results demonstrate a significant comparative advantage of the new approach: determination of the correct copula function, more accurate estimation of the copula parameter, more realistic estimation of the joint/conditional probabilities of drought duration and severity, and a significant reduction in modeling uncertainty.

  19. On the motion of classical three-body system with consideration of quantum fluctuations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gevorkyan, A. S., E-mail: g-ashot@sci.am

    2017-03-15

    We obtained the system of stochastic differential equations (SDEs) that describes the classical motion of the three-body system under the influence of quantum fluctuations. Using the SDEs, a second-order partial differential equation was obtained for the joint probability distribution of the total momentum of the system of bodies. It is shown that the equation for the probability distribution is solved jointly with the classical equations, which in turn are responsible for the topological peculiarities of the tubes of quantum currents, the transitions between asymptotic channels, and, correspondingly, for the emergence of quantum chaos.

  20. Observation of non-classical correlations in sequential measurements of photon polarization

    NASA Astrophysics Data System (ADS)

    Suzuki, Yutaro; Iinuma, Masataka; Hofmann, Holger F.

    2016-10-01

    A sequential measurement of two non-commuting quantum observables results in a joint probability distribution for all output combinations that can be explained in terms of an initial joint quasi-probability of the non-commuting observables, modified by the resolution errors and back-action of the initial measurement. Here, we show that the error statistics of a sequential measurement of photon polarization performed at different measurement strengths can be described consistently by an imaginary correlation between the statistics of resolution and back-action. The experimental setup was designed to realize variable strength measurements with well-controlled imaginary correlation between the statistical errors caused by the initial measurement of diagonal polarizations, followed by a precise measurement of the horizontal/vertical polarization. We perform the experimental characterization of an elliptically polarized input state and show that the same complex joint probability distribution is obtained at any measurement strength.

  1. Technology-enhanced Interactive Teaching of Marginal, Joint and Conditional Probabilities: The Special Case of Bivariate Normal Distribution

    PubMed Central

    Dinov, Ivo D.; Kamino, Scott; Bhakhrani, Bilal; Christou, Nicolas

    2014-01-01

    Data analysis requires subtle probability reasoning to answer questions like "What is the chance of event A occurring, given that event B was observed?" This generic question arises in discussions of many intriguing scientific questions such as "What is the probability that an adolescent weighs between 120 and 140 pounds given that they are of average height?" and "What is the probability of (monetary) inflation exceeding 4% and housing price index below 110?" To address such problems, learning some applied, theoretical or cross-disciplinary probability concepts is necessary. Teaching such courses can be improved by utilizing modern information technology resources. Students' understanding of multivariate distributions, conditional probabilities, correlation and causation can be significantly strengthened by employing interactive web-based science educational resources. Independent of the type of a probability course (e.g. majors, minors or service probability course, rigorous measure-theoretic, applied or statistics course) student motivation, learning experiences and knowledge retention may be enhanced by blending modern technological tools within the classical conceptual pedagogical models. We have designed, implemented and disseminated a portable open-source web-application for teaching multivariate distributions, marginal, joint and conditional probabilities using the special case of bivariate Normal distribution. A real adolescent height and weight dataset is used to demonstrate the classroom utilization of the new web-application to address problems of parameter estimation, univariate and multivariate inference. PMID:25419016

  2. Technology-enhanced Interactive Teaching of Marginal, Joint and Conditional Probabilities: The Special Case of Bivariate Normal Distribution.

    PubMed

    Dinov, Ivo D; Kamino, Scott; Bhakhrani, Bilal; Christou, Nicolas

    2013-01-01

    Data analysis requires subtle probability reasoning to answer questions like "What is the chance of event A occurring, given that event B was observed?" This generic question arises in discussions of many intriguing scientific questions such as "What is the probability that an adolescent weighs between 120 and 140 pounds given that they are of average height?" and "What is the probability of (monetary) inflation exceeding 4% and housing price index below 110?" To address such problems, learning some applied, theoretical or cross-disciplinary probability concepts is necessary. Teaching such courses can be improved by utilizing modern information technology resources. Students' understanding of multivariate distributions, conditional probabilities, correlation and causation can be significantly strengthened by employing interactive web-based science educational resources. Independent of the type of a probability course (e.g. majors, minors or service probability course, rigorous measure-theoretic, applied or statistics course) student motivation, learning experiences and knowledge retention may be enhanced by blending modern technological tools within the classical conceptual pedagogical models. We have designed, implemented and disseminated a portable open-source web-application for teaching multivariate distributions, marginal, joint and conditional probabilities using the special case of bivariate Normal distribution. A real adolescent height and weight dataset is used to demonstrate the classroom utilization of the new web-application to address problems of parameter estimation, univariate and multivariate inference.

  3. Quasi-Bell inequalities from symmetrized products of noncommuting qubit observables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gamel, Omar E.; Fleming, Graham R.

    Noncommuting observables cannot be simultaneously measured; however, under local hidden variable models, they must simultaneously hold premeasurement values, implying the existence of a joint probability distribution. We study the joint distributions of noncommuting observables on qubits, with possible criteria of positivity and the Fréchet bounds limiting the joint probabilities, concluding that the latter may be negative. We use symmetrization, justified heuristically and then more carefully via the Moyal characteristic function, to find the quantum operator corresponding to the product of noncommuting observables. This is then used to construct Quasi-Bell inequalities, Bell inequalities containing products of noncommuting observables, on two qubits. These inequalities place limits on the local hidden variable models that define joint probabilities for noncommuting observables. We also find that the Quasi-Bell inequalities have a quantum-to-classical violation as high as 3/2 on two qubits, higher than conventional Bell inequalities. Our result demonstrates the theoretical importance of noncommutativity in the nonlocality of quantum mechanics and provides an insightful generalization of Bell inequalities.

  4. Quasi-Bell inequalities from symmetrized products of noncommuting qubit observables

    DOE PAGES

    Gamel, Omar E.; Fleming, Graham R.

    2017-05-01

    Noncommuting observables cannot be simultaneously measured; however, under local hidden variable models, they must simultaneously hold premeasurement values, implying the existence of a joint probability distribution. We study the joint distributions of noncommuting observables on qubits, with possible criteria of positivity and the Fréchet bounds limiting the joint probabilities, concluding that the latter may be negative. We use symmetrization, justified heuristically and then more carefully via the Moyal characteristic function, to find the quantum operator corresponding to the product of noncommuting observables. This is then used to construct Quasi-Bell inequalities, Bell inequalities containing products of noncommuting observables, on two qubits. These inequalities place limits on the local hidden variable models that define joint probabilities for noncommuting observables. We also find that the Quasi-Bell inequalities have a quantum-to-classical violation as high as 3/2 on two qubits, higher than conventional Bell inequalities. Our result demonstrates the theoretical importance of noncommutativity in the nonlocality of quantum mechanics and provides an insightful generalization of Bell inequalities.

  5. Multivariate hydrological frequency analysis for extreme events using Archimedean copula. Case study: Lower Tunjuelo River basin (Colombia)

    NASA Astrophysics Data System (ADS)

    Gómez, Wilmar

    2017-04-01

    By analyzing the spatial and temporal variability of extreme precipitation events we can prevent or reduce the associated threat and risk. Many water resources projects require joint probability distributions of random variables such as precipitation intensity and duration, which cannot be assumed independent of each other. The problem of defining a probability model for observations of several dependent variables is greatly simplified by expressing the joint distribution in terms of the marginals through copulas. This document presents a general framework for bivariate and multivariate frequency analysis using Archimedean copulas for extreme hydroclimatological events such as severe storms. The analysis was conducted for precipitation events in the lower Tunjuelo River basin in Colombia. The results show that, for a joint study of intensity-duration-frequency, IDF curves can be obtained through copulas, establishing more accurate and reliable information on design storms and associated risks. The study also shows how copulas greatly simplify the study of multivariate distributions and introduce the concept of joint return period, used to properly represent the needs of hydrological design in frequency analysis.

  6. The return period analysis of natural disasters with statistical modeling of bivariate joint probability distribution.

    PubMed

    Li, Ning; Liu, Xueqin; Xie, Wei; Wu, Jidong; Zhang, Peng

    2013-01-01

    New features of natural disasters have been observed over the last several years. The factors that influence the disasters' formation mechanisms, regularity of occurrence, and main characteristics have been revealed to be more complicated and diverse in nature than previously thought. As the uncertainty involved increases, the variables need to be examined further. This article discusses the importance, and the current shortage, of multivariate analysis of natural disasters and presents a method to estimate the joint probability of the return periods and perform a risk analysis. Severe dust storms from 1990 to 2008 in Inner Mongolia were used as a case study to test this new methodology, as they are normal and recurring climatic phenomena on Earth. Based on the 79 investigated events and a bivariate definition of dust storms, the joint probability distribution of severe dust storms was established using observed data on maximum wind speed and duration. The joint return periods of severe dust storms were calculated, and the associated risk was analyzed according to the joint probability. The copula function is able to simulate severe dust storm disasters accurately. The joint return periods generated are closer to those observed in reality than the univariate return periods and thus have more value for severe dust storm disaster mitigation, strategy making, program design, and improvement of risk management. This research may prove useful in risk-based decision making. The exploration of multivariate analysis methods can also lay the foundation for further applications in natural disaster risk analysis. © 2012 Society for Risk Analysis.

  7. SU-F-T-450: The Investigation of Radiotherapy Quality Assurance and Automatic Treatment Planning Based On the Kernel Density Estimation Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fan, J; Fan, J; Hu, W

    Purpose: To develop a fast automatic algorithm based on two-dimensional kernel density estimation (2D KDE) to predict the dose-volume histogram (DVH), which can be employed for the investigation of radiotherapy quality assurance and automatic treatment planning. Methods: We propose a machine learning method that uses previous treatment plans to predict the DVH. The key to the approach is the framing of the DVH in a probabilistic setting. The training consists of estimating, from the patients in the training set, the joint probability distribution of the dose and the predictive features. The joint distribution provides an estimate of the conditional probability of the dose given the values of the predictive features. For a new patient, the prediction consists of estimating the distribution of the predictive features and marginalizing the conditional probability from the training over this distribution. Integrating the resulting probability distribution for the dose yields an estimate of the DVH. The 2D KDE is used to estimate the joint probability distribution of the training set and the distribution of the predictive features for the new patient. Two variables, the signed minimal distance from each OAR (organs at risk) voxel to the target boundary and its opening angle with respect to the origin of the voxel coordinate system, are used as the predictive features to represent the OAR-target spatial relationship. The feasibility of our method has been demonstrated on rectum, breast, and head-and-neck cancer cases by comparing the predicted DVHs with the planned ones. Results: Consistent results were found between the two DVHs for each cancer type, and the average relative point-wise difference is about 5%, within the clinically acceptable range. Conclusion: According to the results of this study, our method can be used to predict clinically acceptable DVHs and has the ability to evaluate the quality and consistency of treatment planning.
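    The prediction pipeline described in the Methods can be sketched compactly: fit a 2D KDE to (feature, dose) pairs from prior plans, slice it to get p(dose | feature), average over the new patient's feature distribution, and integrate to a DVH. The synthetic data and dose model below are placeholders, not the study's plans or features.

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(2)

    # Synthetic stand-in for prior plans: feature = signed distance (cm) from an OAR voxel
    # to the target boundary; dose falls off with distance plus noise.
    dist = rng.uniform(0.0, 5.0, 2000)
    dose = np.clip(60.0 * np.exp(-0.6 * dist) + rng.normal(0.0, 3.0, dist.size), 0.0, None)
    kde_joint = gaussian_kde(np.vstack([dist, dose]))        # 2D KDE of the joint density p(feature, dose)

    dose_grid = np.linspace(0.0, 70.0, 141)
    dg = dose_grid[1] - dose_grid[0]
    new_dist = rng.uniform(0.0, 5.0, 100)                    # new patient's OAR voxel features

    # Predicted dose density = average over voxels of p(dose | feature), where the conditional
    # is the joint density along a fixed-feature slice, renormalized over dose.
    pdf = np.zeros_like(dose_grid)
    for d in new_dist:
        joint = kde_joint(np.vstack([np.full_like(dose_grid, d), dose_grid]))
        pdf += joint / (joint.sum() * dg)
    pdf /= new_dist.size

    # Cumulative DVH: fraction of OAR volume receiving at least each dose level.
    dvh = 1.0 - np.cumsum(pdf) * dg
    print("predicted fraction of volume above 20 Gy:",
          round(float(dvh[np.searchsorted(dose_grid, 20.0)]), 3))
    ```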

  8. Understanding the joint behavior of temperature and precipitation for climate change impact studies

    NASA Astrophysics Data System (ADS)

    Rana, Arun; Moradkhani, Hamid; Qin, Yueyue

    2017-07-01

    The multiple downscaled scenario products allow us to assess the uncertainty of variations in precipitation and temperature in the current and future periods. Probabilistic assessments of both climatic variables help us better understand their interdependence and thus, in turn, help in assessing the future with confidence. In the present study, we use an ensemble of statistically downscaled precipitation and temperature from various models. The dataset is a multi-model ensemble of 10 global climate models (GCMs) downscaled from the CMIP5 daily dataset using the Bias Correction and Spatial Downscaling (BCSD) technique, generated at Portland State University. The multi-model ensemble of both precipitation and temperature is evaluated for dry and wet periods for 10 sub-basins across the Columbia River Basin (CRB). Thereafter, a copula is applied to establish the joint distribution of the two variables on the multi-model ensemble data. The joint distribution is then used to estimate the change in trends of the two variables in the future, along with the probabilities of the given change. The joint distribution trends vary, but are certainly positive, for dry and wet periods in the sub-basins of the CRB. The dry season generally indicates a larger positive change in precipitation than in temperature (compared to the historical period) across sub-basins, whereas the wet season indicates the opposite. Probabilities of future changes, as estimated from the joint distribution, show varied degrees and forms during the dry season, whereas the wet season is rather constant across all the sub-basins.

  9. Optimal Information Processing in Biochemical Networks

    NASA Astrophysics Data System (ADS)

    Wiggins, Chris

    2012-02-01

    A variety of experimental results over the past decades provide examples of near-optimal information processing in biological networks, including in biochemical and transcriptional regulatory networks. Computing information-theoretic quantities requires first choosing or computing the joint probability distribution describing multiple nodes in such a network --- for example, representing the probability distribution of finding an integer copy number of each of two interacting reactants or gene products while respecting the `intrinsic' small copy number noise constraining information transmission at the scale of the cell. I'll give an overview of some recent analytic and numerical work facilitating calculation of such joint distributions and the associated information, which in turn makes possible numerical optimization of information flow in models of noisy regulatory and biochemical networks. Illustrative cases include quantification of form-function relations, ideal design of regulatory cascades, and response to oscillatory driving.
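    A small self-contained sketch of the information-theoretic quantity at the centre of this work: the mutual information between two species' copy numbers, computed directly from a joint probability mass function (here a synthetic one).

    ```python
    import numpy as np

    # Hypothetical joint pmf p(m, n) over small copy numbers of two gene products.
    rng = np.random.default_rng(3)
    p = rng.random((20, 20))
    p /= p.sum()

    p_m = p.sum(axis=1, keepdims=True)   # marginal of the first species
    p_n = p.sum(axis=0, keepdims=True)   # marginal of the second species

    mask = p > 0
    mi = np.sum(p[mask] * np.log2(p[mask] / (p_m @ p_n)[mask]))   # I(M;N) in bits
    print(f"mutual information I(M;N) = {mi:.3f} bits")
    ```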

  10. Technology-Enhanced Interactive Teaching of Marginal, Joint and Conditional Probabilities: The Special Case of Bivariate Normal Distribution

    ERIC Educational Resources Information Center

    Dinov, Ivo D.; Kamino, Scott; Bhakhrani, Bilal; Christou, Nicolas

    2013-01-01

    Data analysis requires subtle probability reasoning to answer questions like "What is the chance of event A occurring, given that event B was observed?" This generic question arises in discussions of many intriguing scientific questions such as "What is the probability that an adolescent weighs between 120 and 140 pounds given that…

  11. A Bayesian joint probability modeling approach for seasonal forecasting of streamflows at multiple sites

    NASA Astrophysics Data System (ADS)

    Wang, Q. J.; Robertson, D. E.; Chiew, F. H. S.

    2009-05-01

    Seasonal forecasting of streamflows can be highly valuable for water resources management. In this paper, a Bayesian joint probability (BJP) modeling approach for seasonal forecasting of streamflows at multiple sites is presented. A Box-Cox transformed multivariate normal distribution is proposed to model the joint distribution of future streamflows and their predictors such as antecedent streamflows and El Niño-Southern Oscillation indices and other climate indicators. Bayesian inference of model parameters and uncertainties is implemented using Markov chain Monte Carlo sampling, leading to joint probabilistic forecasts of streamflows at multiple sites. The model provides a parametric structure for quantifying relationships between variables, including intersite correlations. The Box-Cox transformed multivariate normal distribution has considerable flexibility for modeling a wide range of predictors and predictands. The Bayesian inference formulated allows the use of data that contain nonconcurrent and missing records. The model flexibility and data-handling ability means that the BJP modeling approach is potentially of wide practical application. The paper also presents a number of statistical measures and graphical methods for verification of probabilistic forecasts of continuous variables. Results for streamflows at three river gauges in the Murrumbidgee River catchment in southeast Australia show that the BJP modeling approach has good forecast quality and that the fitted model is consistent with observed data.
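    The core of the BJP idea can be sketched in a few lines: Box-Cox transform the (positive, skewed) variables, model them as jointly normal, condition on the observed predictor, and back-transform samples to obtain a probabilistic forecast. The synthetic data and single-predictor setup below are illustrative simplifications of the multi-site model.

    ```python
    import numpy as np
    from scipy import stats, special

    rng = np.random.default_rng(4)

    # Synthetic positive, skewed data: predictor (antecedent flow) and predictand (seasonal flow).
    x = rng.gamma(2.0, 50.0, 2000)
    y = 0.8 * x + rng.gamma(2.0, 20.0, 2000)

    tx, lam_x = stats.boxcox(x)                     # Box-Cox parameters estimated by maximum likelihood
    ty, lam_y = stats.boxcox(y)

    mu = np.array([tx.mean(), ty.mean()])
    cov = np.cov(np.vstack([tx, ty]))

    # Conditional normal for the transformed predictand given an observed predictor value.
    x_obs = 120.0
    tx_obs = stats.boxcox(np.array([x_obs]), lmbda=lam_x)[0]
    cond_mean = mu[1] + cov[0, 1] / cov[0, 0] * (tx_obs - mu[0])
    cond_sd = np.sqrt(cov[1, 1] - cov[0, 1] ** 2 / cov[0, 0])

    # Probabilistic forecast: sample in transformed space, then invert the Box-Cox transform.
    samples = rng.normal(cond_mean, cond_sd, 5000)
    forecast = special.inv_boxcox(samples, lam_y)
    print("forecast median and 90% interval:", np.round(np.nanpercentile(forecast, [50, 5, 95]), 1))
    ```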

  12. Joint analysis of air pollution in street canyons in St. Petersburg and Copenhagen

    NASA Astrophysics Data System (ADS)

    Genikhovich, E. L.; Ziv, A. D.; Iakovleva, E. A.; Palmgren, F.; Berkowicz, R.

    The bi-annual data set of concentrations of several traffic-related air pollutants, measured continuously in street canyons in St. Petersburg and Copenhagen, is analysed jointly using different statistical techniques. Annual mean concentrations of NO 2, NO x and, especially, benzene are found systematically higher in St. Petersburg than in Copenhagen but for ozone the situation is opposite. In both cities probability distribution functions (PDFs) of concentrations and their daily or weekly extrema are fitted with the Weibull and double exponential distributions, respectively. Sample estimates of bi-variate distributions of concentrations, concentration roses, and probabilities of concentration of one pollutant being extreme given that another one reaches its extremum are presented in this paper as well as auto- and co-spectra. It is demonstrated that there is a reasonably high correlation between seasonally averaged concentrations of pollutants in St. Petersburg and Copenhagen.

  13. General formulation of long-range degree correlations in complex networks

    NASA Astrophysics Data System (ADS)

    Fujiki, Yuka; Takaguchi, Taro; Yakubo, Kousuke

    2018-06-01

    We provide a general framework for analyzing degree correlations between nodes separated by more than one step (i.e., beyond nearest neighbors) in complex networks. One joint and four conditional probability distributions are introduced to fully describe long-range degree correlations with respect to degrees k and k' of two nodes and shortest path length l between them. We present general relations among these probability distributions and clarify the relevance to nearest-neighbor degree correlations. Unlike nearest-neighbor correlations, some of these probability distributions are meaningful only in finite-size networks. Furthermore, as a baseline to determine the existence of intrinsic long-range degree correlations in a network other than inevitable correlations caused by the finite-size effect, the functional forms of these probability distributions for random networks are analytically evaluated within a mean-field approximation. The utility of our argument is demonstrated by applying it to real-world networks.
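    A short empirical sketch of the joint quantity introduced above, P(k, k', l): tally, over all node pairs of a network, the degrees of the two endpoints and their shortest-path distance. The random graph and its size are arbitrary choices for illustration.

    ```python
    import networkx as nx
    from collections import Counter

    G = nx.barabasi_albert_graph(500, 3, seed=0)
    deg = dict(G.degree())

    counts = Counter()
    for i, dists in nx.all_pairs_shortest_path_length(G):
        for j, l in dists.items():
            if i < j:                                   # each unordered node pair once (l >= 1)
                k, kp = sorted((deg[i], deg[j]))
                counts[(k, kp, l)] += 1

    total = sum(counts.values())
    P = {key: c / total for key, c in counts.items()}   # empirical joint distribution P(k, k', l)
    print("P(k=3, k'=3, l=2) =", round(P.get((3, 3, 2), 0.0), 5))
    ```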

  14. Distributed Constrained Optimization with Semicoordinate Transformations

    NASA Technical Reports Server (NTRS)

    Macready, William; Wolpert, David

    2006-01-01

    Recent work has shown how information theory extends conventional full-rationality game theory to allow bounded rational agents. The associated mathematical framework can be used to solve constrained optimization problems. This is done by translating the problem into an iterated game, where each agent controls a different variable of the problem, so that the joint probability distribution across the agents' moves gives an expected value of the objective function. The dynamics of the agents is designed to minimize a Lagrangian function of that joint distribution. Here we illustrate how the updating of the Lagrange parameters in the Lagrangian is a form of automated annealing, which focuses the joint distribution more and more tightly about the joint moves that optimize the objective function. We then investigate the use of "semicoordinate" variable transformations. These separate the joint state of the agents from the variables of the optimization problem, with the two connected by an onto mapping. We present experiments illustrating the ability of such transformations to facilitate optimization. We focus on the special kind of transformation in which the statistically independent states of the agents induce a mixture distribution over the optimization variables. Computer experiments illustrate this for k-sat constraint satisfaction problems and for unconstrained minimization of NK functions.

  15. Exact joint density-current probability function for the asymmetric exclusion process.

    PubMed

    Depken, Martin; Stinchcombe, Robin

    2004-07-23

    We study the asymmetric simple exclusion process with open boundaries and derive the exact form of the joint probability function for the occupation number and the current through the system. We further consider the thermodynamic limit, showing that the resulting distribution is non-Gaussian and that the density fluctuations have a discontinuity at the continuous phase transition, while the current fluctuations are continuous. The derivations are performed by using the standard operator algebraic approach and by the introduction of new operators satisfying a modified version of the original algebra. Copyright 2004 The American Physical Society

  16. A short walk in quantum probability

    NASA Astrophysics Data System (ADS)

    Hudson, Robin

    2018-04-01

    This is a personal survey of aspects of quantum probability related to the Heisenberg commutation relation for canonical pairs. Using the failure, in general, of non-negativity of the Wigner distribution for canonical pairs to motivate a more satisfactory quantum notion of joint distribution, we visit a central limit theorem for such pairs and a resulting family of quantum planar Brownian motions which deform the classical planar Brownian motion, together with a corresponding family of quantum stochastic areas. This article is part of the themed issue `Hilbert's sixth problem'.

  17. Two tandem queues with general renewal input. 2: Asymptotic expansions for the diffusion model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knessl, C.; Tier, C.

    1999-10-01

    In Part 1 the authors formulated and solved a diffusion model for two tandem queues with exponential servers and general renewal arrivals. They thus obtained the heavy-traffic diffusion approximation to the steady-state joint queue length distribution for this network. Here they study asymptotic and numerical properties of the diffusion approximation. In particular, analytical expressions are obtained for the tail probabilities. Both the joint distribution of the two queues and the marginal distribution of the second queue are considered. They also give numerical illustrations of how this marginal is affected by changes in the arrival and service processes.

  18. Combining Probability Distributions of Wind Waves and Sea Level Variations to Assess Return Periods of Coastal Floods

    NASA Astrophysics Data System (ADS)

    Leijala, U.; Bjorkqvist, J. V.; Pellikka, H.; Johansson, M. M.; Kahma, K. K.

    2017-12-01

    Predicting the joint effect of sea level and wind waves is of great significance because of the major impact of flooding events in densely populated coastal regions. As mean sea level rises, the effect of sea level variations combined with waves will become even more harmful in the future. The main challenge when evaluating the combined effect of waves and sea level variations is that long time series of both variables rarely exist. Wave statistics are also highly location-dependent, thus requiring wave buoy measurements and/or high-resolution wave modelling. As an initial approximation of the joint effect, the variables may be treated as independent random variables to obtain the probability distribution of their sum. We present results of a case study based on three probability distributions: 1) wave run-up constructed from individual wave buoy measurements, 2) short-term sea level variability based on tide gauge data, and 3) mean sea level projections based on up-to-date regional scenarios. The wave measurements were conducted during 2012-2014 on the coast of the city of Helsinki, located in the Gulf of Finland in the Baltic Sea. The short-term sea level distribution contains the last 30 years (1986-2015) of hourly data from the Helsinki tide gauge, and the mean sea level projections are scenarios adjusted for the Gulf of Finland. Additionally, we present a sensitivity test based on six different theoretical wave height distributions representing different wave behaviour in relation to sea level variations. As these wave distributions are merged with one common sea level distribution, we can study how the shape of the wave height distribution affects the distribution of the sum, and which component dominates under different wave conditions. As an outcome of the method, we obtain a probability distribution of the maximum elevation of the continuous water mass, which provides a flexible tool for evaluating different risk levels in the current and future climate.
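    Treating wave run-up and sea level as independent random variables, as in the initial approximation above, the density of their sum is the convolution of the two densities; a numerical sketch with hypothetical distributions (not the Helsinki data):

    ```python
    import numpy as np
    from scipy import stats, signal

    dx = 0.01                                                 # grid step (m)
    x = np.arange(-1.0, 4.0, dx)

    pdf_sea = stats.norm.pdf(x, loc=0.2, scale=0.3)           # hypothetical short-term sea level (m)
    pdf_wave = stats.weibull_min.pdf(x, c=1.5, scale=0.6)     # hypothetical wave run-up (m), zero below 0

    pdf_sum = signal.fftconvolve(pdf_sea, pdf_wave) * dx      # density of the sum
    x_sum = 2.0 * x[0] + dx * np.arange(pdf_sum.size)         # grid of the convolved density

    cdf_sum = np.cumsum(pdf_sum) * dx
    level = 1.5                                               # candidate flood level (m)
    p_exceed = 1.0 - np.interp(level, x_sum, cdf_sum)
    print(f"P(sea level + run-up > {level} m) = {p_exceed:.4f}")
    ```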

  19. Investigation of the relation between the return periods of major drought characteristics using copula functions

    NASA Astrophysics Data System (ADS)

    Hüsami Afşar, Mehdi; Unal Şorman, Ali; Tugrul Yilmaz, Mustafa

    2016-04-01

    Different drought characteristics (e.g. duration, average severity, and average areal extent) often have a monotonic relation: an increase in the magnitude of one is typically followed by a similar increase in the magnitude of another. It is therefore viable to establish a relationship between different drought characteristics with the goal of predicting one from the others. Copula functions, which relate different variables through their joint and conditional cumulative probability distributions, are often used to statistically model drought characteristics. In this study, bivariate and trivariate joint probabilities of these characteristics are obtained over Ankara (Turkey) between 1960 and 2013. Copula-based return period estimation for drought duration, average severity, and average areal extent shows that joint probabilities of these characteristics can be obtained satisfactorily. Among the copula families investigated in this study, the elliptical family (i.e. the normal and Student's t copula functions) resulted in the lowest root mean square error. (This study was supported by TUBITAK fund #114Y676.)

  20. No-signaling quantum key distribution: solution by linear programming

    NASA Astrophysics Data System (ADS)

    Hwang, Won-Young; Bae, Joonwoo; Killoran, Nathan

    2015-02-01

    We outline a straightforward approach for obtaining a secret key rate using only no-signaling constraints and linear programming. Assuming an individual attack, we consider all possible joint probabilities. Initially, we study only the case where Eve has binary outcomes, and we impose constraints due to the no-signaling principle and the given measurement outcomes. Within the remaining space of joint probabilities, we use linear programming to obtain a bound on the probability of Eve correctly guessing Bob's bit. We then make use of an inequality that relates this guessing probability to the mutual information between Bob and a more general Eve, who is not binary-restricted. Putting our computed bound together with the Csiszár-Körner formula, we obtain a positive key generation rate. The optimal value of this rate agrees with known results but was calculated in a more straightforward way, offering the potential of generalization to different scenarios.

  1. Biomechanical Tolerance of Calcaneal Fractures

    PubMed Central

    Yoganandan, Narayan; Pintar, Frank A.; Gennarelli, Thomas A.; Seipel, Robert; Marks, Richard

    1999-01-01

    Biomechanical studies have been conducted in the past to understand the mechanisms of injury to the foot-ankle complex. However, statistically based tolerance criteria for calcaneal complex injuries are lacking. Consequently, this research was designed to derive a probability distribution that represents human calcaneal tolerance under impact loading such as that encountered in vehicular collisions. Information for deriving the distribution was obtained from experiments on unembalmed human cadaver lower extremities. Briefly, the protocol was as follows. The knee joint was disarticulated such that the entire lower extremity distal to the knee joint remained intact. The proximal tibia was fixed in polymethylmethacrylate. The specimens were aligned and impact loading was applied using mini-sled pendulum equipment. The pendulum impactor dynamically loaded the plantar aspect of the foot once. Following the test, specimens were palpated and radiographs in multiple planes were obtained. Injuries were classified into no fracture and extra- and intra-articular fractures of the calcaneus. There were 14 cases of no injury and 12 cases of calcaneal fracture. The fracture forces (mean: 7802 N) were significantly different (p<0.01) from the forces in the no-injury group (mean: 4144 N). The probability of calcaneal fracture determined using logistic regression indicated that a force of 6.2 kN corresponds to a 50 percent probability of calcaneal fracture. The derived probability distribution is useful in the design of dummies and vehicular surfaces.
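    The tolerance estimate quoted above comes from a logistic regression of fracture outcome on impact force; a brief sketch with synthetic forces standing in for the cadaver data:

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import expit

    rng = np.random.default_rng(5)
    force_kN = np.concatenate([rng.normal(4.1, 1.0, 14),      # synthetic no-injury forces
                               rng.normal(7.8, 1.5, 12)])     # synthetic fracture forces
    fracture = np.concatenate([np.zeros(14), np.ones(12)])

    def neg_log_lik(beta):
        p = np.clip(expit(beta[0] + beta[1] * force_kN), 1e-12, 1.0 - 1e-12)
        return -np.sum(fracture * np.log(p) + (1.0 - fracture) * np.log(1.0 - p))

    b0, b1 = minimize(neg_log_lik, x0=[0.0, 0.0], method="Nelder-Mead").x
    force_50 = -b0 / b1                                        # force at 50% fracture probability
    print(f"estimated 50% fracture-probability force: {force_50:.1f} kN")
    ```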

  2. A short walk in quantum probability.

    PubMed

    Hudson, Robin

    2018-04-28

    This is a personal survey of aspects of quantum probability related to the Heisenberg commutation relation for canonical pairs. Using the failure, in general, of non-negativity of the Wigner distribution for canonical pairs to motivate a more satisfactory quantum notion of joint distribution, we visit a central limit theorem for such pairs and a resulting family of quantum planar Brownian motions which deform the classical planar Brownian motion, together with a corresponding family of quantum stochastic areas. This article is part of the themed issue 'Hilbert's sixth problem'. © 2018 The Author(s).

  3. Correlation signatures of wet soils and snows. [algorithm development and computer programming

    NASA Technical Reports Server (NTRS)

    Phillips, M. R.

    1972-01-01

    Interpretation, analysis, and development of algorithms have provided the necessary computational programming tools for soil data processing, data handling, and analysis. The algorithms that have been developed thus far are adequate and have proven successful for several preliminary and fundamental applications such as software interfacing capabilities, probability distributions, grey level print plotting, contour plotting, isometric data displays, joint probability distributions, boundary mapping, channel registration, and ground scene classification. A description of an Earth Resources Flight Data Processor (ERFDP), which handles and processes earth resources data under a user's control, is provided.

  4. Generalized monogamy of contextual inequalities from the no-disturbance principle.

    PubMed

    Ramanathan, Ravishankar; Soeda, Akihito; Kurzyński, Paweł; Kaszlikowski, Dagomir

    2012-08-03

    In this Letter, we demonstrate that the property of monogamy of Bell violations seen for no-signaling correlations in composite systems can be generalized to the monogamy of contextuality in single systems obeying the Gleason property of no disturbance. We show how one can construct monogamies for contextual inequalities by using the graph-theoretic technique of vertex decomposition of a graph representing a set of measurements into subgraphs of suitable independence numbers that themselves admit a joint probability distribution. After establishing that all the subgraphs that are chordal graphs admit a joint probability distribution, we formulate a precise graph-theoretic condition that gives rise to the monogamy of contextuality. We also show how such monogamies arise within quantum theory for a single four-dimensional system and interpret violation of these relations in terms of a violation of causality. These monogamies can be tested with current experimental techniques.
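
    As a small illustration of the graph-theoretic ingredient (chordal subgraphs admit a joint probability distribution), the snippet below checks chordality with networkx for a 5-cycle compatibility graph and for one of its subgraphs; the particular graphs are toy choices, not the decomposition used in the Letter.

```python
# Illustrative chordality check for toy measurement-compatibility graphs.
import networkx as nx

cycle5 = nx.cycle_graph(5)                      # e.g., a pentagon compatibility graph
print("5-cycle chordal?", nx.is_chordal(cycle5))

# dropping one vertex leaves a path graph, which is chordal (no chordless 4+ cycles)
sub = cycle5.subgraph([0, 1, 2, 3]).copy()
print("4-vertex subgraph chordal?", nx.is_chordal(sub))
```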

  5. Distributed Optimization

    NASA Technical Reports Server (NTRS)

    Macready, William; Wolpert, David

    2005-01-01

    We demonstrate a new framework for analyzing and controlling distributed systems, by solving constrained optimization problems with an algorithm based on that framework. The framework is an information-theoretic extension of conventional full-rationality game theory to allow bounded rational agents. The associated optimization algorithm is a game in which agents control the variables of the optimization problem. They do this by jointly minimizing a Lagrangian of (the probability distribution of) their joint state. The updating of the Lagrange parameters in that Lagrangian is a form of automated annealing, one that focuses the multi-agent system on the optimal pure strategy. We present computer experiments for the k-sat constraint satisfaction problem and for unconstrained minimization of NK functions.

  6. Comparison of Bootstrapping and Markov Chain Monte Carlo for Copula Analysis of Hydrological Droughts

    NASA Astrophysics Data System (ADS)

    Yang, P.; Ng, T. L.; Yang, W.

    2015-12-01

    Effective water resources management depends on the reliable estimation of the uncertainty of drought events. Confidence intervals (CIs) are commonly applied to quantify this uncertainty. A CI seeks to be at the minimal length necessary to cover the true value of the estimated variable with the desired probability. In drought analysis, where two or more variables (e.g., duration and severity) are often used to describe a drought, copulas have been found suitable for representing the joint probability behavior of these variables. However, the comprehensive assessment of the parameter uncertainties of copulas of droughts has been largely ignored, and the few studies that have recognized this issue have not explicitly compared the various methods to produce the best CIs. Thus, the objective of this study is to compare the CIs generated using two widely applied uncertainty estimation methods, bootstrapping and Markov Chain Monte Carlo (MCMC). To achieve this objective, (1) the marginal distributions lognormal, Gamma, and Generalized Extreme Value, and the copula functions Clayton, Frank, and Plackett are selected to construct joint probability functions of two drought related variables; (2) the resulting joint functions are then fitted to 200 sets of simulated realizations of drought events with known distribution and extreme parameters; and (3) from there, using bootstrapping and MCMC, CIs of the parameters are generated and compared. The effect of an informative prior on the CIs generated by MCMC is also evaluated. CIs are produced for different sample sizes (50, 100, and 200) of the simulated drought events for fitting the joint probability functions. Preliminary results assuming lognormal marginal distributions and the Clayton copula function suggest that for cases with small or medium sample sizes (~50-100), MCMC is the superior method if an informative prior exists. Where an informative prior is unavailable, for small sample sizes (~50), both bootstrapping and MCMC yield the same level of performance, and for medium sample sizes (~100), bootstrapping is better. For cases with a large sample size (~200), there is little difference between the CIs generated using bootstrapping and MCMC regardless of whether or not an informative prior exists.
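
    A minimal sketch of the bootstrapping branch only, under assumed toy data: a percentile confidence interval for the Clayton copula parameter, with the parameter estimated by inverting Kendall's tau (theta = 2*tau/(1 - tau)). The simulated duration/severity sample and the tau-inversion estimator are illustrative assumptions; the study's MCMC branch and its priors are not reproduced here.

```python
# Percentile bootstrap CI for a Clayton copula parameter on synthetic drought data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 100
duration = rng.lognormal(1.0, 0.5, n)              # toy drought durations
severity = duration * rng.gamma(2.0, 0.5, n)       # dependent toy severities

def clayton_theta(x, y):
    # moment-style estimator: invert Kendall's tau for the Clayton family
    tau, _ = stats.kendalltau(x, y)
    return 2 * tau / (1 - tau)

theta_hat = clayton_theta(duration, severity)
boot = []
for _ in range(2000):
    i = rng.integers(0, n, n)                      # resample pairs with replacement
    boot.append(clayton_theta(duration[i], severity[i]))

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"theta = {theta_hat:.2f}, 95% bootstrap CI = ({lo:.2f}, {hi:.2f})")
```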

  7. Serial Spike Time Correlations Affect Probability Distribution of Joint Spike Events.

    PubMed

    Shahi, Mina; van Vreeswijk, Carl; Pipa, Gordon

    2016-01-01

    Detecting the existence of temporally coordinated spiking activity, and its role in information processing in the cortex, has remained a major challenge for neuroscience research. Different methods and approaches have been suggested to test whether the observed synchronized events are significantly different from those expected by chance. To analyze the simultaneous spike trains for precise spike correlation, these methods typically model the spike trains as a Poisson process, implying that the generation of each spike is independent of all the other spikes. However, studies have shown that neural spike trains exhibit dependence among spike sequences, such as the absolute and relative refractory periods which govern the spike probability of the oncoming action potential based on the time of the last spike, or the bursting behavior, which is characterized by short epochs of rapid action potentials, followed by longer episodes of silence. Here we investigate non-renewal processes with an inter-spike interval distribution model that incorporates the spike-history dependence of individual neurons. For that, we use the Monte Carlo method to estimate the full shape of the coincidence count distribution and to generate false positives for coincidence detection. The results show that compared to the distributions based on homogeneous Poisson processes, and also non-Poisson processes, the width of the distribution of joint spike events changes. Non-renewal processes can lead to either heavy-tailed or narrow coincidence distributions. We conclude that small differences in the exact autostructure of the point process can cause large differences in the width of a coincidence distribution. Therefore, manipulations of the autostructure for the estimation of significance of joint spike events seem to be inadequate.
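
    A hedged sketch of the Monte Carlo idea: estimate the coincidence-count distribution for two independent Poisson spike trains and for two gamma-renewal (non-Poisson, history-dependent) trains of the same rate, and compare their widths. The rates, bin width, and gamma shape parameter are assumed values, not those of the paper.

```python
# Monte Carlo comparison of coincidence-count spread: Poisson vs. gamma-renewal trains.
import numpy as np

rng = np.random.default_rng(7)
rate, T, bin_w, shape = 20.0, 10.0, 0.005, 4.0   # Hz, s, s, gamma shape (assumed)

def spike_train(renewal_shape):
    # cumulative sums of i.i.d. inter-spike intervals; shape = 1 gives a Poisson process
    isi = rng.gamma(renewal_shape, 1.0 / (rate * renewal_shape), int(3 * rate * T))
    t = np.cumsum(isi)
    return t[t < T]

def coincidences(t1, t2):
    # number of bins in which both trains fire at least once
    edges = np.arange(0, T + bin_w, bin_w)
    c1, _ = np.histogram(t1, edges)
    c2, _ = np.histogram(t2, edges)
    return np.sum((c1 > 0) & (c2 > 0))

for label, k in [("Poisson", 1.0), ("gamma renewal", shape)]:
    counts = [coincidences(spike_train(k), spike_train(k)) for _ in range(2000)]
    print(f"{label}: mean = {np.mean(counts):.2f}, std = {np.std(counts):.2f}")
```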

  9. Excluding joint probabilities from quantum theory

    NASA Astrophysics Data System (ADS)

    Allahverdyan, Armen E.; Danageozian, Arshag

    2018-03-01

    Quantum theory does not provide a unique definition for the joint probability of two noncommuting observables, which is the next important question after Born's probability rule for a single observable. Instead, various definitions have been suggested, e.g., via quasiprobabilities or via hidden-variable theories. After reviewing open issues of the joint probability, we relate it to quantum imprecise probabilities, which are noncontextual and are consistent with all constraints expected from a quantum probability. We study two noncommuting observables in a two-dimensional Hilbert space and show that there is no precise joint probability that applies for any quantum state and is consistent with imprecise probabilities. This contrasts with theorems by Bell and Kochen-Specker that exclude joint probabilities for more than two noncommuting observables in Hilbert spaces of dimension larger than two. If measurement contexts are included in the definition, joint probabilities are not excluded anymore, but they are still constrained by imprecise probabilities.

  10. Probability distributions for multimeric systems.

    PubMed

    Albert, Jaroslav; Rooman, Marianne

    2016-01-01

    We propose a fast and accurate method of obtaining the equilibrium mono-modal joint probability distributions for multimeric systems. The method necessitates only two assumptions: the copy number of all species of molecule may be treated as continuous; and the probability density functions (pdf) are well approximated by multivariate skew normal distributions (MSND). Starting from the master equation, we convert the problem into a set of equations for the statistical moments, which are then expressed in terms of the parameters intrinsic to the MSND. Using an optimization package in Mathematica, we minimize a Euclidean distance function comprising the sum of the squared differences between the left- and right-hand sides of these equations. Comparison of results obtained via our method with those rendered by the Gillespie algorithm demonstrates our method to be highly accurate as well as efficient.

  11. Estimate of Probability of Crack Detection from Service Difficulty Report Data.

    DOT National Transportation Integrated Search

    1995-09-01

    The initiation and growth of cracks in a fuselage lap joint were simulated. Stochastic distribution of crack initiation and rivet interference were included. The simulation also contained a simplified crack growth. Nominal crack growth behavior of la...

  12. Estimate of probability of crack detection from service difficulty report data

    DOT National Transportation Integrated Search

    1994-09-01

    The initiation and growth of cracks in a fuselage lap joint were simulated. Stochastic distribution of crack initiation and rivet interference were included. The simulation also contained a simplified crack growth. Nominal crack growth behavior of la...

  13. Slant path rain attenuation and path diversity statistics obtained through radar modeling of rain structure

    NASA Technical Reports Server (NTRS)

    Goldhirsh, J.

    1984-01-01

    Single and joint terminal slant path attenuation statistics at frequencies of 28.56 and 19.04 GHz have been derived, employing a radar data base obtained over a three-year period at Wallops Island, VA. Statistics were independently obtained for path elevation angles of 20, 45, and 90 deg for purposes of examining how elevation angle influences both single-terminal and joint probability distributions. Both diversity gains and the dependence of the autocorrelation function on site spacing and elevation angle were determined employing the radar modeling results. Comparisons with other investigators are presented. An independent path elevation angle prediction technique was developed and demonstrated to fit well the radar-derived single-terminal and joint-terminal cumulative fade distributions at various elevation angles.

  14. Economic Statistical Design of Integrated X-bar-S Control Chart with Preventive Maintenance and General Failure Distribution

    PubMed Central

    Caballero Morales, Santiago Omar

    2013-01-01

    The application of Preventive Maintenance (PM) and Statistical Process Control (SPC) are important practices to achieve high product quality, a low frequency of failures, and cost reduction in a production process. However, some aspects of their joint application have not been explored in depth. First, most SPC is performed with the X-bar control chart, which does not fully consider the variability of the production process. Second, many studies on the design of control charts consider just the economic aspect, while statistical restrictions must be considered to achieve charts with low probabilities of false detection of failures. Third, the effect of PM on processes with different failure probability distributions has not been studied. Hence, this paper covers these points, presenting the Economic Statistical Design (ESD) of joint X-bar-S control charts with a cost model that integrates PM with a general failure distribution. Experiments showed statistically significant reductions in costs when PM is performed on processes with high failure rates, and reductions in the sampling frequency of units for testing under SPC. PMID:23527082

  15. Simulation of Stochastic Processes by Coupled ODE-PDE

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    2008-01-01

    A document discusses the emergence of randomness in solutions of coupled, fully deterministic ODE-PDE (ordinary differential equations-partial differential equations) due to failure of the Lipschitz condition as a new phenomenon. It is possible to exploit the special properties of ordinary differential equations (represented by an arbitrarily chosen dynamical system) coupled with the corresponding Liouville equations (used to describe the evolution of initial uncertainties in terms of a joint probability distribution) in order to simulate stochastic processes with the prescribed probability distributions. The important advantage of the proposed approach is that the simulation does not require a random-number generator.

  16. Comments on "Interpretations of quantum mechanics, joint measurement of incompatible observables, and counterfactual definiteness"

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stapp, H.P.

    1994-12-01

    Some seeming logical deficiencies in a recent paper are described. The author responds to the arguments of de Muynck, De Baere, and Martens (MDM), who argue that it is widely accepted today that some sort of nonlocal effect is needed to resolve the problems raised by the works of Einstein, Podolsky, and Rosen (EPR) and John Bell. In MDM a variety of arguments are set forth that aim to invalidate the existing purported proofs of nonlocality and to provide, moreover, a local solution to the problems uncovered by EPR and Bell. Much of the argumentation in MDM is based on the idea of introducing 'nonideal' measurements, which, according to MDM, allow one to construct joint probability distributions for incompatible observables. The existence of a bona fide joint probability distribution for the incompatible observables occurring in the EPRB experiments would entail that Bell's inequalities can be satisfied, and hence that the mathematical basis for the nonlocal effects would disappear. This result would apparently allow one to eliminate the need for nonlocal effects by considering experiments of this new kind.

  17. A probabilistic multi-criteria decision making technique for conceptual and preliminary aerospace systems design

    NASA Astrophysics Data System (ADS)

    Bandte, Oliver

    It has always been the intention of systems engineering to invent or produce the best product possible. Many design techniques have been introduced over the course of decades that try to fulfill this intention. Unfortunately, no technique has succeeded in combining multi-criteria decision making with probabilistic design. The design technique developed in this thesis, the Joint Probabilistic Decision Making (JPDM) technique, successfully overcomes this deficiency by generating a multivariate probability distribution that serves in conjunction with a criterion value range of interest as a universally applicable objective function for multi-criteria optimization and product selection. This new objective function constitutes a meaningful metric, called Probability of Success (POS), that allows the customer or designer to make a decision based on the chance of satisfying the customer's goals. In order to incorporate a joint probabilistic formulation into the systems design process, two algorithms are created that allow for an easy implementation into a numerical design framework: the (multivariate) Empirical Distribution Function and the Joint Probability Model. The Empirical Distribution Function estimates the probability that an event occurred by counting how many times it occurred in a given sample. The Joint Probability Model, on the other hand, is an analytical parametric model for the multivariate joint probability. It comprises the product of the univariate criterion distributions, generated by the traditional probabilistic design process, multiplied by a correlation function that is based on available correlation information between pairs of random variables. JPDM is an excellent tool for multi-objective optimization and product selection, because of its ability to transform disparate objectives into a single figure of merit, the likelihood of successfully meeting all goals or POS. The advantage of JPDM over other multi-criteria decision making techniques is that POS constitutes a single optimizable function or metric that enables a comparison of all alternative solutions on an equal basis. Hence, POS allows for the use of any standard single-objective optimization technique available and simplifies a complex multi-criteria selection problem into a simple ordering problem, where the solution with the highest POS is best. By distinguishing between controllable and uncontrollable variables in the design process, JPDM can account for the uncertain values of the uncontrollable variables that are inherent to the design problem, while facilitating an easy adjustment of the controllable ones to achieve the highest possible POS. Finally, JPDM's superiority over current multi-criteria decision making techniques is demonstrated with an optimization of a supersonic transport concept and ten contrived equations as well as a product selection example, determining an airline's best choice among Boeing's B-747, B-777, Airbus' A340, and a Supersonic Transport. The optimization examples demonstrate JPDM's ability to produce a better solution with a higher POS than an Overall Evaluation Criterion or Goal Programming approach. Similarly, the product selection example demonstrates JPDM's ability to produce a better solution with a higher POS and different ranking than the Overall Evaluation Criterion or Technique for Order Preferences by Similarity to the Ideal Solution (TOPSIS) approach.
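
    A minimal sketch of the Probability of Success idea, under assumed numbers: POS is the probability that all criteria simultaneously fall in their target ranges, estimated here by Monte Carlo from an assumed bivariate normal criterion distribution rather than from the Empirical Distribution Function or Joint Probability Model of the thesis.

```python
# Monte Carlo Probability of Success for two correlated design criteria (toy values).
import numpy as np

rng = np.random.default_rng(3)
mean = [100.0, 0.80]                   # e.g., gross weight (klb) and an efficiency metric
cov = [[25.0, 0.15], [0.15, 0.0025]]   # assumed variances and covariance
samples = rng.multivariate_normal(mean, cov, 100_000)

# target ranges of interest for each criterion (assumed)
meets_all = (samples[:, 0] <= 105.0) & (samples[:, 1] >= 0.78)
print(f"Probability of Success = {meets_all.mean():.3f}")
```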

  18. Maximum a posteriori joint source/channel coding

    NASA Technical Reports Server (NTRS)

    Sayood, Khalid; Gibson, Jerry D.

    1991-01-01

    A maximum a posteriori probability (MAP) approach to joint source/channel coder design is presented in this paper. This method attempts to explore a technique for designing joint source/channel codes, rather than ways of distributing bits between source coders and channel coders. For a nonideal source coder, MAP arguments are used to design a decoder which takes advantage of redundancy in the source coder output to perform error correction. Once the decoder is obtained, it is analyzed with the purpose of obtaining 'desirable properties' of the channel input sequence for improving overall system performance. Finally, an encoder design which incorporates these properties is proposed.

  19. A comprehensive model to determine the effects of temperature and species fluctuations on reactions in turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Antaki, P. J.

    1981-01-01

    The joint probability distribution function (pdf), which is a modification of the bivariate Gaussian pdf, is discussed and results are presented for a global reaction model using the joint pdf. An alternative joint pdf is discussed. A criterion which permits the selection of temperature pdf's in different regions of turbulent, reacting flow fields is developed. Two principal approaches to the determination of reaction rates in computer programs containing detailed chemical kinetics are outlined. These models represent a practical solution to the modeling of species reaction rates in turbulent, reacting flows.

  20. On Orbital Elements of Extrasolar Planetary Candidates and Spectroscopic Binaries

    NASA Technical Reports Server (NTRS)

    Stepinski, T. F.; Black, D. C.

    2001-01-01

    We estimate probability densities of orbital elements, periods and eccentricities, for the population of extrasolar planetary candidates (EPC) and, separately, for the population of spectroscopic binaries (SB) with solar-type primaries. We construct empirical cumulative distribution functions (CDFs) in order to infer probability distribution functions (PDFs) for orbital periods and eccentricities. We also derive a joint probability density for period-eccentricity pairs in each population. Comparison of the respective distributions reveals that in all cases the EPC and SB populations are, in the context of orbital elements, indistinguishable from each other to a high degree of statistical significance. Probability densities of orbital periods in both populations have a P^(-1) functional form, whereas the PDFs of eccentricities can be best characterized as a Gaussian with a mean of about 0.35 and standard deviation of about 0.2, turning into a flat distribution at small values of eccentricity. These remarkable similarities between EPC and SB must be taken into account by theories aimed at explaining the origin of extrasolar planetary candidates, and constitute an important clue as to their ultimate nature.

  1. Regional analysis and derivation of copula-based drought Severity-Area-Frequency curve in Lake Urmia basin, Iran.

    PubMed

    Amirataee, Babak; Montaseri, Majid; Rezaie, Hossein

    2018-01-15

    Droughts are extreme events characterized by temporal duration and large-scale spatial effects. In general, regional droughts are affected by the general circulation of the atmosphere (at large scale) and by regional natural factors, including topography, natural lakes, and the position relative to the centers and paths of ocean currents (at small scale), and their effects are not uniform over a wide area. Therefore, investigation of the drought Severity-Area-Frequency (S-A-F) curve is an essential task for developing decision-making rules for regional drought management. This study developed the copula-based joint probability distribution of drought severity and percent of area under drought across the Lake Urmia basin, Iran. To this end, one-month Standardized Precipitation Index (SPI) values during 1971-2013 were applied across 24 rainfall stations in the study area. Then, seven copula functions of various families, including the Clayton, Gumbel, Frank, Joe, Galambos, Plackett and Normal copulas, were used to model the joint probability distribution of drought severity and drought area. Using AIC, BIC and RMSE criteria, the Frank copula was selected as the most appropriate copula for developing the joint probability distribution of severity and percent of area under drought across the study area. Based on the Frank copula, the drought S-A-F curve for the study area was derived. The results indicated that severe/extreme drought and non-drought (wet) behaviors have affected the majority of the study area (Lake Urmia basin). However, the area covered by the specific semi-drought effects is limited and has been subject to significant variations. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Dose-volume histogram prediction using density estimation.

    PubMed

    Skarpman Munter, Johanna; Sjölund, Jens

    2015-09-07

    Knowledge of what dose-volume histograms can be expected for a previously unseen patient could increase consistency and quality in radiotherapy treatment planning. We propose a machine learning method that uses previous treatment plans to predict such dose-volume histograms. The key to the approach is the framing of dose-volume histograms in a probabilistic setting. The training consists of estimating, from the patients in the training set, the joint probability distribution of some predictive features and the dose. The joint distribution immediately provides an estimate of the conditional probability of the dose given the values of the predictive features. The prediction consists of estimating, from the new patient, the distribution of the predictive features and marginalizing the conditional probability from the training over this. Integrating the resulting probability distribution for the dose yields an estimate of the dose-volume histogram. To illustrate how the proposed method relates to previously proposed methods, we use the signed distance to the target boundary as a single predictive feature. As a proof-of-concept, we predicted dose-volume histograms for the brainstems of 22 acoustic schwannoma patients treated with stereotactic radiosurgery, and for the lungs of 9 lung cancer patients treated with stereotactic body radiation therapy. Comparing with two previous attempts at dose-volume histogram prediction we find that, given the same input data, the predictions are similar. In summary, we propose a method for dose-volume histogram prediction that exploits the intrinsic probabilistic properties of dose-volume histograms. We argue that the proposed method makes up for some deficiencies in previously proposed methods, thereby potentially increasing ease of use, flexibility and ability to perform well with small amounts of training data.
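
    A rough sketch of the density-estimation chain on toy data: estimate the joint density of (signed distance to the target boundary, dose) from training voxels, form the conditional density of dose given distance, marginalize over a new patient's distance distribution, and integrate to a cumulative dose-volume histogram. The dose fall-off model, kernel density estimator, and grid are all illustrative assumptions.

```python
# Toy DVH prediction: joint KDE -> conditional density of dose -> marginalize -> DVH.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(5)
dist_train = rng.uniform(0, 30, 2000)                                  # signed distance (mm), toy
dose_train = 60 * np.exp(-dist_train / 10) + rng.normal(0, 2, 2000)    # toy dose fall-off (Gy)
joint_kde = gaussian_kde(np.vstack([dist_train, dose_train]))
dist_kde = gaussian_kde(dist_train)

dose_grid = np.linspace(0, 70, 141)
dx = dose_grid[1] - dose_grid[0]
dist_new = rng.uniform(0, 30, 200)          # the new patient's voxel distances (assumed)

# predicted dose density: average over voxels of p(dose | distance)
pred = np.zeros_like(dose_grid)
for d in dist_new:
    pts = np.vstack([np.full_like(dose_grid, d), dose_grid])
    pred += joint_kde(pts) / dist_kde(d)
pred /= len(dist_new)
pred /= pred.sum() * dx                     # renormalize on the grid

# cumulative DVH: fraction of volume receiving at least a given dose
dvh = 1.0 - np.cumsum(pred) * dx
print("fraction of volume above 20 Gy:", float(np.interp(20.0, dose_grid, dvh)))
```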

  3. The emergence of different tail exponents in the distributions of firm size variables

    NASA Astrophysics Data System (ADS)

    Ishikawa, Atushi; Fujimoto, Shouji; Watanabe, Tsutomu; Mizuno, Takayuki

    2013-05-01

    We discuss a mechanism through which inversion symmetry (i.e., invariance of a joint probability density function under the exchange of variables) and Gibrat’s law generate power-law distributions with different tail exponents. Using a dataset of firm size variables, that is, tangible fixed assets K, the number of workers L, and sales Y, we confirm that these variables have power-law tails with different exponents, and that inversion symmetry and Gibrat’s law hold. Based on these findings, we argue that there exists a plane in the three dimensional space (logK,logL,logY), with respect to which the joint probability density function for the three variables is invariant under the exchange of variables. We provide empirical evidence suggesting that this plane fits the data well, and argue that the plane can be interpreted as the Cobb-Douglas production function, which has been extensively used in various areas of economics since it was first introduced almost a century ago.

  4. Breakdown of the classical description of a local system.

    PubMed

    Kot, Eran; Grønbech-Jensen, Niels; Nielsen, Bo M; Neergaard-Nielsen, Jonas S; Polzik, Eugene S; Sørensen, Anders S

    2012-06-08

    We provide a straightforward demonstration of a fundamental difference between classical and quantum mechanics for a single local system: namely, the absence of a joint probability distribution of the position x and momentum p. Elaborating on a recently reported criterion by Bednorz and Belzig [Phys. Rev. A 83, 052113 (2011)] we derive a simple criterion that must be fulfilled for any joint probability distribution in classical physics. We demonstrate the violation of this criterion using the homodyne measurement of a single photon state, thus proving a straightforward signature of the breakdown of a classical description of the underlying state. Most importantly, the criterion used does not rely on quantum mechanics and can thus be used to demonstrate nonclassicality of systems not immediately apparent to exhibit quantum behavior. The criterion is directly applicable to any system described by the continuous canonical variables x and p, such as a mechanical or an electrical oscillator and a collective spin of a large ensemble.

  5. Bayesian seismic inversion based on rock-physics prior modeling for the joint estimation of acoustic impedance, porosity and lithofacies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Passos de Figueiredo, Leandro, E-mail: leandrop.fgr@gmail.com; Grana, Dario; Santos, Marcio

    We propose a Bayesian approach for seismic inversion to estimate acoustic impedance, porosity and lithofacies within the reservoir conditioned to post-stack seismic and well data. The link between elastic and petrophysical properties is given by a joint prior distribution for the logarithm of impedance and porosity, based on a rock-physics model. The well conditioning is performed through a background model obtained by well log interpolation. Two different approaches are presented: in the first approach, the prior is defined by a single Gaussian distribution, whereas in the second approach it is defined by a Gaussian mixture to represent the well data multimodal distribution and link the Gaussian components to different geological lithofacies. The forward model is based on a linearized convolutional model. For the single Gaussian case, we obtain an analytical expression for the posterior distribution, resulting in a fast algorithm to compute the solution of the inverse problem, i.e. the posterior distribution of acoustic impedance and porosity as well as the facies probability given the observed data. For the Gaussian mixture prior, it is not possible to obtain the distributions analytically, hence we propose a Gibbs algorithm to perform the posterior sampling and obtain several reservoir model realizations, allowing an uncertainty analysis of the estimated properties and lithofacies. Both methodologies are applied to a real seismic dataset with three wells to obtain 3D models of acoustic impedance, porosity and lithofacies. The methodologies are validated through a blind well test and compared to a standard Bayesian inversion approach. Using the probability of the reservoir lithofacies, we also compute a 3D isosurface probability model of the main oil reservoir in the studied field.

  6. Bayesian data analysis tools for atomic physics

    NASA Astrophysics Data System (ADS)

    Trassinelli, Martino

    2017-10-01

    We present an introduction to some concepts of Bayesian data analysis in the context of atomic physics. Starting from basic rules of probability, we present Bayes' theorem and its applications. In particular we discuss how to calculate simple and joint probability distributions and the Bayesian evidence, a model-dependent quantity that allows one to assign probabilities to different hypotheses from the analysis of the same data set. To give some practical examples, these methods are applied to two concrete cases. In the first example, the presence or absence of a satellite line in an atomic spectrum is investigated. In the second example, we determine the most probable model among a set of possible profiles from the analysis of a statistically poor spectrum. We also show how to calculate the probability distribution of the main spectral component without having to determine the spectrum modeling uniquely. For these two studies, we implement the program Nested_fit to calculate the different probability distributions and other related quantities. Nested_fit is a Fortran90/Python code developed during the last years for the analysis of atomic spectra. As indicated by the name, it is based on the nested sampling algorithm, which is presented in detail together with the program itself.
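
    A toy illustration (not the Nested_fit code) of the evidence-based model comparison mentioned above: is a weak satellite line present in a noisy spectrum? The line shapes, priors, noise level, and grid integration of the evidence are all invented for the example.

```python
# Toy model comparison by Bayesian evidence: main line only vs. main line + satellite.
import numpy as np

rng = np.random.default_rng(11)
x = np.linspace(-5, 5, 200)

def peak(x, mu, s=0.6):
    return np.exp(-0.5 * ((x - mu) / s) ** 2)

sigma = 0.1                                          # assumed noise level
data = peak(x, 0.0) + 0.25 * peak(x, 2.0) + rng.normal(0, sigma, x.size)

def loglike(model):
    return -0.5 * np.sum((data - model) ** 2 / sigma ** 2)

# Model A: main line only (no free parameter), so the evidence equals the likelihood.
logZ_A = loglike(peak(x, 0.0))

# Model B: main line + satellite with amplitude a, uniform prior on [0, 1];
# the evidence is the prior-weighted average of the likelihood (grid approximation).
amps = np.linspace(0.0, 1.0, 201)
logL_B = np.array([loglike(peak(x, 0.0) + a * peak(x, 2.0)) for a in amps])
logZ_B = logL_B.max() + np.log(np.mean(np.exp(logL_B - logL_B.max())))

print(f"log Bayes factor (satellite vs. no satellite): {logZ_B - logZ_A:.1f}")
```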

  7. Scalable Joint Models for Reliable Uncertainty-Aware Event Prediction.

    PubMed

    Soleimani, Hossein; Hensman, James; Saria, Suchi

    2017-08-21

    Missing data and noisy observations pose significant challenges for reliably predicting events from irregularly sampled multivariate time series (longitudinal) data. Imputation methods, which are typically used for completing the data prior to event prediction, lack a principled mechanism to account for the uncertainty due to missingness. Alternatively, state-of-the-art joint modeling techniques can be used for jointly modeling the longitudinal and event data and compute event probabilities conditioned on the longitudinal observations. These approaches, however, make strong parametric assumptions and do not easily scale to multivariate signals with many observations. Our proposed approach consists of several key innovations. First, we develop a flexible and scalable joint model based upon sparse multiple-output Gaussian processes. Unlike state-of-the-art joint models, the proposed model can explain highly challenging structure including non-Gaussian noise while scaling to large data. Second, we derive an optimal policy for predicting events using the distribution of the event occurrence estimated by the joint model. The derived policy trades off the cost of a delayed detection versus incorrect assessments and abstains from making decisions when the estimated event probability does not satisfy the derived confidence criteria. Experiments on a large dataset show that the proposed framework significantly outperforms state-of-the-art techniques in event prediction.

  8. Spacing distribution functions for the one-dimensional point-island model with irreversible attachment

    NASA Astrophysics Data System (ADS)

    González, Diego Luis; Pimpinelli, Alberto; Einstein, T. L.

    2011-07-01

    We study the configurational structure of the point-island model for epitaxial growth in one dimension. In particular, we calculate the island gap and capture zone distributions. Our model is based on an approximate description of nucleation inside the gaps. Nucleation is described by the joint probability density pnXY(x,y), which represents the probability density of nucleation at position x within a gap of size y. Our proposed functional form for pnXY(x,y) describes the statistical behavior of the system excellently. We compare our analytical model with extensive numerical simulations.

  9. An analytical approach to gravitational lensing by an ensemble of axisymmetric lenses

    NASA Technical Reports Server (NTRS)

    Lee, Man Hoi; Spergel, David N.

    1990-01-01

    The problem of gravitational lensing by an ensemble of identical axisymmetric lenses randomly distributed on a single lens plane is considered and a formal expression is derived for the joint probability density of finding shear and convergence at a random point on the plane. The amplification probability for a source can be accurately estimated from the distribution in shear and convergence. This method is applied to two cases: lensing by an ensemble of point masses and by an ensemble of objects with Gaussian surface mass density. There is no convergence for point masses whereas shear is negligible for wide Gaussian lenses.

  10. Analysis of vector wind change with respect to time for Cape Kennedy, Florida

    NASA Technical Reports Server (NTRS)

    Adelfang, S. I.

    1978-01-01

    Multivariate analysis was used to determine the joint distribution of the four variables represented by the components of the wind vector at an initial time and after a specified elapsed time; this distribution is hypothesized to be quadrivariate normal. The fourteen statistics of this distribution, calculated from 15 years of twice-daily rawinsonde data, are presented by monthly reference periods for each month from 0 to 27 km. The hypotheses that the wind component change with respect to time is univariate normal, that the joint distribution of wind component changes is bivariate normal, and that the modulus of vector wind change is Rayleigh are tested by comparison with observed distributions. Statistics of the conditional bivariate normal distributions of vector wind at a future time, given the vector wind at an initial time, are derived. Wind changes over time periods from 1 to 5 hours, calculated from Jimsphere data, are presented. Extension of the theoretical prediction (based on rawinsonde data) of the wind component change standard deviation to time periods of 1 to 5 hours falls (with a few exceptions) within the 95 percentile confidence band of the population estimate obtained from the Jimsphere sample data. The joint distributions of wind change components, conditional wind components, and 1 km vector wind shear change components are illustrated by probability ellipses at the 95 percentile level.
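
    A small sketch of the conditional-distribution step: for a quadrivariate normal over the wind components (u0, v0) at the initial time and (u1, v1) after the elapsed time, compute the bivariate normal of (u1, v1) given observed (u0, v0). The mean vector and covariance matrix below are invented placeholders, not the published monthly statistics.

```python
# Conditional bivariate normal of future wind given initial wind (toy statistics).
import numpy as np

mu = np.array([2.0, 1.0, 2.5, 1.2])                 # mean wind components (m/s), assumed
cov = np.array([[16.0,  2.0, 10.0,  1.5],
                [ 2.0,  9.0,  1.5,  6.0],
                [10.0,  1.5, 16.0,  2.0],
                [ 1.5,  6.0,  2.0,  9.0]])          # assumed quadrivariate covariance

observed_now = np.array([5.0, -3.0])                # (u0, v0) at the initial time

S11, S12 = cov[:2, :2], cov[:2, 2:]
S21, S22 = cov[2:, :2], cov[2:, 2:]
K = S21 @ np.linalg.inv(S11)
mu_cond = mu[2:] + K @ (observed_now - mu[:2])      # conditional mean of (u1, v1)
cov_cond = S22 - K @ S12                            # conditional covariance

print("conditional mean:", mu_cond)
print("conditional covariance:\n", cov_cond)
```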

  11. Application of multivariate Gaussian detection theory to known non-Gaussian probability density functions

    NASA Astrophysics Data System (ADS)

    Schwartz, Craig R.; Thelen, Brian J.; Kenton, Arthur C.

    1995-06-01

    A statistical parametric multispectral sensor performance model was developed by ERIM to support mine field detection studies, multispectral sensor design/performance trade-off studies, and target detection algorithm development. The model assumes target detection algorithms and their performance models which are based on data assumed to obey multivariate Gaussian probability distribution functions (PDFs). The applicability of these algorithms and performance models can be generalized to data having non-Gaussian PDFs through the use of transforms which convert non-Gaussian data to Gaussian (or near-Gaussian) data. An example of one such transform is the Box-Cox power law transform. In practice, such a transform can be applied to non-Gaussian data prior to the introduction of a detection algorithm that is formally based on the assumption of multivariate Gaussian data. This paper presents an extension of these techniques to the case where the joint multivariate probability density function of the non-Gaussian input data is known, and where the joint estimate of the multivariate Gaussian statistics, under the Box-Cox transform, is desired. The jointly estimated multivariate Gaussian statistics can then be used to predict the performance of a target detection algorithm which has an associated Gaussian performance model.
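
    A short sketch of the Box-Cox step mentioned above: transform a skewed, non-Gaussian band toward Gaussianity before applying a Gaussian-based detector. The data are synthetic; the multivariate joint estimation discussed in the paper is not reproduced.

```python
# Box-Cox power-law transform of a skewed synthetic band toward near-Gaussianity.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
band = rng.gamma(2.0, 3.0, 5000)             # skewed stand-in for one spectral band

transformed, lam = stats.boxcox(band)        # maximum-likelihood estimate of the exponent
print(f"estimated lambda = {lam:.2f}")
print("skewness before/after:", stats.skew(band), stats.skew(transformed))
```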

  12. Process, System, Causality, and Quantum Mechanics: A Psychoanalysis of Animal Faith

    NASA Astrophysics Data System (ADS)

    Etter, Tom; Noyes, H. Pierre

    We shall argue in this paper that a central piece of modern physics does not really belong to physics at all but to elementary probability theory. Given a joint probability distribution J on a set of random variables containing x and y, define a link between x and y to be the condition x=y on J. Define the state D of a link x=y as the joint probability distribution matrix on x and y without the link. The two core laws of quantum mechanics are the Born probability rule, and the unitary dynamical law whose best-known form is the Schrödinger equation. Von Neumann formulated these two laws in the language of Hilbert space as prob(P) = trace(PD) and D'T = TD, respectively, where P is a projection, D and D' are (von Neumann) density matrices, and T is a unitary transformation. We'll see that if we regard link states as density matrices, the algebraic forms of these two core laws occur as completely general theorems about links. When we extend probability theory by allowing cases to count negatively, we find that the Hilbert space framework of quantum mechanics proper emerges from the assumption that all D's are symmetrical in rows and columns. On the other hand, Markovian systems emerge when we assume that one of every linked variable pair has a uniform probability distribution. By representing quantum and Markovian structure in this way, we see clearly both how they differ, and also how they can coexist in natural harmony with each other, as they must in quantum measurement, which we'll examine in some detail. Looking beyond quantum mechanics, we see how both structures have their special places in a much larger continuum of formal systems that we have yet to look for in nature.

  13. Bayesian Estimation of the DINA Model with Gibbs Sampling

    ERIC Educational Resources Information Center

    Culpepper, Steven Andrew

    2015-01-01

    A Bayesian model formulation of the deterministic inputs, noisy "and" gate (DINA) model is presented. Gibbs sampling is employed to simulate from the joint posterior distribution of item guessing and slipping parameters, subject attribute parameters, and latent class probabilities. The procedure extends concepts in Béguin and Glas,…

  14. Combined risk assessment of nonstationary monthly water quality based on Markov chain and time-varying copula.

    PubMed

    Shi, Wei; Xia, Jun

    2017-02-01

    Water quality risk management is a topic of intense global research, closely linked with sustainable water resource development. Ammonium nitrogen (NH3-N) and the permanganate index (CODMn), the focus indicators in the Huai River Basin, are selected to reveal their joint transition laws based on Markov theory. The time-varying moments model, with either time or a land cover index as the explanatory variable, is applied to build the time-varying marginal distributions of the water quality time series. A time-varying copula model, which takes into consideration the non-stationarity in the marginal distributions and/or the time variation in the dependence structure between the water quality series, is constructed to describe a bivariate frequency analysis for the NH3-N and CODMn series at the same monitoring gauge. The larger first-order Markov joint transition probability indicates that water quality states Class Vw, Class IV, and Class III will occur easily in the water body at Bengbu Sluice. Both the marginal distribution and copula models are nonstationary, and the explanatory variable time yields better performance than the land cover index in describing the non-stationarities in the marginal distributions. In modelling the dependence structure changes, the time-varying copula has a better fitting performance than a copula with a constant or time-trend dependence parameter. The largest synchronous encounter risk probability of NH3-N and CODMn simultaneously reaching Class V is 50.61%, while the asynchronous encounter risk probability is largest when NH3-N and CODMn are inferior to the Class V and Class IV water quality standards, respectively.
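
    A hedged sketch of the Markov ingredient only: estimating a first-order transition probability matrix for monthly water-quality classes from a categorical sequence. The class labels and the random sequence are invented; the copula part of the analysis is not shown.

```python
# First-order Markov transition matrix estimated from a toy monthly class sequence.
import numpy as np

rng = np.random.default_rng(9)
classes = ["III", "IV", "V", "worse than V"]      # assumed class labels
seq = rng.integers(0, 4, 300)                     # toy monthly class indices

counts = np.zeros((4, 4))
for s, t in zip(seq[:-1], seq[1:]):
    counts[s, t] += 1
trans = counts / counts.sum(axis=1, keepdims=True)  # row-normalize to probabilities
print(trans.round(2))
```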

  15. Supernova Cosmology Inference with Probabilistic Photometric Redshifts (SCIPPR)

    NASA Astrophysics Data System (ADS)

    Peters, Christina; Malz, Alex; Hlozek, Renée

    2018-01-01

    The Bayesian Estimation Applied to Multiple Species (BEAMS) framework employs probabilistic supernova type classifications to do photometric SN cosmology. This work extends BEAMS to replace high-confidence spectroscopic redshifts with photometric redshift probability density functions, a capability that will be essential in the era of the Large Synoptic Survey Telescope and other next-generation photometric surveys, when it will not be possible to perform spectroscopic follow-up on every SN. We present the Supernova Cosmology Inference with Probabilistic Photometric Redshifts (SCIPPR) Bayesian hierarchical model for constraining the cosmological parameters from photometric lightcurves and host galaxy photometry, which includes selection effects and is extensible to uncertainty in the redshift-dependent supernova type proportions. We create a pair of realistic mock catalogs of joint posteriors over supernova type, redshift, and distance modulus informed by photometric supernova lightcurves, and over redshift from simulated host galaxy photometry. We perform inference under our model to obtain a joint posterior probability distribution over the cosmological parameters and compare our results with other methods, namely: a spectroscopic subset, a subset of high-probability photometrically classified supernovae, and reducing the photometric redshift probability to a single measurement and error bar.

  16. Quantum probability assignment limited by relativistic causality.

    PubMed

    Han, Yeong Deok; Choi, Taeseung

    2016-03-14

    Quantum theory has nonlocal correlations, which bothered Einstein, but were found to satisfy relativistic causality. Correlation for a shared quantum state manifests itself, in the standard quantum framework, through joint probability distributions that can be obtained by applying state reduction and the probability assignment rule called the Born rule. Quantum correlations, which show nonlocality when the shared state is entangled, can be changed if we apply a different probability assignment rule. As a result, the amount of nonlocality in the quantum correlation will change. The issue is whether changing the rule of quantum probability assignment breaks relativistic causality. We show that the Born rule for quantum measurement is derived by requiring the relativistic causality condition. This shows how relativistic causality limits the upper bound of quantum nonlocality through the quantum probability assignment.

  17. LES/PDF studies of joint statistics of mixture fraction and progress variable in piloted methane jet flames with inhomogeneous inlet flows

    NASA Astrophysics Data System (ADS)

    Zhang, Pei; Barlow, Robert; Masri, Assaad; Wang, Haifeng

    2016-11-01

    The mixture fraction and progress variable are often used as independent variables for describing turbulent premixed and non-premixed flames. There is a growing interest in using these two variables for describing partially premixed flames. The joint statistical distribution of the mixture fraction and progress variable is of great interest in developing models for partially premixed flames. In this work, we conduct predictive studies of the joint statistics of mixture fraction and progress variable in a series of piloted methane jet flames with inhomogeneous inlet flows. The employed models combine large eddy simulations with the Monte Carlo probability density function (PDF) method. The joint PDFs and marginal PDFs are examined in detail by comparing the model predictions and the measurements. Different presumed shapes of the joint PDFs are also evaluated.

  18. Application of Archimedean copulas to the analysis of drought decadal variation in China

    NASA Astrophysics Data System (ADS)

    Zuo, Dongdong; Feng, Guolin; Zhang, Zengping; Hou, Wei

    2017-12-01

    Based on daily precipitation data collected from 1171 stations in China during 1961-2015, the monthly standardized precipitation index was derived and used to extract two major drought characteristics which are drought duration and severity. Next, a bivariate joint model was established based on the marginal distributions of the two variables and Archimedean copula functions. The joint probability and return period were calculated to analyze the drought characteristics and decadal variation. According to the fit analysis, the Gumbel-Hougaard copula provided the best fit to the observed data. Based on four drought duration classifications and four severity classifications, the drought events were divided into 16 drought types according to the different combinations of duration and severity classifications, and the probability and return period were analyzed for different drought types. The results showed that the occurring probability of six common drought types (0 < D ≤ 1 and 0.5 < S ≤ 1, 1 < D ≤ 3 and 0.5 < S ≤ 1, 1 < D ≤ 3 and 1 < S ≤ 1.5, 1 < D ≤ 3 and 1.5 < S ≤ 2, 1 < D ≤ 3 and 2 < S, and 3 < D ≤ 6 and 2 < S) accounted for 76% of the total probability of all types. Moreover, due to their greater variation, two drought types were particularly notable, i.e., the drought types where D ≥ 6 and S ≥ 2. Analyzing the joint probability in different decades indicated that the location of the drought center had a distinctive stage feature, which cycled from north to northeast to southwest during 1961-2015. However, southwest, north, and northeast China had a higher drought risk. In addition, the drought situation in southwest China should be noted because the joint probability values, return period, and the analysis of trends in the drought duration and severity all indicated a considerable risk in recent years.
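
    A small sketch of the return-period calculation under an assumed Gumbel-Hougaard copula: the joint "and" return period T = E[interarrival time] / P(D > d, S > s), with P(D > d, S > s) = 1 - u - v + C(u, v). The dependence parameter, mean interarrival time, and marginal non-exceedance probabilities are illustrative values.

```python
# Joint "and" return period of drought duration and severity under a Gumbel-Hougaard copula.
import numpy as np

theta = 2.0                       # assumed Gumbel-Hougaard dependence parameter
interarrival = 0.5                # assumed mean time between drought events (years)

def gumbel_copula(u, v, theta):
    return np.exp(-(((-np.log(u)) ** theta + (-np.log(v)) ** theta) ** (1 / theta)))

# marginal non-exceedance probabilities of a chosen duration d and severity s
u, v = 0.9, 0.85
p_and = 1 - u - v + gumbel_copula(u, v, theta)     # P(D > d and S > s)
print(f"joint 'and' return period ~ {interarrival / p_and:.1f} years")
```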

  19. Non-Kolmogorovian Approach to the Context-Dependent Systems Breaking the Classical Probability Law

    NASA Astrophysics Data System (ADS)

    Asano, Masanari; Basieva, Irina; Khrennikov, Andrei; Ohya, Masanori; Yamato, Ichiro

    2013-07-01

    There exist several phenomena breaking the classical probability laws. The systems related to such phenomena are context-dependent, so that they are adaptive to other systems. In this paper, we present a new mathematical formalism to compute the joint probability distribution for two event-systems by using concepts of the adaptive dynamics and quantum information theory, e.g., quantum channels and liftings. In physics the basic example of the context-dependent phenomena is the famous double-slit experiment. Recently similar examples have been found in biological and psychological sciences. Our approach is an extension of traditional quantum probability theory, and it is general enough to describe aforementioned contextual phenomena outside of quantum physics.

  20. Statistics of cosmic density profiles from perturbation theory

    NASA Astrophysics Data System (ADS)

    Bernardeau, Francis; Pichon, Christophe; Codis, Sandrine

    2014-11-01

    The joint probability distribution function (PDF) of the density within multiple concentric spherical cells is considered. It is shown how its cumulant generating function can be obtained at tree order in perturbation theory as the Legendre transform of a function directly built in terms of the initial moments. In the context of the upcoming generation of large-scale structure surveys, it is conjectured that this result correctly models such a function for finite values of the variance. Detailed consequences of this assumption are explored. In particular, the corresponding one-cell density probability distribution at finite variance is computed for realistic power spectra, taking into account its scale variation. It is found to be in agreement with Λ-cold dark matter simulations at the few percent level for a wide range of density values and parameters. Related explicit analytic expansions at the low and high density tails are given. The conditional (at fixed density) and marginal probability of the slope—the density difference between adjacent cells—and its fluctuations are also computed from the two-cell joint PDF; they also compare very well to simulations. It is emphasized that this could prove useful when studying the statistical properties of voids as it can serve as a statistical indicator to test gravity models and/or probe key cosmological parameters.

  1. A mass reconstruction technique for a heavy resonance decaying to τ⁺τ⁻

    NASA Astrophysics Data System (ADS)

    Xia, Li-Gang

    2016-11-01

    For a resonance decaying to τ⁺τ⁻, it is difficult to reconstruct its mass accurately because of the presence of neutrinos in the decay products of the τ leptons. If the resonance is heavy enough, we show that its mass can be well determined by the momentum component of the τ decay products perpendicular to the velocity of the τ lepton, p_⊥, and the mass of the visible/invisible decay products, m_vis/inv, for τ decaying to hadrons/leptons. By sampling all kinematically allowed values of p_⊥ and m_vis/inv according to their joint probability distributions determined by the MC simulations, the mass of the mother resonance is taken to lie at the position with the maximal probability. Since p_⊥ and m_vis/inv are invariant under a boost along the τ lepton direction, the joint probability distributions do not depend on the τ's origin. Thus this technique is able to determine the mass of an unknown resonance with no efficiency loss. It is tested using MC simulations of the physics processes pp → Z/h(125)/h(750) + X → ττ + X at 13 TeV. The ratio of the full width at half maximum to the peak value of the reconstructed mass distribution is found to be 20%-40% using the information of missing transverse energy. Supported by General Financial Grant from the China Postdoctoral Science Foundation (2015M581062)

  2. Reducing Interpolation Artifacts for Mutual Information Based Image Registration

    PubMed Central

    Soleimani, H.; Khosravifard, M.A.

    2011-01-01

    Medical image registration methods that use mutual information as a similarity measure have been improved in recent decades. Mutual information is a basic concept of information theory which indicates the dependency of two random variables (or two images). In order to evaluate the mutual information of two images, their joint probability distribution is required. Several interpolation methods, such as Partial Volume (PV) and bilinear, are used to estimate the joint probability distribution. Both of these methods introduce some artifacts into the mutual information function. The Partial Volume-Hanning window (PVH) and Generalized Partial Volume (GPV) methods were introduced to remove such artifacts. In this paper we show that the acceptable performance of these methods is not due to their kernel function; it is due to the number of pixels that take part in the interpolation. Since using more pixels requires a more complex and time-consuming interpolation process, we propose a new interpolation method which uses only four pixels (the same as PV and bilinear interpolation) and removes most of the artifacts. Experimental results on the registration of Computed Tomography (CT) images show the superiority of the proposed scheme. PMID:22606673
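
    For reference, a minimal sketch of the quantity being estimated: mutual information of two images computed from their joint intensity histogram (the joint probability distribution). The images and bin count are toy assumptions; the PV, bilinear, PVH, and GPV interpolation schemes themselves are not reproduced.

```python
# Mutual information of two toy images from their joint intensity histogram.
import numpy as np

rng = np.random.default_rng(4)
img1 = rng.normal(size=(64, 64))
img2 = 0.7 * img1 + 0.3 * rng.normal(size=(64, 64))     # a dependent "second modality"

hist, _, _ = np.histogram2d(img1.ravel(), img2.ravel(), bins=32)
pxy = hist / hist.sum()                                 # joint probability distribution
px = pxy.sum(axis=1, keepdims=True)                     # marginal of image 1
py = pxy.sum(axis=0, keepdims=True)                     # marginal of image 2

nz = pxy > 0                                            # avoid log(0)
mi = np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz]))
print(f"mutual information = {mi:.3f} nats")
```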

  3. Probabilistic inversion of expert assessments to inform projections about Antarctic ice sheet responses.

    PubMed

    Fuller, Robert William; Wong, Tony E; Keller, Klaus

    2017-01-01

    The response of the Antarctic ice sheet (AIS) to changing global temperatures is a key component of sea-level projections. Current projections of the AIS contribution to sea-level changes are deeply uncertain. This deep uncertainty stems, in part, from (i) the inability of current models to fully resolve key processes and scales, (ii) the relatively sparse available data, and (iii) divergent expert assessments. One promising approach to characterizing the deep uncertainty stemming from divergent expert assessments is to combine expert assessments, observations, and simple models by coupling probabilistic inversion and Bayesian inversion. Here, we present a proof-of-concept study that uses probabilistic inversion to fuse a simple AIS model and diverse expert assessments. We demonstrate the ability of probabilistic inversion to infer joint prior probability distributions of model parameters that are consistent with expert assessments. We then confront these inferred expert priors with instrumental and paleoclimatic observational data in a Bayesian inversion. These additional constraints yield tighter hindcasts and projections. We use this approach to quantify how the deep uncertainty surrounding expert assessments affects the joint probability distributions of model parameters and future projections.

  4. Contextuality in canonical systems of random variables

    NASA Astrophysics Data System (ADS)

    Dzhafarov, Ehtibar N.; Cervantes, Víctor H.; Kujala, Janne V.

    2017-10-01

    Random variables representing measurements, broadly understood to include any responses to any inputs, form a system in which each of them is uniquely identified by its content (that which it measures) and its context (the conditions under which it is recorded). Two random variables are jointly distributed if and only if they share a context. In a canonical representation of a system, all random variables are binary, and every content-sharing pair of random variables has a unique maximal coupling (the joint distribution imposed on them so that they coincide with maximal possible probability). The system is contextual if these maximal couplings are incompatible with the joint distributions of the context-sharing random variables. We propose to represent any system of measurements in a canonical form and to consider the system contextual if and only if its canonical representation is contextual. As an illustration, we establish a criterion for contextuality of the canonical system consisting of all dichotomizations of a single pair of content-sharing categorical random variables. This article is part of the themed issue `Second quantum revolution: foundational questions'.

  5. Product plots.

    PubMed

    Wickham, Hadley; Hofmann, Heike

    2011-12-01

    We propose a new framework for visualising tables of counts, proportions and probabilities. We call our framework product plots, alluding to the computation of area as a product of height and width, and the statistical concept of generating a joint distribution from the product of conditional and marginal distributions. The framework, with extensions, is sufficient to encompass over 20 visualisations previously described in fields of statistical graphics and infovis, including bar charts, mosaic plots, treemaps, equal area plots and fluctuation diagrams. © 2011 IEEE

  6. Applications of the first digit law to measure correlations.

    PubMed

    Gramm, R; Yost, J; Su, Q; Grobe, R

    2017-04-01

    The quasiempirical Benford law predicts that the distribution of the first significant digit of random numbers obtained from mixed probability distributions is surprisingly meaningful and reveals some universal behavior. We generalize this finding to examine the joint first-digit probability of a pair of random numbers and show that correlations which are undetectable by the usual covariance-based measure can be identified in the statistics of the corresponding first digits. We illustrate this new measure by analyzing the correlations and anticorrelations of the positions of two interacting particles in their quantum mechanical ground state. This suggests that by using this measure, the presence or absence of correlations can be determined even if only the first digit of noisy experimental data can be measured accurately.
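    The following sketch illustrates the idea of a joint first-digit statistic: two correlated positive random variables are generated (a hypothetical lognormal construction, not the quantum ground-state data of the paper), their leading digits are extracted, and the joint first-digit probabilities are compared with the product of the marginals; non-zero differences signal dependence.

```python
import numpy as np

def first_digit(x):
    """Leading significant digit of each positive number in x."""
    x = np.abs(np.asarray(x, dtype=float))
    exponent = np.floor(np.log10(x))
    return (x / 10.0 ** exponent).astype(int)

# Two correlated positive random variables (hypothetical example data).
rng = np.random.default_rng(1)
z = rng.normal(size=100_000)
u = np.exp(z + 0.3 * rng.normal(size=z.size))
v = np.exp(0.8 * z + 0.3 * rng.normal(size=z.size))
d_u, d_v = first_digit(u), first_digit(v)

# Joint first-digit probabilities versus the product of the marginals.
joint = np.zeros((9, 9))
for i in range(1, 10):
    for j in range(1, 10):
        joint[i - 1, j - 1] = np.mean((d_u == i) & (d_v == j))
marg_u = joint.sum(axis=1)
marg_v = joint.sum(axis=0)
dependence = joint - np.outer(marg_u, marg_v)  # non-zero entries signal correlation
print(np.abs(dependence).max())
```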

  7. Statistical characteristics of the sequential detection of signals in correlated noise

    NASA Astrophysics Data System (ADS)

    Averochkin, V. A.; Baranov, P. E.

    1985-10-01

    A solution is given to the problem of determining the distribution of the duration of the sequential two-threshold Wald rule for the time-discrete detection of determinate and Gaussian correlated signals on a background of Gaussian correlated noise. Expressions are obtained for the joint probability densities of the likelihood ratio logarithms, and an analysis is made of the effect of correlation and SNR on the duration distribution and the detection efficiency. Comparison is made with Neyman-Pearson detection.

  8. Extreme rainfall events: Learning from raingauge time series

    NASA Astrophysics Data System (ADS)

    Boni, G.; Parodi, A.; Rudari, R.

    2006-08-01

    This study analyzes the historical records of annual rainfall maxima recorded in Northern Italy, cumulated over time windows (durations) of 1 and 24 h and considered paradigmatic descriptions of storms of both short and long duration. Three large areas are studied: Liguria, Piedmont and Triveneto (Triveneto includes the Regions of Veneto, Trentino Alto Adige and Friuli Venezia Giulia). A regional frequency analysis of annual rainfall maxima is carried out through the Two Components Extreme Value (TCEV) distribution. A hierarchical approach is used to define statistically homogeneous areas so that the definition of a regional distribution becomes possible. Thanks to the peculiar nature of the TCEV distribution, a frequency-based threshold criterion is proposed. This criterion makes it possible to distinguish the observed ordinary values from the observed extra-ordinary values of annual rainfall maxima. A second step of this study focuses on the analysis of the probability of occurrence of extra-ordinary events over a period of one year. Results show the existence of a four-month dominant season that maximizes the number of occurrences of annual rainfall maxima. These results also show how the seasonality of extra-ordinary events changes whenever a different duration of events is considered. The joint probability of occurrence of extreme storms of short and long duration is also analyzed. This analysis demonstrates how the joint probability of occurrence significantly changes when all rainfall maxima or only extra-ordinary maxima are used. All results are critically discussed; this discussion suggests that the identified statistical characteristics may represent a signature of the mechanisms causing heavy precipitation in the analyzed regions.

  9. The Failure Models of Lead Free Sn-3.0Ag-0.5Cu Solder Joint Reliability Under Low-G and High-G Drop Impact

    NASA Astrophysics Data System (ADS)

    Gu, Jian; Lei, YongPing; Lin, Jian; Fu, HanGuang; Wu, Zhongwei

    2017-02-01

    The reliability of Sn-3.0Ag-0.5Cu (SAC 305) solder joints under a broad range of drop impact levels was studied. The failure performance of the solder joints, the failure probability, and the failure position were analyzed under two shock test conditions, i.e., 1000 g for 1 ms and 300 g for 2 ms. The stress distribution on the solder joint was calculated by ABAQUS. The results revealed that the dominant cause of failure was the tension arising from the stiffness difference between the printed circuit board and the ball grid array; the maximum tension, 121.1 MPa under the 1000 g impact and 31.1 MPa under the 300 g impact, was concentrated at the corner of the solder joint located in the outermost corner of the solder ball row. The failure modes were summarized into the following four modes: initiation and propagation through the (1) intermetallic compound layer, (2) Ni layer, (3) Cu pad, or (4) Sn-matrix. The outermost corner of the solder ball row had a high failure probability under both the 1000 g and 300 g drop impacts. The number of solder ball failures under the 300 g drop impact was higher than that under the 1000 g drop impact. According to the statistics, the characteristic numbers of drops to failure were 41 and 15,199, respectively.

  10. Bayesian bivariate meta-analysis of correlated effects: Impact of the prior distributions on the between-study correlation, borrowing of strength, and joint inferences

    PubMed Central

    Bujkiewicz, Sylwia; Riley, Richard D

    2016-01-01

    Multivariate random-effects meta-analysis allows the joint synthesis of correlated results from multiple studies, for example, for multiple outcomes or multiple treatment groups. In a Bayesian univariate meta-analysis of one endpoint, the importance of specifying a sensible prior distribution for the between-study variance is well understood. However, in multivariate meta-analysis, there is little guidance about the choice of prior distributions for the variances or, crucially, the between-study correlation, ρB; for the latter, researchers often use a Uniform(−1,1) distribution assuming it is vague. In this paper, an extensive simulation study and a real illustrative example is used to examine the impact of various (realistically) vague prior distributions for ρB and the between-study variances within a Bayesian bivariate random-effects meta-analysis of two correlated treatment effects. A range of diverse scenarios are considered, including complete and missing data, to examine the impact of the prior distributions on posterior results (for treatment effect and between-study correlation), amount of borrowing of strength, and joint predictive distributions of treatment effectiveness in new studies. Two key recommendations are identified to improve the robustness of multivariate meta-analysis results. First, the routine use of a Uniform(−1,1) prior distribution for ρB should be avoided, if possible, as it is not necessarily vague. Instead, researchers should identify a sensible prior distribution, for example, by restricting values to be positive or negative as indicated by prior knowledge. Second, it remains critical to use sensible (e.g. empirically based) prior distributions for the between-study variances, as an inappropriate choice can adversely impact the posterior distribution for ρB, which may then adversely affect inferences such as joint predictive probabilities. These recommendations are especially important with a small number of studies and missing data. PMID:26988929

  11. A Multivariate and Probabilistic Assessment of Drought in the Pacific Northwest under Observed and Future Climate.

    NASA Astrophysics Data System (ADS)

    Mortuza, M. R.; Demissie, Y. K.

    2015-12-01

    In light of recent and anticipated more severe and frequent drought incidences in the Yakima River Basin (YRB), a reliable and comprehensive drought assessment is deemed necessary to avoid major crop production loss and better manage water rights issues in the region during low precipitation and/or snow accumulation years. In this study, we conducted a frequency analysis of hydrological droughts and quantified the associated uncertainty in the YRB under both historical and changing climate. The streamflow drought index (SDI) was employed to identify mutually correlated drought characteristics (e.g., severity, duration and peak). The historical and future characteristics of drought were estimated by applying a trivariate copula probability distribution, which effectively describes the joint distribution and dependence of drought severity, duration, and peak. The associated prediction uncertainty, related to parameters of the joint probability and climate projections, was evaluated using the Bayesian approach with bootstrap resampling. For the climate change scenarios, two representative concentration pathways (RCP4.5 and RCP8.5) from University of Idaho's Multivariate Adaptive Constructed Analogs (MACA) database were considered. The results from the study are expected to provide useful information towards drought risk management in the YRB under anticipated climate changes.

  12. Probabilistic inversion of expert assessments to inform projections about Antarctic ice sheet responses

    PubMed Central

    Wong, Tony E.; Keller, Klaus

    2017-01-01

    The response of the Antarctic ice sheet (AIS) to changing global temperatures is a key component of sea-level projections. Current projections of the AIS contribution to sea-level changes are deeply uncertain. This deep uncertainty stems, in part, from (i) the inability of current models to fully resolve key processes and scales, (ii) the relatively sparse available data, and (iii) divergent expert assessments. One promising approach to characterizing the deep uncertainty stemming from divergent expert assessments is to combine expert assessments, observations, and simple models by coupling probabilistic inversion and Bayesian inversion. Here, we present a proof-of-concept study that uses probabilistic inversion to fuse a simple AIS model and diverse expert assessments. We demonstrate the ability of probabilistic inversion to infer joint prior probability distributions of model parameters that are consistent with expert assessments. We then confront these inferred expert priors with instrumental and paleoclimatic observational data in a Bayesian inversion. These additional constraints yield tighter hindcasts and projections. We use this approach to quantify how the deep uncertainty surrounding expert assessments affects the joint probability distributions of model parameters and future projections. PMID:29287095

  13. Joint distribution of temperature and precipitation in the Mediterranean, using the Copula method

    NASA Astrophysics Data System (ADS)

    Lazoglou, Georgia; Anagnostopoulou, Christina

    2018-03-01

    This study analyses the temperature and precipitation dependence among stations in the Mediterranean. The first station group is located in the eastern Mediterranean (EM) and includes two stations, Athens and Thessaloniki, while the western (WM) one includes Malaga and Barcelona. The data were organized into two time periods, a hot-dry period and a cold-wet period, each composed of 5 months. The analysis is based on a new statistical technique in climatology: the Copula method. Firstly, the calculation of the Kendall tau correlation index showed that temperatures among stations are dependent during both time periods, whereas precipitation shows dependence only between the stations located in EM or WM and only during the cold-wet period. Accordingly, the marginal distributions were calculated for each studied station, as they are further used by the copula method. Finally, several copula families, both Archimedean and Elliptical, were tested in order to choose the most appropriate one to model the relation of the studied data sets. Consequently, this study succeeds in modeling the dependence of the main climate parameters (temperature and precipitation) with the Copula method. The Frank copula was identified as the best family to describe the joint distribution of temperature, for the majority of station groups. For precipitation, the best copula families are BB1 and Survival Gumbel. Using the probability distribution diagrams, the probability of a combination of temperature and precipitation values between stations is estimated.
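    A minimal sketch of the first analysis step is shown below: Kendall's tau is computed for a hypothetical pair of station series and, when the dependence is significant, a one-parameter Archimedean copula is calibrated by inverting tau. The Clayton family is used here only because its tau inversion is closed form; the Frank copula favoured in the study requires a numerical inversion instead.

```python
import numpy as np
from scipy import stats

# Hypothetical paired monthly series for two stations (e.g. cold-wet season).
rng = np.random.default_rng(2)
x = rng.gamma(shape=2.0, scale=10.0, size=200)
y = 0.6 * x + rng.gamma(shape=2.0, scale=8.0, size=200)

tau, p_value = stats.kendalltau(x, y)
print(f"Kendall tau = {tau:.2f} (p = {p_value:.3g})")

# If dependence is significant, calibrate a one-parameter Archimedean copula
# by inverting Kendall's tau: for Clayton, theta = 2*tau / (1 - tau).
if p_value < 0.05 and tau > 0:
    theta = 2.0 * tau / (1.0 - tau)
    print(f"Clayton copula parameter theta = {theta:.2f}")
```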

  14. Temporal and spatial characteristics of extreme precipitation events in the Midwest of Jilin Province based on multifractal detrended fluctuation analysis method and copula functions

    NASA Astrophysics Data System (ADS)

    Guo, Enliang; Zhang, Jiquan; Si, Ha; Dong, Zhenhua; Cao, Tiehua; Lan, Wu

    2017-10-01

    Environmental changes have brought about significant changes and challenges to water resources and management in the world; these include increasing climate variability, land use change, intensive agriculture, rapid urbanization and industrial development, and, in particular, much more frequent extreme precipitation events, all of which greatly affect water resources and socio-economic development. In this study, we take extreme precipitation events in the Midwest of Jilin Province as an example; daily precipitation data during 1960-2014 are used. The threshold of extreme precipitation events is defined by the multifractal detrended fluctuation analysis (MF-DFA) method. Extreme precipitation (EP), extreme precipitation ratio (EPR), and intensity of extreme precipitation (EPI) are selected as the extreme precipitation indicators, and then the Kolmogorov-Smirnov (K-S) test is employed to determine the optimal probability distribution function of each extreme precipitation indicator. On this basis, a nonparametric estimation method for the copula together with the Akaike Information Criterion (AIC) is adopted to determine the bivariate copula function. Finally, we analyze the univariate extreme-value characteristics and the bivariate joint probability distribution of the extreme precipitation events. The results show that the threshold of extreme precipitation events in semi-arid areas is far less than that in subhumid areas. The extreme precipitation frequency shows a significant decline while the extreme precipitation intensity shows a trend of growth; there are significant spatiotemporal differences in extreme precipitation events. The joint return period becomes shorter from west to east. The co-occurrence return period shows the opposite spatial trend and is longer than the joint return period.
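    The marginal-selection step can be sketched as follows: candidate distributions are fitted to a hypothetical sample of an extreme-precipitation indicator and the one with the smallest Kolmogorov-Smirnov statistic is retained. Note that fitting and testing on the same sample makes the K-S p-values optimistic; the sketch only illustrates the ranking idea.

```python
import numpy as np
from scipy import stats

# Hypothetical sample of an extreme-precipitation indicator (e.g. EP in mm).
rng = np.random.default_rng(3)
sample = rng.gamma(shape=2.5, scale=20.0, size=55)

# Fit candidate marginals and keep the one with the smallest K-S statistic.
candidates = {
    "gamma": stats.gamma,
    "lognorm": stats.lognorm,
    "expon": stats.expon,
    "weibull_min": stats.weibull_min,
}
results = {}
for name, dist in candidates.items():
    params = dist.fit(sample)                           # maximum likelihood fit
    ks_stat, ks_p = stats.kstest(sample, dist.name, args=params)
    results[name] = (ks_stat, ks_p)                     # p-values are optimistic here

best = min(results, key=lambda k: results[k][0])
print(best, results[best])
```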

  15. Current-wave spectra coupling project. Volume III. Cumulative distribution of forces on structures subjected to the combined action of currents and random waves for potential OTEC sites: (A) Keahole Point, Hawaii, 100 year hurricane; (B) Punta Tuna, Puerto Rico, 100 year hurricane; (C) New Orleans, Louisiana, 100 year hurricane; (D) West Coast of Florida, 100 year hurricane. [CUFOR code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Venezian, G.; Bretschneider, C.L.

    1980-08-01

    This volume details a new methodology to analyze statistically the forces experienced by a structure at sea. Conventionally, a wave climate is defined using a spectral function. The wave climate is described using a joint distribution of wave heights and periods (wave lengths), characterizing actual sea conditions through some measured or estimated parameters like the significant wave height, maximum spectral density, etc. Random wave heights and periods satisfying the joint distribution are then generated. Wave kinematics are obtained using linear or non-linear theory. In the case of currents, the linear wave-current interaction theory of Venezian (1979) is used. The peak force experienced by the structure for each individual wave is identified. Finally, the probability of exceedance of any given peak force on the structure may be obtained. A three-parameter Longuet-Higgins type joint distribution of wave heights and periods is discussed in detail. This joint distribution was used to model sea conditions at four potential OTEC locations. A uniform cylindrical pipe of 3 m diameter, extending to a depth of 550 m, was used as a sample structure. Wave-current interactions were included and forces computed using Morison's equation. The drag and virtual mass coefficients were interpolated from published data. A Fortran program CUFOR was written to execute the above procedure. Tabulated and graphic results of peak forces experienced by the structure, for each location, are presented. A listing of CUFOR is included. Considerable flexibility of structural definition has been incorporated. The program can easily be modified in the case of an alternative joint distribution or for inclusion of effects like non-linearity of waves, transverse forces and diffraction.

  16. Superstatistical generalised Langevin equation: non-Gaussian viscoelastic anomalous diffusion

    NASA Astrophysics Data System (ADS)

    Ślęzak, Jakub; Metzler, Ralf; Magdziarz, Marcin

    2018-02-01

    Recent advances in single particle tracking and supercomputing techniques demonstrate the emergence of normal or anomalous, viscoelastic diffusion in conjunction with non-Gaussian distributions in soft, biological, and active matter systems. We here formulate a stochastic model based on a generalised Langevin equation in which non-Gaussian shapes of the probability density function and normal or anomalous diffusion have a common origin, namely a random parametrisation of the stochastic force. We perform a detailed analysis demonstrating how various types of parameter distributions for the memory kernel result in exponential, power law, or power-log law tails of the memory functions. The studied system is also shown to exhibit a further unusual property: the velocity has a Gaussian one point probability density but non-Gaussian joint distributions. This behaviour is reflected in the relaxation from a Gaussian to a non-Gaussian distribution observed for the position variable. We show that our theoretical results are in excellent agreement with stochastic simulations.

  17. An estimation method of the direct benefit of a waterlogging control project applicable to the changing environment

    NASA Astrophysics Data System (ADS)

    Zengmei, L.; Guanghua, Q.; Zishen, C.

    2015-05-01

    The direct benefit of a waterlogging control project is reflected by the reduction or avoidance of waterlogging loss. Before and after the construction of a waterlogging control project, the disaster-inducing environment in the waterlogging-prone zone is generally different. In addition, the category, quantity and spatial distribution of the disaster-bearing bodies also change to some extent. Therefore, under the changing environment, the direct benefit of a waterlogging control project should be the reduction of waterlogging losses compared to conditions with no control project. Moreover, the waterlogging losses with or without the project should be the mathematical expectations of the waterlogging losses when rainstorms of all frequencies meet various water levels in the drainage-accepting zone. Accordingly, an estimation model of the direct benefit of waterlogging control is proposed. Firstly, on the basis of a copula function, the joint distribution of the rainstorms and the water levels is established, so as to obtain their joint probability density function. Secondly, according to the two-dimensional joint probability density distribution, the two-dimensional domain of integration is determined and divided into small domains; for each small domain, the probability is calculated, together with the difference between the average waterlogging loss with and without a control project (called the regional benefit of the waterlogging control project) under the condition that rainstorms in the waterlogging-prone zone meet the water level in the drainage-accepting zone. Finally, the weighted mean of the benefit over all small domains, with probability as the weight, gives the benefit of the waterlogging control project. Taking the estimation of the benefit of a waterlogging control project in Yangshan County, Guangdong Province, as an example, the paper briefly explains the procedures in waterlogging control project benefit estimation. The results show that the constructed benefit estimation model is applicable to the changing conditions that occur in both the disaster-inducing environment of the waterlogging-prone zone and the disaster-bearing bodies, considering all conditions when rainstorms of all frequencies meet different water levels in the drainage-accepting zone. Thus, the estimation method of waterlogging control benefit can reflect the actual situation more objectively, and offer a scientific basis for rational decision-making for waterlogging control projects.
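    A minimal sketch of the probability-weighted benefit calculation described above, assuming a joint density of rainstorm depth and receiving-water level is already available (for example from a fitted copula); the density and the loss surfaces below are hypothetical placeholders.

```python
import numpy as np

def joint_density(r, h):
    # placeholder bivariate density; replace with the copula-based one
    return np.exp(-r / 100.0) / 100.0 * np.exp(-h / 2.0) / 2.0

def loss_without_project(r, h):
    return 5.0 * r + 80.0 * h      # placeholder loss surface without the works

def loss_with_project(r, h):
    return 1.5 * r + 30.0 * h      # placeholder loss surface after the works

r_grid = np.linspace(0.0, 600.0, 301)   # rainstorm depth (mm)
h_grid = np.linspace(0.0, 10.0, 101)    # receiving-water level (m)
dr = r_grid[1] - r_grid[0]
dh = h_grid[1] - h_grid[0]

R, H = np.meshgrid(r_grid, h_grid, indexing="ij")
prob = joint_density(R, H) * dr * dh                    # probability of each small domain
benefit = loss_without_project(R, H) - loss_with_project(R, H)
expected_benefit = np.sum(prob * benefit)               # probability-weighted mean
print(expected_benefit)
```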

  18. Finding Bounded Rational Equilibria. Part 1; Iterative Focusing

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.

    2004-01-01

    A long-running difficulty with conventional game theory has been how to modify it to accommodate the bounded rationality characterizing all real-world players. A recurring issue in statistical physics is how best to approximate joint probability distributions with decoupled (and therefore far more tractable) distributions. It has recently been shown that the same information theoretic mathematical structure, known as Probability Collectives (PC) underlies both issues. This relationship between statistical physics and game theory allows techniques and insights from the one field to be applied to the other. In particular, PC provides a formal model-independent definition of the degree of rationality of a player and of bounded rationality equilibria. This pair of papers extends previous work on PC by introducing new computational approaches to effectively find bounded rationality equilibria of common-interest (team) games.

  19. Stochastic transfer of polarized radiation in finite cloudy atmospheric media with reflective boundaries

    NASA Astrophysics Data System (ADS)

    Sallah, M.

    2014-03-01

    The problem of monoenergetic radiative transfer in a finite planar stochastic atmospheric medium with polarized (vector) Rayleigh scattering is considered. The solution is presented for arbitrary absorption and scattering cross sections. The extinction function of the medium is assumed to be a continuous random function of position, with fluctuations about the mean taken as Gaussian distributed. The joint probability distribution function of these Gaussian random variables is used to calculate the ensemble-averaged quantities, such as reflectivity and transmissivity, for an arbitrary correlation function. A modified Gaussian probability distribution function is also used to average the solution in order to exclude the probable negative values of the optical variable. The Pomraning-Eddington approximation is first used to obtain the deterministic analytical solution for both the total intensity and the difference function used to describe the polarized radiation. The problem is treated with specular reflecting boundaries and angular-dependent externally incident flux upon the medium from one side and with no flux from the other side. For the sake of comparison, two different forms of the weight function, which is introduced to force the boundary conditions to be fulfilled, are used. Numerical results of the average reflectivity and average transmissivity are obtained for both Gaussian and modified Gaussian probability density functions for different degrees of polarization.

  20. The Efficacy of Using Diagrams When Solving Probability Word Problems in College

    ERIC Educational Resources Information Center

    Beitzel, Brian D.; Staley, Richard K.

    2015-01-01

    Previous experiments have shown a deleterious effect of visual representations on college students' ability to solve total- and joint-probability word problems. The present experiments used conditional-probability problems, known to be more difficult than total- and joint-probability problems. The diagram group was instructed in how to use tree…

  1. Parkinson Disease Detection from Speech Articulation Neuromechanics.

    PubMed

    Gómez-Vilda, Pedro; Mekyska, Jiri; Ferrández, José M; Palacios-Alonso, Daniel; Gómez-Rodellar, Andrés; Rodellar-Biarge, Victoria; Galaz, Zoltan; Smekal, Zdenek; Eliasova, Ilona; Kostalova, Milena; Rektorova, Irena

    2017-01-01

    Aim: The research described is intended to give a description of articulation dynamics as a correlate of the kinematic behavior of the jaw-tongue biomechanical system, encoded as a probability distribution of an absolute joint velocity. This distribution may be used in detecting and grading speech from patients affected by neurodegenerative illnesses, such as Parkinson Disease. Hypothesis: The working hypothesis is that the probability density function of the absolute joint velocity includes information on the stability of phonation when applied to sustained vowels, as well as on fluency if applied to connected speech. Methods: A dataset of sustained vowels recorded from Parkinson Disease patients is contrasted with similar recordings from normative subjects. The probability distribution of the absolute kinematic velocity of the jaw-tongue system is extracted from each utterance. A Random Least Squares Feed-Forward Network (RLSFN) has been used as a binary classifier working on the pathological and normative datasets in a leave-one-out strategy. Monte Carlo simulations have been conducted to estimate the influence of the stochastic nature of the classifier. Two datasets for each gender were tested (males and females) including 26 normative and 53 pathological subjects in the male set, and 25 normative and 38 pathological in the female set. Results: Male and female data subsets were tested in single runs, yielding equal error rates under 0.6% (Accuracy over 99.4%). Due to the stochastic nature of each experiment, Monte Carlo runs were conducted to test the reliability of the methodology. The average detection results after 200 Monte Carlo runs of a 200-hyperplane hidden-layer RLSFN are given in terms of Sensitivity (males: 0.9946, females: 0.9942), Specificity (males: 0.9944, females: 0.9941) and Accuracy (males: 0.9945, females: 0.9942). The area under the ROC curve is 0.9947 (males) and 0.9945 (females). The equal error rate is 0.0054 (males) and 0.0057 (females). Conclusions: The proposed methodology shows that using highly normalized descriptors, such as the probability distribution of kinematic variables of vowel articulation stability, which has interesting properties in terms of information theory, boosts the potential of simple yet powerful classifiers to produce acceptable detection results for Parkinson Disease.

  2. Analytical approach to an integrate-and-fire model with spike-triggered adaptation

    NASA Astrophysics Data System (ADS)

    Schwalger, Tilo; Lindner, Benjamin

    2015-12-01

    The calculation of the steady-state probability density for multidimensional stochastic systems that do not obey detailed balance is a difficult problem. Here we present the analytical derivation of the stationary joint and various marginal probability densities for a stochastic neuron model with adaptation current. Our approach assumes weak noise but is valid for arbitrary adaptation strength and time scale. The theory predicts several effects of adaptation on the statistics of the membrane potential of a tonically firing neuron: (i) a membrane potential distribution with a convex shape, (ii) a strongly increased probability of hyperpolarized membrane potentials induced by strong and fast adaptation, and (iii) a maximized variability associated with the adaptation current at a finite adaptation time scale.

  3. An information diffusion technique to assess integrated hazard risks.

    PubMed

    Huang, Chongfu; Huang, Yundong

    2018-02-01

    An integrated risk is a scene in the future associated with some adverse incident caused by multiple hazards. An integrated probability risk is the expected value of disaster. Assessing an integrated probability risk with a small sample is difficult, and weighting methods and copulas are employed to avoid this obstacle. To resolve the problem, in this paper we develop the information diffusion technique to construct a joint probability distribution and a vulnerability surface. Then, an integrated risk can be directly assessed by using a small sample. A case of an integrated risk caused by flood and earthquake is given to show how the suggested technique is used to assess the integrated risk of annual property loss. Copyright © 2017 Elsevier Inc. All rights reserved.
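    A minimal sketch of a normal information-diffusion estimate of a joint distribution from a small two-dimensional sample is given below; it is essentially a kernel-smoothing construction, and the grid, bandwidths, and sample are illustrative assumptions rather than the paper's calibrated choices.

```python
import numpy as np

# Hypothetical small sample of two hazard intensity indices (flood, earthquake).
rng = np.random.default_rng(4)
sample = rng.normal(loc=[3.0, 5.0], scale=[1.0, 1.5], size=(12, 2))

u = np.linspace(0.0, 7.0, 29)     # monitoring points, variable 1
v = np.linspace(0.0, 11.0, 45)    # monitoring points, variable 2
hu, hv = 0.8, 1.2                 # diffusion coefficients (bandwidths), assumed

# Each observation is "diffused" over the grid with a Gaussian kernel and
# carries unit weight; the normalised sum is a discrete joint distribution.
U, V = np.meshgrid(u, v, indexing="ij")
joint = np.zeros_like(U)
for x1, x2 in sample:
    contrib = np.exp(-((x1 - U) ** 2) / (2 * hu ** 2)
                     - ((x2 - V) ** 2) / (2 * hv ** 2))
    joint += contrib / contrib.sum()
joint /= joint.sum()
print(joint.sum(), joint.max())
```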

  4. Manifold Matching: Joint Optimization of Fidelity and Commensurability

    DTIC Science & Technology

    2011-11-12

    identified separately in p◦m, will be geometrically incommensurate (see Figure 7). Thus the null distribution of the test statistic will be inflated...into the objective function obviates the geometric incommensurability phenomenon. Thus we can establish that, for a range of Dirichlet product model...from the geometric incommensurability phenomenon. Then q p implies that cca suffers from the spurious correlation phenomenon with high probability

  5. Generation of multivariate near shore extreme wave conditions based on an extreme value copula for offshore boundary conditions.

    NASA Astrophysics Data System (ADS)

    Leyssen, Gert; Mercelis, Peter; De Schoesitter, Philippe; Blanckaert, Joris

    2013-04-01

    Near shore extreme wave conditions, used as input for numerical wave agitation simulations and for the dimensioning of coastal defense structures, need to be determined at a harbour entrance situated at the French North Sea coast. To obtain significant wave heights, the numerical wave model SWAN has been used. A multivariate approach was used to account for the joint probabilities. The variables considered are wind velocity and direction, water level, and significant offshore wave height and wave period. In a first step, a univariate extreme value distribution was determined for the main variables. By means of a technique based on the mean excess function, an appropriate member of the GPD family is selected. An optimal threshold for peak over threshold selection is determined by maximum likelihood optimization. Next, the joint dependency structure for the primary random variables is modeled by an extreme value copula. Eventually, the multivariate domain of variables was stratified into different classes, each representing a combination of variable quantiles with a joint probability, which are used for model simulation. The main variable is the wind velocity, as in the area of concern extreme wave conditions are wind driven. The analysis is repeated for 9 different wind directions. The secondary variable is water level. In shallow waters extreme waves will be directly affected by water depth. Hence the joint probability of occurrence for water level and wave height is of major importance for design of coastal defense structures. Wind velocity and water levels are only dependent for some wind directions (wind-induced setup). Dependent directions are detected using a Kendall and Spearman test and appeared to be those with the longest fetch. For these directions, wind velocity and water level extreme value distributions are multivariately linked through a Gumbel Copula. These distributions are stratified into classes of which the frequency of occurrence can be calculated. For the remaining directions the univariate extreme wind velocity distribution is stratified, each class combined with 5 high water levels. The wave height at the model boundaries was taken into account by a regression with the extreme wind velocity at the offshore location. The regression line and the 95% confidence limits were combined with each class. Eventually the wave period is computed by a new regression with the significant wave height. In this way, 1103 synthetic events were selected and simulated with the SWAN wave model, and a frequency of occurrence was calculated for each of them. Hence near shore significant wave heights are obtained with corresponding frequencies. The statistical distribution of the near shore wave heights is determined by sorting the model results in a descending order and accumulating the corresponding frequencies. This approach allows determination of conditional return periods. For example, for the imposed univariate design return periods of 100 years for significant wave height and 30 years for water level, the joint return period for a simultaneous exceedance of both conditions can be computed as 4000 years. Hence, this methodology allows for a probabilistic design of coastal defense structures.
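    The peak-over-threshold step can be sketched as follows: a threshold is chosen, a Generalized Pareto Distribution is fitted to the excesses by maximum likelihood, and a design return level is read off. The wind-speed series, the quantile-based threshold, and the record length are hypothetical; in the study the threshold is optimised using a mean-excess-based criterion instead.

```python
import numpy as np
from scipy import stats

# ~20 years of hypothetical daily wind speed (m/s).
rng = np.random.default_rng(5)
wind = rng.weibull(2.0, size=20 * 365) * 12.0

threshold = np.quantile(wind, 0.95)                 # simple quantile-based threshold
excesses = wind[wind > threshold] - threshold
shape, loc, scale = stats.genpareto.fit(excesses, floc=0.0)   # ML fit of the GPD

# Return level for a T-year event; lam is the mean number of exceedances per year.
T = 100.0
lam = excesses.size / 20.0
return_level = threshold + stats.genpareto.ppf(1.0 - 1.0 / (T * lam),
                                               shape, loc=0.0, scale=scale)
print(threshold, return_level)
```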

  6. Information Theory - The Bridge Connecting Bounded Rational Game Theory and Statistical Physics

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.

    2005-01-01

    A long-running difficulty with conventional game theory has been how to modify it to accommodate the bounded rationality of all real-world players. A recurring issue in statistical physics is how best to approximate joint probability distributions with decoupled (and therefore far more tractable) distributions. This paper shows that the same information theoretic mathematical structure, known as Product Distribution (PD) theory, addresses both issues. In this, PD theory not only provides a principled formulation of bounded rationality and a set of new types of mean field theory in statistical physics; it also shows that those topics are fundamentally one and the same.

  7. Historical and future drought in Bangladesh using copula-based bivariate regional frequency analysis

    NASA Astrophysics Data System (ADS)

    Mortuza, Md Rubayet; Moges, Edom; Demissie, Yonas; Li, Hong-Yi

    2018-02-01

    The study aims at regional and probabilistic evaluation of bivariate drought characteristics to assess both the past and future drought duration and severity in Bangladesh. The procedures involve applying (1) the standardized precipitation index to identify drought duration and severity, (2) regional frequency analysis to determine the appropriate marginal distributions for both duration and severity, (3) a copula model to estimate the joint probability distribution of drought duration and severity, and (4) precipitation projections from multiple climate models to assess future drought trends. Since drought duration and severity in Bangladesh are often strongly correlated and do not follow the same marginal distributions, the joint and conditional return periods of droughts are characterized using the copula-based joint distribution. The country is divided into three homogeneous regions using Fuzzy clustering and multivariate discordancy and homogeneity measures. For given severity and duration values, the joint return period for a drought to exceed both values is on average 45% larger, while that for exceeding either value is 40% smaller than the return periods from the univariate frequency analysis, which treats drought duration and severity independently. These results suggest that, compared to the bivariate drought frequency analysis, the standard univariate frequency analysis under- or overestimates the frequency and severity of droughts depending on how their duration and severity are related. Overall, more frequent and severe droughts are observed in the western part of the country. Future drought trends based on four climate models and two scenarios show the possibility of less frequent drought in the future (2020-2100) than in the past (1961-2010).
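    The "AND" and "OR" joint return periods compared above can be written directly in terms of the copula; the sketch below uses a Gumbel copula with an assumed parameter, assumed marginal non-exceedance probabilities, and an assumed mean drought inter-arrival time.

```python
import numpy as np

def gumbel_copula(u, v, theta):
    """Gumbel-Hougaard copula C(u, v) for theta >= 1."""
    return np.exp(-(((-np.log(u)) ** theta + (-np.log(v)) ** theta) ** (1.0 / theta)))

mu = 1.6      # mean inter-arrival time between droughts (years), assumed
theta = 2.0   # Gumbel copula parameter, assumed
F_d = 0.9     # marginal non-exceedance probability of the duration of interest
F_s = 0.85    # marginal non-exceedance probability of the severity of interest

C = gumbel_copula(F_d, F_s, theta)
T_or = mu / (1.0 - C)                      # either duration or severity exceeded
T_and = mu / (1.0 - F_d - F_s + C)         # both exceeded simultaneously
T_d_univ, T_s_univ = mu / (1.0 - F_d), mu / (1.0 - F_s)   # univariate return periods
print(T_or, T_and, T_d_univ, T_s_univ)
```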

  8. Comment on "constructing quantum games from nonfactorizable joint probabilities".

    PubMed

    Frąckiewicz, Piotr

    2013-09-01

    In the paper [Phys. Rev. E 76, 061122 (2007)], the authors presented a way of playing 2 × 2 games so that players were able to exploit nonfactorizable joint probabilities respecting the nonsignaling principle (i.e., relativistic causality). We are going to prove, however, that the scheme does not generalize the games studied in the commented paper. Moreover, it allows the players to obtain nonclassical results even if the factorizable joint probabilities are used.

  9. Finding Bounded Rational Equilibria. Part 2; Alternative Lagrangians and Uncountable Move Spaces

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.

    2004-01-01

    A long-running difficulty with conventional game theory has been how to modify it to accommodate the bounded rationality characterizing all real-world players. A recurring issue in statistical physics is how best to approximate joint probability distributions with decoupled (and therefore far more tractable) distributions. It has recently been shown that the same information theoretic mathematical structure, known as Probability Collectives (PC) underlies both issues. This relationship between statistical physics and game theory allows techniques and insights from the one field to be applied to the other. In particular, PC provides a formal model-independent definition of the degree of rationality of a player and of bounded rationality equilibria. This pair of papers extends previous work on PC by introducing new computational approaches to effectively find bounded rationality equilibria of common-interest (team) games.

  10. Spacing distribution functions for 1D point island model with irreversible attachment

    NASA Astrophysics Data System (ADS)

    Gonzalez, Diego; Einstein, Theodore; Pimpinelli, Alberto

    2011-03-01

    We study the configurational structure of the point island model for epitaxial growth in one dimension. In particular, we calculate the island gap and capture zone distributions. Our model is based on an approximate description of nucleation inside the gaps. Nucleation is described by the joint probability density p_{xy}^{n}(x, y), which represents the probability density for nucleation at position x within a gap of size y. Our proposed functional form for p_{xy}^{n}(x, y) describes the statistical behavior of the system excellently. We compare our analytical model with extensive numerical simulations. Our model retains the most relevant physical properties of the system. This work was supported by the NSF-MRSEC at the University of Maryland, Grant No. DMR 05-20471, with ancillary support from the Center for Nanophysics and Advanced Materials (CNAM).

  11. Integrated drought risk assessment of multi-hazard-affected bodies based on copulas in the Taoerhe Basin, China

    NASA Astrophysics Data System (ADS)

    Wang, Rui; Zhang, Jiquan; Guo, Enliang; Alu, Si; Li, Danjun; Ha, Si; Dong, Zhenhua

    2018-02-01

    Along with global warming, drought disasters are occurring more frequently and are seriously affecting normal life and food security in China. Drought risk assessments are necessary to provide support for local governments. This study aimed to establish an integrated drought risk model based on the relation curve of drought joint probabilities and drought losses of multi-hazard-affected bodies. First, drought characteristics, including duration and severity, were classified using the 1953-2010 precipitation anomaly in the Taoerhe Basin based on run theory, and their marginal distributions were identified by exponential and Gamma distributions, respectively. Then, drought duration and severity were related to construct a joint probability distribution based on the copula function. We used the EPIC (Environmental Policy Integrated Climate) model to simulate maize yield and historical data to calculate the loss rates of agriculture, industry, and animal husbandry in the study area. Next, we constructed vulnerability curves. Finally, the spatial distributions of drought risk for 10-, 20-, and 50-year return periods were expressed using inverse distance weighting. Our results indicate that the spatial distributions of the three return periods are consistent. The highest drought risk is in Ulanhot, and the duration and severity there were both highest. This means that higher drought risk corresponds to longer drought duration and larger drought severity, thus providing useful information for drought and water resource management. For 10-, 20-, and 50-year return periods, the drought risk values ranged from 0.41 to 0.53, 0.45 to 0.59, and 0.50 to 0.67, respectively. Therefore, when the return period increases, the drought risk increases.
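    The run-theory step used to extract drought duration and severity can be sketched as follows; the threshold and the synthetic index series are illustrative assumptions.

```python
import numpy as np

def drought_events(index, threshold=-0.5):
    """Identify drought events with run theory.

    A drought starts when the index (e.g. a precipitation anomaly) drops
    below `threshold` and ends when it recovers; duration is the run length
    and severity the accumulated deficit below the threshold.
    Returns a list of (duration, severity) pairs.
    """
    events, duration, severity = [], 0, 0.0
    for value in index:
        if value < threshold:
            duration += 1
            severity += threshold - value
        elif duration > 0:
            events.append((duration, severity))
            duration, severity = 0, 0.0
    if duration > 0:
        events.append((duration, severity))
    return events

# usage with a hypothetical standardized precipitation-anomaly series (monthly, 1953-2010)
rng = np.random.default_rng(6)
anomaly = rng.normal(size=58 * 12)
print(drought_events(anomaly)[:3])
```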

  12. Modeling the Dependency Structure of Integrated Intensity Processes

    PubMed Central

    Ma, Yong-Ki

    2015-01-01

    This paper studies an important issue of dependence structure. To model this structure, the intensities within the Cox processes are driven by dependent shot noise processes, where jumps occur simultaneously and their sizes are correlated. The joint survival probability of the integrated intensities is explicitly obtained from the copula with exponential marginal distributions. Subsequently, this result can provide a very useful guide for credit risk management. PMID:26270638

  13. Dynamic Uncertain Causality Graph for Knowledge Representation and Probabilistic Reasoning: Directed Cyclic Graph and Joint Probability Distribution.

    PubMed

    Zhang, Qin

    2015-07-01

    Probabilistic graphical models (PGMs) such as Bayesian network (BN) have been widely applied in uncertain causality representation and probabilistic reasoning. Dynamic uncertain causality graph (DUCG) is a newly presented model of PGMs, which can be applied to fault diagnosis of large and complex industrial systems, disease diagnosis, and so on. The basic methodology of DUCG has been previously presented, in which only the directed acyclic graph (DAG) was addressed. However, the mathematical meaning of DUCG was not discussed. In this paper, the DUCG with directed cyclic graphs (DCGs) is addressed. In contrast, BN does not allow DCGs, as otherwise the conditional independence will not be satisfied. The inference algorithm for the DUCG with DCGs is presented, which not only extends the capabilities of DUCG from DAGs to DCGs but also enables users to decompose a large and complex DUCG into a set of small, simple sub-DUCGs, so that a large and complex knowledge base can be easily constructed, understood, and maintained. The basic mathematical definition of a complete DUCG with or without DCGs is proved to be a joint probability distribution (JPD) over a set of random variables. The incomplete DUCG as a part of a complete DUCG may represent a part of JPD. Examples are provided to illustrate the methodology.

  14. A pitfall of piecewise-polytropic equation of state inference

    NASA Astrophysics Data System (ADS)

    Raaijmakers, Geert; Riley, Thomas E.; Watts, Anna L.

    2018-05-01

    The only messenger radiation in the Universe which one can use to statistically probe the Equation of State (EOS) of cold dense matter is that originating from the near-field vicinities of compact stars. Constraining gravitational masses and equatorial radii of rotating compact stars is a major goal for current and future telescope missions, with a primary purpose of constraining the EOS. From a Bayesian perspective it is necessary to carefully discuss prior definition; in this context a complicating issue is that in practice there exist pathologies in the general relativistic mapping between spaces of local (interior source matter) and global (exterior spacetime) parameters. In a companion paper, these issues were raised on a theoretical basis. In this study we reproduce a probability transformation procedure from the literature in order to map a joint posterior distribution of Schwarzschild gravitational masses and radii into a joint posterior distribution of EOS parameters. We demonstrate computationally that EOS parameter inferences are sensitive to the choice to define a prior on a joint space of these masses and radii, instead of on a joint space of interior source matter parameters. We focus on the piecewise-polytropic EOS model, which is currently standard in the field of astrophysical dense matter study. We discuss the implications of this issue for the field.

  15. Probability distribution of haplotype frequencies under the two-locus Wright-Fisher model by diffusion approximation.

    PubMed

    Boitard, Simon; Loisel, Patrice

    2007-05-01

    The probability distribution of haplotype frequencies in a population, and the way it is influenced by genetic forces such as recombination, selection, and random drift, is a question of fundamental interest in population genetics. For large populations, the distribution of haplotype frequencies for two linked loci under the classical Wright-Fisher model is almost impossible to compute for numerical reasons. However, the Wright-Fisher process can in such cases be approximated by a diffusion process and the transition density can then be deduced from the Kolmogorov equations. As no exact solution has been found for these equations, we developed a numerical method based on finite differences to solve them. It applies to transient states and models including selection or mutations. We show by several tests that this method is accurate for computing the conditional joint density of haplotype frequencies given that no haplotype has been lost. We also prove that it is far less time-consuming than other methods such as Monte Carlo simulations.
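    For comparison, the Monte Carlo route mentioned above can be sketched with a single-generation update of the two-locus Wright-Fisher model: recombination shifts the expected haplotype frequencies through the linkage disequilibrium D, and drift is applied by multinomial sampling. Population size, recombination rate, and initial frequencies are illustrative assumptions.

```python
import numpy as np

def wright_fisher_step(freqs, pop_size, recomb_rate, rng):
    """One generation of recombination + drift for haplotypes AB, Ab, aB, ab."""
    p_AB, p_Ab, p_aB, p_ab = freqs
    D = p_AB * p_ab - p_Ab * p_aB                        # linkage disequilibrium
    expected = np.array([p_AB - recomb_rate * D,
                         p_Ab + recomb_rate * D,
                         p_aB + recomb_rate * D,
                         p_ab - recomb_rate * D])
    counts = rng.multinomial(2 * pop_size, expected)     # random drift
    return counts / (2 * pop_size)

# usage: evolve an initial haplotype distribution for 200 generations
rng = np.random.default_rng(7)
freqs = np.array([0.25, 0.25, 0.25, 0.25])
for _ in range(200):
    freqs = wright_fisher_step(freqs, pop_size=500, recomb_rate=0.01, rng=rng)
print(freqs)
```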

  16. A case cluster of variant Creutzfeldt-Jakob disease linked to the Kingdom of Saudi Arabia.

    PubMed

    Coulthart, Michael B; Geschwind, Michael D; Qureshi, Shireen; Phielipp, Nicolas; Demarsh, Alex; Abrams, Joseph Y; Belay, Ermias; Gambetti, Pierluigi; Jansen, Gerard H; Lang, Anthony E; Schonberger, Lawrence B

    2016-10-01

    As of mid-2016, 231 cases of variant Creutzfeldt-Jakob disease-the human form of a prion disease of cattle, bovine spongiform encephalopathy-have been reported from 12 countries. With few exceptions, the affected individuals had histories of extended residence in the UK or other Western European countries during the period (1980-96) of maximum global risk for human exposure to bovine spongiform encephalopathy. However, the possibility remains that other geographic foci of human infection exist, identification of which may help to foreshadow the future of the epidemic. We report results of a quantitative analysis of country-specific relative risks of infection for three individuals diagnosed with variant Creutzfeldt-Jakob disease in the USA and Canada. All were born and raised in Saudi Arabia, but had histories of residence and travel in other countries. To calculate country-specific relative probabilities of infection, we aligned each patient's life history with published estimates of probability distributions of incubation period and age at infection parameters from a UK cohort of 171 variant Creutzfeldt-Jakob disease cases. The distributions were then partitioned into probability density fractions according to time intervals of the patient's residence and travel history, and the density fractions were combined by country. This calculation was performed for incubation period alone, age at infection alone, and jointly for incubation and age at infection. Country-specific fractions were normalized either to the total density between the individual's dates of birth and symptom onset ('lifetime'), or to that between 1980 and 1996, for a total of six combinations of parameter and interval. The country-specific relative probability of infection for Saudi Arabia clearly ranked highest under each of the six combinations of parameter × interval for Patients 1 and 2, with values ranging from 0.572 to 0.998, respectively, for Patient 2 (age at infection × lifetime) and Patient 1 (joint incubation and age at infection × 1980-96). For Patient 3, relative probabilities for Saudi Arabia were not as distinct from those for other countries using the lifetime interval: 0.394, 0.360 and 0.378, respectively, for incubation period, age at infection and jointly for incubation and age at infection. However, for this patient Saudi Arabia clearly ranked highest within the 1980-96 period: 0.859, 0.871 and 0.865, respectively, for incubation period, age at infection and jointly for incubation and age at infection. These findings support the hypothesis that human infection with bovine spongiform encephalopathy occurred in Saudi Arabia. © Her Majesty the Queen in Right of Canada 2016. Reproduced with the permission of the Minister of Public Health.
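    The density-partitioning idea can be sketched as follows: the probability density of, say, age at infection is split across countries according to the ages spanned by each residence interval and then normalised. The gamma density and the residence history below are hypothetical placeholders, not the published UK-cohort estimates.

```python
from scipy import stats

# Placeholder distribution of age at infection (not the published estimate).
age_at_infection = stats.gamma(a=4.0, scale=7.0)

# (country, age at start of interval, age at end of interval) - hypothetical history
residence = [("Saudi Arabia", 0.0, 18.0),
             ("UK", 18.0, 21.0),
             ("USA", 21.0, 35.0)]

# Density fraction falling in each interval, assigned to the country of residence.
fractions = {country: age_at_infection.cdf(end) - age_at_infection.cdf(start)
             for country, start, end in residence}
total = sum(fractions.values())
relative = {country: f / total for country, f in fractions.items()}  # normalised
print(relative)
```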

  17. Applications of the Galton Watson process to human DNA evolution and demography

    NASA Astrophysics Data System (ADS)

    Neves, Armando G. M.; Moreira, Carlos H. C.

    2006-08-01

    We show that the problem of existence of a mitochondrial Eve can be understood as an application of the Galton-Watson process and presents interesting analogies with critical phenomena in Statistical Mechanics. In the approximation of small survival probability, and assuming limited progeny, we are able to find for a genealogic tree the maximum and minimum survival probabilities over all probability distributions for the number of children per woman constrained to a given mean. As a consequence, we can relate existence of a mitochondrial Eve to quantitative demographic data of early mankind. In particular, we show that a mitochondrial Eve may exist even in an exponentially growing population, provided that the mean number of children per woman Nbar is constrained to a small range depending on the probability p that a child is a female. Assuming that the value p≈0.488 valid nowadays has remained fixed for thousands of generations, the range where a mitochondrial Eve occurs with sizeable probability is 2.0492
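    The survival probability of a maternal line can be sketched with the standard Galton-Watson fixed-point calculation: the extinction probability is the smallest root of s = G(s), where G is the probability generating function of the number of daughters. The offspring distribution below (binomial number of children, each a daughter with probability p = 0.488) is an illustrative assumption.

```python
import numpy as np

def survival_probability(offspring_pmf, tol=1e-12, max_iter=100_000):
    """Survival probability of a Galton-Watson line.

    The extinction probability q is the smallest fixed point of the
    generating function G(s) = sum_k p_k s^k, found by iterating q <- G(q)
    from 0; the survival probability is 1 - q.
    """
    q = 0.0
    for _ in range(max_iter):
        q_new = sum(p * q ** k for k, p in enumerate(offspring_pmf))
        if abs(q_new - q) < tol:
            break
        q = q_new
    return 1.0 - q

# Hypothetical limited-progeny offspring law: at most 6 children with mean 2.2,
# each child a daughter with probability 0.488.
p_female, n_mean, n_max = 0.488, 2.2, 6
rng = np.random.default_rng(8)
children = rng.binomial(n_max, n_mean / n_max, size=1_000_000)
daughters = rng.binomial(children, p_female)
pmf = np.bincount(daughters, minlength=n_max + 1) / daughters.size
print(survival_probability(pmf))
```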

  18. Reward and uncertainty in exploration programs

    NASA Technical Reports Server (NTRS)

    Kaufman, G. M.; Bradley, P. G.

    1971-01-01

    A set of variables which are crucial to the economic outcome of petroleum exploration are discussed. These are treated as random variables; the values they assume indicate the number of successes that occur in a drilling program and determine, for a particular discovery, the unit production cost and net economic return if that reservoir is developed. In specifying the joint probability law for those variables, extreme and probably unrealistic assumptions are made. In particular, the different random variables are assumed to be independently distributed. Using postulated probability functions and specified parameters, values are generated for selected random variables, such as reservoir size. From this set of values the economic magnitudes of interest, net return and unit production cost are computed. This constitutes a single trial, and the procedure is repeated many times. The resulting histograms approximate the probability density functions of the variables which describe the economic outcomes of an exploratory drilling program.
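    The repeated-trial procedure can be sketched as follows: in each trial the number of discoveries and the reservoir sizes are drawn independently, and the net return of the drilling program is computed; the histogram over trials approximates its probability density. All distributions, prices, and costs are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(10)
n_trials, n_wells, p_success = 10_000, 20, 0.15
well_cost, price_per_bbl = 1.5e6, 3.0           # placeholder economics

net_returns = np.empty(n_trials)
for t in range(n_trials):
    successes = rng.binomial(n_wells, p_success)                 # number of discoveries
    sizes = rng.lognormal(mean=13.0, sigma=1.2, size=successes)  # bbl per reservoir
    net_returns[t] = price_per_bbl * sizes.sum() - well_cost * n_wells

# The histogram approximates the probability density of the program's net return.
hist, edges = np.histogram(net_returns, bins=50, density=True)
print(net_returns.mean(), np.quantile(net_returns, [0.1, 0.5, 0.9]))
```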

  19. Review of probabilistic analysis of dynamic response of systems with random parameters

    NASA Technical Reports Server (NTRS)

    Kozin, F.; Klosner, J. M.

    1989-01-01

    The various methods that have been studied in the past to allow probabilistic analysis of dynamic response for systems with random parameters are reviewed. The dynamic response could be obtained deterministically if the variations about the nominal values were small; however, for space structures which require precise pointing, the variations about the nominal values of the structural details and of the environmental conditions are too large to be considered negligible. These uncertainties are accounted for in terms of probability distributions about their nominal values. The quantities of concern for describing the response of the structure include displacements, velocities, and the distributions of natural frequencies. The exact statistical characterization of the response would yield joint probability distributions for the response variables. Since the random quantities will appear as coefficients, determining the exact distributions will be difficult at best. Thus, certain approximations will have to be made. A number of techniques that are available are discussed, even in the nonlinear case. The methods described are: (1) Liouville's equation; (2) perturbation methods; (3) mean square approximate systems; and (4) nonlinear systems with approximation by linear systems.

  20. Double ionization of neon in elliptically polarized femtosecond laser fields

    NASA Astrophysics Data System (ADS)

    Kang, HuiPeng; Henrichs, Kevin; Wang, YanLan; Hao, XiaoLei; Eckart, Sebastian; Kunitski, Maksim; Schöffler, Markus; Jahnke, Till; Liu, XiaoJun; Dörner, Reinhard

    2018-06-01

    We present a joint experimental and theoretical investigation of the correlated electron momentum spectra from strong-field double ionization of neon induced by elliptically polarized laser pulses. A significant asymmetry of the electron momentum distributions along the major polarization axis is reported. This asymmetry depends sensitively on the laser ellipticity. Using a three-dimensional semiclassical model, we attribute this asymmetry pattern to the ellipticity-dependent probability distributions of recollision time. Our work demonstrates that, by simply varying the ellipticity, the correlated electron emission can be two-dimensionally controlled and the recolliding electron trajectories can be steered on a subcycle time scale.

  1. Generative adversarial networks for brain lesion detection

    NASA Astrophysics Data System (ADS)

    Alex, Varghese; Safwan, K. P. Mohammed; Chennamsetty, Sai Saketh; Krishnamurthi, Ganapathy

    2017-02-01

    Manual segmentation of brain lesions from Magnetic Resonance Images (MRI) is cumbersome and introduces errors due to inter-rater variability. This paper introduces a semi-supervised technique for detection of brain lesions from MRI using Generative Adversarial Networks (GANs). A GAN comprises a Generator network and a Discriminator network, which are trained simultaneously, each with the objective of bettering the other. The networks were trained using non-lesion patches (n=13,000) from 4 different MR sequences. The network was trained on the BraTS dataset, and patches were extracted from regions excluding the tumor region. The Generator network generates data by modeling the underlying probability distribution of the training data, P_Data. The Discriminator learns the posterior probability P(Label | Data) by classifying training data and generated data as "Real" or "Fake", respectively. The Generator, upon learning the joint distribution, produces images/patches such that the performance of the Discriminator on them is random, i.e., P(Label | Data = Generated Data) = 0.5. During testing, the Discriminator assigns posterior probability values close to 0.5 for patches from non-lesion regions, while patches centered on lesions arise from a different distribution (P_Lesion) and hence are assigned lower posterior probability values by the Discriminator. On the test set (n=14), the proposed technique achieves a whole tumor dice score of 0.69, sensitivity of 91% and specificity of 59%. Additionally, the Generator network was capable of generating non-lesion patches from various MR sequences.
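
    The adversarial training described above can be sketched as follows. This is a minimal, generic GAN loop in PyTorch on flattened patches; the architecture, patch size and hyperparameters are assumptions for illustration and are not the network used in the paper.

    ```python
    import torch
    import torch.nn as nn

    PATCH = 32 * 32   # flattened patch size (assumed)
    LATENT = 64       # latent dimension (assumed)

    G = nn.Sequential(nn.Linear(LATENT, 256), nn.ReLU(), nn.Linear(256, PATCH), nn.Tanh())
    D = nn.Sequential(nn.Linear(PATCH, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1), nn.Sigmoid())

    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    bce = nn.BCELoss()

    def train_step(real_patches):
        """One adversarial update: D learns P(real | patch), G tries to fool D."""
        n = real_patches.size(0)
        real_lbl, fake_lbl = torch.ones(n, 1), torch.zeros(n, 1)

        # Discriminator update on real and generated patches
        fake = G(torch.randn(n, LATENT)).detach()
        loss_d = bce(D(real_patches), real_lbl) + bce(D(fake), fake_lbl)
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()

        # Generator update: push D's output on generated patches toward "real"
        fake = G(torch.randn(n, LATENT))
        loss_g = bce(D(fake), real_lbl)
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()
        return loss_d.item(), loss_g.item()

    # At test time, D(patch) near 0.5 suggests the patch resembles the training
    # (non-lesion) distribution; markedly lower values flag candidate lesion patches.
    dummy_batch = torch.rand(16, PATCH) * 2 - 1   # stand-in for real non-lesion patches
    print(train_step(dummy_batch))
    ```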

  2. Performance of two predictive uncertainty estimation approaches for conceptual Rainfall-Runoff Model: Bayesian Joint Inference and Hydrologic Uncertainty Post-processing

    NASA Astrophysics Data System (ADS)

    Hernández-López, Mario R.; Romero-Cuéllar, Jonathan; Camilo Múnera-Estrada, Juan; Coccia, Gabriele; Francés, Félix

    2017-04-01

    It is noticeably important to emphasize the role of uncertainty, particularly when model forecasts are used to support decision-making and water management. This research compares two approaches for the evaluation of predictive uncertainty in hydrological modeling. The first approach is the Bayesian Joint Inference of hydrological and error models. The second approach is carried out through the Model Conditional Processor using the Truncated Normal Distribution in the transformed space. The comparison is focused on the reliability of the predictive distribution. The case study is applied to two basins included in the Model Parameter Estimation Experiment (MOPEX). These two basins, which have different hydrological complexity, are the French Broad River (North Carolina) and the Guadalupe River (Texas). The results indicate that, in general, both approaches are able to provide similar predictive performances. However, differences between them can arise in basins with complex hydrology (e.g. ephemeral basins). This is because the results obtained with Bayesian Joint Inference are strongly dependent on the suitability of the hypothesized error model. Similarly, the results from the Model Conditional Processor are mainly influenced by the selected model of the tails, or even by the selected full probability distribution model of the data in the real space, and by the definition of the Truncated Normal Distribution in the transformed space. In summary, the different hypotheses that the modeler chooses in each of the two approaches are the main cause of the different results. This research also explores a proper combination of both methodologies, which could be useful for achieving less biased hydrological parameter estimation. In this approach, the predictive distribution is first obtained through the Model Conditional Processor. This predictive distribution is then used to derive the corresponding additive error model, which is employed for hydrological parameter estimation with the Bayesian Joint Inference methodology.

  3. Peelle's pertinent puzzle using the Monte Carlo technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kawano, Toshihiko; Talou, Patrick; Burr, Thomas

    2009-01-01

    We try to understand the long-standing problem of Peelle's Pertinent Puzzle (PPP) using the Monte Carlo technique. We allow the probability density functions to take any form in order to assess the impact of the assumed distribution, and obtain the least-squares solution directly from numerical simulations. We found that the standard least-squares method gives the correct answer if a weighting function is properly provided. Results from numerical simulations show that the correct answer to PPP is 1.1 ± 0.25 if the common error is multiplicative. The thought-provoking answer of 0.88 is also correct if the common error is additive and proportional to the measured values. The least-squares method correctly gives us the most probable case, where the additive component has a negative value. Finally, the standard method fails for PPP due to a distorted (non-Gaussian) joint distribution.
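
    The numbers usually attached to the puzzle (not listed in the abstract, so assumed here) are two measurements of 1.5 and 1.0, each with a 10% statistical error and a fully correlated 20% normalization error. The sketch below shows how the least-squares answer depends on the weighting: building the covariance from the measured values reproduces the puzzling 0.88, while letting the common error act on a single fitted scale moves the combination up into the 1.1 ± 0.25 region quoted above.

    ```python
    import numpy as np

    # Textbook version of Peelle's Pertinent Puzzle (values assumed; the abstract
    # does not list them): two measurements of the same quantity with 10%
    # statistical errors and a fully correlated 20% normalization error.
    x = np.array([1.5, 1.0])
    stat = 0.10 * x                      # independent statistical errors

    def gls(x, V):
        """Generalized least-squares combination of correlated measurements."""
        w = np.linalg.solve(V, np.ones_like(x))
        w /= w.sum()
        return float(w @ x)

    # (a) Common error proportional to the *measured* values: the classic
    #     covariance that produces the puzzling answer of about 0.88.
    norm_meas = 0.20 * x
    V_meas = np.diag(stat**2) + np.outer(norm_meas, norm_meas)
    print("common error on measured values:", round(gls(x, V_meas), 3))   # ~0.88

    # (b) Common error acting on a single common scale (iterated on the fitted
    #     value): the combination lands between the two measurements, in the
    #     region of the 1.1 +/- 0.25 answer quoted for the multiplicative case.
    t = x.mean()
    for _ in range(50):                  # fixed-point iteration on the fitted value
        V_true = np.diag(stat**2) + (0.20 * t) ** 2 * np.ones((2, 2))
        t = gls(x, V_true)
    print("common error on fitted value:   ", round(t, 3))                 # ~1.15
    ```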

  4. Statistics of the relative velocity of particles in turbulent flows: Monodisperse particles.

    PubMed

    Bhatnagar, Akshay; Gustavsson, K; Mitra, Dhrubaditya

    2018-02-01

    We use direct numerical simulations to calculate the joint probability density function of the relative distance R and relative radial velocity component V_{R} for a pair of heavy inertial particles suspended in homogeneous and isotropic turbulent flows. At small scales the distribution is scale invariant, with a scaling exponent that is related to the particle-particle correlation dimension in phase space, D_{2}. It was argued [K. Gustavsson and B. Mehlig, Phys. Rev. E 84, 045304 (2011), 10.1103/PhysRevE.84.045304; J. Turbul. 15, 34 (2014), 10.1080/14685248.2013.875188] that the scale invariant part of the distribution has two asymptotic regimes: (1) |V_{R}|≪R, where the distribution depends solely on R, and (2) |V_{R}|≫R, where the distribution is a function of |V_{R}| alone. The probability distributions in these two regimes are matched along a straight line: |V_{R}|=z^{*}R. Our simulations confirm that this is indeed correct. We further obtain D_{2} and z^{*} as a function of the Stokes number, St. The former depends nonmonotonically on St with a minimum at about St≈0.7 and the latter has only a weak dependence on St.

  5. Statistics of the relative velocity of particles in turbulent flows: Monodisperse particles

    NASA Astrophysics Data System (ADS)

    Bhatnagar, Akshay; Gustavsson, K.; Mitra, Dhrubaditya

    2018-02-01

    We use direct numerical simulations to calculate the joint probability density function of the relative distance R and relative radial velocity component VR for a pair of heavy inertial particles suspended in homogeneous and isotropic turbulent flows. At small scales the distribution is scale invariant, with a scaling exponent that is related to the particle-particle correlation dimension in phase space, D2. It was argued [K. Gustavsson and B. Mehlig, Phys. Rev. E 84, 045304 (2011), 10.1103/PhysRevE.84.045304; J. Turbul. 15, 34 (2014), 10.1080/14685248.2013.875188] that the scale invariant part of the distribution has two asymptotic regimes: (1) |VR| ≪ R, where the distribution depends solely on R, and (2) |VR| ≫ R, where the distribution is a function of |VR| alone. The probability distributions in these two regimes are matched along a straight line: |VR| = z*R. Our simulations confirm that this is indeed correct. We further obtain D2 and z* as a function of the Stokes number, St. The former depends nonmonotonically on St with a minimum at about St ≈ 0.7 and the latter has only a weak dependence on St.

  6. Statistics of Stokes variables for correlated Gaussian fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eliyahu, D.

    1994-09-01

    The joint and marginal probability distribution functions of the Stokes variables are derived for correlated Gaussian fields [an extension of D. Eliyahu, Phys. Rev. E 47, 2881 (1993)]. The statistics depend only on the first-moment (averaged) Stokes variables and have a universal form for S₁, S₂, and S₃. The statistics of the variables describing the Cartesian coordinates of the Poincaré sphere are also given.

  7. Joint Probability Analysis of Extreme Precipitation and Storm Tide in a Coastal City under Changing Environment

    PubMed Central

    Xu, Kui; Ma, Chao; Lian, Jijian; Bin, Lingling

    2014-01-01

    Catastrophic flooding resulting from extreme meteorological events has occurred more frequently and drawn great attention in recent years in China. In coastal areas, extreme precipitation and storm tide are both inducing factors of flooding, and therefore their joint probability would be critical for determining the flooding risk. The impact of storm tide or of a changing environment on flooding is ignored or underestimated in the present-day design of drainage systems in coastal areas of China. This paper investigates the joint probability of extreme precipitation and storm tide and its change using copula-based models in Fuzhou City. The change point at the year 1984, detected by the Mann-Kendall and Pettitt's tests, divides the extreme precipitation series into two subsequences. For each subsequence the probability of the joint behavior of extreme precipitation and storm tide is estimated by the optimal copula. Results show that the joint probability has increased by more than 300% on average after 1984 (α = 0.05). The design joint return period (RP) of extreme precipitation and storm tide is estimated to propose a design standard for future flooding preparedness. For a combination of extreme precipitation and storm tide, the design joint RP has become smaller than before. This implies that flooding would happen more often after 1984, which corresponds with the observation. The study would facilitate understanding of the change in flood risk and proposing adaptation measures for coastal areas under a changing environment. PMID:25310006

  8. Joint probability analysis of extreme precipitation and storm tide in a coastal city under changing environment.

    PubMed

    Xu, Kui; Ma, Chao; Lian, Jijian; Bin, Lingling

    2014-01-01

    Catastrophic flooding resulting from extreme meteorological events has occurred more frequently and drawn great attention in recent years in China. In coastal areas, extreme precipitation and storm tide are both inducing factors of flooding, and therefore their joint probability would be critical for determining the flooding risk. The impact of storm tide or of a changing environment on flooding is ignored or underestimated in the present-day design of drainage systems in coastal areas of China. This paper investigates the joint probability of extreme precipitation and storm tide and its change using copula-based models in Fuzhou City. The change point at the year 1984, detected by the Mann-Kendall and Pettitt's tests, divides the extreme precipitation series into two subsequences. For each subsequence the probability of the joint behavior of extreme precipitation and storm tide is estimated by the optimal copula. Results show that the joint probability has increased by more than 300% on average after 1984 (α = 0.05). The design joint return period (RP) of extreme precipitation and storm tide is estimated to propose a design standard for future flooding preparedness. For a combination of extreme precipitation and storm tide, the design joint RP has become smaller than before. This implies that flooding would happen more often after 1984, which corresponds with the observation. The study would facilitate understanding of the change in flood risk and proposing adaptation measures for coastal areas under a changing environment.
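
    A minimal sketch of the copula-based "AND" joint return period used in this kind of analysis is given below. The marginal distributions, the Gumbel-Hougaard dependence parameter and the event thresholds are made-up placeholders, not the values estimated for Fuzhou.

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical marginals for annual maxima of precipitation (P, mm) and
    # storm tide (Z, m); parameter values are invented for illustration.
    gev_P = stats.genextreme(c=-0.1, loc=120.0, scale=40.0)
    gev_Z = stats.genextreme(c=-0.05, loc=3.0, scale=0.5)
    theta = 2.0   # Gumbel-Hougaard dependence parameter (assumed)

    def gumbel_copula(u, v, theta):
        """Gumbel-Hougaard copula C(u, v)."""
        return np.exp(-(((-np.log(u)) ** theta + (-np.log(v)) ** theta) ** (1.0 / theta)))

    def joint_return_period_and(p_val, z_val, mu=1.0):
        """Return period (years) of the event {P > p_val AND Z > z_val}.

        T = mu / P(P > p, Z > z), with the exceedance probability from the copula:
        P(P > p, Z > z) = 1 - u - v + C(u, v).
        """
        u, v = gev_P.cdf(p_val), gev_Z.cdf(z_val)
        p_exceed = 1.0 - u - v + gumbel_copula(u, v, theta)
        return mu / p_exceed

    # Joint RP of a heavy-rain plus high-tide event (thresholds are illustrative)
    print(joint_return_period_and(250.0, 4.5))
    ```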

  9. Meaner king uses biased bases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reimpell, Michael; Werner, Reinhard F.

    2007-06-15

    The mean king problem is a quantum mechanical retrodiction problem, in which Alice has to name the outcome of an ideal measurement made in one of several different orthonormal bases. Alice is allowed to prepare the state of the system and to do a final measurement, possibly including an entangled copy. However, Alice gains knowledge about which basis was measured only after she no longer has access to the quantum system or its copy. We give a necessary and sufficient condition on the bases, for Alice to have a strategy to solve this problem, without assuming that the bases are mutually unbiased. The condition requires the existence of an overall joint probability distribution for random variables, whose marginal pair distributions are fixed as the transition probability matrices of the given bases. In particular, in the qubit case the problem is decided by Bell's original three variable inequality. In the standard setting of mutually unbiased bases, when they do exist, Alice can always succeed. However, for randomly chosen bases her success probability rapidly goes to zero with increasing dimension.

  10. Meaner king uses biased bases

    NASA Astrophysics Data System (ADS)

    Reimpell, Michael; Werner, Reinhard F.

    2007-06-01

    The mean king problem is a quantum mechanical retrodiction problem, in which Alice has to name the outcome of an ideal measurement made in one of several different orthonormal bases. Alice is allowed to prepare the state of the system and to do a final measurement, possibly including an entangled copy. However, Alice gains knowledge about which basis was measured only after she no longer has access to the quantum system or its copy. We give a necessary and sufficient condition on the bases, for Alice to have a strategy to solve this problem, without assuming that the bases are mutually unbiased. The condition requires the existence of an overall joint probability distribution for random variables, whose marginal pair distributions are fixed as the transition probability matrices of the given bases. In particular, in the qubit case the problem is decided by Bell’s original three variable inequality. In the standard setting of mutually unbiased bases, when they do exist, Alice can always succeed. However, for randomly chosen bases her success probability rapidly goes to zero with increasing dimension.

  11. Non-renewal statistics for electron transport in a molecular junction with electron-vibration interaction

    NASA Astrophysics Data System (ADS)

    Kosov, Daniel S.

    2017-09-01

    Quantum transport of electrons through a molecule is a series of individual electron tunneling events separated by stochastic waiting time intervals. We study the emergence of temporal correlations between successive waiting times for electron transport in a vibrating molecular junction. Using the master equation approach, we compute the joint probability distribution for waiting times of two successive tunneling events. We show that the probability distribution is completely reset after each tunneling event if molecular vibrations are thermally equilibrated. If we treat vibrational dynamics exactly without imposing the equilibration constraint, the statistics of electron tunneling events become non-renewal. Non-renewal statistics between two waiting times τ1 and τ2 means that the density matrix of the molecule is not fully renewed after time τ1 and the probability of observing waiting time τ2 for the second electron transfer depends on the previous electron waiting time τ1. Strong electron-vibration coupling is required for the emergence of non-renewal statistics. We show that in the Franck-Condon blockade regime, extremely rare tunneling events become positively correlated.

  12. Product Distribution Theory for Control of Multi-Agent Systems

    NASA Technical Reports Server (NTRS)

    Lee, Chia Fan; Wolpert, David H.

    2004-01-01

    Product Distribution (PD) theory is a new framework for controlling Multi-Agent Systems (MAS's). First we review one motivation of PD theory, as the information-theoretic extension of conventional full-rationality game theory to the case of bounded rational agents. In this extension the equilibrium of the game is the optimizer of a Lagrangian of the probability distribution of the joint state of the agents. Accordingly we can consider a team game in which the shared utility is a performance measure of the behavior of the MAS. For such a scenario the game is at equilibrium - the Lagrangian is optimized - when the joint distribution of the agents optimizes the system's expected performance. One common way to find that equilibrium is to have each agent run a reinforcement learning algorithm. Here we investigate the alternative of exploiting PD theory to run gradient descent on the Lagrangian. We present computer experiments validating some of the predictions of PD theory for how best to do that gradient descent. We also demonstrate how PD theory can improve performance even when we are not allowed to rerun the MAS from different initial conditions, a requirement implicit in some previous work.
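
    A minimal sketch of gradient descent on a product-distribution Lagrangian is given below, using the common "expected cost minus temperature times entropy" form. The cost matrix, temperature and learning rate are arbitrary illustrative choices, not the settings used in the paper's experiments.

    ```python
    import numpy as np

    # Two bounded-rational agents with 4 actions each; q(a1, a2) = q1(a1) * q2(a2).
    rng = np.random.default_rng(0)
    G = rng.normal(size=(4, 4))          # shared team cost G(a1, a2) to be minimized
    T = 0.5                               # temperature (degree of bounded rationality)

    theta1 = np.zeros(4)                  # softmax parameters of agent 1
    theta2 = np.zeros(4)                  # softmax parameters of agent 2

    def softmax(t):
        e = np.exp(t - t.max())
        return e / e.sum()

    def lagrangian(q1, q2):
        expected_cost = q1 @ G @ q2
        entropy = -(q1 @ np.log(q1)) - (q2 @ np.log(q2))
        return expected_cost - T * entropy

    lr = 0.3
    for _ in range(2000):                 # plain gradient descent on the Lagrangian
        q1, q2 = softmax(theta1), softmax(theta2)
        dL_dq1 = G @ q2 + T * (np.log(q1) + 1.0)
        dL_dq2 = G.T @ q1 + T * (np.log(q2) + 1.0)
        J1 = np.diag(q1) - np.outer(q1, q1)      # Jacobian of the softmax map
        J2 = np.diag(q2) - np.outer(q2, q2)
        theta1 -= lr * (J1 @ dL_dq1)
        theta2 -= lr * (J2 @ dL_dq2)

    q1, q2 = softmax(theta1), softmax(theta2)
    print("equilibrium product distribution:", np.round(q1, 3), np.round(q2, 3))
    print("Lagrangian value:", round(lagrangian(q1, q2), 4))
    ```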

  13. Impact of communities, health, and emotional-related factors on smoking use: comparison of joint modeling of mean and dispersion and Bayes' hierarchical models on add health survey.

    PubMed

    Pu, Jie; Fang, Di; Wilson, Jeffrey R

    2017-02-03

    The analysis of correlated binary data is commonly addressed through the use of conditional models with random effects included in the systematic component, as opposed to generalized estimating equations (GEE) models that address the random component. Since the joint distribution of the observations is usually unknown, the conditional distribution is a natural approach. Our objective was to compare the fit of different binary models for correlated data on tobacco use. We advocate that the joint modeling of the mean and dispersion may at times be just as adequate. We assessed the ability of these models to account for the intraclass correlation. In so doing, we concentrated on fitting logistic regression models to address smoking behaviors. Frequentist and Bayes' hierarchical models were used to predict conditional probabilities, and the joint modeling (GLM and GAM) models were used to predict marginal probabilities. These models were fitted to National Longitudinal Study of Adolescent to Adult Health (Add Health) data on tobacco use. We found that people were less likely to smoke if they had higher income, had high school or higher education, and were religious. Individuals were more likely to smoke if they had abused drugs or alcohol, spent more time on TV and video games, or had been arrested. Moreover, individuals who drank alcohol early in life were more likely to be regular smokers. Children who experienced mistreatment from their parents were more likely to use tobacco regularly. The joint modeling of the mean and dispersion models offered a flexible and meaningful method of addressing the intraclass correlation. It does not require one to identify random effects nor to distinguish one level of the hierarchy from another. Moreover, once the significant random effects are identified, one can obtain results similar to the random coefficient models. We found that the set of marginal models accounting for extra variation through the additional dispersion submodel produced similar results with regard to inferences and predictions. Moreover, both marginal and conditional models demonstrated similar predictive power.

  14. Joint time/frequency-domain inversion of reflection data for seabed geoacoustic profiles and uncertainties.

    PubMed

    Dettmer, Jan; Dosso, Stan E; Holland, Charles W

    2008-03-01

    This paper develops a joint time/frequency-domain inversion for high-resolution single-bounce reflection data, with the potential to resolve fine-scale profiles of sediment velocity, density, and attenuation over small seafloor footprints (approximately 100 m). The approach utilizes sequential Bayesian inversion of time- and frequency-domain reflection data, employing ray-tracing inversion for reflection travel times and a layer-packet stripping method for spherical-wave reflection-coefficient inversion. Posterior credibility intervals from the travel-time inversion are passed on as prior information to the reflection-coefficient inversion. Within the reflection-coefficient inversion, parameter information is passed from one layer packet inversion to the next in terms of marginal probability distributions rotated into principal components, providing an efficient approach to (partially) account for multi-dimensional parameter correlations with one-dimensional, numerical distributions. Quantitative geoacoustic parameter uncertainties are provided by a nonlinear Gibbs sampling approach employing full data error covariance estimation (including nonstationary effects) and accounting for possible biases in travel-time picks. Posterior examination of data residuals shows the importance of including data covariance estimates in the inversion. The joint inversion is applied to data collected on the Malta Plateau during the SCARAB98 experiment.

  15. The Contextuality Loophole is Fatal for the Derivation of Bell Inequalities: Reply to a Comment by I. Schmelzer

    NASA Astrophysics Data System (ADS)

    Nieuwenhuizen, Theodorus M.; Kupczynski, Marian

    2017-02-01

    Ilya Schmelzer recently wrote: Nieuwenhuizen argued that there exists some "contextuality loophole" in Bell's theorem. This claim is unjustified. It is made clear that this arose from attaching a meaning to the title and the content of the paper different from the one intended by Nieuwenhuizen. "Contextuality loophole" means only that if the supplementary parameters describing measuring instruments are correctly introduced, Bell and Bell-type inequalities may not be proven. It is also stressed that a hidden variable model suffers from a "contextuality loophole" if it tries to describe different sets of incompatible experiments using a unique probability space and a unique joint probability distribution.

  16. A New Approach in Generating Meteorological Forecasts for Ensemble Streamflow Forecasting using Multivariate Functions

    NASA Astrophysics Data System (ADS)

    Khajehei, S.; Madadgar, S.; Moradkhani, H.

    2014-12-01

    The reliability and accuracy of hydrological predictions are subject to various sources of uncertainty, including meteorological forcing, initial conditions, model parameters and model structure. To reduce the total uncertainty in hydrological applications, one approach is to reduce the uncertainty in meteorological forcing by using statistical methods based on conditional probability density functions (pdfs). However, one of the requirements of current methods is to assume a Gaussian distribution for the marginal distributions of the observed and modeled meteorology. Here we propose a Bayesian approach based on copula functions to develop the conditional distribution of precipitation forecasts needed for driving a hydrologic model for a sub-basin in the Columbia River Basin. Copula functions are introduced as an alternative approach for capturing the uncertainties related to meteorological forcing. Copulas are multivariate joint distributions constructed from univariate marginal distributions, which are capable of modeling the joint behavior of variables with any level of correlation and dependency. The method is applied to the monthly CPC forecast with 0.25x0.25 degree resolution to reproduce the PRISM dataset over 1970-2000. Results are compared with the Ensemble Pre-Processor approach, a common procedure used by National Weather Service River Forecast Centers, in reproducing the observed climatology during a ten-year verification period (2000-2010).

  17. Optimized Vertex Method and Hybrid Reliability

    NASA Technical Reports Server (NTRS)

    Smith, Steven A.; Krishnamurthy, T.; Mason, B. H.

    2002-01-01

    A method of calculating the fuzzy response of a system is presented. This method, called the Optimized Vertex Method (OVM), is based upon the vertex method but requires considerably fewer function evaluations. The method is demonstrated by calculating the response membership function of strain-energy release rate for a bonded joint with a crack. The possibility of failure of the bonded joint was determined over a range of loads. After completing the possibilistic analysis, the possibilistic (fuzzy) membership functions were transformed to probability density functions and the probability of failure of the bonded joint was calculated. This approach is called a possibility-based hybrid reliability assessment. The possibility and probability of failure are presented and compared to a Monte Carlo Simulation (MCS) of the bonded joint.

  18. A stochastic diffusion process for Lochner's generalized Dirichlet distribution

    DOE PAGES

    Bakosi, J.; Ristorcelli, J. R.

    2013-10-01

    The method of potential solutions of Fokker-Planck equations is used to develop a transport equation for the joint probability of N stochastic variables with Lochner's generalized Dirichlet distribution as its asymptotic solution. Individual samples of a discrete ensemble, obtained from the system of stochastic differential equations equivalent to the Fokker-Planck equation developed here, satisfy a unit-sum constraint at all times and ensure a bounded sample space, similarly to the process developed earlier for the Dirichlet distribution. Consequently, the generalized Dirichlet diffusion process may be used to represent realizations of a fluctuating ensemble of N variables subject to a conservation principle. Compared to the Dirichlet distribution and process, the additional parameters of the generalized Dirichlet distribution allow a more general class of physical processes to be modeled with a more general covariance matrix.

  19. Modeling Multiple Risks: Hidden Domain of Attraction

    DTIC Science & Technology

    2012-01-01

    improve joint tail probability approximation but the deficiency can be remedied by a more general approach which we call hidden domain of attraction (HDA)... HRV is a special case of HDA. If the distribution of X does not have MRV but (1.2) still holds, we may retrieve the MRV setup by transforming the... potential advantage in some circumstances of the notion of HDA is that it does not require that we transform components. Performing such transformations on

  20. Experimental non-classicality of an indivisible quantum system.

    PubMed

    Lapkiewicz, Radek; Li, Peizhe; Schaeff, Christoph; Langford, Nathan K; Ramelow, Sven; Wieśniak, Marcin; Zeilinger, Anton

    2011-06-22

    In contrast to classical physics, quantum theory demands that not all properties can be simultaneously well defined; the Heisenberg uncertainty principle is a manifestation of this fact. Alternatives have been explored--notably theories relying on joint probability distributions or non-contextual hidden-variable models, in which the properties of a system are defined independently of their own measurement and any other measurements that are made. Various deep theoretical results imply that such theories are in conflict with quantum mechanics. Simpler cases demonstrating this conflict have been found and tested experimentally with pairs of quantum bits (qubits). Recently, an inequality satisfied by non-contextual hidden-variable models and violated by quantum mechanics for all states of two qubits was introduced and tested experimentally. A single three-state system (a qutrit) is the simplest system in which such a contradiction is possible; moreover, the contradiction cannot result from entanglement between subsystems, because such a three-state system is indivisible. Here we report an experiment with single photonic qutrits which provides evidence that no joint probability distribution describing the outcomes of all possible measurements--and, therefore, no non-contextual theory--can exist. Specifically, we observe a violation of the Bell-type inequality found by Klyachko, Can, Binicioğlu and Shumovsky. Our results illustrate a deep incompatibility between quantum mechanics and classical physics that cannot in any way result from entanglement.

  1. Hydrologic risk analysis in the Yangtze River basin through coupling Gaussian mixtures into copulas

    NASA Astrophysics Data System (ADS)

    Fan, Y. R.; Huang, W. W.; Huang, G. H.; Li, Y. P.; Huang, K.; Li, Z.

    2016-02-01

    In this study, a bivariate hydrologic risk framework is proposed through coupling Gaussian mixtures into copulas, leading to a coupled GMM-copula method. In the coupled GMM-copula method, the marginal distributions of flood peak, volume and duration are quantified through Gaussian mixture models, and the joint probability distributions of flood peak-volume, peak-duration and volume-duration are established through copulas. The bivariate hydrologic risk is then derived based on the joint return period of flood variable pairs. The proposed method is applied to the risk analysis for the Yichang station on the main stream of the Yangtze River, China. The results indicate that (i) the bivariate risk for flood peak-volume would remain constant for flood volumes less than 1.0 × 10⁵ m³/s·day, but present a significant decreasing trend for flood volumes larger than 1.7 × 10⁵ m³/s·day; and (ii) the bivariate risk for flood peak-duration would not change significantly for flood durations less than 8 days, and then decrease significantly as the duration becomes larger. The probability density functions (pdfs) of flood volume and duration conditional on flood peak can also be generated through the fitted copulas. The results indicate that the conditional pdfs of flood volume and duration follow bimodal distributions, with the occurrence frequency of the first mode decreasing and that of the second mode increasing as the flood peak increases. The conclusions obtained from the bivariate hydrologic analysis can provide decision support for flood control and mitigation.

  2. Improving Constraints on Climate System Properties withAdditional Data and New Statistical and Sampling Methods

    NASA Astrophysics Data System (ADS)

    Forest, C. E.; Libardoni, A. G.; Sokolov, A. P.; Monier, E.

    2017-12-01

    We use the updated MIT Earth System Model (MESM) to derive the joint probability distribution function for Equilibrium Climate Sensitivity (S), an effective heat diffusivity (Kv), and the net aerosol forcing (Faer). Using a new 1800-member ensemble of MESM runs, we derive PDFs by comparing model outputs against historical observations of surface temperature and global mean ocean heat content. We focus on how changes in (i) the MESM model, (ii) recent surface temperature and ocean heat content observations, and (iii) estimates of internal climate variability will all contribute to uncertainties. We show that estimates of S increase and Faer is less negative. These shifts result partly from new model forcing inputs but also from including recent temperature records that lead to higher values of S and Kv. We show that the parameter distributions are sensitive to the internal variability in the climate system. When considering these factors, we derive our best estimate for the joint probability distribution of the climate system properties. We estimate the 90-percent confidence intervals for climate sensitivity as 2.7-5.4 °C with a mode of 3.5 °C, for Kv as 1.9-23.0 cm² s⁻¹ with a mode of 4.41 cm² s⁻¹, and for Faer as -0.4 to -0.04 W m⁻² with a mode of -0.25 W m⁻². Lastly, we estimate TCR to be between 1.4 and 2.1 °C with a mode of 1.8 °C.

  3. Approximate Uncertainty Modeling in Risk Analysis with Vine Copulas

    PubMed Central

    Bedford, Tim; Daneshkhah, Alireza

    2015-01-01

    Many applications of risk analysis require us to jointly model multiple uncertain quantities. Bayesian networks and copulas are two common approaches to modeling joint uncertainties with probability distributions. This article focuses on new methodologies for copulas by developing the work of Cooke, Bedford, Kurowicka, and others on vines as a way of constructing higher dimensional distributions that do not suffer from some of the restrictions of alternatives such as the multivariate Gaussian copula. The article provides a fundamental approximation result, demonstrating that we can approximate any density as closely as we like using vines. It further operationalizes this result by showing how minimum information copulas can be used to provide parametric classes of copulas that have such good levels of approximation. We extend previous approaches using vines by considering nonconstant conditional dependencies, which are particularly relevant in financial risk modeling. We discuss how such models may be quantified, in terms of expert judgment or by fitting data, and illustrate the approach by modeling two financial data sets. PMID:26332240

  4. Conditional maximum-entropy method for selecting prior distributions in Bayesian statistics

    NASA Astrophysics Data System (ADS)

    Abe, Sumiyoshi

    2014-11-01

    The conditional maximum-entropy method (abbreviated here as C-MaxEnt) is formulated for selecting prior probability distributions in Bayesian statistics for parameter estimation. This method is inspired by a statistical-mechanical approach to systems governed by dynamics with largely separated time scales and is based on three key concepts: conjugate pairs of variables, dimensionless integration measures with coarse-graining factors and partial maximization of the joint entropy. The method enables one to calculate a prior purely from a likelihood in a simple way. It is shown, in particular, how it not only yields Jeffreys's rules but also reveals new structures hidden behind them.

  5. Distribution of G concurrence of random pure states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cappellini, Valerio; Sommers, Hans-Juergen; Zyczkowski, Karol

    2006-12-15

    The average entanglement of random pure states of an N×N composite system is analyzed. We compute the average value of the determinant D of the reduced state, which forms an entanglement monotone. Calculating higher moments of the determinant, we characterize the probability distribution P(D). Similar results are obtained for the rescaled Nth root of the determinant, called the G concurrence. We show that in the limit N → ∞ this quantity becomes concentrated at a single point G* = 1/e. The position of the concentration point changes if one considers an arbitrary N×K bipartite system, in the joint limit N, K → ∞ with K/N fixed.
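
    The concentration of the G concurrence can be checked numerically. The sketch below samples random pure states of an N×N system, computes G = N·(det ρ)^(1/N) from the reduced state ρ, and shows the sample mean approaching 1/e as N grows; the sampling scheme (complex Gaussian amplitudes, then normalization) is the standard unitarily invariant choice and is assumed here.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def g_concurrence(N):
        # random pure state of an N x N system: complex Gaussian amplitudes, normalized
        psi = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
        psi /= np.linalg.norm(psi)
        rho = psi @ psi.conj().T                  # reduced state of the first subsystem
        eigs = np.linalg.eigvalsh(rho)
        eigs = np.clip(eigs, 1e-300, None)        # guard against tiny negative round-off
        return N * np.exp(np.mean(np.log(eigs)))  # N * (det rho)^(1/N)

    for N in (2, 4, 16, 64):
        samples = [g_concurrence(N) for _ in range(200)]
        print(N, round(np.mean(samples), 3), round(np.std(samples), 3))
    print("1/e =", round(1 / np.e, 3))            # concentration point for large N
    ```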

  6. Role of beach morphology in wave overtopping hazard assessment

    NASA Astrophysics Data System (ADS)

    Phillips, Benjamin; Brown, Jennifer; Bidlot, Jean-Raymond; Plater, Andrew

    2017-04-01

    Understanding the role of beach morphology in controlling wave overtopping volume will further minimise uncertainties in flood risk assessments at coastal locations defended by engineered structures worldwide. XBeach is used to model wave overtopping volume for a 1:200 yr joint probability distribution of waves and water levels with measured, pre- and post-storm beach profiles. The simulation with measured bathymetry is repeated with and without morphological evolution enabled during the modelled storm event. This research assesses the role of morphology in controlling wave overtopping volumes for hazardous events that meet the typical design level of coastal defence structures. Results show that disabling storm-driven morphology under-represents modelled wave overtopping volumes by up to 39% under high Hs conditions, and has a greater impact on the wave overtopping rate than the variability applied within the boundary conditions due to the range of wave-water level combinations that meet the 1:200 yr joint probability criterion. Accounting for morphology in flood modelling is therefore critical for accurately predicting wave overtopping volumes and the resulting flood hazard, and for assessing economic losses.

  7. A framework for conducting mechanistic based reliability assessments of components operating in complex systems

    NASA Astrophysics Data System (ADS)

    Wallace, Jon Michael

    2003-10-01

    Reliability prediction of components operating in complex systems has historically been conducted in a statistically isolated manner. Current physics-based, i.e. mechanistic, component reliability approaches focus more on component-specific attributes and mathematical algorithms and not enough on the influence of the system. The result is that significant error can be introduced into the component reliability assessment process. The objective of this study is the development of a framework that infuses the needs and influence of the system into the process of conducting mechanistic-based component reliability assessments. The formulated framework consists of six primary steps. The first three steps, identification, decomposition, and synthesis, are primarily qualitative in nature and employ system reliability and safety engineering principles to construct an appropriate starting point for the component reliability assessment. The following two steps are the most unique. They involve a step to efficiently characterize and quantify the system-driven local parameter space and a subsequent step using this information to guide the reduction of the component parameter space. The local statistical space quantification step is accomplished using two proposed multivariate probability models: Multi-Response First Order Second Moment and Taylor-Based Inverse Transformation. Where existing joint probability models require preliminary distribution and correlation information of the responses, these models combine statistical information of the input parameters with an efficient sampling of the response analyses to produce the multi-response joint probability distribution. Parameter space reduction is accomplished using Approximate Canonical Correlation Analysis (ACCA) employed as a multi-response screening technique. The novelty of this approach is that each individual local parameter and even subsets of parameters representing entire contributing analyses can now be rank ordered with respect to their contribution to not just one response, but the entire vector of component responses simultaneously. The final step of the framework is the actual probabilistic assessment of the component. Although the same multivariate probability tools employed in the characterization step can be used for the component probability assessment, variations of this final step are given to allow for the utilization of existing probabilistic methods such as response surface Monte Carlo and Fast Probability Integration. The overall framework developed in this study is implemented to assess the finite-element based reliability prediction of a gas turbine airfoil involving several failure responses. Results of this implementation are compared to results generated using the conventional 'isolated' approach as well as a validation approach conducted through large sample Monte Carlo simulations. The framework resulted in a considerable improvement to the accuracy of the part reliability assessment and an improved understanding of the component failure behavior. Considerable statistical complexity in the form of joint non-normal behavior was found and accounted for using the framework. Future applications of the framework elements are discussed.

  8. Stochastic optimal operation of reservoirs based on copula functions

    NASA Astrophysics Data System (ADS)

    Lei, Xiao-hui; Tan, Qiao-feng; Wang, Xu; Wang, Hao; Wen, Xin; Wang, Chao; Zhang, Jing-wen

    2018-02-01

    Stochastic dynamic programming (SDP) has been widely used to derive operating policies for reservoirs considering streamflow uncertainties. In SDP, there is a need to calculate the transition probability matrix more accurately and efficiently in order to improve the economic benefit of reservoir operation. In this study, we proposed a stochastic optimization model for hydropower generation reservoirs, in which 1) the transition probability matrix was calculated based on copula functions; and 2) the value function of the last period was calculated by stepwise iteration. Firstly, the marginal distribution of stochastic inflow in each period was built and the joint distributions of adjacent periods were obtained using three members of the Archimedean copula family, based on which the conditional probability formula was derived. Then, the value in the last period was calculated by a simple recursive equation with the proposed stepwise iteration method and the value function was fitted with a linear regression model. These improvements were incorporated into the classic SDP and applied to the case study of the Ertan reservoir, China. The results show that the transition probability matrix can be obtained more easily and accurately by the proposed copula-based method than by conventional methods based on observed or synthetic streamflow series, and the reservoir operation benefit can also be increased.

  9. Probabilistic graphs as a conceptual and computational tool in hydrology and water management

    NASA Astrophysics Data System (ADS)

    Schoups, Gerrit

    2014-05-01

    Originally developed in the fields of machine learning and artificial intelligence, probabilistic graphs constitute a general framework for modeling complex systems in the presence of uncertainty. The framework consists of three components: 1. Representation of the model as a graph (or network), with nodes depicting random variables in the model (e.g. parameters, states, etc), which are joined together by factors. Factors are local probabilistic or deterministic relations between subsets of variables, which, when multiplied together, yield the joint distribution over all variables. 2. Consistent use of probability theory for quantifying uncertainty, relying on basic rules of probability for assimilating data into the model and expressing unknown variables as a function of observations (via the posterior distribution). 3. Efficient, distributed approximation of the posterior distribution using general-purpose algorithms that exploit model structure encoded in the graph. These attributes make probabilistic graphs potentially useful as a conceptual and computational tool in hydrology and water management (and beyond). Conceptually, they can provide a common framework for existing and new probabilistic modeling approaches (e.g. by drawing inspiration from other fields of application), while computationally they can make probabilistic inference feasible in larger hydrological models. The presentation explores, via examples, some of these benefits.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, Kyri; Toomey, Bridget

    Evolving power systems with increasing levels of stochasticity call for a need to solve optimal power flow problems with large quantities of random variables. Weather forecasts, electricity prices, and shifting load patterns introduce higher levels of uncertainty and can yield optimization problems that are difficult to solve in an efficient manner. Solution methods for single chance constraints in optimal power flow problems have been considered in the literature, ensuring single constraints are satisfied with a prescribed probability; however, joint chance constraints, ensuring multiple constraints are simultaneously satisfied, have predominantly been solved via scenario-based approaches or by utilizing Boole's inequality as an upper bound. In this paper, joint chance constraints are used to solve an AC optimal power flow problem while preventing overvoltages in distribution grids under high penetrations of photovoltaic systems. A tighter version of Boole's inequality is derived and used to provide a new upper bound on the joint chance constraint, and simulation results are shown demonstrating the benefit of the proposed upper bound. The new framework allows for a less conservative and more computationally efficient solution to considering joint chance constraints, specifically regarding preventing overvoltages.
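
    The standard Boole (union-bound) treatment that the paper improves upon can be sketched as follows: splitting the allowed joint violation probability ε equally across the m individual constraints guarantees the joint chance constraint, at the cost of conservatism. The Gaussian voltage model and all numbers below are hypothetical and unrelated to the paper's AC power-flow formulation.

    ```python
    import numpy as np
    from scipy import stats

    eps = 0.05                      # allowed joint violation probability
    m = 4                           # number of nodal voltage constraints
    v_max = 1.05                    # per-unit upper voltage limit

    # Hypothetical forecast voltages and uncertainties from PV fluctuations (p.u.)
    mu = np.array([1.01, 1.00, 1.02, 1.00])
    sigma = np.array([0.015, 0.020, 0.012, 0.018])

    # Boole allocation: require P(V_i > v_max) <= eps/m for each node, i.e.
    # mu_i + z_{1 - eps/m} * sigma_i <= v_max.
    z = stats.norm.ppf(1.0 - eps / m)
    margins = v_max - (mu + z * sigma)
    print("per-constraint margins (>= 0 means satisfied):", np.round(margins, 4))

    # Monte Carlo check: the joint satisfaction probability exceeds 1 - eps,
    # illustrating the conservativeness of the union bound.
    rng = np.random.default_rng(0)
    V = rng.normal(mu, sigma, size=(200_000, m))
    print("joint satisfaction probability:", np.mean((V <= v_max).all(axis=1)))
    ```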

  11. On the apparent insignificance of the randomness of flexible joints on large space truss dynamics

    NASA Technical Reports Server (NTRS)

    Koch, R. M.; Klosner, J. M.

    1993-01-01

    Deployable periodic large space structures have been shown to exhibit high dynamic sensitivity to period-breaking imperfections and uncertainties. These can be brought on by manufacturing or assembly errors, structural imperfections, as well as nonlinear and/or nonconservative joint behavior. In addition, the necessity of precise pointing and position capability can require the consideration of these usually negligible and unknown parametric uncertainties and their effect on the overall dynamic response of large space structures. This work describes the use of a new design approach for the global dynamic solution of beam-like periodic space structures possessing parametric uncertainties. Specifically, the effect of random flexible joints on the free vibrations of simply-supported periodic large space trusses is considered. The formulation is a hybrid approach in terms of an extended Timoshenko beam continuum model, Monte Carlo simulation scheme, and first-order perturbation methods. The mean and mean-square response statistics for a variety of free random vibration problems are derived for various input random joint stiffness probability distributions. The results of this effort show that, although joint flexibility has a substantial effect on the modal dynamic response of periodic large space trusses, the effect of any reasonable uncertainty or randomness associated with these joint flexibilities is insignificant.

  12. Potential Use of a Bayesian Network for Discriminating Flash Type from Future GOES-R Geostationary Lightning Mapper (GLM) data

    NASA Technical Reports Server (NTRS)

    Solakiewiz, Richard; Koshak, William

    2008-01-01

    Continuous monitoring of the ratio of cloud flashes to ground flashes may provide a better understanding of thunderstorm dynamics, intensification, and evolution, and it may be useful in severe weather warning. The National Lightning Detection Network™ (NLDN) senses ground flashes with exceptional detection efficiency and accuracy over most of the continental United States. A proposed Geostationary Lightning Mapper (GLM) aboard the Geostationary Operational Environmental Satellite (GOES-R) will look at the western hemisphere, and among the lightning data products to be made available will be the fundamental optical flash parameters for both cloud and ground flashes: radiance, area, duration, number of optical groups, and number of optical events. Previous studies have demonstrated that the optical flash parameter statistics of ground and cloud lightning, which are observable from space, are significantly different. This study investigates a Bayesian network methodology for discriminating lightning flash type (ground or cloud) using the lightning optical data and ancillary GOES-R data. A Directed Acyclic Graph (DAG) is set up with lightning as a "root" and data observed by GLM as the "leaves." This allows for a direct calculation of the joint probability distribution function for the lightning type and radiance, area, etc. Initially, the conditional probabilities that will be required can be estimated from the Lightning Imaging Sensor (LIS) and the Optical Transient Detector (OTD) together with NLDN data. Directly manipulating the joint distribution will yield the conditional probability that a lightning flash is a ground flash given the evidence, which consists of the observed lightning optical data [and possibly cloud data retrieved from the GOES-R Advanced Baseline Imager (ABI) in a more mature Bayesian network configuration]. Later, actual GLM and NLDN data can be used to refine the estimates of the conditional probabilities used in the model; i.e., the Bayesian network is a learning network. Methods for efficient calculation of the conditional probabilities (e.g., an algorithm using junction trees), finding data conflicts, goodness of fit, and dealing with missing data will also be addressed.
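
    The basic Bayes-rule step behind the proposed discrimination can be sketched with a toy network in which the optical observables are conditionally independent given the flash type. The prior and the conditional probability tables below are invented placeholders, not values estimated from LIS/OTD or NLDN data.

    ```python
    # Toy Bayesian discrimination of flash type from discretized optical evidence.
    prior = {"ground": 0.25, "cloud": 0.75}          # hypothetical prior P(type)

    # Hypothetical conditional probability tables P(evidence bin | flash type)
    p_radiance = {                                    # radiance in {low, high}
        "ground": {"low": 0.3, "high": 0.7},
        "cloud":  {"low": 0.6, "high": 0.4},
    }
    p_area = {                                        # flash footprint in {small, large}
        "ground": {"small": 0.4, "large": 0.6},
        "cloud":  {"small": 0.7, "large": 0.3},
    }

    def posterior_ground(radiance_bin, area_bin):
        """P(ground | evidence), assuming the observables are conditionally
        independent given the flash type (a naive-Bayes simplification of the DAG)."""
        unnorm = {
            t: prior[t] * p_radiance[t][radiance_bin] * p_area[t][area_bin]
            for t in prior
        }
        return unnorm["ground"] / sum(unnorm.values())

    print(posterior_ground("high", "large"))   # evidence favoring a ground flash
    print(posterior_ground("low", "small"))    # evidence favoring a cloud flash
    ```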

  13. Bivariate categorical data analysis using normal linear conditional multinomial probability model.

    PubMed

    Sun, Bingrui; Sutradhar, Brajendra

    2015-02-10

    Bivariate multinomial data such as the left and right eyes retinopathy status data are analyzed either by using a joint bivariate probability model or by exploiting certain odds ratio-based association models. However, the joint bivariate probability model yields marginal probabilities, which are complicated functions of marginal and association parameters for both variables, and the odds ratio-based association model treats the odds ratios involved in the joint probabilities as 'working' parameters, which are consequently estimated through certain arbitrary 'working' regression models. Also, this latter odds ratio-based model does not provide any easy interpretation of the correlations between the two categorical variables. On the basis of pre-specified marginal probabilities, in this paper, we develop a bivariate normal type linear conditional multinomial probability model to understand the correlations between two categorical variables. The parameters involved in the model are consistently estimated using the optimal likelihood and generalized quasi-likelihood approaches. The proposed model and the inferences are illustrated through an intensive simulation study as well as an analysis of the well-known Wisconsin Diabetic Retinopathy status data. Copyright © 2014 John Wiley & Sons, Ltd.

  14. Air Asset to Mission Assignment for Dynamic High-Threat Environments in Real-Time

    DTIC Science & Technology

    2015-03-01

    Figure 2.1 Joint Air Tasking Cycle (JCS 2014): an iterative 120-hour cycle for planners within the... minutes of on-station time, or "playtime", with a total of two GBU-16 laser-guided bombs (LGB) and an Advanced Targeting Forward Looking Infrared (ATFLIR... probability of survival against the SA-2 and SA-3 systems, respectively. A GBU-16 LGB has no standoff capability and 90%, 60%, and 70% probability of

  15. Comparison of different statistical methods for estimation of extreme sea levels with wave set-up contribution

    NASA Astrophysics Data System (ADS)

    Kergadallan, Xavier; Bernardara, Pietro; Benoit, Michel; Andreewsky, Marc; Weiss, Jérôme

    2013-04-01

    Estimating the probability of occurrence of extreme sea levels is a central issue for the protection of the coast. Return periods of sea level with wave set-up contribution are estimated here at one site: Cherbourg, France, in the English Channel. The methodology follows two steps: the first is the computation of the joint probability of simultaneous wave height and still sea level; the second is the interpretation of those joint probabilities to assess a sea level for a given return period. Two different approaches were evaluated to compute the joint probability of simultaneous wave height and still sea level: the first is multivariate extreme value distributions of logistic type, in which all components of the variables become large simultaneously; the second is the conditional approach for multivariate extreme values, in which only one component of the variables has to be large. Two different methods were applied to estimate the sea level with wave set-up contribution for a given return period: Monte Carlo simulation, in which the estimation is more accurate but needs more calculation time, and classical ocean engineering design contours of inverse-FORM type, in which the method is simpler and allows more complex estimation of the wave set-up part (wave propagation to the coast, for example). We compare results from the two different approaches with the two different methods. To be able to use both the Monte Carlo simulation and design contours methods, the wave set-up is estimated with a simple empirical formula. We show the advantages of the conditional approach compared to the multivariate extreme value approach when extreme sea levels occur when either the surge or the wave height is large. We discuss the validity of the ocean engineering design contours method, which is an alternative when the computation of sea levels is too complex to use the Monte Carlo simulation method.

  16. The Statistics of Urban Scaling and Their Connection to Zipf’s Law

    PubMed Central

    Gomez-Lievano, Andres; Youn, HyeJin; Bettencourt, Luís M. A.

    2012-01-01

    Urban scaling relations characterizing how diverse properties of cities vary on average with their population size have recently been shown to be a general quantitative property of many urban systems around the world. However, in previous studies the statistics of urban indicators were not analyzed in detail, raising important questions about the full characterization of urban properties and how scaling relations may emerge in these larger contexts. Here, we build a self-consistent statistical framework that characterizes the joint probability distributions of urban indicators and city population sizes across an urban system. To develop this framework empirically we use one of the most granular and stochastic urban indicators available, specifically measuring homicides in cities of Brazil, Colombia and Mexico, three nations with high and fast changing rates of violent crime. We use these data to derive the conditional probability of the number of homicides per year given the population size of a city. To do this we use Bayes’ rule together with the estimated conditional probability of city size given their number of homicides and the distribution of total homicides. We then show that scaling laws emerge as expectation values of these conditional statistics. Knowledge of these distributions implies, in turn, a relationship between scaling and population size distribution exponents that can be used to predict Zipf’s exponent from urban indicator statistics. Our results also suggest how a general statistical theory of urban indicators may be constructed from the stochastic dynamics of social interaction processes in cities. PMID:22815745

  17. Management of Water Quantity and Quality Based on Copula for a Tributary to Miyun Reservoir, Beijing

    NASA Astrophysics Data System (ADS)

    Zang, N.; Wang, X.; Liang, P.

    2017-12-01

    Due to the complex mutual influence between the water quantity and water quality of a river, it is difficult to reflect the actual characteristics of the tributaries to a reservoir. In this study, acceptable marginal probability distributions for the water quantity and quality of reservoir inflow were calculated. A bivariate Archimedean copula was further applied to establish their joint distribution function. Multiple combination scenarios of water quantity and water quality were then designed to analyze their coexistence relationship and reservoir management strategies. The Bai River, an important tributary of the Miyun Reservoir, is taken as the case study. The results showed that it is feasible to apply the Frank copula function to describe the joint distribution function of water quality and water quantity for the Bai River. Furthermore, the monitoring of TP concentration needs to be strengthened in the Bai River. This methodology can be extended to larger dimensions and is transferable to other reservoirs via the establishment of models with relevant data for a particular area. Our findings help in better analyzing the coexistence relationship and degree of influence of the water quantity and quality of a tributary to a reservoir for the purpose of water resources protection.

  18. A Process-Based Transport-Distance Model of Aeolian Transport

    NASA Astrophysics Data System (ADS)

    Naylor, A. K.; Okin, G.; Wainwright, J.; Parsons, A. J.

    2017-12-01

    We present a new approach to modeling aeolian transport based on transport distance. Particle fluxes are based on statistical probabilities of particle detachment and distributions of transport lengths, which are functions of particle size classes. A computational saltation model is used to simulate transport distances over a variety of sizes. These are fit to an exponential distribution, which has the advantages of computational economy, concordance with current field measurements, and a meaningful relationship to theoretical assumptions about mean and median particle transport distance. This novel approach includes particle-particle interactions, which are important for sustaining aeolian transport and dust emission. Results from this model are compared with results from both bulk- and particle-sized-specific transport equations as well as empirical wind tunnel studies. The transport-distance approach has been successfully used for hydraulic processes, and extending this methodology from hydraulic to aeolian transport opens up the possibility of modeling joint transport by wind and water using consistent physics. Particularly in nutrient-limited environments, modeling the joint action of aeolian and hydraulic transport is essential for understanding the spatial distribution of biomass across landscapes and how it responds to climatic variability and change.

  19. How weak values emerge in joint measurements on cloned quantum systems.

    PubMed

    Hofmann, Holger F

    2012-07-13

    A statistical analysis of optimal universal cloning shows that it is possible to identify an ideal (but nonpositive) copying process that faithfully maps all properties of the original Hilbert space onto two separate quantum systems, resulting in perfect correlations for all observables. The joint probabilities for noncommuting measurements on separate clones then correspond to the real parts of the complex joint probabilities observed in weak measurements on a single system, where the measurements on the two clones replace the corresponding sequence of weak measurement and postselection. The imaginary parts of weak measurement statistics can be obtained by replacing the cloning process with a partial swap operation. A controlled-swap operation combines both processes, making the complete weak measurement statistics accessible as a well-defined contribution to the joint probabilities of fully resolved projective measurements on the two output systems.

  20. Statistical Analysis of Stress Signals from Bridge Monitoring by FBG System.

    PubMed

    Ye, Xiao-Wei; Su, You-Hua; Xi, Pei-Sen

    2018-02-07

    In this paper, a fiber Bragg grating (FBG)-based stress monitoring system instrumented on an orthotropic steel deck arch bridge is demonstrated. The FBG sensors are installed at two types of critical fatigue-prone welded joints to measure the strain and temperature signals. A total of 64 FBG sensors are deployed around the rib-to-deck and rib-to-diaphragm areas at the mid-span and quarter-span of the investigated orthotropic steel bridge. The local stress behaviors caused by the highway loading and temperature effect during the construction and operation periods are presented with the aid of a wavelet multi-resolution analysis approach. In addition, the multi-modal characteristic of the rainflow counted stress spectrum is modeled by the method of finite mixture distribution together with a genetic algorithm (GA)-based parameter estimation approach. The optimal probability distribution of the stress spectrum is determined by use of the Bayesian information criterion (BIC). Furthermore, the hot spot stress of the welded joint is calculated by an extrapolation method recommended in the specification of the International Institute of Welding (IIW). The stochastic characteristic of the stress concentration factor (SCF) of the concerned welded joint is addressed. The proposed FBG-based stress monitoring system and probabilistic stress evaluation methods can provide an effective tool for structural monitoring and condition assessment of orthotropic steel bridges.
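
    The mixture-plus-BIC step can be sketched as follows. For simplicity the sketch fits Gaussian mixtures with scikit-learn's EM estimator rather than the genetic-algorithm estimator and candidate distribution families used in the record; the stress ranges are synthetic placeholders.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical rainflow-counted stress ranges (MPa) with a multi-modal shape.
rng = np.random.default_rng(3)
stress = np.concatenate([rng.normal(8, 2, 4000),
                         rng.normal(25, 5, 1500),
                         rng.normal(60, 8, 500)]).reshape(-1, 1)

# Fit finite mixtures with 1..5 components and keep the lowest-BIC model.
models = [GaussianMixture(n_components=k, random_state=0).fit(stress)
          for k in range(1, 6)]
bics = [m.bic(stress) for m in models]
best = models[int(np.argmin(bics))]
print("components:", best.n_components, "weights:", best.weights_.round(3))
```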

  1. Impact of mechanical heterogeneity on joint density in a welded ignimbrite

    NASA Astrophysics Data System (ADS)

    Soden, A. M.; Lunn, R. J.; Shipton, Z. K.

    2016-08-01

    Joints are conduits for groundwater, hydrocarbons and hydrothermal fluids. Robust fluid flow models rely on accurate characterisation of joint networks, in particular joint density. It is generally assumed that the predominant factor controlling joint density in layered stratigraphy is the thickness of the mechanical layer where the joints occur. Mechanical heterogeneity within the layer is considered a lesser influence on joint formation. We analysed the frequency and distribution of joints within a single 12-m thick ignimbrite layer to identify the controls on joint geometry and distribution. The observed joint distribution is not related to the thickness of the ignimbrite layer. Rather, joint initiation, propagation and termination are controlled by the shape, spatial distribution and mechanical properties of fiamme, which are present within the ignimbrite. The observations and analysis presented here demonstrate that models of joint distribution, particularly in thicker layers, that do not fully account for mechanical heterogeneity are likely to underestimate joint density, the spatial variability of joint distribution and the complex joint geometries that result. Consequently, we recommend characterising a layer's compositional and material properties to improve predictions of subsurface joint density in rock layers that are mechanically heterogeneous.

  2. Probability distribution and statistical properties of spherically compensated cosmic regions in ΛCDM cosmology

    NASA Astrophysics Data System (ADS)

    Alimi, Jean-Michel; de Fromont, Paul

    2018-04-01

    The statistical properties of cosmic structures are well known to be strong probes for cosmology. In particular, several studies tried to use the cosmic void counting number to obtain tight constraints on dark energy. In this paper, we model the statistical properties of these regions using the CoSphere formalism (de Fromont & Alimi) in both the primordial and non-linearly evolved Universe in the standard Λ cold dark matter model. This formalism applies similarly for minima (voids) and maxima (such as DM haloes), which are here considered symmetrically. We first derive the full joint Gaussian distribution of CoSphere's parameters in the Gaussian random field. We recover the results of Bardeen et al. only in the limit where the compensation radius becomes very large, i.e. when the central extremum decouples from its cosmic environment. We compute the probability distribution of the compensation size in this primordial field. We show that this distribution is redshift independent and can be used to model the cosmic void size distribution. We also derive the statistical distribution of the peak parameters introduced by Bardeen et al. and discuss their correlation with the cosmic environment. We show that small central extrema with low density are associated with narrow compensation regions with deep compensation density, while higher central extrema are preferentially located in larger but smoother over/under massive regions.

  3. A concise evidence-based physical examination for diagnosis of acromioclavicular joint pathology: a systematic review.

    PubMed

    Krill, Michael K; Rosas, Samuel; Kwon, KiHyun; Dakkak, Andrew; Nwachukwu, Benedict U; McCormick, Frank

    2018-02-01

    The clinical examination of the shoulder joint is an undervalued diagnostic tool for evaluating acromioclavicular (AC) joint pathology. Applying evidence-based clinical tests enables providers to make an accurate diagnosis and minimize costly imaging procedures and potential delays in care. The purpose of this study was to create a decision tree analysis enabling simple and accurate diagnosis of AC joint pathology. A systematic review of the Medline, Ovid and Cochrane Review databases was performed to identify level one and two diagnostic studies evaluating clinical tests for AC joint pathology. Individual test characteristics were combined in series and in parallel to improve sensitivities and specificities. A secondary analysis utilized subjective pre-test probabilities to create a clinical decision tree algorithm with post-test probabilities. The optimal special test combination to screen and confirm AC joint pathology combined Paxinos sign and O'Brien's Test, with a specificity of 95.8% when performed in series; whereas, Paxinos sign and Hawkins-Kennedy Test demonstrated a sensitivity of 93.7% when performed in parallel. Paxinos sign and O'Brien's Test demonstrated the greatest positive likelihood ratio (2.71); whereas, Paxinos sign and Hawkins-Kennedy Test reported the lowest negative likelihood ratio (0.35). No combination of special tests performed in series or in parallel creates more than a small impact on post-test probabilities to screen or confirm AC joint pathology. Paxinos sign and O'Brien's Test is the only special test combination that has a small and sometimes important impact when used both in series and in parallel. Physical examination testing is not beneficial for diagnosis of AC joint pathology when pretest probability is unequivocal. In these instances, it is of benefit to proceed with procedural tests to evaluate AC joint pathology. Ultrasound-guided corticosteroid injections are diagnostic and therapeutic. An ultrasound-guided AC joint corticosteroid injection may be an appropriate new standard for treatment and surgical decision-making. II - Systematic Review.
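
    The series/parallel combination of test characteristics and the conversion of pre-test to post-test probability via likelihood ratios can be sketched as below. The combination formulas assume conditional independence of the two tests, and the sensitivities, specificities and pre-test probability are placeholder values, not those reported in the review.

```python
def combine_series(se1, sp1, se2, sp2):
    """Both tests must be positive (raises specificity, lowers sensitivity)."""
    return se1 * se2, sp1 + sp2 - sp1 * sp2

def combine_parallel(se1, sp1, se2, sp2):
    """Either test positive counts as positive (raises sensitivity)."""
    return se1 + se2 - se1 * se2, sp1 * sp2

def post_test_probability(pretest, sensitivity, specificity, positive=True):
    """Update a pre-test probability with the appropriate likelihood ratio."""
    lr = sensitivity / (1 - specificity) if positive else (1 - sensitivity) / specificity
    odds = pretest / (1 - pretest) * lr
    return odds / (1 + odds)

# Placeholder characteristics for two shoulder special tests.
se_a, sp_a = 0.79, 0.50      # test A
se_b, sp_b = 0.41, 0.95      # test B

se_series, sp_series = combine_series(se_a, sp_a, se_b, sp_b)
print("series sensitivity/specificity:", round(se_series, 3), round(sp_series, 3))
print("post-test P (pretest 0.30, series positive):",
      round(post_test_probability(0.30, se_series, sp_series), 3))
```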

  4. Throughput assurance of wireless body area networks coexistence based on stochastic geometry

    PubMed Central

    Wang, Yinglong; Shu, Minglei; Wu, Shangbin

    2017-01-01

    Wireless body area networks (WBANs) are expected to influence the traditional medical model by assisting caretakers with health telemonitoring. Within WBANs, the transmit power of the nodes should be as small as possible owing to their limited energy capacity but should be sufficiently large to guarantee the quality of the signal at the receiving nodes. When multiple WBANs coexist in a small area, the communication reliability and overall throughput can be seriously affected due to resource competition and interference. We show that the total network throughput largely depends on the WBAN distribution density (λp), the transmit power of their nodes (Pt), and their carrier-sensing threshold (γ). Using stochastic geometry, a joint carrier-sensing threshold and power control strategy is proposed to meet the demand of coexisting WBANs based on the IEEE 802.15.4 standard. Given different network distributions and carrier-sensing thresholds, the proposed strategy derives a minimum transmit power according to the varying surrounding environment. We obtain expressions for the transmission success probability and throughput under this strategy. Using numerical examples, we show that the joint carrier-sensing threshold and transmit power strategy can effectively improve the overall system throughput and reduce interference. Additionally, this paper studies the effects of a guard zone on the throughput using a Matern hard-core point process (HCPP) type II model. Theoretical analysis and simulation results show that the HCPP model can increase the success probability and throughput of networks. PMID:28141841
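
    The guard-zone model mentioned at the end can be illustrated with a short sketch of a Matérn type II hard-core point process: a parent Poisson process of WBAN hubs is thinned so that no retained hub lies within a hard-core radius of a hub carrying a smaller mark. The intensity, radius and window size are illustrative values only.

```python
import numpy as np

rng = np.random.default_rng(4)

def matern_type_ii(lambda_p, radius, side):
    """Matern hard-core point process, type II, on a [0, side]^2 window."""
    # Parent homogeneous Poisson point process.
    n = rng.poisson(lambda_p * side**2)
    pts = rng.uniform(0, side, size=(n, 2))
    marks = rng.uniform(size=n)                  # independent uniform marks
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        d = np.hypot(*(pts - pts[i]).T)
        # Discard a point if any neighbour within the hard-core radius
        # carries a smaller mark.
        close = (d < radius) & (d > 0)
        if np.any(marks[close] < marks[i]):
            keep[i] = False
    return pts[keep]

retained = matern_type_ii(lambda_p=0.5, radius=1.0, side=20.0)
print("retained WBAN hubs:", len(retained))
```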

  5. Supervised Detection of Anomalous Light Curves in Massive Astronomical Catalogs

    NASA Astrophysics Data System (ADS)

    Nun, Isadora; Pichara, Karim; Protopapas, Pavlos; Kim, Dae-Won

    2014-09-01

    The development of synoptic sky surveys has led to a massive amount of data for which resources needed for analysis are beyond human capabilities. In order to process this information and to extract all possible knowledge, machine learning techniques become necessary. Here we present a new methodology to automatically discover unknown variable objects in large astronomical catalogs. With the aim of taking full advantage of all information we have about known objects, our method is based on a supervised algorithm. In particular, we train a random forest classifier using known variability classes of objects and obtain votes for each of the objects in the training set. We then model this voting distribution with a Bayesian network and obtain the joint voting distribution among the training objects. Consequently, an unknown object is considered an outlier insofar as it has a low joint probability. By leaving out one of the classes on the training set, we perform a validity test and show that when the random forest classifier attempts to classify unknown light curves (the class left out), it votes with an unusual distribution among the classes. This rare voting is detected by the Bayesian network and expressed as a low joint probability. Our method is suitable for exploring massive data sets given that the training process is performed offline. We tested our algorithm on 20 million light curves from the MACHO catalog and generated a list of anomalous candidates. After analysis, we divided the candidates into two main classes of outliers: artifacts and intrinsic outliers. Artifacts were principally due to air mass variation, seasonal variation, bad calibration, or instrumental errors and were consequently removed from our outlier list and added to the training set. After retraining, we selected about 4000 objects, which we passed to a post-analysis stage by performing a cross-match with all publicly available catalogs. Within these candidates we identified certain known but rare objects such as eclipsing Cepheids, blue variables, cataclysmic variables, and X-ray sources. For some outliers there was no additional information. Among them we identified three unknown variability types and a few individual outliers that will be followed up in order to perform a deeper analysis.
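
    A compressed sketch of the voting-based outlier test is shown below. The features, labels and bandwidth are placeholders, and a kernel density estimate over the random-forest vote vectors stands in for the Bayesian network used in the record; objects whose vote vectors fall in low-density regions are flagged as candidates.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KernelDensity

# Hypothetical light-curve features and known variability classes.
rng = np.random.default_rng(5)
X_train = rng.normal(size=(1000, 8))
y_train = rng.integers(0, 4, size=1000)

rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(X_train, y_train)

# Vote (class-probability) vectors of the training objects.
votes_train = rf.predict_proba(X_train)

# Density model over vote vectors; low density flags anomalous voting patterns.
kde = KernelDensity(bandwidth=0.05).fit(votes_train)

X_new = rng.normal(size=(50, 8))
log_density = kde.score_samples(rf.predict_proba(X_new))
outliers = np.argsort(log_density)[:5]   # lowest joint-probability candidates
```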

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nun, Isadora; Pichara, Karim; Protopapas, Pavlos

    The development of synoptic sky surveys has led to a massive amount of data for which resources needed for analysis are beyond human capabilities. In order to process this information and to extract all possible knowledge, machine learning techniques become necessary. Here we present a new methodology to automatically discover unknown variable objects in large astronomical catalogs. With the aim of taking full advantage of all information we have about known objects, our method is based on a supervised algorithm. In particular, we train a random forest classifier using known variability classes of objects and obtain votes for each of the objects in the training set. We then model this voting distribution with a Bayesian network and obtain the joint voting distribution among the training objects. Consequently, an unknown object is considered an outlier insofar as it has a low joint probability. By leaving out one of the classes on the training set, we perform a validity test and show that when the random forest classifier attempts to classify unknown light curves (the class left out), it votes with an unusual distribution among the classes. This rare voting is detected by the Bayesian network and expressed as a low joint probability. Our method is suitable for exploring massive data sets given that the training process is performed offline. We tested our algorithm on 20 million light curves from the MACHO catalog and generated a list of anomalous candidates. After analysis, we divided the candidates into two main classes of outliers: artifacts and intrinsic outliers. Artifacts were principally due to air mass variation, seasonal variation, bad calibration, or instrumental errors and were consequently removed from our outlier list and added to the training set. After retraining, we selected about 4000 objects, which we passed to a post-analysis stage by performing a cross-match with all publicly available catalogs. Within these candidates we identified certain known but rare objects such as eclipsing Cepheids, blue variables, cataclysmic variables, and X-ray sources. For some outliers there was no additional information. Among them we identified three unknown variability types and a few individual outliers that will be followed up in order to perform a deeper analysis.

  7. On the quantification and efficient propagation of imprecise probabilities resulting from small datasets

    NASA Astrophysics Data System (ADS)

    Zhang, Jiaxin; Shields, Michael D.

    2018-01-01

    This paper addresses the problem of uncertainty quantification and propagation when data for characterizing probability distributions are scarce. We propose a methodology wherein the full uncertainty associated with probability model form and parameter estimation is retained and efficiently propagated. This is achieved by applying the information-theoretic multimodel inference method to identify plausible candidate probability densities and associated probabilities that each model is the best in the Kullback-Leibler sense. The joint parameter densities for each plausible model are then estimated using Bayes' rule. We then propagate this full set of probability models by estimating an optimal importance sampling density that is representative of all plausible models, propagating this density, and reweighting the samples according to each of the candidate probability models. This is in contrast with conventional methods that try to identify a single probability model that encapsulates the full uncertainty caused by lack of data and consequently underestimate uncertainty. The result is a complete probabilistic description of both aleatory and epistemic uncertainty achieved with several orders of magnitude reduction in computational cost. It is shown how the model can be updated to adaptively accommodate added data and added candidate probability models. The method is applied for uncertainty analysis of plate buckling strength where it is demonstrated how dataset size affects the confidence (or lack thereof) we can place in statistical estimates of response when data are lacking.

  8. Bayesian calibration of mechanistic aquatic biogeochemical models and benefits for environmental management

    NASA Astrophysics Data System (ADS)

    Arhonditsis, George B.; Papantou, Dimitra; Zhang, Weitao; Perhar, Gurbir; Massos, Evangelia; Shi, Molu

    2008-09-01

    Aquatic biogeochemical models have been an indispensable tool for addressing pressing environmental issues, e.g., understanding oceanic response to climate change, elucidation of the interplay between plankton dynamics and atmospheric CO2 levels, and examination of alternative management schemes for eutrophication control. Their ability to form the scientific basis for environmental management decisions can be undermined by the underlying structural and parametric uncertainty. In this study, we outline how we can attain realistic predictive links between management actions and ecosystem response through a probabilistic framework that accommodates rigorous uncertainty analysis of a variety of error sources, i.e., measurement error, parameter uncertainty, discrepancy between model and natural system. Because model uncertainty analysis essentially aims to quantify the joint probability distribution of model parameters and to make inference about this distribution, we believe that the iterative nature of Bayes' Theorem is a logical means to incorporate existing knowledge and update the joint distribution as new information becomes available. The statistical methodology begins with the characterization of parameter uncertainty in the form of probability distributions, then water quality data are used to update the distributions, and yield posterior parameter estimates along with predictive uncertainty bounds. Our illustration is based on a six state variable (nitrate, ammonium, dissolved organic nitrogen, phytoplankton, zooplankton, and bacteria) ecological model developed for gaining insight into the mechanisms that drive plankton dynamics in a coastal embayment; the Gulf of Gera, Island of Lesvos, Greece. The lack of analytical expressions for the posterior parameter distributions was overcome using Markov chain Monte Carlo simulations; a convenient way to obtain representative samples of parameter values. The Bayesian calibration resulted in realistic reproduction of the key temporal patterns of the system, offered insights into the degree of information the data contain about model inputs, and also allowed the quantification of the dependence structure among the parameter estimates. Finally, our study uses two synthetic datasets to examine the ability of the updated model to provide estimates of predictive uncertainty for water quality variables of environmental management interest.

  9. Statistical Significance of Periodicity and Log-Periodicity with Heavy-Tailed Correlated Noise

    NASA Astrophysics Data System (ADS)

    Zhou, Wei-Xing; Sornette, Didier

    We estimate the probability that random noise, of several plausible standard distributions, creates a false alarm that a periodicity (or log-periodicity) is found in a time series. The solution of this problem is already known for independent Gaussian distributed noise. We investigate more general situations with non-Gaussian correlated noises and present synthetic tests on the detectability and statistical significance of periodic components. A periodic component of a time series is usually detected by some sort of Fourier analysis. Here, we use the Lomb periodogram analysis, which is suitable and outperforms Fourier transforms for unevenly sampled time series. We examine the false-alarm probability of the largest spectral peak of the Lomb periodogram in the presence of power-law distributed noises, of short-range and of long-range fractional-Gaussian noises. Increasing heavy-tailedness (respectively correlations describing persistence) tends to decrease (respectively increase) the false-alarm probability of finding a large spurious Lomb peak. Increasing anti-persistence tends to decrease the false-alarm probability. We also study the interplay between heavy-tailedness and long-range correlations. In order to fully determine if a Lomb peak signals a genuine rather than a spurious periodicity, one should in principle characterize the Lomb peak height, its width and its relations to other peaks in the complete spectrum. As a step towards this full characterization, we construct the joint distribution of the frequency position (relative to other peaks) and of the height of the highest peak of the power spectrum. We also provide the distributions of the ratio of the highest Lomb peak to the second highest one. Using the insight obtained by the present statistical study, we re-examine previously reported claims of "log-periodicity" and find that the credibility for log-periodicity in 2D-freely decaying turbulence is weakened while it is strengthened for fracture, for the ion-signature prior to the Kobe earthquake and for financial markets.
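
    The Monte Carlo estimation of the false-alarm probability of the highest Lomb peak can be sketched as follows. Heavy-tailed white (Student-t) noise is used as a simple stand-in for the power-law and fractional-Gaussian noises analysed in the record, and the sampling times, frequency grid and observed series are all placeholders.

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(6)

# Unevenly sampled observation times and angular test frequencies.
t = np.sort(rng.uniform(0, 100, size=300))
freqs = np.linspace(0.05, 5.0, 1000)

def max_peak(y):
    """Height of the largest Lomb periodogram peak of a standardised series."""
    y = (y - y.mean()) / y.std()
    return lombscargle(t, y, freqs).max()

# Observed series (placeholder) and its highest Lomb peak.
observed = rng.standard_t(df=3, size=t.size)
peak_obs = max_peak(observed)

# Monte Carlo: highest peaks of heavy-tailed surrogate noise series.
peaks = np.array([max_peak(rng.standard_t(df=3, size=t.size)) for _ in range(500)])
false_alarm_prob = np.mean(peaks >= peak_obs)
```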

  10. Evaluation of joint probability density function models for turbulent nonpremixed combustion with complex chemistry

    NASA Technical Reports Server (NTRS)

    Smith, N. S. A.; Frolov, S. M.; Bowman, C. T.

    1996-01-01

    Two types of mixing sub-models are evaluated in connection with a joint-scalar probability density function method for turbulent nonpremixed combustion. Model calculations are made and compared to simulation results for homogeneously distributed methane-air reaction zones mixing and reacting in decaying turbulence within a two-dimensional enclosed domain. The comparison is arranged to ensure that both the simulation and model calculations a) make use of exactly the same chemical mechanism, b) do not involve non-unity Lewis number transport of species, and c) are free from radiation loss. The modified Curl mixing sub-model was found to provide superior predictive accuracy over the simple relaxation-to-mean submodel in the case studied. Accuracy to within 10-20% was found for global means of major species and temperature; however, nitric oxide prediction accuracy was lower and highly dependent on the choice of mixing sub-model. Both mixing submodels were found to produce non-physical mixing behavior for mixture fractions removed from the immediate reaction zone. A suggestion for a further modified Curl mixing sub-model is made in connection with earlier work done in the field.
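
    A bare-bones sketch of the pairwise mixing step is given below: randomly selected particle pairs relax toward their pair mean by a uniformly distributed extent, which reduces scalar variance over successive steps. Chemistry, transport and the joint-scalar PDF machinery of the record are omitted, and the ensemble is a placeholder.

```python
import numpy as np

rng = np.random.default_rng(7)

def modified_curl_step(phi, n_pairs):
    """One mixing step: each selected pair relaxes toward its mean by a
    random extent alpha ~ U(0, 1); alpha = 1 recovers classic Curl mixing."""
    n = phi.shape[0]
    idx = rng.permutation(n)[: 2 * n_pairs].reshape(n_pairs, 2)
    for p, q in idx:
        alpha = rng.uniform()
        pair_mean = 0.5 * (phi[p] + phi[q])
        phi[p] += alpha * (pair_mean - phi[p])
        phi[q] += alpha * (pair_mean - phi[q])
    return phi

# Notional ensemble: mixture fraction of 10000 particles, half 0 and half 1.
phi = np.concatenate([np.zeros(5000), np.ones(5000)])
for _ in range(100):
    phi = modified_curl_step(phi, n_pairs=500)
print("scalar variance after mixing:", phi.var())
```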

  11. Time-Series INSAR: An Integer Least-Squares Approach For Distributed Scatterers

    NASA Astrophysics Data System (ADS)

    Samiei-Esfahany, Sami; Hanssen, Ramon F.

    2012-01-01

    The objective of this research is to extend the geodetic mathematical model which was developed for persistent scatterers to a model which can exploit distributed scatterers (DS). The main focus is on the integer least-squares framework, and the main challenge is to include the decorrelation effect in the mathematical model. In order to adapt the integer least-squares mathematical model for DS we altered the model from a single master to a multi-master configuration and introduced the decorrelation effect stochastically. This effect is described in our model by a full covariance matrix. We propose to derive this covariance matrix by numerical integration of the (joint) probability distribution function (PDF) of interferometric phases. This PDF is a function of coherence values and can be directly computed from radar data. We show that the use of this model can improve the performance of temporal phase unwrapping of distributed scatterers.

  12. Species abundance distribution and population dynamics in a two-community model of neutral ecology

    NASA Astrophysics Data System (ADS)

    Vallade, M.; Houchmandzadeh, B.

    2006-11-01

    Explicit formulas for the steady-state distribution of species in two interconnected communities of arbitrary sizes are derived in the framework of Hubbell’s neutral model of biodiversity. Migrations of seeds from both communities as well as mutations in both of them are taken into account. These results generalize those previously obtained for the “island-continent” model and they allow an analysis of the influence of the ratio of the sizes of the two communities on the dominance/diversity equilibrium. Exact expressions for species abundance distributions are deduced from a master equation for the joint probability distribution of species in the two communities. Moreover, an approximate self-consistent solution is derived. It corresponds to a generalization of previous results and it proves to be accurate over a broad range of parameters. The dynamical correlations between the abundances of a species in both communities are also discussed.

  13. A Stochastic Diffusion Process for the Dirichlet Distribution

    DOE PAGES

    Bakosi, J.; Ristorcelli, J. R.

    2013-03-01

    The method of potential solutions of Fokker-Planck equations is used to develop a transport equation for the joint probability of N coupled stochastic variables with the Dirichlet distribution as its asymptotic solution. To ensure a bounded sample space, a coupled nonlinear diffusion process is required: the Wiener processes in the equivalent system of stochastic differential equations are multiplicative with coefficients dependent on all the stochastic variables. Individual samples of a discrete ensemble, obtained from the stochastic process, satisfy a unit-sum constraint at all times. The process may be used to represent realizations of a fluctuating ensemble of N variables subject to a conservation principle. Similar to the multivariate Wright-Fisher process, whose invariant is also Dirichlet, the univariate case yields a process whose invariant is the beta distribution. As a test of the results, Monte Carlo simulations are used to evolve numerical ensembles toward the invariant Dirichlet distribution.

  14. Multivariate flood risk assessment: reinsurance perspective

    NASA Astrophysics Data System (ADS)

    Ghizzoni, Tatiana; Ellenrieder, Tobias

    2013-04-01

    For insurance and re-insurance purposes the knowledge of the spatial characteristics of fluvial flooding is fundamental. The probability of simultaneous flooding at different locations during one event and the associated severity and losses have to be estimated in order to assess premiums and for accumulation control (Probable Maximum Losses calculation). Therefore, the identification of a statistical model able to describe the multivariate joint distribution of flood events in multiple locations is necessary. In this context, copulas can be viewed as alternative tools for dealing with multivariate simulations as they allow the dependence structures of random vectors to be formalized. An application of copula functions for flood scenario generation is presented for Australia (Queensland, New South Wales and Victoria), where 100,000 possible flood scenarios covering approximately 15,000 years were simulated.

  15. Factors related to the joint probability of flooding on paired streams

    USGS Publications Warehouse

    Koltun, G.F.; Sherwood, J.M.

    1998-01-01

    The factors related to the joint probability of flooding on paired streams were investigated and quantified to provide information to aid in the design of hydraulic structures where the joint probability of flooding is an element of the design criteria. Stream pairs were considered to have flooded jointly at the design-year flood threshold (corresponding to the 2-, 10-, 25-, or 50-year instantaneous peak streamflow) if peak streamflows at both streams in the pair were observed or predicted to have equaled or exceeded the threshold on a given calendar day. Daily mean streamflow data were used as a substitute for instantaneous peak streamflow data to determine which flood thresholds were equaled or exceeded on any given day. Instantaneous peak streamflow data, when available, were used preferentially to assess flood-threshold exceedance. Daily mean streamflow data for each stream were paired with concurrent daily mean streamflow data at the other streams. Observed probabilities of joint flooding, determined for the 2-, 10-, 25-, and 50-year flood thresholds, were computed as the ratios of the total number of days when streamflows at both streams concurrently equaled or exceeded their flood thresholds (events) to the total number of days where streamflows at either stream equaled or exceeded its flood threshold (trials). A combination of correlation analyses, graphical analyses, and logistic-regression analyses was used to identify and quantify factors associated with the observed probabilities of joint flooding (event-trial ratios). The analyses indicated that the distance between drainage area centroids, the ratio of the smaller to larger drainage area, the mean drainage area, and the centroid angle adjusted 30 degrees were the basin characteristics most closely associated with the joint probability of flooding on paired streams in Ohio. In general, the analyses indicated that the joint probability of flooding decreases with an increase in centroid distance and increases with increases in drainage area ratio, mean drainage area, and centroid angle adjusted 30 degrees. Logistic-regression equations were developed, which can be used to estimate the probability that streamflows at two streams jointly equal or exceed the 2-year flood threshold given that the streamflow at one of the two streams equals or exceeds the 2-year flood threshold. The logistic-regression equations are applicable to stream pairs in Ohio (and border areas of adjacent states) that are unregulated, free of significant urban influences, and have characteristics similar to those of the 304 gaged stream pairs used in the logistic-regression analyses. Contingency tables were constructed and analyzed to provide information about the bivariate distribution of floods on paired streams. The contingency tables showed that the percentage of trials in which both streams in the pair concurrently flood at identical recurrence-interval ranges generally increased as centroid distances decreased and was greatest for stream pairs with adjusted centroid angles greater than or equal to 60 degrees and drainage area ratios greater than or equal to 0.01. Also, as centroid distance increased, streamflow at one stream in the pair was more likely to be in a less than 2-year recurrence-interval range when streamflow at the second stream was in a 2-year or greater recurrence-interval range.
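
    The regression step can be sketched with a binomial (logistic) generalized linear model in which the response for each stream pair is the pair (events, trials − events) and the predictors are basin characteristics. The basin characteristics, coefficients and counts below are synthetic placeholders, not the Ohio data or the published equations.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n_pairs = 304

# Hypothetical basin characteristics for paired streams.
centroid_distance = rng.uniform(5, 150, n_pairs)        # km
area_ratio = rng.uniform(0.01, 1.0, n_pairs)
trials = rng.integers(20, 200, n_pairs)                  # days either stream flooded
logit = 1.5 - 0.03 * centroid_distance + 1.2 * area_ratio
events = rng.binomial(trials, 1 / (1 + np.exp(-logit)))  # days both flooded

X = sm.add_constant(np.column_stack([centroid_distance, area_ratio]))
response = np.column_stack([events, trials - events])    # (successes, failures)

model = sm.GLM(response, X, family=sm.families.Binomial()).fit()
print(model.params)   # intercept and coefficients of the joint-flooding logit
```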

  16. Statistics of Optical Coherence Tomography Data From Human Retina

    PubMed Central

    de Juan, Joaquín; Ferrone, Claudia; Giannini, Daniela; Huang, David; Koch, Giorgio; Russo, Valentina; Tan, Ou; Bruni, Carlo

    2010-01-01

    Optical coherence tomography (OCT) has recently become one of the primary methods for noninvasive probing of the human retina. The pseudoimage formed by OCT (the so-called B-scan) varies probabilistically across pixels due to complexities in the measurement technique. Hence, sensitive automatic procedures of diagnosis using OCT may exploit statistical analysis of the spatial distribution of reflectance. In this paper, we perform a statistical study of retinal OCT data. We find that the stretched exponential probability density function can model well the distribution of intensities in OCT pseudoimages. Moreover, we show a small, but significant correlation between neighboring pixels when measuring OCT intensities with pixels of about 5 µm. We then develop a simple joint probability model for the OCT data consistent with known retinal features. This model fits well the stretched exponential distribution of intensities and their spatial correlation. In normal retinas, fit parameters of this model are relatively constant along retinal layers, but vary across layers. However, in retinas with diabetic retinopathy, large spikes of parameter modulation interrupt the constancy within layers, exactly where pathologies are visible. We argue that these results give hope for improvement in statistical pathology-detection methods even when the disease is in its early stages. PMID:20304733

  17. Damage prognosis of adhesively-bonded joints in laminated composite structural components of unmanned aerial vehicles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farrar, Charles R; Gobbato, Maurizio; Conte, Joel

    2009-01-01

    The extensive use of lightweight advanced composite materials in unmanned aerial vehicles (UAVs) drastically increases the sensitivity to both fatigue- and impact-induced damage of their critical structural components (e.g., wings and tail stabilizers) during service life. The spar-to-skin adhesive joints are considered one of the most fatigue-sensitive subcomponents of a lightweight UAV composite wing with damage progressively evolving from the wing root. This paper presents a comprehensive probabilistic methodology for predicting the remaining service life of adhesively-bonded joints in laminated composite structural components of UAVs. Non-destructive evaluation techniques and Bayesian inference are used to (i) assess the current state of damage of the system and (ii) update the probability distribution of the damage extent at various locations. A probabilistic model for future loads and a mechanics-based damage model are then used to stochastically propagate damage through the joint. Combined local (e.g., exceedance of a critical damage size) and global (e.g., flutter instability) failure criteria are finally used to compute the probability of component failure at future times. The applicability and the partial validation of the proposed methodology are then briefly discussed by analyzing the debonding propagation, along a pre-defined adhesive interface, in a simply supported laminated composite beam with solid rectangular cross section, subjected to a concentrated load applied at mid-span. A specially developed Euler-Bernoulli beam finite element with interlaminar slip along the damageable interface is used in combination with a cohesive zone model to study the fatigue-induced degradation in the adhesive material. The preliminary numerical results presented are promising for the future validation of the methodology.

  18. Can a quantum state over time resemble a quantum state at a single time?

    NASA Astrophysics Data System (ADS)

    Horsman, Dominic; Heunen, Chris; Pusey, Matthew F.; Barrett, Jonathan; Spekkens, Robert W.

    2017-09-01

    The standard formalism of quantum theory treats space and time in fundamentally different ways. In particular, a composite system at a given time is represented by a joint state, but the formalism does not prescribe a joint state for a composite of systems at different times. If there were a way of defining such a joint state, this would potentially permit a more even-handed treatment of space and time, and would strengthen the existing analogy between quantum states and classical probability distributions. Under the assumption that the joint state over time is an operator on the tensor product of single-time Hilbert spaces, we analyse various proposals for such a joint state, including one due to Leifer and Spekkens, one due to Fitzsimons, Jones and Vedral, and another based on discrete Wigner functions. Finding various problems with each, we identify five criteria for a quantum joint state over time to satisfy if it is to play a role similar to the standard joint state for a composite system: that it is a Hermitian operator on the tensor product of the single-time Hilbert spaces; that it represents probabilistic mixing appropriately; that it has the appropriate classical limit; that it has the appropriate single-time marginals; that composing over multiple time steps is associative. We show that no construction satisfies all these requirements. If Hermiticity is dropped, then there is an essentially unique construction that satisfies the remaining four criteria.

  19. Experimental Test of Heisenberg's Measurement Uncertainty Relation Based on Statistical Distances

    NASA Astrophysics Data System (ADS)

    Ma, Wenchao; Ma, Zhihao; Wang, Hengyan; Chen, Zhihua; Liu, Ying; Kong, Fei; Li, Zhaokai; Peng, Xinhua; Shi, Mingjun; Shi, Fazhan; Fei, Shao-Ming; Du, Jiangfeng

    2016-04-01

    Incompatible observables can be approximated by compatible observables in joint measurement or measured sequentially, with constrained accuracy as implied by Heisenberg's original formulation of the uncertainty principle. Recently, Busch, Lahti, and Werner proposed inaccuracy trade-off relations based on statistical distances between probability distributions of measurement outcomes [P. Busch et al., Phys. Rev. Lett. 111, 160405 (2013); P. Busch et al., Phys. Rev. A 89, 012129 (2014)]. Here we reformulate their theoretical framework, derive an improved relation for qubit measurement, and perform an experimental test on a spin system. The relation reveals that the worst-case inaccuracy is tightly bounded from below by the incompatibility of target observables, and is verified by the experiment employing joint measurement in which two compatible observables designed to approximate two incompatible observables on one qubit are measured simultaneously.

  20. Experimental Test of Heisenberg's Measurement Uncertainty Relation Based on Statistical Distances.

    PubMed

    Ma, Wenchao; Ma, Zhihao; Wang, Hengyan; Chen, Zhihua; Liu, Ying; Kong, Fei; Li, Zhaokai; Peng, Xinhua; Shi, Mingjun; Shi, Fazhan; Fei, Shao-Ming; Du, Jiangfeng

    2016-04-22

    Incompatible observables can be approximated by compatible observables in joint measurement or measured sequentially, with constrained accuracy as implied by Heisenberg's original formulation of the uncertainty principle. Recently, Busch, Lahti, and Werner proposed inaccuracy trade-off relations based on statistical distances between probability distributions of measurement outcomes [P. Busch et al., Phys. Rev. Lett. 111, 160405 (2013); P. Busch et al., Phys. Rev. A 89, 012129 (2014)]. Here we reformulate their theoretical framework, derive an improved relation for qubit measurement, and perform an experimental test on a spin system. The relation reveals that the worst-case inaccuracy is tightly bounded from below by the incompatibility of target observables, and is verified by the experiment employing joint measurement in which two compatible observables designed to approximate two incompatible observables on one qubit are measured simultaneously.

  1. Optimal Universal Uncertainty Relations

    PubMed Central

    Li, Tao; Xiao, Yunlong; Ma, Teng; Fei, Shao-Ming; Jing, Naihuan; Li-Jost, Xianqing; Wang, Zhi-Xi

    2016-01-01

    We study universal uncertainty relations and present a method called joint probability distribution diagram to improve the majorization bounds constructed independently in [Phys. Rev. Lett. 111, 230401 (2013)] and [J. Phys. A. 46, 272002 (2013)]. The results give rise to state independent uncertainty relations satisfied by any nonnegative Schur-concave functions. On the other hand, a remarkable recent result of entropic uncertainty relation is the direct-sum majorization relation. In this paper, we illustrate our bounds by showing how they provide a complement to that in [Phys. Rev. A. 89, 052115 (2014)]. PMID:27775010

  2. Subchondral bone density distribution of the talus in clinically normal Labrador Retrievers.

    PubMed

    Dingemanse, W; Müller-Gerbl, M; Jonkers, I; Vander Sloten, J; van Bree, H; Gielen, I

    2016-03-15

    Bones continually adapt their morphology to their load bearing function. At the level of the subchondral bone, the density distribution is highly correlated with the loading distribution of the joint. Therefore, subchondral bone density distribution can be used to study joint biomechanics non-invasively. In addition physiological and pathological joint loading is an important aspect of orthopaedic disease, and research focusing on joint biomechanics will benefit veterinary orthopaedics. This study was conducted to evaluate density distribution in the subchondral bone of the canine talus, as a parameter reflecting the long-term joint loading in the tarsocrural joint. Two main density maxima were found, one proximally on the medial trochlear ridge and one distally on the lateral trochlear ridge. All joints showed very similar density distribution patterns and no significant differences were found in the localisation of the density maxima between left and right limbs and between dogs. Based on the density distribution the lateral trochlear ridge is most likely subjected to highest loads within the tarsocrural joint. The joint loading distribution is very similar between dogs of the same breed. In addition, the joint loading distribution supports previous suggestions of the important role of biomechanics in the development of OC lesions in the tarsus. Important benefits of computed tomographic osteoabsorptiometry (CTOAM), i.e. the possibility of in vivo imaging and temporal evaluation, make this technique a valuable addition to the field of veterinary orthopaedic research.

  3. Applying the Hájek Approach in Formula-Based Variance Estimation. Research Report. ETS RR-17-24

    ERIC Educational Resources Information Center

    Qian, Jiahe

    2017-01-01

    The variance formula derived for a two-stage sampling design without replacement employs the joint inclusion probabilities in the first-stage selection of clusters. One of the difficulties encountered in data analysis is the lack of information about such joint inclusion probabilities. One way to solve this issue is by applying Hájek's…

  4. Naive Probability: Model-Based Estimates of Unique Events.

    PubMed

    Khemlani, Sangeet S; Lotstein, Max; Johnson-Laird, Philip N

    2015-08-01

    We describe a dual-process theory of how individuals estimate the probabilities of unique events, such as Hillary Clinton becoming U.S. President. It postulates that uncertainty is a guide to improbability. In its computer implementation, an intuitive system 1 simulates evidence in mental models and forms analog non-numerical representations of the magnitude of degrees of belief. This system has minimal computational power and combines evidence using a small repertoire of primitive operations. It resolves the uncertainty of divergent evidence for single events, for conjunctions of events, and for inclusive disjunctions of events, by taking a primitive average of non-numerical probabilities. It computes conditional probabilities in a tractable way, treating the given event as evidence that may be relevant to the probability of the dependent event. A deliberative system 2 maps the resulting representations into numerical probabilities. With access to working memory, it carries out arithmetical operations in combining numerical estimates. Experiments corroborated the theory's predictions. Participants concurred in estimates of real possibilities. They violated the complete joint probability distribution in the predicted ways, when they made estimates about conjunctions: P(A), P(B), P(A and B), disjunctions: P(A), P(B), P(A or B or both), and conditional probabilities P(A), P(B), P(B|A). They were faster to estimate the probabilities of compound propositions when they had already estimated the probabilities of each of their components. We discuss the implications of these results for theories of probabilistic reasoning. © 2014 Cognitive Science Society, Inc.

  5. Generalized Cross Entropy Method for estimating joint distribution from incomplete information

    NASA Astrophysics Data System (ADS)

    Xu, Hai-Yan; Kuo, Shyh-Hao; Li, Guoqi; Legara, Erika Fille T.; Zhao, Daxuan; Monterola, Christopher P.

    2016-07-01

    Obtaining a full joint distribution from individual marginal distributions with incomplete information is a non-trivial task that continues to challenge researchers from various domains including economics, demography, and statistics. In this work, we develop a new methodology referred to as the "Generalized Cross Entropy Method" (GCEM) that is aimed at addressing the issue. The objective function is proposed to be a weighted sum of divergences between joint distributions and various references. We show that the solution of the GCEM is unique and globally optimal. Furthermore, we illustrate the applicability and validity of the method by utilizing it to recover the joint distribution of a household profile of a given administrative region. In particular, we estimate the joint distribution of the household size, household dwelling type, and household home ownership in Singapore. Results show a high-accuracy estimation of the full joint distribution of the household profile under study. Finally, the impact of constraints and weights on the estimation of the joint distribution is explored.

  6. Generation of degenerate, factorizable, pulsed squeezed light at telecom wavelengths

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerrits, Thomas; Stevens, Martin; Baek, Burm

    We characterize a periodically poled KTP crystal that produces an entangled, two-mode, squeezed state with orthogonal polarizations, nearly identical, factorizable frequency modes, and few photons in unwanted frequency modes. We focus the pump beam to create a nearly circular joint spectral probability distribution between the two modes. After disentangling the two modes, we observe Hong-Ou-Mandel interference with a raw (background corrected) visibility of 86% (95%) when an 8.6 nm bandwidth spectral filter is applied. We measure second order photon correlations of the entangled and disentangled squeezed states with both superconducting nanowire single-photon detectors and photon-number-resolving transition-edge sensors. Both methods agree and verify that the detected modes contain the desired photon number distributions.

  7. Statistical Characterization and Classification of Edge-Localized Plasma Instabilities

    NASA Astrophysics Data System (ADS)

    Webster, A. J.; Dendy, R. O.

    2013-04-01

    The statistics of edge-localized plasma instabilities (ELMs) in toroidal magnetically confined fusion plasmas are considered. From first principles, standard experimentally motivated assumptions are shown to determine a specific probability distribution for the waiting times between ELMs: the Weibull distribution. This is confirmed empirically by a statistically rigorous comparison with a large data set from the Joint European Torus. The successful characterization of ELM waiting times enables future work to progress in various ways. Here we present a quantitative classification of ELM types, complementary to phenomenological approaches. It also informs us about the nature of ELM processes, such as whether they are random or deterministic. The methods are extremely general and can be applied to numerous other quasiperiodic intermittent phenomena.
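
    The waiting-time characterisation can be sketched by fitting a Weibull distribution to inter-ELM waiting times and checking the fit with a Kolmogorov-Smirnov test; the waiting times below are synthetic placeholders rather than Joint European Torus data.

```python
from scipy import stats

# Placeholder inter-ELM waiting times (ms).
waits = stats.weibull_min.rvs(c=2.3, scale=12.0, size=2000, random_state=0)

# Fit with the location fixed at zero, as waiting times are non-negative.
shape, loc, scale = stats.weibull_min.fit(waits, floc=0)

# Statistical check of the fitted model.
ks_stat, p_value = stats.kstest(waits, "weibull_min", args=(shape, loc, scale))
print(f"shape={shape:.2f} scale={scale:.2f} KS p={p_value:.3f}")
```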

  8. Bayesian multiple-source localization in an uncertain ocean environment.

    PubMed

    Dosso, Stan E; Wilmut, Michael J

    2011-06-01

    This paper considers simultaneous localization of multiple acoustic sources when properties of the ocean environment (water column and seabed) are poorly known. A Bayesian formulation is developed in which the environmental parameters, noise statistics, and locations and complex strengths (amplitudes and phases) of multiple sources are considered to be unknown random variables constrained by acoustic data and prior information. Two approaches are considered for estimating source parameters. Focalization maximizes the posterior probability density (PPD) over all parameters using adaptive hybrid optimization. Marginalization integrates the PPD using efficient Markov-chain Monte Carlo methods to produce joint marginal probability distributions for source ranges and depths, from which source locations are obtained. This approach also provides quantitative uncertainty analysis for all parameters, which can aid in understanding of the inverse problem and may be of practical interest (e.g., source-strength probability distributions). In both approaches, closed-form maximum-likelihood expressions for source strengths and noise variance at each frequency allow these parameters to be sampled implicitly, substantially reducing the dimensionality and difficulty of the inversion. Examples are presented of both approaches applied to single- and multi-frequency localization of multiple sources in an uncertain shallow-water environment, and a Monte Carlo performance evaluation study is carried out. © 2011 Acoustical Society of America

  9. Wave-height hazard analysis in Eastern Coast of Spain - Bayesian approach using generalized Pareto distribution

    NASA Astrophysics Data System (ADS)

    Egozcue, J. J.; Pawlowsky-Glahn, V.; Ortego, M. I.

    2005-03-01

    Standard practice of wave-height hazard analysis often pays little attention to the uncertainty of assessed return periods and occurrence probabilities. This fact favors the opinion that, when large events happen, the hazard assessment should change accordingly. However, uncertainty of the hazard estimates is normally able to hide the effect of those large events. This is illustrated using data from the Mediterranean coast of Spain, where the last years have been extremely disastrous. Thus, it is possible to compare the hazard assessment based on data previous to those years with the analysis including them. With our approach, no significant change is detected when the statistical uncertainty is taken into account. The hazard analysis is carried out with a standard model. Time-occurrence of events is assumed Poisson distributed. The wave-height of each event is modelled as a random variable whose upper tail follows a Generalized Pareto Distribution (GPD). Moreover, wave-heights are assumed independent from event to event and also independent of their occurrence in time. A threshold for excesses is assessed empirically. The other three parameters (Poisson rate, shape and scale parameters of GPD) are jointly estimated using Bayes' theorem. The prior distribution accounts for physical features of ocean waves in the Mediterranean sea and experience with these phenomena. The posterior distribution of the parameters allows one to obtain posterior distributions of other derived parameters like occurrence probabilities and return periods. Predictive distributions are also available. Computations are carried out using the program BGPE v2.0.
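
    A simplified, frequentist sketch of the peaks-over-threshold model is given below: exceedances are treated as Poisson with an empirical annual rate, the excesses over the threshold are fitted with a GPD, and a T-year return level follows from the fitted parameters. This uses point estimates rather than the Bayesian treatment of the record, and the wave heights and threshold are placeholders.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(10)

# Placeholder storm-event significant wave heights (m) over 30 years.
years = 30.0
waves = rng.gumbel(loc=3.0, scale=0.8, size=450)

threshold = 4.5
excess = waves[waves > threshold] - threshold
rate = excess.size / years                    # Poisson rate of exceedances per year

# Fit the Generalized Pareto Distribution to the excesses (location fixed at 0).
xi, loc, sigma = genpareto.fit(excess, floc=0)

def return_level(T):
    """T-year return level under the Poisson-GPD peaks-over-threshold model."""
    if abs(xi) > 1e-6:
        return threshold + sigma / xi * ((rate * T) ** xi - 1.0)
    return threshold + sigma * np.log(rate * T)

print("50-year wave height:", round(return_level(50.0), 2), "m")
```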

  10. Statistical Analysis of Stress Signals from Bridge Monitoring by FBG System

    PubMed Central

    Ye, Xiao-Wei; Xi, Pei-Sen

    2018-01-01

    In this paper, a fiber Bragg grating (FBG)-based stress monitoring system instrumented on an orthotropic steel deck arch bridge is demonstrated. The FBG sensors are installed at two types of critical fatigue-prone welded joints to measure the strain and temperature signals. A total of 64 FBG sensors are deployed around the rib-to-deck and rib-to-diaphragm areas at the mid-span and quarter-span of the investigated orthotropic steel bridge. The local stress behaviors caused by the highway loading and temperature effect during the construction and operation periods are presented with the aid of a wavelet multi-resolution analysis approach. In addition, the multi-modal characteristic of the rainflow counted stress spectrum is modeled by the method of finite mixture distribution together with a genetic algorithm (GA)-based parameter estimation approach. The optimal probability distribution of the stress spectrum is determined by use of the Bayesian information criterion (BIC). Furthermore, the hot spot stress of the welded joint is calculated by an extrapolation method recommended in the specification of the International Institute of Welding (IIW). The stochastic characteristic of the stress concentration factor (SCF) of the concerned welded joint is addressed. The proposed FBG-based stress monitoring system and probabilistic stress evaluation methods can provide an effective tool for structural monitoring and condition assessment of orthotropic steel bridges. PMID:29414850

  11. Healthy Eating and Leisure-Time Activity: Cross-Sectional Analysis of the Role of Work Environments in the U.S.

    PubMed

    Williams, Jessica A R; Arcaya, Mariana; Subramanian, S V

    2017-11-01

    The aim of this study was to evaluate relationships between work context and two health behaviors, healthy eating and leisure-time physical activity (LTPA), in U.S. adults. Using data from the 2010 National Health Interview Survey (NHIS) and Occupational Information Network (N = 14,863), we estimated a regression model to predict the marginal and joint probabilities of healthy eating and adhering to recommended exercise guidelines. Decision-making freedom was positively related to healthy eating and both behaviors jointly. Higher physical load was associated with a lower marginal probability of LTPA, healthy eating, and both behaviors jointly. Smoke and vapor exposures were negatively related to healthy eating and both behaviors. Chemical exposure was positively related to LTPA and both behaviors. Characteristics associated with marginal probabilities were not always predictive of joint outcomes. On the basis of nationwide occupation-specific evidence, workplace characteristics are important for healthy eating and LTPA.

  12. Development and application of a probability distribution retrieval scheme to the remote sensing of clouds and precipitation

    NASA Astrophysics Data System (ADS)

    McKague, Darren Shawn

    2001-12-01

    The statistical properties of clouds and precipitation on a global scale are important to our understanding of climate. Inversion methods exist to retrieve the needed cloud and precipitation properties from satellite data pixel-by-pixel that can then be summarized over large data sets to obtain the desired statistics. These methods can be quite computationally expensive, and typically don't provide errors on the statistics. A new method is developed to directly retrieve probability distributions of parameters from the distribution of measured radiances. The method also provides estimates of the errors on the retrieved distributions. The method can retrieve joint distributions of parameters, which allows the connection between parameters to be studied. A forward radiative transfer model creates a mapping from retrieval parameter space to radiance space. A Monte Carlo procedure uses the mapping to transform probability density from the observed radiance histogram to a two-dimensional retrieval property probability distribution function (PDF). An estimate of the uncertainty in the retrieved PDF is calculated from random realizations of the radiance to retrieval parameter PDF transformation given the uncertainty of the observed radiances, the radiance PDF, the forward radiative transfer, the finite number of prior state vectors, and the non-unique mapping to retrieval parameter space. The retrieval method is also applied to the remote sensing of precipitation from SSM/I microwave data. A method of stochastically generating hydrometeor fields based on the fields from a numerical cloud model is used to create the precipitation parameter radiance space transformation. Vertical and horizontal variability within the hydrometeor fields has a significant impact on algorithm performance. Beamfilling factors are computed from the simulated hydrometeor fields. The beamfilling factors vary quite a bit depending upon the horizontal structure of the rain. The algorithm is applied to SSM/I images from the eastern tropical Pacific and is compared to PDFs of rain rate computed using pixel-by-pixel retrievals from Wilheit and from Liu and Curry. Differences exist between the three methods, but good general agreement is seen between the PDF retrieval algorithm and the algorithm of Liu and Curry. (Abstract shortened by UMI.)

  13. Risk of false decision on conformity of a multicomponent material when test results of the components' content are correlated.

    PubMed

    Kuselman, Ilya; Pennecchi, Francesca R; da Silva, Ricardo J N B; Hibbert, D Brynn

    2017-11-01

    The probability of a false decision on conformity of a multicomponent material due to measurement uncertainty is discussed when test results are correlated. Specification limits of the components' content of such a material generate a multivariate specification interval/domain. When true values of components' content and corresponding test results are modelled by multivariate distributions (e.g. by multivariate normal distributions), a total global risk of a false decision on the material conformity can be evaluated based on calculation of integrals of their joint probability density function. No transformation of the raw data is required for that. A total specific risk can be evaluated as the joint posterior cumulative function of true values of a specific batch or lot lying outside the multivariate specification domain, when the vector of test results, obtained for the lot, is inside this domain. It was shown, using a case study of four components under control in a drug, that the influence of correlation on the risk value is not easily predictable. To assess this influence, the evaluated total risk values were compared with those calculated for independent test results and also with those assuming much stronger correlation than that observed. While the observed statistically significant correlation did not lead to a visible difference in the total risk values in comparison to the independent test results, the stronger correlation among the variables caused the total risk either to decrease or to increase, depending on the actual values of the test results. Copyright © 2017 Elsevier B.V. All rights reserved.
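
    The total global risks described above can be approximated by Monte Carlo integration of the joint probability density functions. The sketch below assumes a hypothetical two-component material with made-up production and measurement covariances (the paper's case study uses four components); it estimates the probability that a non-conforming batch is accepted and that a conforming batch is rejected.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical two-component material (the paper's case study uses four).
mu_true   = np.array([98.0, 51.0])               # production means
cov_true  = np.array([[4.0, 1.5], [1.5, 2.0]])   # production covariance
cov_meas  = np.array([[1.0, 0.4], [0.4, 0.8]])   # measurement covariance
spec_low  = np.array([95.0, 48.0])
spec_high = np.array([105.0, 54.0])

n = 1_000_000
true_vals = rng.multivariate_normal(mu_true, cov_true, n)
measured  = true_vals + rng.multivariate_normal([0.0, 0.0], cov_meas, n)

def inside(x):
    return np.all((x >= spec_low) & (x <= spec_high), axis=1)

conform_true, conform_meas = inside(true_vals), inside(measured)

# Total global consumer's risk: batch non-conforming but accepted on the test.
consumer_risk = np.mean(~conform_true & conform_meas)
# Total global producer's risk: batch conforming but rejected on the test.
producer_risk = np.mean(conform_true & ~conform_meas)
print(f"consumer risk ~ {consumer_risk:.4f}, producer risk ~ {producer_risk:.4f}")
```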

  14. Quantum Common Causes and Quantum Causal Models

    NASA Astrophysics Data System (ADS)

    Allen, John-Mark A.; Barrett, Jonathan; Horsman, Dominic C.; Lee, Ciarán M.; Spekkens, Robert W.

    2017-07-01

    Reichenbach's principle asserts that if two observed variables are found to be correlated, then there should be a causal explanation of these correlations. Furthermore, if the explanation is in terms of a common cause, then the conditional probability distribution over the variables given the complete common cause should factorize. The principle is generalized by the formalism of causal models, in which the causal relationships among variables constrain the form of their joint probability distribution. In the quantum case, however, the observed correlations in Bell experiments cannot be explained in the manner Reichenbach's principle would seem to demand. Motivated by this, we introduce a quantum counterpart to the principle. We demonstrate that under the assumption that quantum dynamics is fundamentally unitary, if a quantum channel with input A and outputs B and C is compatible with A being a complete common cause of B and C, then it must factorize in a particular way. Finally, we show how to generalize our quantum version of Reichenbach's principle to a formalism for quantum causal models and provide examples of how the formalism works.
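
    For reference, the classical factorization that Reichenbach's principle demands of a complete common cause λ (the condition the paper generalizes to the quantum case) can be written as:

```latex
% Classical Reichenbach condition: conditioning on the complete common cause
% \lambda makes B and C independent, so the observed correlations decompose as
P(B, C \mid \lambda) = P(B \mid \lambda)\, P(C \mid \lambda),
\qquad
P(B, C) = \sum_{\lambda} P(B \mid \lambda)\, P(C \mid \lambda)\, P(\lambda).
```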

  15. The Combined Use of Correlative and Mechanistic Species Distribution Models Benefits Low Conservation Status Species.

    PubMed

    Rougier, Thibaud; Lassalle, Géraldine; Drouineau, Hilaire; Dumoulin, Nicolas; Faure, Thierry; Deffuant, Guillaume; Rochard, Eric; Lambert, Patrick

    2015-01-01

    Species can respond to climate change by tracking appropriate environmental conditions in space, resulting in a range shift. Species Distribution Models (SDMs) can help forecast such range shift responses. Both correlative and mechanistic SDMs have been built for only a few species; allis shad (Alosa alosa), an endangered anadromous fish, is one of them. The main purpose of this study was to provide a framework for joint analyses of correlative and mechanistic SDM projections in order to strengthen conservation measures for species of conservation concern. Guidelines for joint representation and subsequent interpretation of model outputs were defined and applied. The present joint analysis was based on the novel mechanistic model GR3D (Global Repositioning Dynamics of Diadromous fish Distribution), which was parameterized on allis shad and then used to predict its future distribution along the European Atlantic coast under different climate change scenarios (RCP 4.5 and RCP 8.5). We then used a correlative SDM for this species to forecast its distribution across the same geographic area and under the same climate change scenarios. First, projections from the correlative and mechanistic models provided congruent trends in probability of habitat suitability and population dynamics. This agreement was preferentially interpreted as referring to the species' vulnerability to climate change. Accordingly, climate change could not be listed as a major threat to allis shad. The congruence in predicted range limits between SDM projections was the next point of interest. Where differences were noticed, they required a deeper understanding of the niche modelled by each approach. In this respect, the relative position of the northern range limit between the two methods strongly suggested that a key biological process related to intraspecific variability was potentially lacking in the mechanistic SDM. Based on our knowledge, we hypothesized that local adaptations to cold temperatures deserve more attention not only in modelling but also in conservation planning.

  16. The Combined Use of Correlative and Mechanistic Species Distribution Models Benefits Low Conservation Status Species

    PubMed Central

    Rougier, Thibaud; Lassalle, Géraldine; Drouineau, Hilaire; Dumoulin, Nicolas; Faure, Thierry; Deffuant, Guillaume; Rochard, Eric; Lambert, Patrick

    2015-01-01

    Species can respond to climate change by tracking appropriate environmental conditions in space, resulting in a range shift. Species Distribution Models (SDMs) can help forecast such range shift responses. Both correlative and mechanistic SDMs have been built for only a few species; allis shad (Alosa alosa), an endangered anadromous fish, is one of them. The main purpose of this study was to provide a framework for joint analyses of correlative and mechanistic SDM projections in order to strengthen conservation measures for species of conservation concern. Guidelines for joint representation and subsequent interpretation of model outputs were defined and applied. The present joint analysis was based on the novel mechanistic model GR3D (Global Repositioning Dynamics of Diadromous fish Distribution), which was parameterized on allis shad and then used to predict its future distribution along the European Atlantic coast under different climate change scenarios (RCP 4.5 and RCP 8.5). We then used a correlative SDM for this species to forecast its distribution across the same geographic area and under the same climate change scenarios. First, projections from the correlative and mechanistic models provided congruent trends in probability of habitat suitability and population dynamics. This agreement was preferentially interpreted as referring to the species' vulnerability to climate change. Accordingly, climate change could not be listed as a major threat to allis shad. The congruence in predicted range limits between SDM projections was the next point of interest. Where differences were noticed, they required a deeper understanding of the niche modelled by each approach. In this respect, the relative position of the northern range limit between the two methods strongly suggested that a key biological process related to intraspecific variability was potentially lacking in the mechanistic SDM. Based on our knowledge, we hypothesized that local adaptations to cold temperatures deserve more attention not only in modelling but also in conservation planning. PMID:26426280

  17. N-tag probability law of the symmetric exclusion process

    NASA Astrophysics Data System (ADS)

    Poncet, Alexis; Bénichou, Olivier; Démery, Vincent; Oshanin, Gleb

    2018-06-01

    The symmetric exclusion process (SEP), in which particles hop symmetrically on a discrete line with hard-core constraints, is a paradigmatic model of subdiffusion in confined systems. This anomalous behavior is a direct consequence of strong spatial correlations induced by the requirement that the particles cannot overtake each other. Although this fact has long been recognized qualitatively, there has so far been no full quantitative determination of these correlations. Here we study the joint probability distribution of an arbitrary number of tagged particles in the SEP. We determine analytically its large-time limit for an arbitrary density of particles, and its full dynamics in the high-density limit. In this limit, we obtain the time-dependent large deviation function of the problem and unveil a universal scaling form shared by the cumulants.
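
    A minimal Monte Carlo sketch of the SEP with two tagged particles is given below; it only illustrates numerically the kind of joint displacement statistics the paper characterizes analytically. The lattice size, density, number of steps, and choice of tagged particles are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)

L, n_part, steps, trials = 200, 100, 2000, 300
tag_a, tag_b = 45, 55                 # indices of the two tagged particles

samples = []                          # joint displacements (X_a, X_b)
for _ in range(trials):
    pos = np.sort(rng.choice(L, n_part, replace=False))
    occupied = np.zeros(L, bool)
    occupied[pos] = True
    disp = np.zeros(n_part, int)      # unwrapped displacement of each particle
    for _ in range(steps):
        i = rng.integers(n_part)      # pick a random particle
        d = 1 if rng.random() < 0.5 else -1   # symmetric hop attempt
        new = (pos[i] + d) % L
        if not occupied[new]:         # hard-core exclusion: reject if blocked
            occupied[pos[i]] = False
            occupied[new] = True
            pos[i] = new
            disp[i] += d
    samples.append((disp[tag_a], disp[tag_b]))

samples = np.array(samples)
print("mean displacements:", samples.mean(axis=0))
print("covariance of the two tagged displacements:\n", np.cov(samples.T))
```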

  18. NASA Instrument Cost/Schedule Model

    NASA Technical Reports Server (NTRS)

    Habib-Agahi, Hamid; Mrozinski, Joe; Fox, George

    2011-01-01

    NASA's Office of Independent Program and Cost Evaluation (IPCE) has established a number of initiatives to improve its cost and schedule estimating capabilities. One of these initiatives has resulted in the JPL-developed NASA Instrument Cost Model (NICM). NICM is a cost and schedule estimator that contains: a system-level cost estimation tool; a subsystem-level cost estimation tool; a database of cost and technical parameters of over 140 previously flown remote sensing and in-situ instruments; a schedule estimator; a set of rules to estimate cost and schedule by life cycle phases (B/C/D); and a novel tool for developing joint probability distributions for cost and schedule risk (Joint Confidence Level (JCL)). This paper describes the development and use of NICM, including the data normalization processes, data mining methods (cluster analysis, principal components analysis, regression analysis and bootstrap cross validation), the estimating equations themselves and a demonstration of the NICM tool suite.
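
    The joint cost-schedule risk idea behind a JCL can be illustrated with a small Monte Carlo sketch: sample correlated cost and schedule values and evaluate the probability that both fall below a proposed commitment point. The bivariate lognormal marginals and correlation used here are placeholders, not NICM's calibrated distributions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical lognormal marginals for cost ($M) and schedule (months),
# coupled through a Gaussian copula with correlation rho.
mu  = np.array([np.log(120.0), np.log(36.0)])
sig = np.array([0.25, 0.20])
rho = 0.6
cov = np.array([[sig[0]**2, rho * sig[0] * sig[1]],
                [rho * sig[0] * sig[1], sig[1]**2]])

draws = np.exp(rng.multivariate_normal(mu, cov, 200_000))
cost, schedule = draws[:, 0], draws[:, 1]

def jcl(c, s):
    """Joint confidence level: P(cost <= c AND schedule <= s)."""
    return np.mean((cost <= c) & (schedule <= s))

print("JCL at ($150M, 40 months):", round(jcl(150.0, 40.0), 3))
print("marginal P(cost <= $150M): ", round(float(np.mean(cost <= 150.0)), 3))
```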

  19. Bivariate at-site frequency analysis of simulated flood peak-volume data using copulas

    NASA Astrophysics Data System (ADS)

    Gaál, Ladislav; Viglione, Alberto; Szolgay, Ján; Blöschl, Günter; Bacigál, Tomáš

    2010-05-01

    In frequency analysis of joint hydro-climatological extremes (flood peaks and volumes, low flows and durations, etc.), usually, bivariate distribution functions are fitted to the observed data in order to estimate the probability of their occurrence. Bivariate models, however, have a number of limitations; therefore, in the recent past, dependence models based on copulas have gained increased attention to represent the joint probabilities of hydrological characteristics. Regardless of whether standard or copula-based bivariate frequency analysis is carried out, one is generally interested in the extremes corresponding to low probabilities of the fitted joint cumulative distribution functions (CDFs). However, usually there is not enough flood data in the right tail of the empirical CDFs to derive reliable statistical inferences on the behaviour of the extremes. Therefore, different techniques are used to extend the amount of information for the statistical inference, i.e., temporal extension methods that allow for making use of historical data or spatial extension methods such as regional approaches. In this study, a different approach was adopted, which uses flood data simulated by rainfall-runoff modelling to increase the amount of data in the right tail of the CDFs. In order to generate artificial runoff data (i.e. to simulate flood records of lengths of approximately 10^6 years), a two-step procedure was used. (i) First, the stochastic rainfall generator proposed by Sivapalan et al. (2005) was modified for our purpose. This model is based on the assumption of discrete rainfall events whose arrival times, durations, mean rainfall intensity and the within-storm intensity patterns are all random, and can be described by specified distributions. The mean storm rainfall intensity is disaggregated further to hourly intensity patterns. (ii) Second, the simulated rainfall data entered a semi-distributed conceptual rainfall-runoff model that consisted of a snow routine, a soil moisture routine and a flow routing routine (Parajka et al., 2007). The applicability of the proposed method was demonstrated on selected sites in Slovakia and Austria. The pairs of simulated flood volumes and flood peaks were analysed in terms of their dependence structure, and different families of copulas (Archimedean, extreme value, Gumbel-Hougaard, etc.) were fitted to the observed and simulated data. The question of to what extent measured data can be used to find the right copula was discussed. The study is supported by the Austrian Academy of Sciences and the Austrian-Slovak Co-operation in Science and Education "Aktion". Parajka, J., Merz, R., Blöschl, G., 2007: Uncertainty and multiple objective calibration in regional water balance modeling - Case study in 320 Austrian catchments. Hydrological Processes, 21, 435-446. Sivapalan, M., Blöschl, G., Merz, R., Gutknecht, D., 2005: Linking flood frequency to long-term water balance: incorporating effects of seasonality. Water Resources Research, 41, W06012, doi:10.1029/2004WR003439.
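
    As an illustration of the copula-fitting step (on synthetic data, not the Slovak or Austrian records), the sketch below fits a Gumbel-Hougaard copula to peak-volume pairs via the Kendall's-tau inversion and evaluates a joint "AND" return period, assuming on average one flood event per year.

```python
import numpy as np
from scipy.stats import kendalltau, rankdata

rng = np.random.default_rng(4)

# Synthetic flood peaks and volumes with positive dependence (placeholder data).
z = rng.multivariate_normal([0, 0], [[1, 0.7], [0.7, 1]], 500)
peaks, volumes = np.exp(0.5 * z[:, 0] + 4.0), np.exp(0.7 * z[:, 1] + 6.0)

# Pseudo-observations (empirical marginal CDF values).
u = rankdata(peaks) / (len(peaks) + 1)
v = rankdata(volumes) / (len(volumes) + 1)

# Gumbel-Hougaard parameter from Kendall's tau: theta = 1 / (1 - tau).
tau, _ = kendalltau(peaks, volumes)
theta = 1.0 / (1.0 - tau)

def gumbel_copula(u, v, theta):
    return np.exp(-((-np.log(u))**theta + (-np.log(v))**theta)**(1.0 / theta))

# Quick check: empirical vs fitted copula at the 0.9/0.9 quantile pair.
emp = np.mean((u <= 0.9) & (v <= 0.9))
print(f"C_emp(0.9,0.9) = {emp:.3f} vs C_GH(0.9,0.9) = {gumbel_copula(0.9, 0.9, theta):.3f}")

# Joint "AND" return period for {peak > q_p AND volume > q_v},
# assuming one flood event per year on average.
up, vp = 0.99, 0.99                 # marginal non-exceedance probabilities
p_and = 1.0 - up - vp + gumbel_copula(up, vp, theta)
print(f"theta = {theta:.2f}, joint AND return period ~ {1.0 / p_and:.0f} years")
```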

  20. Joint genome-wide prediction in several populations accounting for randomness of genotypes: A hierarchical Bayes approach. I: Multivariate Gaussian priors for marker effects and derivation of the joint probability mass function of genotypes.

    PubMed

    Martínez, Carlos Alberto; Khare, Kshitij; Banerjee, Arunava; Elzo, Mauricio A

    2017-03-21

    It is important to consider heterogeneity of marker effects and allelic frequencies in across-population genome-wide prediction studies. Moreover, all regression models used in genome-wide prediction overlook randomness of genotypes. In this study, a family of hierarchical Bayesian models was developed to perform across-population genome-wide prediction, modeling genotypes as random variables and allowing population-specific effects for each marker. Models shared a common structure and differed in the priors used and the assumption about residual variances (homogeneous or heterogeneous). Randomness of genotypes was accounted for by deriving the joint probability mass function of marker genotypes conditional on allelic frequencies and pedigree information. As a consequence, these models incorporated kinship and genotypic information that permitted not only accounting for heterogeneity of allelic frequencies, but also including individuals with missing genotypes at some or all loci without the need for previous imputation. This was possible because the non-observed fraction of the design matrix was treated as an unknown model parameter. For each model, a simpler version ignoring population structure, but still accounting for randomness of genotypes, was proposed. Implementation of these models and computation of some criteria for model comparison were illustrated using two simulated datasets. Theoretical and computational issues along with possible applications, extensions and refinements were discussed. Some features of the models developed in this study make them promising for genome-wide prediction; the use of information contained in the probability distribution of genotypes is perhaps the most appealing. Further studies to assess the performance of the models proposed here and also to compare them with conventional models used in genome-wide prediction are needed. Copyright © 2017 Elsevier Ltd. All rights reserved.
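
    A greatly simplified illustration of a genotype probability mass function conditional on allelic frequencies is given below: ignoring pedigree information and assuming Hardy-Weinberg equilibrium and independent loci, the reference-allele count at each locus is Binomial(2, p). The paper's joint PMF additionally conditions on pedigree, so this is only the simplest special case; the frequencies and genotypes used are made up.

```python
import numpy as np
from scipy.stats import binom

# Genotype at locus j coded as the count of the reference allele (0, 1 or 2).
# Under Hardy-Weinberg equilibrium and independent loci,
# P(g_j = k | p_j) = Binomial(2, p_j).
allele_freqs = np.array([0.10, 0.35, 0.50])   # hypothetical allelic frequencies
genotype     = np.array([1, 2, 0])            # one individual's genotypes

per_locus = binom.pmf(genotype, 2, allele_freqs)
joint_pmf = per_locus.prod()
print("per-locus probabilities:", np.round(per_locus, 4))
print("joint genotype probability:", round(float(joint_pmf), 6))
```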

  1. On the joint spectral density of bivariate random sequences. Thesis Technical Report No. 21

    NASA Technical Reports Server (NTRS)

    Aalfs, David D.

    1995-01-01

    For univariate random sequences, the power spectral density acts like a probability density function of the frequencies present in the sequence. This dissertation extends that concept to bivariate random sequences. For this purpose, a function called the joint spectral density is defined that represents a joint probability weighting of the frequency content of pairs of random sequences. Given a pair of random sequences, the joint spectral density is not uniquely determined in the absence of any constraints. Two approaches to constraining the sequences are suggested: (1) assume the sequences are the margins of some stationary random field, (2) assume the sequences conform to a particular model that is linked to the joint spectral density. For both approaches, the properties of the resulting sequences are investigated in some detail, and simulation is used to corroborate theoretical results. It is concluded that under either of these two constraints, the joint spectral density can be computed from the non-stationary cross-correlation.

  2. A New Multivariate Approach in Generating Ensemble Meteorological Forcings for Hydrological Forecasting

    NASA Astrophysics Data System (ADS)

    Khajehei, Sepideh; Moradkhani, Hamid

    2015-04-01

    Producing reliable and accurate hydrologic ensemble forecasts is subject to various sources of uncertainty, including meteorological forcing, initial conditions, model structure, and model parameters. Producing reliable and skillful precipitation ensemble forecasts is one approach to reduce the total uncertainty in hydrological applications. Currently, numerical weather prediction (NWP) models are developing ensemble forecasts for various temporal ranges. It has been shown that raw products from NWP models are biased in both mean and spread. There is therefore a need for methods that can generate reliable ensemble forecasts for hydrological applications. One of the common techniques is to apply statistical procedures in order to generate ensemble forecasts from NWP-generated single-value forecasts. The procedure is based on the bivariate probability distribution between the observation and the single-value precipitation forecast. However, one of the assumptions of the current method is that a Gaussian distribution fits the marginal distributions of the observed and modeled climate variables. Here, we describe and evaluate a Bayesian approach based on copula functions to develop an ensemble precipitation forecast from the conditional distribution of single-value precipitation forecasts. Copula functions join univariate marginal distributions into a multivariate joint distribution and are presented here as an alternative procedure for capturing the uncertainties related to meteorological forcing. Copulas are capable of modeling the joint distribution of two variables with any level of correlation and dependency. This study is conducted over a sub-basin in the Columbia River Basin in the USA, using the monthly precipitation forecasts from the Climate Forecast System (CFS) at 0.5° x 0.5° spatial resolution to reproduce the observations. The verification is conducted on a different period, and the performance of the procedure is compared with the Ensemble Pre-Processor approach currently used by National Weather Service River Forecast Centers in the USA.
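
    A minimal sketch of the copula-based conditioning step is given below, under simplifying assumptions: gamma marginals, a Gaussian copula, and synthetic forecast-observation pairs in place of CFS data. Given a new single-value forecast, ensemble members are drawn from the conditional distribution of the observation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Synthetic historical pairs of observed and single-value forecast precipitation.
true_rate = rng.gamma(2.0, 5.0, 2000)
obs  = true_rate * rng.lognormal(0.0, 0.3, 2000)
fcst = true_rate * rng.lognormal(0.0, 0.4, 2000)

# 1) Fit gamma marginals to each variable (location fixed at zero).
a_o, _, s_o = stats.gamma.fit(obs,  floc=0)
a_f, _, s_f = stats.gamma.fit(fcst, floc=0)

# 2) Map to standard-normal scores and estimate the Gaussian-copula correlation.
z_o = stats.norm.ppf(stats.gamma.cdf(obs,  a_o, scale=s_o))
z_f = stats.norm.ppf(stats.gamma.cdf(fcst, a_f, scale=s_f))
r = np.corrcoef(z_o, z_f)[0, 1]

def ensemble_from_forecast(x_f, n_members=50):
    """Sample the conditional distribution of the observation given one forecast."""
    zf = stats.norm.ppf(stats.gamma.cdf(x_f, a_f, scale=s_f))
    zo = rng.normal(r * zf, np.sqrt(1.0 - r**2), n_members)
    return stats.gamma.ppf(stats.norm.cdf(zo), a_o, scale=s_o)

print(np.round(ensemble_from_forecast(12.0, 10), 2))
```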

  3. Stochastic mechanics of reciprocal diffusions

    NASA Astrophysics Data System (ADS)

    Levy, Bernard C.; Krener, Arthur J.

    1996-02-01

    The dynamics and kinematics of reciprocal diffusions were examined in a previous paper [J. Math. Phys. 34, 1846 (1993)], where it was shown that reciprocal diffusions admit a chain of conservation laws, which close after the first two laws for two disjoint subclasses of reciprocal diffusions, the Markov and quantum diffusions. For the case of quantum diffusions, the conservation laws are equivalent to Schrödinger's equation. The Markov diffusions were employed by Schrödinger [Sitzungsber. Preuss. Akad. Wiss. Phys. Math Kl. 144 (1931); Ann. Inst. H. Poincaré 2, 269 (1932)], Nelson [Dynamical Theories of Brownian Motion (Princeton University, Princeton, NJ, 1967); Quantum Fluctuations (Princeton University, Princeton, NJ, 1985)], and other researchers to develop stochastic formulations of quantum mechanics, called stochastic mechanics. We propose here an alternative version of stochastic mechanics based on quantum diffusions. A procedure is presented for constructing the quantum diffusion associated to a given wave function. It is shown that quantum diffusions satisfy the uncertainty principle, and have a locality property, whereby given two dynamically uncoupled but statistically correlated particles, the marginal statistics of each particle depend only on the local fields to which the particle is subjected. However, like Wigner's joint probability distribution for the position and momentum of a particle, the finite joint probability densities of quantum diffusions may take negative values.

  4. Large-scale-system effectiveness analysis. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patton, A.D.; Ayoub, A.K.; Foster, J.W.

    1979-11-01

    The objective of the research project was the investigation and development of methods for calculating system reliability indices that have absolute, measurable significance to consumers. Such indices are a necessary prerequisite to any scheme for system optimization which includes the economic consequences of consumer service interruptions. A further area of investigation was the joint consideration of generation and transmission in reliability studies. Methods for finding or estimating the probability distributions of some measures of reliability performance have been developed. The application of modern Monte Carlo simulation methods to compute reliability indices in generating systems has been studied.

  5. Non-locality: A defence of widespread beliefs

    NASA Astrophysics Data System (ADS)

    Laudisa, Federico

    It has been argued, on the basis of an equivalence between the existence of a joint probability distribution for incompatible observables and the satisfaction of the Bell inequalities, that these inequalities are irrelevant to the issue of (non)-locality; and that this issue arises only if we adhere to a notion of objectivity in the description of physical systems that is not justified in quantum mechanics. These arguments are discussed in the orthodox and in the unsharp approach to quantum mechanics, and found defective: the Bell inequalities turn out to be relevant both in the orthodox and in the unsharp approach.

  6. Structural and mechanical properties of cardiolipin lipid bilayers determined using neutron spin echo, small angle neutron and X-ray scattering, and molecular dynamics simulations

    DOE PAGES

    Pan, Jianjun; Cheng, Xiaolin; Sharp, Melissa; ...

    2014-10-29

    We report that the detailed structural and mechanical properties of a tetraoleoyl cardiolipin (TOCL) bilayer were determined using neutron spin echo (NSE) spectroscopy, small angle neutron and X-ray scattering (SANS and SAXS, respectively), and molecular dynamics (MD) simulations. We used MD simulations to develop a scattering density profile (SDP) model, which was then utilized to jointly refine SANS and SAXS data. In addition to commonly reported lipid bilayer structural parameters, component distributions were obtained, including the volume probability, electron density and neutron scattering length density.

  7. Improving Photometric Redshifts for Hyper Suprime-Cam

    NASA Astrophysics Data System (ADS)

    Speagle, Josh S.; Leauthaud, Alexie; Eisenstein, Daniel; Bundy, Kevin; Capak, Peter L.; Leistedt, Boris; Masters, Daniel C.; Mortlock, Daniel; Peiris, Hiranya; HSC Photo-z Team; HSC Weak Lensing Team

    2017-01-01

    Deriving accurate photometric redshift (photo-z) probability distribution functions (PDFs) is a crucial science component for current and upcoming large-scale surveys. We outline how rigorous Bayesian inference and machine learning can be combined to quickly derive joint photo-z PDFs for individual galaxies and their parent populations. Using the first 170 deg^2 of data from the ongoing Hyper Suprime-Cam survey, we demonstrate that our method is able to generate accurate predictions and reliable credible intervals over ~370k high-quality redshifts. We then use galaxy-galaxy lensing to empirically validate our predicted photo-z's over ~14M objects, finding a robust signal.

  8. Full statistical mode reconstruction of a light field via a photon-number-resolved measurement

    NASA Astrophysics Data System (ADS)

    Burenkov, I. A.; Sharma, A. K.; Gerrits, T.; Harder, G.; Bartley, T. J.; Silberhorn, C.; Goldschmidt, E. A.; Polyakov, S. V.

    2017-05-01

    We present a method to reconstruct the complete statistical mode structure and optical losses of multimode conjugated optical fields using an experimentally measured joint photon-number probability distribution. We demonstrate that this method evaluates classical and nonclassical properties using a single measurement technique and is well suited for quantum mesoscopic state characterization. We obtain a nearly perfect reconstruction of a field comprised of up to ten modes based on a minimal set of assumptions. To show the utility of this method, we use it to reconstruct the mode structure of an unknown bright parametric down-conversion source.
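
    As a toy numerical illustration of a measured joint photon-number probability distribution (not the reconstruction algorithm itself), the sketch below simulates twin beams with thermal pair statistics and independent binomial losses in the two arms, then histograms the joint photon numbers. The mean photon number and efficiencies are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(6)

n_bar, eta_a, eta_b, shots = 2.0, 0.6, 0.5, 200_000

# Twin-beam photon pairs: both arms share the same photon number n,
# thermally distributed with mean n_bar before any loss.
p = 1.0 / (1.0 + n_bar)                 # success probability of the geometric law
n_pairs = rng.geometric(p, shots) - 1   # support shifted to {0, 1, 2, ...}

# Independent binomial loss in each arm (detection efficiencies eta_a, eta_b).
n_a = rng.binomial(n_pairs, eta_a)
n_b = rng.binomial(n_pairs, eta_b)

# Joint photon-number probability distribution P(n_a, n_b).
kmax = n_pairs.max() + 1
joint = np.zeros((kmax, kmax))
np.add.at(joint, (n_a, n_b), 1.0)
joint /= shots

print("P(0,0) =", round(joint[0, 0], 4),
      " photon-number correlation:", round(np.corrcoef(n_a, n_b)[0, 1], 3))
```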

  9. Optimal Power Allocation Strategy in a Joint Bistatic Radar and Communication System Based on Low Probability of Intercept

    PubMed Central

    Wang, Fei; Salous, Sana; Zhou, Jianjiang

    2017-01-01

    In this paper, we investigate a low probability of intercept (LPI)-based optimal power allocation strategy for a joint bistatic radar and communication system, which is composed of a dedicated transmitter, a radar receiver, and a communication receiver. The joint system is capable of fulfilling the requirements of both radar and communications simultaneously. First, assuming that the signal-to-noise ratio (SNR) corresponding to the target surveillance path is much weaker than that corresponding to the line of sight path at radar receiver, the analytically closed-form expression for the probability of false alarm is calculated, whereas the closed-form expression for the probability of detection is not analytically tractable and is approximated due to the fact that the received signals are not zero-mean Gaussian under target presence hypothesis. Then, an LPI-based optimal power allocation strategy is presented to minimize the total transmission power for information signal and radar waveform, which is constrained by a specified information rate for the communication receiver and the desired probabilities of detection and false alarm for the radar receiver. The well-known bisection search method is employed to solve the resulting constrained optimization problem. Finally, numerical simulations are provided to reveal the effects of several system parameters on the power allocation results. It is also demonstrated that the LPI performance of the joint bistatic radar and communication system can be markedly improved by utilizing the proposed scheme. PMID:29186850

  10. Optimal Power Allocation Strategy in a Joint Bistatic Radar and Communication System Based on Low Probability of Intercept.

    PubMed

    Shi, Chenguang; Wang, Fei; Salous, Sana; Zhou, Jianjiang

    2017-11-25

    In this paper, we investigate a low probability of intercept (LPI)-based optimal power allocation strategy for a joint bistatic radar and communication system, which is composed of a dedicated transmitter, a radar receiver, and a communication receiver. The joint system is capable of fulfilling the requirements of both radar and communications simultaneously. First, assuming that the signal-to-noise ratio (SNR) corresponding to the target surveillance path is much weaker than that corresponding to the line of sight path at radar receiver, the analytically closed-form expression for the probability of false alarm is calculated, whereas the closed-form expression for the probability of detection is not analytically tractable and is approximated due to the fact that the received signals are not zero-mean Gaussian under target presence hypothesis. Then, an LPI-based optimal power allocation strategy is presented to minimize the total transmission power for information signal and radar waveform, which is constrained by a specified information rate for the communication receiver and the desired probabilities of detection and false alarm for the radar receiver. The well-known bisection search method is employed to solve the resulting constrained optimization problem. Finally, numerical simulations are provided to reveal the effects of several system parameters on the power allocation results. It is also demonstrated that the LPI performance of the joint bistatic radar and communication system can be markedly improved by utilizing the proposed scheme.
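
    The bisection step can be illustrated with a toy detection model: because the detection probability is monotone in transmit power at fixed false-alarm probability, the minimal power meeting a required Pd can be bracketed and bisected. The Gaussian-approximation expressions and constants below are placeholders, not the closed-form and approximate expressions derived in the paper.

```python
import numpy as np
from scipy.stats import norm

# Toy detection model (Gaussian approximation, not the paper's expressions):
# received SNR grows linearly with transmit power, and
# Pd = Q( Q^{-1}(Pfa) - sqrt(SNR) ) for a coherently integrated target return.
K_SNR = 0.08          # hypothetical propagation/processing gain per watt
P_FA  = 1e-6
PD_REQUIRED = 0.9

def detection_probability(power_w):
    snr = K_SNR * power_w
    return norm.sf(norm.isf(P_FA) - np.sqrt(snr))

def min_power_bisection(lo=0.0, hi=10_000.0, tol=1e-3):
    """Smallest power whose Pd meets the requirement (Pd is monotone in power)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if detection_probability(mid) >= PD_REQUIRED:
            hi = mid
        else:
            lo = mid
    return hi

p_min = min_power_bisection()
print(f"minimum power ~ {p_min:.1f} W, Pd = {detection_probability(p_min):.3f}")
```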

  11. A Bayesian inversion for slip distribution of 1 Apr 2007 Mw8.1 Solomon Islands Earthquake

    NASA Astrophysics Data System (ADS)

    Chen, T.; Luo, H.

    2013-12-01

    On 1 Apr 2007 the megathrust Mw8.1 Solomon Islands earthquake occurred in the southwest Pacific along the New Britain subduction zone. A total of 102 vertical displacement measurements over the southeastern end of the rupture zone from two field surveys after this event provide a unique constraint for slip distribution inversion. In conventional inversion methods (such as bounded-variable least squares), the smoothing parameter that determines the relative weight placed on fitting the data versus smoothing the slip distribution is often subjectively selected at the bend of the trade-off curve. Here a fully probabilistic inversion method [Fukuda, 2008] is applied to estimate the distributed slip and the smoothing parameter objectively. The joint posterior probability density function of distributed slip and the smoothing parameter is formulated under a Bayesian framework and sampled with a Markov chain Monte Carlo method. We estimate the spatial distribution of dip slip associated with the 1 Apr 2007 Solomon Islands earthquake with this method. Early results show a shallower dip angle than previous studies and highly variable dip slip both along-strike and down-dip.

  12. The Effect of Velocity Correlation on the Spatial Evolution of Breakthrough Curves in Heterogeneous Media

    NASA Astrophysics Data System (ADS)

    Massoudieh, A.; Dentz, M.; Le Borgne, T.

    2017-12-01

    In heterogeneous media, the velocity distribution and the spatial correlation structure of velocity for solute particles determine the breakthrough curves and how they evolve as one moves away from the solute source. The ability to predict such evolution can help relate the spatio-statistical hydraulic properties of the media to the transport behavior and travel time distributions. While commonly used non-local transport models such as anomalous dispersion and classical continuous time random walk (CTRW) can reproduce breakthrough curves successfully by adjusting the model parameter values, they lack the ability to relate model parameters to the spatio-statistical properties of the media. This in turn limits the transferability of these models. In the research to be presented, we express concentration or flux of solutes as a distribution over their velocity. We then derive an integrodifferential equation that governs the evolution of the particle distribution over velocity at given times and locations for a particle ensemble, based on a presumed velocity correlation structure and an ergodic cross-sectional velocity distribution. This way, the spatial evolution of breakthrough curves away from the source is predicted based on the cross-sectional velocity distribution and the connectivity, which is expressed by the velocity transition probability density. The transition probability is specified via a copula function that can help construct a joint distribution with a given correlation and given marginal velocities. Using this approach, we analyze the breakthrough curves depending on the velocity distribution and correlation properties. The model shows how the solute transport behavior evolves from ballistic transport at small spatial scales to Fickian dispersion at large length scales relative to the velocity correlation length.

  13. A Hierarchy of Compatibility and Comeasurability Levels in Quantum Logics with Unique Conditional Probabilities

    NASA Astrophysics Data System (ADS)

    Niestegge, Gerd

    2010-12-01

    In the quantum mechanical Hilbert space formalism, the probabilistic interpretation is a later ad-hoc add-on, more or less enforced by the experimental evidence, but not motivated by the mathematical model itself. A model involving a clear probabilistic interpretation from the very beginning is provided by the quantum logics with unique conditional probabilities. It includes the projection lattices in von Neumann algebras and here probability conditionalization becomes identical with the state transition of the Lüders-von Neumann measurement process. This motivates the definition of a hierarchy of five compatibility and comeasurability levels in the abstract setting of the quantum logics with unique conditional probabilities. Their meanings are: the absence of quantum interference or influence, the existence of a joint distribution, simultaneous measurability, and the independence of the final state after two successive measurements from the sequential order of these two measurements. A further level means that two elements of the quantum logic (events) belong to the same Boolean subalgebra. In the general case, the five compatibility and comeasurability levels appear to differ, but they all coincide in the common Hilbert space formalism of quantum mechanics, in von Neumann algebras, and in some other cases.

  14. Individuality and universality in the growth-division laws of single E. coli cells

    NASA Astrophysics Data System (ADS)

    Kennard, Andrew S.; Osella, Matteo; Javer, Avelino; Grilli, Jacopo; Nghe, Philippe; Tans, Sander J.; Cicuta, Pietro; Cosentino Lagomarsino, Marco

    2016-01-01

    The mean size of exponentially dividing Escherichia coli cells in different nutrient conditions is known to depend on the mean growth rate only. However, the joint fluctuations relating cell size, doubling time, and individual growth rate are only starting to be characterized. Recent studies in bacteria reported a universal trend where the spread in both size and doubling times is a linear function of the population means of these variables. Here we combine experiments and theory and use scaling concepts to elucidate the constraints posed by the second observation on the division control mechanism and on the joint fluctuations of sizes and doubling times. We found that scaling relations based on the means collapse both size and doubling-time distributions across different conditions and explain how the shape of their joint fluctuations deviates from the means. Our data on these joint fluctuations highlight the importance of cell individuality: Single cells do not follow the dependence observed for the means between size and either growth rate or inverse doubling time. Our calculations show that these results emerge from a broad class of division control mechanisms requiring a certain scaling form of the "division hazard rate function," which defines the probability rate of dividing as a function of measurable parameters. This "model free" approach gives a rationale for the universal body-size distributions observed in microbial ecosystems across many microbial species, presumably dividing with multiple mechanisms. Additionally, our experiments show a crossover between fast and slow growth in the relation between individual-cell growth rate and division time, which can be understood in terms of different regimes of genome replication control.

  15. Partitioned learning of deep Boltzmann machines for SNP data.

    PubMed

    Hess, Moritz; Lenz, Stefan; Blätte, Tamara J; Bullinger, Lars; Binder, Harald

    2017-10-15

    Learning the joint distributions of measurements, and in particular identification of an appropriate low-dimensional manifold, has been found to be a powerful ingredient of deep learning approaches. Yet, such approaches have hardly been applied to single nucleotide polymorphism (SNP) data, probably due to the high number of features typically exceeding the number of studied individuals. After a brief overview of how deep Boltzmann machines (DBMs), a deep learning approach, can be adapted to SNP data in principle, we specifically present a way to alleviate the dimensionality problem by partitioned learning. We propose a sparse regression approach to coarsely screen the joint distribution of SNPs, followed by training several DBMs on SNP partitions that were identified by the screening. Aggregate features representing SNP patterns and the corresponding SNPs are extracted from the DBMs by a combination of statistical tests and sparse regression. In simulated case-control data, we show how this can uncover complex SNP patterns and augment results from univariate approaches, while maintaining type 1 error control. Time-to-event endpoints are considered in an application with acute myeloid leukemia patients, where SNP patterns are modeled after a pre-screening based on gene expression data. The proposed approach identified three SNPs that seem to jointly influence survival in a validation dataset. This indicates the added value of jointly investigating SNPs compared to standard univariate analyses and makes partitioned learning of DBMs an interesting complementary approach when analyzing SNP data. A Julia package is provided at 'http://github.com/binderh/BoltzmannMachines.jl'. binderh@imbi.uni-freiburg.de. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  16. Joint radius-length distribution as a measure of anisotropic pore eccentricity: an experimental and analytical framework.

    PubMed

    Benjamini, Dan; Basser, Peter J

    2014-12-07

    In this work, we present an experimental design and analytical framework to measure the nonparametric joint radius-length (R-L) distribution of an ensemble of parallel, finite cylindrical pores, and more generally, the eccentricity distribution of anisotropic pores. Employing a novel 3D double pulsed-field gradient acquisition scheme, we first obtain both the marginal radius and length distributions of a population of cylindrical pores and then use these to constrain and stabilize the estimate of the joint radius-length distribution. Using the marginal distributions as constraints allows the joint R-L distribution to be reconstructed from an underdetermined system (i.e., more variables than equations), which requires a relatively small and feasible number of MR acquisitions. Three simulated representative joint R-L distribution phantoms corrupted by different noise levels were reconstructed to demonstrate the process, using this new framework. As expected, the broader the peaks in the joint distribution, the less stable and more sensitive to noise the estimation of the marginal distributions. Nevertheless, the reconstruction of the joint distribution is remarkably robust to increases in noise level; we attribute this characteristic to the use of the marginal distributions as constraints. Axons are known to exhibit local compartment eccentricity variations upon injury; the extent of the variations depends on the severity of the injury. Nonparametric estimation of the eccentricity distribution of injured axonal tissue is of particular interest since generally one cannot assume a parametric distribution a priori. Reconstructing the eccentricity distribution may provide vital information about changes resulting from injury or that occurred during development.
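
    One simple way to see how measured marginals constrain an underdetermined joint estimate (not necessarily the authors' estimator) is iterative proportional fitting: starting from a rough joint estimate, rows and columns are rescaled until the joint reproduces the radius and length marginals. The bin values below are made up.

```python
import numpy as np

rng = np.random.default_rng(7)

# Measured marginal distributions over radius and length bins (hypothetical).
p_radius = np.array([0.10, 0.30, 0.40, 0.15, 0.05])
p_length = np.array([0.05, 0.20, 0.35, 0.25, 0.15])

# A rough, noisy initial estimate of the joint R-L distribution
# (standing in for an unconstrained solution of the underdetermined system).
joint = np.outer(p_radius, p_length) + 0.02 * rng.random((5, 5))
joint /= joint.sum()

# Iterative proportional fitting: rescale rows/columns until the joint
# reproduces both measured marginals.
for _ in range(200):
    joint *= (p_radius / joint.sum(axis=1))[:, None]
    joint *= (p_length / joint.sum(axis=0))[None, :]

print("row-marginal error:", np.abs(joint.sum(axis=1) - p_radius).max())
print("col-marginal error:", np.abs(joint.sum(axis=0) - p_length).max())
```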

  17. Bayesian network models for error detection in radiotherapy plans

    NASA Astrophysics Data System (ADS)

    Kalet, Alan M.; Gennari, John H.; Ford, Eric C.; Phillips, Mark H.

    2015-04-01

    The purpose of this study is to design and develop a probabilistic network for detecting errors in radiotherapy plans for use at the time of initial plan verification. Our group has initiated a multi-pronged approach to reduce these errors. We report on our development of Bayesian models of radiotherapy plans. Bayesian networks consist of joint probability distributions that define the probability of one event, given some set of other known information. Using the networks, we find the probability of obtaining certain radiotherapy parameters, given a set of initial clinical information. A low probability in a propagated network then corresponds to potential errors to be flagged for investigation. To build our networks we first interviewed medical physicists and other domain experts to identify the relevant radiotherapy concepts and their associated interdependencies and to construct a network topology. Next, to populate the network’s conditional probability tables, we used the Hugin Expert software to learn parameter distributions from a subset of de-identified data derived from a radiation oncology based clinical information database system. These data represent 4990 unique prescription cases over a 5 year period. Under test case scenarios with approximately 1.5% introduced error rates, network performance produced areas under the ROC curve of 0.88, 0.98, and 0.89 for the lung, brain and female breast cancer error detection networks, respectively. Comparison of the brain network to human experts' performance (AUC of 0.90 ± 0.01) shows that the Bayes network model performs better than domain experts under the same test conditions. Our results demonstrate the feasibility and effectiveness of comprehensive probabilistic models as part of decision support systems for improved detection of errors in initial radiotherapy plan verification procedures.
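
    A toy version of the flagging logic, with hypothetical variables and hand-set conditional probability tables rather than parameters learned with Hugin from clinical data, is sketched below: given clinical evidence, the conditional probability of an observed plan parameter is looked up, and a low value is flagged for review.

```python
import numpy as np

# Toy two-node network: diagnosis site D -> prescribed dose-per-fraction F.
# P(D) and P(F | D) would normally be learned from historical plan data.
p_site = np.array([0.6, 0.4])                    # P(D): [lung, brain]
p_dose_given_site = np.array([                   # P(F | D), F in {1.8, 2.0, 3.0} Gy
    [0.55, 0.40, 0.05],                          # lung
    [0.20, 0.70, 0.10],                          # brain
])

def plan_probability(site_idx, dose_idx):
    """P(observed dose per fraction | clinical evidence about the site)."""
    return p_dose_given_site[site_idx, dose_idx]

FLAG_THRESHOLD = 0.08
site, dose = 0, 2                                # lung plan with 3.0 Gy/fraction
p = plan_probability(site, dose)
joint = p_site[site] * p                         # joint P(D, F), for reference
print(f"P(dose | site) = {p:.2f}, joint P(D, F) = {joint:.3f}",
      "-> FLAG for review" if p < FLAG_THRESHOLD else "-> OK")
```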

  18. Joint-layer encoder optimization for HEVC scalable extensions

    NASA Astrophysics Data System (ADS)

    Tsai, Chia-Ming; He, Yuwen; Dong, Jie; Ye, Yan; Xiu, Xiaoyu; He, Yong

    2014-09-01

    Scalable video coding provides an efficient solution to support video playback on heterogeneous devices with various channel conditions in heterogeneous networks. SHVC is the latest scalable video coding standard based on the HEVC standard. To improve enhancement layer coding efficiency, inter-layer prediction including texture and motion information generated from the base layer is used for enhancement layer coding. However, the overall performance of the SHVC reference encoder is not fully optimized because rate-distortion optimization (RDO) processes in the base and enhancement layers are independently considered. It is difficult to directly extend the existing joint-layer optimization methods to SHVC due to the complicated coding tree block splitting decisions and in-loop filtering process (e.g., deblocking and sample adaptive offset (SAO) filtering) in HEVC. To solve those problems, a joint-layer optimization method is proposed by adjusting the quantization parameter (QP) to optimally allocate the bit resource between layers. Furthermore, to make more proper resource allocation, the proposed method also considers the viewing probability of base and enhancement layers according to packet loss rate. Based on the viewing probability, a novel joint-layer RD cost function is proposed for joint-layer RDO encoding. The QP values of those coding tree units (CTUs) belonging to lower layers referenced by higher layers are decreased accordingly, and the QP values of those remaining CTUs are increased to keep total bits unchanged. Finally the QP values with minimal joint-layer RD cost are selected to match the viewing probability. The proposed method was applied to the third temporal level (TL-3) pictures in the Random Access configuration. Simulation results demonstrate that the proposed joint-layer optimization method can improve coding performance by 1.3% for these TL-3 pictures compared to the SHVC reference encoder without joint-layer optimization.

  19. Cosmological constraints from the convergence 1-point probability distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patton, Kenneth; Blazek, Jonathan; Honscheid, Klaus

    2017-06-29

    Here, we examine the cosmological information available from the 1-point probability density function (PDF) of the weak-lensing convergence field, utilizing fast L-PICOLA simulations and a Fisher analysis. We find competitive constraints in the Ωm–σ8 plane from the convergence PDF with 188 arcmin² pixels compared to the cosmic shear power spectrum with an equivalent number of modes (ℓ < 886). The convergence PDF also partially breaks the degeneracy cosmic shear exhibits in that parameter space. A joint analysis of the convergence PDF and shear 2-point function also reduces the impact of shape measurement systematics, to which the PDF is less susceptible, and improves the total figure of merit by a factor of 2–3, depending on the level of systematics. Finally, we present a correction factor necessary for calculating the unbiased Fisher information from finite differences using a limited number of cosmological simulations.

  20. Cosmological constraints from the convergence 1-point probability distribution

    NASA Astrophysics Data System (ADS)

    Patton, Kenneth; Blazek, Jonathan; Honscheid, Klaus; Huff, Eric; Melchior, Peter; Ross, Ashley J.; Suchyta, Eric

    2017-11-01

    We examine the cosmological information available from the 1-point probability density function (PDF) of the weak-lensing convergence field, utilizing fast L-PICOLA simulations and a Fisher analysis. We find competitive constraints in the Ωm-σ8 plane from the convergence PDF with 188 arcmin² pixels compared to the cosmic shear power spectrum with an equivalent number of modes (ℓ < 886). The convergence PDF also partially breaks the degeneracy cosmic shear exhibits in that parameter space. A joint analysis of the convergence PDF and shear 2-point function also reduces the impact of shape measurement systematics, to which the PDF is less susceptible, and improves the total figure of merit by a factor of 2-3, depending on the level of systematics. Finally, we present a correction factor necessary for calculating the unbiased Fisher information from finite differences using a limited number of cosmological simulations.

  1. Integrated Data Analysis for Fusion: A Bayesian Tutorial for Fusion Diagnosticians

    NASA Astrophysics Data System (ADS)

    Dinklage, Andreas; Dreier, Heiko; Fischer, Rainer; Gori, Silvio; Preuss, Roland; Toussaint, Udo von

    2008-03-01

    Integrated Data Analysis (IDA) offers a unified way of combining information relevant to fusion experiments. In doing so, IDA addresses typical issues arising in fusion data analysis. In IDA, all information is consistently formulated as probability density functions quantifying uncertainties in the analysis within Bayesian probability theory. For a single diagnostic, IDA allows the identification of faulty measurements and improvements in the setup. For a set of diagnostics, IDA gives joint error distributions allowing the comparison and integration of different diagnostics' results. Validation of physics models can be performed by model comparison techniques. Typical data analysis applications benefit from IDA capabilities of nonlinear error propagation, the inclusion of systematic effects and the comparison of different physics models. Applications include outlier detection, background discrimination, model assessment, and the design of diagnostics. In order to cope with next-step fusion device requirements, appropriate techniques are explored for fast analysis applications.

  2. A consistent NPMLE of the joint distribution function with competing risks data under the dependent masking and right-censoring model.

    PubMed

    Li, Jiahui; Yu, Qiqing

    2016-01-01

    Dinse (Biometrics, 38:417-431, 1982) provides a special type of right-censored and masked competing risks data and proposes a non-parametric maximum likelihood estimator (NPMLE) and a pseudo MLE of the joint distribution function [Formula: see text] with such data. However, their asymptotic properties have not been studied so far. Under the extension of either the conditional masking probability (CMP) model or the random partition masking (RPM) model (Yu and Li, J Nonparametr Stat 24:753-764, 2012), we show that (1) Dinse's estimators are consistent if [Formula: see text] takes on finitely many values and each point in the support set of [Formula: see text] can be observed; (2) if the failure time is continuous, the NPMLE is not uniquely determined, and the standard approach (which puts weights only on one element in each observed set) leads to an inconsistent NPMLE; (3) in general, Dinse's estimators are not consistent even under the discrete assumption; (4) we construct a consistent NPMLE. The consistency is given under a new model called the dependent masking and right-censoring model. The CMP model and the RPM model are indeed special cases of the new model. We compare our estimator to Dinse's estimators through simulation and real data. Simulation study indicates that the consistent NPMLE is a good approximation to the underlying distribution for moderate sample sizes.

  3. Testing the lognormality of the galaxy and weak lensing convergence distributions from Dark Energy Survey maps

    DOE PAGES

    Clerkin, L.; Kirk, D.; Manera, M.; ...

    2016-08-30

    It is well known that the probability distribution function (PDF) of galaxy density contrast is approximately lognormal; whether the PDF of mass fluctuations derived from weak lensing convergence (kappa_WL) is lognormal is less well established. We derive PDFs of the galaxy and projected matter density distributions via the Counts in Cells (CiC) method. We use maps of galaxies and weak lensing convergence produced from the Dark Energy Survey (DES) Science Verification data over 139 deg^2. We test whether the underlying density contrast is well described by a lognormal distribution for the galaxies, the convergence and their joint PDF. We confirm that the galaxy density contrast distribution is well modeled by a lognormal PDF convolved with Poisson noise at angular scales from 10-40 arcmin (corresponding to physical scales of 3-10 Mpc). We note that as kappa_WL is a weighted sum of the mass fluctuations along the line of sight, its PDF is expected to be only approximately lognormal. We find that the kappa_WL distribution is well modeled by a lognormal PDF convolved with Gaussian shape noise at scales between 10 and 20 arcmin, with a best-fit chi^2/DOF of 1.11 compared to 1.84 for a Gaussian model, corresponding to p-values 0.35 and 0.07 respectively, at a scale of 10 arcmin. Above 20 arcmin a simple Gaussian model is sufficient. The joint PDF is also reasonably fitted by a bivariate lognormal. As a consistency check we compare the variances derived from the lognormal modelling with those directly measured via CiC. Our methods are validated against maps from the MICE Grand Challenge N-body simulation.

  4. Testing the lognormality of the galaxy and weak lensing convergence distributions from Dark Energy Survey maps

    NASA Astrophysics Data System (ADS)

    Clerkin, L.; Kirk, D.; Manera, M.; Lahav, O.; Abdalla, F.; Amara, A.; Bacon, D.; Chang, C.; Gaztañaga, E.; Hawken, A.; Jain, B.; Joachimi, B.; Vikram, V.; Abbott, T.; Allam, S.; Armstrong, R.; Benoit-Lévy, A.; Bernstein, G. M.; Bernstein, R. A.; Bertin, E.; Brooks, D.; Burke, D. L.; Rosell, A. Carnero; Carrasco Kind, M.; Crocce, M.; Cunha, C. E.; D'Andrea, C. B.; da Costa, L. N.; Desai, S.; Diehl, H. T.; Dietrich, J. P.; Eifler, T. F.; Evrard, A. E.; Flaugher, B.; Fosalba, P.; Frieman, J.; Gerdes, D. W.; Gruen, D.; Gruendl, R. A.; Gutierrez, G.; Honscheid, K.; James, D. J.; Kent, S.; Kuehn, K.; Kuropatkin, N.; Lima, M.; Melchior, P.; Miquel, R.; Nord, B.; Plazas, A. A.; Romer, A. K.; Roodman, A.; Sanchez, E.; Schubnell, M.; Sevilla-Noarbe, I.; Smith, R. C.; Soares-Santos, M.; Sobreira, F.; Suchyta, E.; Swanson, M. E. C.; Tarle, G.; Walker, A. R.

    2017-04-01

    It is well known that the probability distribution function (PDF) of galaxy density contrast is approximately lognormal; whether the PDF of mass fluctuations derived from weak lensing convergence (κWL) is lognormal is less well established. We derive PDFs of the galaxy and projected matter density distributions via the counts-in-cells (CiC) method. We use maps of galaxies and weak lensing convergence produced from the Dark Energy Survey Science Verification data over 139 deg2. We test whether the underlying density contrast is well described by a lognormal distribution for the galaxies, the convergence and their joint PDF. We confirm that the galaxy density contrast distribution is well modelled by a lognormal PDF convolved with Poisson noise at angular scales from 10 to 40 arcmin (corresponding to physical scales of 3-10 Mpc). We note that as κWL is a weighted sum of the mass fluctuations along the line of sight, its PDF is expected to be only approximately lognormal. We find that the κWL distribution is well modelled by a lognormal PDF convolved with Gaussian shape noise at scales between 10 and 20 arcmin, with a best-fitting χ2/dof of 1.11 compared to 1.84 for a Gaussian model, corresponding to p-values 0.35 and 0.07, respectively, at a scale of 10 arcmin. Above 20 arcmin a simple Gaussian model is sufficient. The joint PDF is also reasonably fitted by a bivariate lognormal. As a consistency check, we compare the variances derived from the lognormal modelling with those directly measured via CiC. Our methods are validated against maps from the MICE Grand Challenge N-body simulation.
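
    The counts-in-cells test can be sketched numerically: draw a zero-mean shifted-lognormal convergence-like field, add Gaussian shape noise, and compare its histogram with the lognormal PDF convolved with the noise kernel. The field parameters, grid, and noise level below are arbitrary, and the comparison statistic is only a rough stand-in for the chi-square fits reported above.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)

# Synthetic convergence-like field: zero-mean shifted lognormal plus shape noise.
sig_ln, sig_noise, n_pix = 0.5, 0.15, 200_000
kappa_true = np.exp(rng.normal(-0.5 * sig_ln**2, sig_ln, n_pix)) - 1.0
kappa_obs  = kappa_true + rng.normal(0.0, sig_noise, n_pix)

# Model PDF: shifted-lognormal PDF convolved numerically with the Gaussian noise.
x = np.linspace(-1.5, 4.0, 2201)
dx = x[1] - x[0]
pdf_ln = stats.lognorm.pdf(x + 1.0, s=sig_ln, scale=np.exp(-0.5 * sig_ln**2))
kernel = stats.norm.pdf(np.linspace(-0.75, 0.75, 601), scale=sig_noise)
pdf_model = np.convolve(pdf_ln, kernel, mode="same") * dx

# Compare with the observed counts-in-cells histogram.
hist, edges = np.histogram(kappa_obs, bins=60, range=(-1.0, 2.0), density=True)
centres = 0.5 * (edges[:-1] + edges[1:])
model_at_centres = np.interp(centres, x, pdf_model)
chi2 = np.sum((hist - model_at_centres)**2 / np.maximum(model_at_centres, 1e-12))
print("pseudo-chi2 per bin:", round(chi2 / len(hist), 3))
```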

  5. Accumulation risk assessment for the flooding hazard

    NASA Astrophysics Data System (ADS)

    Roth, Giorgio; Ghizzoni, Tatiana; Rudari, Roberto

    2010-05-01

    One of the main consequences of demographic and economic development and of the globalization of markets and trade is the accumulation of risks. In most cases, the accumulation of risks arises from the geographic concentration of a number of vulnerable elements in a single place. For natural events, the accumulation of risks can be associated not only with intensity but also with the spatial extent of the event. In this case, the magnitude can be such that large areas, which may include many regions or even large portions of different countries, are struck by single, catastrophic events. Among natural risks, the impact of the flooding hazard cannot be overstated. To cope with it, a variety of mitigation actions can be put in place: from the improvement of monitoring and alert systems to the development of hydraulic structures, through land-use restrictions, civil protection, and financial and insurance plans. All of these options have social and economic impacts, either positive or negative, whose proper estimation should rely on the assumption of appropriate - present and future - flood risk scenarios. It is therefore necessary to identify suitable statistical methodologies able to describe the multivariate aspects of the physical processes involved and their spatial dependence. In hydrology and meteorology, but also in finance and insurance practice, it was recognized early on that the distributions of classical statistical theory (e.g., the normal and gamma families) are of restricted use for modeling multivariate spatial data. Recent research efforts have therefore been directed towards developing statistical models capable of describing the forms of asymmetry manifest in data sets. This applies in particular to the quite frequent case of phenomena whose empirical outcomes behave in a non-normal fashion but still maintain some broad similarity with the multivariate normal distribution. Fruitful approaches were recognized in the use of flexible models that include the normal distribution as a special or limiting case (e.g., the skew-normal or skew-t distributions). The present contribution is an attempt to provide a better estimate of the joint probability distribution describing flood events in a multi-site, multi-basin fashion. This goal is pursued through the multivariate skew-t distribution, which allows the joint probability distribution to be defined analytically. The performance of the skew-t distribution is discussed with reference to the Tanaro River in northwestern Italy. To enhance the characterization of the correlation structure, both nested and non-nested gauging stations are selected, with significantly different contributing areas.

  6. Evaluation of scattered light distributions of cw-transillumination for functional diagnostic of rheumatic disorders in interphalangeal joints

    NASA Astrophysics Data System (ADS)

    Prapavat, Viravuth; Schuetz, Rijk; Runge, Wolfram; Beuthan, Juergen; Mueller, Gerhard J.

    1995-12-01

    This paper presents in-vitro studies using the scattered intensity distribution obtained by cw-transillumination to examine the condition of rheumatic disorders of interphalangeal joints. Inflammation of joints, due to rheumatic diseases, leads to changes in the synovial membrane, synovia composition and content, and anatomic geometrical variations. Measurements have shown that these rheumatism-induced inflammation processes result in a variation in the optical properties of joint systems. With a scanning system the interphalangeal joint is transilluminated with diode lasers (670 nm, 905 nm) perpendicular to the joint cavity. The detection of the entire distribution of the transmitted radiation intensity was performed with a CCD camera. As a function of the structure and optical properties of the transilluminated volume we obtained scattered-radiation distributions that show characteristic variations in intensity and shape. Using signal and image processing procedures we evaluated the measured scattered distributions regarding their information weight, shape and scale features. Mathematical methods were used to find classification criteria to determine variations of the joint condition.

  7. Identifying Changes in the Probability of High Temperature, High Humidity Heat Wave Events

    NASA Astrophysics Data System (ADS)

    Ballard, T.; Diffenbaugh, N. S.

    2016-12-01

    Understanding how heat waves will respond to climate change is critical for adequate planning and adaptation. While temperature is the primary determinant of heat wave severity, humidity has been shown to play a key role in heat wave intensity with direct links to human health and safety. Here we investigate the individual contributions of temperature and specific humidity to extreme heat wave conditions in recent decades. Using global NCEP-DOE Reanalysis II daily data, we identify regional variability in the joint probability distribution of humidity and temperature. We also identify a statistically significant positive trend in humidity over the eastern U.S. during heat wave events, leading to an increased probability of high humidity, high temperature events. The extent to which we can expect this trend to continue under climate change is complicated due to variability between CMIP5 models, in particular among projections of humidity. However, our results support the notion that heat wave dynamics are characterized by more than high temperatures alone, and understanding and quantifying the various components of the heat wave system is crucial for forecasting future impacts.
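
    A small sketch of the kind of diagnostic described above: an empirical joint distribution of daily temperature and specific humidity, plus a linear trend test on humidity during hot days. The synthetic arrays stand in for the NCEP-DOE Reanalysis II fields; the variable names, the 90th-percentile heat-wave definition and all numbers are assumptions.

    ```python
    # Empirical joint temperature-humidity distribution and a humidity trend test (sketch).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    years = np.repeat(np.arange(1980, 2016), 92)                        # synthetic summer days
    temp = rng.normal(30, 3, size=years.size)                           # deg C, placeholder
    hum = rng.normal(12 + 0.02 * (years - 1980), 2, size=years.size)    # g/kg, placeholder

    # Empirical joint PDF on a 2-D grid (normalised histogram)
    H, t_edges, q_edges = np.histogram2d(temp, hum, bins=40, density=True)

    # Trend in humidity restricted to heat-wave days (here: temp above its 90th percentile)
    hot = temp > np.percentile(temp, 90)
    res = stats.linregress(years[hot], hum[hot])
    print(f"humidity trend on hot days: {res.slope:.3f} g/kg per year, p = {res.pvalue:.3g}")
    ```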

  8. Ceramic joints

    DOEpatents

    Miller, Bradley J.; Patten, Jr., Donald O.

    1991-01-01

    Butt joints between materials having different coefficients of thermal expansion are prepared having a reduced probability of failure by stress fracture. This is accomplished by narrowing/tapering the material having the lower coefficient of thermal expansion in a direction away from the joint interface and not joining the narrowed/tapered surface to the material having the higher coefficient of thermal expansion.

  9. Calculation of a fluctuating entropic force by phase space sampling.

    PubMed

    Waters, James T; Kim, Harold D

    2015-07-01

    A polymer chain pinned in space exerts a fluctuating force on the pin point in thermal equilibrium. The average of such fluctuating force is well understood from statistical mechanics as an entropic force, but little is known about the underlying force distribution. Here, we introduce two phase space sampling methods that can produce the equilibrium distribution of instantaneous forces exerted by a terminally pinned polymer. In these methods, both the positions and momenta of mass points representing a freely jointed chain are perturbed in accordance with the spatial constraints and the Boltzmann distribution of total energy. The constraint force for each conformation and momentum is calculated using Lagrangian dynamics. Using terminally pinned chains in space and on a surface, we show that the force distribution is highly asymmetric with both tensile and compressive forces. Most importantly, the mean of the distribution, which is equal to the entropic force, is not the most probable force even for long chains. Our work provides insights into the mechanistic origin of entropic forces, and an efficient computational tool for unbiased sampling of the phase space of a constrained system.

  10. Adaptive Multi-Agent Systems for Constrained Optimization

    NASA Technical Reports Server (NTRS)

    Macready, William; Bieniawski, Stefan; Wolpert, David H.

    2004-01-01

    Product Distribution (PD) theory is a new framework for analyzing and controlling distributed systems. Here we demonstrate its use for distributed stochastic optimization. First we review one motivation of PD theory, as the information-theoretic extension of conventional full-rationality game theory to the case of bounded rational agents. In this extension the equilibrium of the game is the optimizer of a Lagrangian of the probability distribution of the joint state of the agents. When the game in question is a team game with constraints, that equilibrium optimizes the expected value of the team game utility, subject to those constraints. The updating of the Lagrange parameters in the Lagrangian can be viewed as a form of automated annealing that focuses the MAS more and more on the optimal pure strategy. This provides a simple way to map the solution of any constrained optimization problem onto the equilibrium of a Multi-Agent System (MAS). We present computer experiments involving both the Queens problem and K-SAT, validating the predictions of PD theory and its use for off-the-shelf distributed adaptive optimization.

  11. Drainage lineaments in late Quaternary sediments, Ascension and East Baton Rouge Parishes, Louisiana

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Birdseye, R.U.; Christians, G.L.; Olson, J.L.

    1988-09-01

    Analysis of conventional aerial photographs, NHAP imagery, and topographic maps covering Ascension and East Baton Rouge Parishes in southeastern Louisiana reveals fine-textured parallel sets of drainage lineaments and numerous fluvial anomalies. Linear physiographic features include stream channels, natural levees, stream valleys, rectangular drainage patterns, and terrace scarps. Late Pleistocene and Holocene surfaces are involved, but only small drainages are affected and no such control is exerted on the Mississippi river. Most lineaments show preferred northeast and northwest trends. Orientations of mapped joint systems are similar to lineament orientations, which suggests that trends of physiographic lineaments are controlled by underlying structure. Several surface faults are mapped in the northern portion of the region, all of which strike essentially east-west. Salt domes are located in the subsurface to the south; however, they have no geomorphic expression and do not seem to be associated with the lineaments. Therefore, joints rather than faults or salt diapirs are a likely structural control. Joints may provide paths of weakness along which surface drainage might develop preferentially. Thus, joints probably exert an important control on the geomorphology of the region. The joint pattern appears to be related to the local distribution of the Mesozoic and Cenozoic strata, and may result from regional subsidence due to the thick accumulation of deltaic sediments. Conclusive subsurface data are currently unavailable, and shallow seismic surveys in the future may strengthen the case for an interpretation of structural control of drainage.

  12. Training models of anatomic shape variability

    PubMed Central

    Merck, Derek; Tracton, Gregg; Saboo, Rohit; Levy, Joshua; Chaney, Edward; Pizer, Stephen; Joshi, Sarang

    2008-01-01

    Learning probability distributions of the shape of anatomic structures requires fitting shape representations to human expert segmentations from training sets of medical images. The quality of statistical segmentation and registration methods is directly related to the quality of this initial shape fitting, yet the subject is largely overlooked or described in an ad hoc way. This article presents a set of general principles to guide such training. Our novel method is to jointly estimate both the best geometric model for any given image and the shape distribution for the entire population of training images by iteratively relaxing purely geometric constraints in favor of the converging shape probabilities as the fitted objects converge to their target segmentations. The geometric constraints are carefully crafted both to obtain legal, nonself-interpenetrating shapes and to impose the model-to-model correspondences required for useful statistical analysis. The paper closes with example applications of the method to synthetic and real patient CT image sets, including same patient male pelvis and head and neck images, and cross patient kidney and brain images. Finally, we outline how this shape training serves as the basis for our approach to IGRT/ART. PMID:18777919

  13. Electromigration Mechanism of Failure in Flip-Chip Solder Joints Based on Discrete Void Formation.

    PubMed

    Chang, Yuan-Wei; Cheng, Yin; Helfen, Lukas; Xu, Feng; Tian, Tian; Scheel, Mario; Di Michiel, Marco; Chen, Chih; Tu, King-Ning; Baumbach, Tilo

    2017-12-20

    In this investigation, SnAgCu and SN100C solders were electromigration (EM) tested, and the 3D laminography imaging technique was employed for in-situ observation of the microstructure evolution during testing. We found that discrete voids nucleate, grow and coalesce along the intermetallic compound/solder interface during EM testing. A systematic analysis yields quantitative information on the number, volume, and growth rate of voids, and the EM parameter DZ*. We observe that fast intrinsic diffusion in SnAgCu solder causes void growth and coalescence, while in the SN100C solder this coalescence was not significant. To deduce the current density distribution, finite-element models were constructed on the basis of the laminography images. The discrete voids do not change the global current density distribution, but they induce local current crowding around the voids: this local current crowding enhances the lateral void growth and coalescence. The correlation between the current density and the probability of void formation indicates that a threshold current density exists for the activation of void formation. There is a significant increase in the probability of void formation when the current density exceeds half of the maximum value.

  14. Theater Logistics Management: A Case for a Joint Distribution Solution

    DTIC Science & Technology

    2008-03-15

    Multinational (JIIM) operations necessitate creating joint-multinational-based distribution management centers which effectively manage materiel...in the world. However, as the operation continued, the inherent weakness of the intra-theater logistical distribution management link became clear...compounded the distribution management problem. The common thread between each of the noted GAO failures is the lack of a defined joint, theater

  15. On estimating the phase of periodic waveform in additive Gaussian noise, part 2

    NASA Astrophysics Data System (ADS)

    Rauch, L. L.

    1984-11-01

    Motivated by advances in signal processing technology that support more complex algorithms, a new look is taken at the problem of estimating the phase and other parameters of a periodic waveform in additive Gaussian noise. The general problem was introduced and the maximum a posteriori probability criterion with signal space interpretation was used to obtain the structures of optimum and some suboptimum phase estimators for known constant frequency and unknown constant phase with an a priori distribution. Optimal algorithms are obtained for some cases where the frequency is a parameterized function of time with the unknown parameters and phase having a joint a priori distribution. In the last section, the intrinsic and extrinsic geometry of hypersurfaces is introduced to provide insight to the estimation problem for the small noise and large noise cases.

  16. On Estimating the Phase of Periodic Waveform in Additive Gaussian Noise, Part 2

    NASA Technical Reports Server (NTRS)

    Rauch, L. L.

    1984-01-01

    Motivated by advances in signal processing technology that support more complex algorithms, a new look is taken at the problem of estimating the phase and other parameters of a periodic waveform in additive Gaussian noise. The general problem was introduced and the maximum a posteriori probability criterion with signal space interpretation was used to obtain the structures of optimum and some suboptimum phase estimators for known constant frequency and unknown constant phase with an a priori distribution. Optimal algorithms are obtained for some cases where the frequency is a parameterized function of time with the unknown parameters and phase having a joint a priori distribution. In the last section, the intrinsic and extrinsic geometry of hypersurfaces is introduced to provide insight to the estimation problem for the small noise and large noise cases.

  17. A Bayesian method for inferring transmission chains in a partially observed epidemic.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marzouk, Youssef M.; Ray, Jaideep

    2008-10-01

    We present a Bayesian approach for estimating transmission chains and rates in the Abakaliki smallpox epidemic of 1967. The epidemic affected 30 individuals in a community of 74; only the dates of appearance of symptoms were recorded. Our model assumes stochastic transmission of the infections over a social network. Distinct binomial random graphs model intra- and inter-compound social connections, while disease transmission over each link is treated as a Poisson process. Link probabilities and rate parameters are objects of inference. Dates of infection and recovery comprise the remaining unknowns. Distributions for smallpox incubation and recovery periods are obtained from historical data. Using Markov chain Monte Carlo, we explore the joint posterior distribution of the scalar parameters and provide an expected connectivity pattern for the social graph and infection pathway.

  18. Passage relevance models for genomics search.

    PubMed

    Urbain, Jay; Frieder, Ophir; Goharian, Nazli

    2009-03-19

    We present a passage relevance model for integrating syntactic and semantic evidence of biomedical concepts and topics using a probabilistic graphical model. Component models of topics, concepts, terms, and document are represented as potential functions within a Markov Random Field. The probability of a passage being relevant to a biologist's information need is represented as the joint distribution across all potential functions. Relevance model feedback of top ranked passages is used to improve distributional estimates of query concepts and topics in context, and a dimensional indexing strategy is used for efficient aggregation of concept and term statistics. By integrating multiple sources of evidence including dependencies between topics, concepts, and terms, we seek to improve genomics literature passage retrieval precision. Using this model, we are able to demonstrate statistically significant improvements in retrieval precision using a large genomics literature corpus.

  19. Bayesian Networks in Educational Assessment

    PubMed Central

    Culbertson, Michael J.

    2015-01-01

    Bayesian networks (BN) provide a convenient and intuitive framework for specifying complex joint probability distributions and are thus well suited for modeling content domains of educational assessments at a diagnostic level. BN have been used extensively in the artificial intelligence community as student models for intelligent tutoring systems (ITS) but have received less attention among psychometricians. This critical review outlines the existing research on BN in educational assessment, providing an introduction to the ITS literature for the psychometric community, and points out several promising research paths. The online appendix lists 40 assessment systems that serve as empirical examples of the use of BN for educational assessment in a variety of domains. PMID:29881033
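
    A toy sketch of the idea: a two-skill, three-item diagnostic network whose joint probability distribution is specified by skill priors and per-item conditional probability tables, with posterior skill probabilities obtained by brute-force enumeration. The structure and all probability values are invented for illustration only.

    ```python
    # Tiny diagnostic Bayesian network with inference by enumeration (sketch).
    import itertools

    p_skill = {"S1": 0.6, "S2": 0.5}                     # prior mastery probabilities
    # P(correct | required skills mastered?) for each item
    p_item = {
        "I1": {("S1",): {True: 0.85, False: 0.2}},
        "I2": {("S2",): {True: 0.8, False: 0.25}},
        "I3": {("S1", "S2"): {True: 0.9, False: 0.15}},  # conjunctive: needs both skills
    }
    observed = {"I1": 1, "I2": 0, "I3": 1}               # observed responses

    def joint(s1, s2, obs):
        p = (p_skill["S1"] if s1 else 1 - p_skill["S1"]) * \
            (p_skill["S2"] if s2 else 1 - p_skill["S2"])
        mastered = {"S1": s1, "S2": s2}
        for item, resp in obs.items():
            parents, table = next(iter(p_item[item].items()))
            ok = all(mastered[s] for s in parents)
            p_correct = table[ok]
            p *= p_correct if resp == 1 else 1 - p_correct
        return p

    norm = sum(joint(s1, s2, observed) for s1, s2 in itertools.product([True, False], repeat=2))
    post_s1 = sum(joint(True, s2, observed) for s2 in [True, False]) / norm
    print(f"P(S1 mastered | responses) = {post_s1:.3f}")
    ```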

  20. Changes in recruitment of Rhesus soleus and gastrocnemius muscles following a 14 day spaceflight

    NASA Technical Reports Server (NTRS)

    Hodgson, J. A.; Bodine-Fowler, S. C.; Roy, R. R.; De Leon, R. D.; De Guzman, C. P.; Koslovskaia, I.; Sirota, M.; Edgerton, V. R.

    1991-01-01

    The effect of microgravity on the recruitment patterns of the soleus, gastrocnemius, and tibialis-anterior muscles was investigated by comparing electromyograms (EMGs) of these muscles of Rhesus monkeys implanted with EMG electrodes, taken before and after a 14-day flight on board Cosmos 2044. It was found that the EMG amplitude values in the soleus muscle decreased after the spaceflight but returned to normal values over the 2-wk recovery period. The medial amplitudes of gastrocnemius and tibialis anterior were not changed by flight. Joint probability density distributions displayed changes after flight in both the soleus and gastrocnemius muscles, but not in tibialis anterior.

  1. Accounting for tagging-to-harvest mortality in a Brownie tag-recovery model by incorporating radio-telemetry data.

    PubMed

    Buderman, Frances E; Diefenbach, Duane R; Casalena, Mary Jo; Rosenberry, Christopher S; Wallingford, Bret D

    2014-04-01

    The Brownie tag-recovery model is useful for estimating harvest rates but assumes all tagged individuals survive to the first hunting season; otherwise, mortality between time of tagging and the hunting season will cause the Brownie estimator to be negatively biased. Alternatively, fitting animals with radio transmitters can be used to accurately estimate harvest rate but may be more costly. We developed a joint model to estimate harvest and annual survival rates that combines known-fate data from animals fitted with transmitters to estimate the probability of surviving the period from capture to the first hunting season, and data from reward-tagged animals in a Brownie tag-recovery model. We evaluated bias and precision of the joint estimator, and how to optimally allocate effort between animals fitted with radio transmitters and inexpensive ear tags or leg bands. Tagging-to-harvest survival rates from >20 individuals with radio transmitters combined with 50-100 reward tags resulted in an unbiased and precise estimator of harvest rates. In addition, the joint model can test whether transmitters affect an individual's probability of being harvested. We illustrate application of the model using data from wild turkey, Meleagris gallapavo, to estimate harvest rates, and data from white-tailed deer, Odocoileus virginianus, to evaluate whether the presence of a visible radio transmitter is related to the probability of a deer being harvested. The joint known-fate tag-recovery model eliminates the requirement to capture and mark animals immediately prior to the hunting season to obtain accurate and precise estimates of harvest rate. In addition, the joint model can assess whether marking animals with radio transmitters affects the individual's probability of being harvested, caused by hunter selectivity or changes in a marked animal's behavior.

  2. Synchrony in Joint Action Is Directed by Each Participant’s Motor Control System

    PubMed Central

    Noy, Lior; Weiser, Netta; Friedman, Jason

    2017-01-01

    In this work, we ask how the probability of achieving synchrony in joint action is affected by the choice of motion parameters of each individual. We use the mirror game paradigm to study how changes in leader’s motion parameters, specifically frequency and peak velocity, affect the probability of entering the state of co-confidence (CC) motion: a dyadic state of synchronized, smooth and co-predictive motions. In order to systematically study this question, we used a one-person version of the mirror game, where the participant mirrored piece-wise rhythmic movements produced by a computer on a graphics tablet. We systematically varied the frequency and peak velocity of the movements to determine how these parameters affect the likelihood of synchronized joint action. To assess synchrony in the mirror game we used the previously developed marker of co-confident (CC) motions: smooth, jitter-less and synchronized motions indicative of co-predictive control. We found that when mirroring movements with low frequencies (i.e., long duration movements), the participants never showed CC, and as the frequency of the stimuli increased, the probability of observing CC also increased. This finding is discussed in the framework of motor control studies showing an upper limit on the duration of smooth motion. We confirmed the relationship between motion parameters and the probability of performing CC with three sets of data of open-ended two-player mirror games. These findings demonstrate that when performing movements together, there are optimal movement frequencies to use in order to maximize the possibility of entering a state of synchronized joint action. It also shows that the ability to perform synchronized joint action is constrained by the properties of our motor control systems. PMID:28443047

  3. Accounting for tagging-to-harvest mortality in a Brownie tag-recovery model by incorporating radio-telemetry data

    USGS Publications Warehouse

    Buderman, Frances E.; Diefenbach, Duane R.; Casalena, Mary Jo; Rosenberry, Christopher S.; Wallingford, Bret D.

    2014-01-01

    The Brownie tag-recovery model is useful for estimating harvest rates but assumes all tagged individuals survive to the first hunting season; otherwise, mortality between time of tagging and the hunting season will cause the Brownie estimator to be negatively biased. Alternatively, fitting animals with radio transmitters can be used to accurately estimate harvest rate but may be more costly. We developed a joint model to estimate harvest and annual survival rates that combines known-fate data from animals fitted with transmitters to estimate the probability of surviving the period from capture to the first hunting season, and data from reward-tagged animals in a Brownie tag-recovery model. We evaluated bias and precision of the joint estimator, and how to optimally allocate effort between animals fitted with radio transmitters and inexpensive ear tags or leg bands. Tagging-to-harvest survival rates from >20 individuals with radio transmitters combined with 50–100 reward tags resulted in an unbiased and precise estimator of harvest rates. In addition, the joint model can test whether transmitters affect an individual's probability of being harvested. We illustrate application of the model using data from wild turkey, Meleagris gallapavo,to estimate harvest rates, and data from white-tailed deer, Odocoileus virginianus, to evaluate whether the presence of a visible radio transmitter is related to the probability of a deer being harvested. The joint known-fate tag-recovery model eliminates the requirement to capture and mark animals immediately prior to the hunting season to obtain accurate and precise estimates of harvest rate. In addition, the joint model can assess whether marking animals with radio transmitters affects the individual's probability of being harvested, caused by hunter selectivity or changes in a marked animal's behavior.

  4. Joint modeling of longitudinal data and discrete-time survival outcome.

    PubMed

    Qiu, Feiyou; Stein, Catherine M; Elston, Robert C

    2016-08-01

    A predictive joint shared parameter model is proposed for discrete time-to-event and longitudinal data. A discrete survival model with frailty and a generalized linear mixed model for the longitudinal data are joined to predict the probability of events. This joint model focuses on predicting discrete time-to-event outcome, taking advantage of repeated measurements. We show that the probability of an event in a time window can be more precisely predicted by incorporating the longitudinal measurements. The model was investigated by comparison with a two-step model and a discrete-time survival model. Results from both a study on the occurrence of tuberculosis and simulated data show that the joint model is superior to the other models in discrimination ability, especially as the latent variables related to both survival times and the longitudinal measurements depart from 0. © The Author(s) 2013.

  5. Probability density function modeling of scalar mixing from concentrated sources in turbulent channel flow

    NASA Astrophysics Data System (ADS)

    Bakosi, J.; Franzese, P.; Boybeyi, Z.

    2007-11-01

    Dispersion of a passive scalar from concentrated sources in fully developed turbulent channel flow is studied with the probability density function (PDF) method. The joint PDF of velocity, turbulent frequency and scalar concentration is represented by a large number of Lagrangian particles. A stochastic near-wall PDF model combines the generalized Langevin model of Haworth and Pope [Phys. Fluids 29, 387 (1986)] with Durbin's [J. Fluid Mech. 249, 465 (1993)] method of elliptic relaxation to provide a mathematically exact treatment of convective and viscous transport with a nonlocal representation of the near-wall Reynolds stress anisotropy. The presence of walls is incorporated through the imposition of no-slip and impermeability conditions on particles without the use of damping or wall-functions. Information on the turbulent time scale is supplied by the gamma-distribution model of van Slooten et al. [Phys. Fluids 10, 246 (1998)]. Two different micromixing models are compared that incorporate the effect of small scale mixing on the transported scalar: the widely used interaction by exchange with the mean and the interaction by exchange with the conditional mean model. Single-point velocity and concentration statistics are compared to direct numerical simulation and experimental data at Reτ=1080 based on the friction velocity and the channel half width. The joint model accurately reproduces a wide variety of conditional and unconditional statistics in both physical and composition space.

  6. An integrated logit model for contamination event detection in water distribution systems.

    PubMed

    Housh, Mashor; Ostfeld, Avi

    2015-05-15

    The problem of contamination event detection in water distribution systems has become one of the most challenging research topics in water distribution systems analysis. Current attempts at event detection utilize a variety of approaches including statistical, heuristic, machine learning, and optimization methods. Several existing event detection systems share a common feature in which alarms are obtained separately for each water quality indicator. Unifying these single alarms from different indicators is usually performed by means of simple heuristics. A salient feature of the approach developed here is the use of a statistically oriented discrete choice model, estimated by the maximum likelihood method, to integrate the single alarms. The discrete choice model is jointly calibrated with the other components of the event detection framework on a training data set using genetic algorithms. The process of fusing the individual indicator probabilities, which is overlooked in many existing event detection models, is confirmed to be a crucial part of the system whose performance can be improved by exploiting a discrete choice model. The developed methodology is tested on real water quality data, showing improved performance in decreasing the number of false positive alarms and in detecting events with higher probabilities, compared to previous studies. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. The effects of the one-step replica symmetry breaking on the Sherrington-Kirkpatrick spin glass model in the presence of random field with a joint Gaussian probability density function for the exchange interactions and random fields

    NASA Astrophysics Data System (ADS)

    Hadjiagapiou, Ioannis A.; Velonakis, Ioannis N.

    2018-07-01

    The Sherrington-Kirkpatrick Ising spin glass model, in the presence of a random magnetic field, is investigated within the framework of the one-step replica symmetry breaking. The two random variables (exchange integral interaction Jij and random magnetic field hi) are drawn from a joint Gaussian probability density function characterized by a correlation coefficient ρ, assuming positive and negative values. The thermodynamic properties, the three different phase diagrams and system's parameters are computed with respect to the natural parameters of the joint Gaussian probability density function at non-zero and zero temperatures. The low temperature negative entropy controversy, a result of the replica symmetry approach, has been partly remedied in the current study, leading to a less negative result. In addition, the present system possesses two successive spin glass phase transitions with characteristic temperatures.

  8. Independent events in elementary probability theory

    NASA Astrophysics Data System (ADS)

    Csenki, Attila

    2011-07-01

    In Probability and Statistics taught to mathematicians as a first introduction or to a non-mathematical audience, joint independence of events is introduced by requiring that the multiplication rule is satisfied. The following statement is usually tacitly assumed to hold (and, at best, intuitively motivated): If the n events E 1, E 2, … , E n are jointly independent then any two events A and B built in finitely many steps from two disjoint subsets of E 1, E 2, … , E n are also independent. The operations 'union', 'intersection' and 'complementation' are permitted only when forming the events A and B. Here we examine this statement from the point of view of elementary probability theory. The approach described here is accessible also to users of probability theory and is believed to be novel.
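
    The statement can be checked mechanically for any concrete choice of probabilities by summing the product measure over the 2^n atoms; the sketch below does this for n = 4 with A = E1 ∪ E2 and B = E3 ∩ E4^c. The probability values are arbitrary.

    ```python
    # Exact check (no simulation) that events built from disjoint subsets of jointly
    # independent events are themselves independent.
    import itertools

    p = [0.3, 0.6, 0.45, 0.7]  # P(E1), ..., P(E4); events assumed jointly independent

    def prob(event):
        """Probability of an event given as a predicate on the outcome (e1, ..., e4),
        computed by summing over the 2^n atoms of the product measure."""
        total = 0.0
        for outcome in itertools.product([0, 1], repeat=len(p)):
            weight = 1.0
            for pi, oi in zip(p, outcome):
                weight *= pi if oi else 1 - pi
            if event(outcome):
                total += weight
        return total

    A = lambda o: o[0] or o[1]            # A = E1 union E2            (built from {E1, E2})
    B = lambda o: o[2] and not o[3]       # B = E3 intersect not-E4    (built from {E3, E4})
    AB = lambda o: A(o) and B(o)

    print(prob(AB), prob(A) * prob(B))    # equal, confirming independence of A and B
    ```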

  9. Multi-variate joint PDF for non-Gaussianities: exact formulation and generic approximations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Verde, Licia; Jimenez, Raul; Alvarez-Gaume, Luis

    2013-06-01

    We provide an exact expression for the multi-variate joint probability distribution function of non-Gaussian fields primordially arising from local transformations of a Gaussian field. This kind of non-Gaussianity is generated in many models of inflation. We apply our expression to the non-Gaussianity estimation from Cosmic Microwave Background maps and the halo mass function where we obtain analytical expressions. We also provide analytic approximations and their range of validity. For the Cosmic Microwave Background we give a fast way to compute the PDF which is valid up to more than 7σ for f_NL values (both true and sampled) not ruled out by current observations, which consists of expressing the PDF as a combination of bispectrum and trispectrum of the temperature maps. The resulting expression is valid for any kind of non-Gaussianity and is not limited to the local type. The above results may serve as the basis for a fully Bayesian analysis of the non-Gaussianity parameter.
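
    A minimal sketch of the local-type transformation underlying this kind of non-Gaussianity, phi_NG = phi + f_NL (phi² - <phi²>), applied to a Gaussian field and checked against the leading-order skewness. The value of f_NL is deliberately exaggerated for visibility and nothing here reproduces the paper's exact PDF.

    ```python
    # Local-type non-Gaussian field from a Gaussian field, with a skewness check (sketch).
    import numpy as np

    rng = np.random.default_rng(3)
    phi = rng.normal(0.0, 1.0, size=1_000_000)       # Gaussian field values (unit variance)
    f_nl = 0.1                                       # dimensionless, illustrative
    phi_ng = phi + f_nl * (phi**2 - 1.0)             # local transformation, zero mean

    # Skewness grows linearly with f_NL at leading order
    skew = np.mean(phi_ng**3) / np.mean(phi_ng**2) ** 1.5
    print(f"sample skewness = {skew:.3f} (leading order ~ 6 f_NL = {6 * f_nl:.3f})")
    ```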

  10. Flood Damages- savings potential for Austrian municipalities and evidence of adaptation

    NASA Astrophysics Data System (ADS)

    Unterberger, C.

    2016-12-01

    Recent studies show that the number of extreme precipitation events has increased globally and will continue to do so in the future. These observations are particularly true for central, northern and north-eastern Europe. These changes in the patterns of extreme events have direct repercussions for policy makers. Rojas et al. (2013) find that by 2080, annual damages could increase by a factor of 17 (from €5.5 bn/year today to €98 bn/year in 2080) in the event that no adaptation measures are taken. Steininger et al. (2015) find that climate- and weather-induced extreme events account for a current annual welfare loss of about €1 billion in Austria. As a result, policy makers will need to understand the interaction between hazard, exposure and vulnerability, with the goal of achieving flood risk reduction. What is needed is a better understanding of where exposure, vulnerability and eventually flood risk are highest, i.e. where to reduce risk first, and of which factors drive existing flood risk. This article analyzes direct flood losses as reported by 1153 Austrian municipalities between 2005 and 2013. To achieve comparability between flood damages and municipalities' ordinary spending, a "vulnerability threshold" is introduced suggesting that flood damages should be lower than 5% of municipalities' average annual ordinary spending. It is found that the probability that flood damages exceed this vulnerability threshold is 12%. To provide a reliable estimate of that exceedance probability, the joint distribution of damages and spending is modelled by means of a copula approach. Based on the joint distribution, a Monte Carlo simulation is conducted to derive uncertainty ranges for the exceedance probability. To analyze the drivers of flood damages and the effect they have on municipalities' spending, two linear regression models are estimated. The results obtained suggest that damages increase significantly for municipalities located along the shores of the river Danube and decrease significantly for municipalities that experienced floods in the past, indicating successful adaptation. As for the relationship between flood damages and municipalities' spending, the regression results indicate that flood damages have a significant positive impact.
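
    A stripped-down sketch of the copula-plus-Monte-Carlo step described above, here with a Gaussian copula and lognormal marginals; the marginal parameters, the correlation and the sample size are invented, and only the 5% threshold rule follows the text.

    ```python
    # Gaussian-copula Monte Carlo estimate of P(damage > 5% of ordinary spending) (sketch).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    n = 200_000
    rho = 0.3                                          # assumed damage-spending dependence

    # Correlated uniforms from a bivariate normal (the Gaussian copula)
    z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
    u = stats.norm.cdf(z)

    # Illustrative marginals: lognormal damages (EUR), lognormal ordinary spending (EUR)
    damage = stats.lognorm.ppf(u[:, 0], s=1.5, scale=50_000)
    spending = stats.lognorm.ppf(u[:, 1], s=0.8, scale=2_000_000)

    exceed = damage > 0.05 * spending
    p_hat = exceed.mean()
    se = exceed.std(ddof=1) / np.sqrt(n)
    print(f"P(damage > 5% of spending) ~ {p_hat:.3f} +/- {2 * se:.3f}")
    ```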

  11. The non-Gaussian joint probability density function of slope and elevation for a nonlinear gravity wave field. [in ocean surface

    NASA Technical Reports Server (NTRS)

    Huang, N. E.; Long, S. R.; Bliven, L. F.; Tung, C.-C.

    1984-01-01

    On the basis of the mapping method developed by Huang et al. (1983), an analytic expression for the non-Gaussian joint probability density function of slope and elevation for nonlinear gravity waves is derived. Various conditional and marginal density functions are also obtained through the joint density function. The analytic results are compared with a series of carefully controlled laboratory observations, and good agreement is noted. Furthermore, the laboratory wind wave field observations indicate that the capillary or capillary-gravity waves may not be the dominant components in determining the total roughness of the wave field. Thus, the analytic results, though derived specifically for the gravity waves, may have more general applications.

  12. A maximum entropy thermodynamics of small systems.

    PubMed

    Dixit, Purushottam D

    2013-05-14

    We present a maximum entropy approach to analyze the state space of a small system in contact with a large bath, e.g., a solvated macromolecular system. For the solute, the fluctuations around the mean values of observables are not negligible and the probability distribution P(r) of the state space depends on the intricate details of the interaction of the solute with the solvent. Here, we employ a superstatistical approach: P(r) is expressed as a marginal distribution summed over the variation in β, the inverse temperature of the solute. The joint distribution P(β, r) is estimated by maximizing its entropy. We also calculate the first order system-size corrections to the canonical ensemble description of the state space. We test the development on a simple harmonic oscillator interacting with two baths with very different chemical identities, viz., (a) Lennard-Jones particles and (b) water molecules. In both cases, our method captures the state space of the oscillator sufficiently well. Future directions and connections with traditional statistical mechanics are discussed.

  13. Joint Stochastic Inversion of Pre-Stack 3D Seismic Data and Well Logs for High Resolution Hydrocarbon Reservoir Characterization

    NASA Astrophysics Data System (ADS)

    Torres-Verdin, C.

    2007-05-01

    This paper describes the successful implementation of a new 3D AVA stochastic inversion algorithm to quantitatively integrate pre-stack seismic amplitude data and well logs. The stochastic inversion algorithm is used to characterize flow units of a deepwater reservoir located in the central Gulf of Mexico. Conventional fluid/lithology sensitivity analysis indicates that the shale/sand interface represented by the top of the hydrocarbon-bearing turbidite deposits generates typical Class III AVA responses. On the other hand, layer-dependent Biot-Gassmann analysis shows significant sensitivity of the P-wave velocity and density to fluid substitution. Accordingly, AVA stochastic inversion, which combines the advantages of AVA analysis with those of geostatistical inversion, provided quantitative information about the lateral continuity of the turbidite reservoirs based on the interpretation of inverted acoustic properties (P-velocity, S-velocity, density), and lithotype (sand-shale) distributions. The quantitative use of rock/fluid information through AVA seismic amplitude data, coupled with the implementation of co-simulation via lithotype-dependent multidimensional joint probability distributions of acoustic/petrophysical properties, yields accurate 3D models of petrophysical properties such as porosity and permeability. Finally, by fully integrating pre-stack seismic amplitude data and well logs, the vertical resolution of inverted products is higher than that of deterministic inversion methods.

  14. Automatic lung tumor segmentation on PET/CT images using fuzzy Markov random field model.

    PubMed

    Guo, Yu; Feng, Yuanming; Sun, Jian; Zhang, Ning; Lin, Wang; Sa, Yu; Wang, Ping

    2014-01-01

    The combination of positron emission tomography (PET) and CT images provides complementary functional and anatomical information about human tissues and has been used for better tumor volume definition of lung cancer. This paper proposes a robust method for automatic lung tumor segmentation on PET/CT images. The new method is based on a fuzzy Markov random field (MRF) model. The combination of PET and CT image information is achieved by using a proper joint posterior probability distribution of observed features in the fuzzy MRF model, which performs better than the commonly used Gaussian joint distribution. In this study, the PET and CT simulation images of 7 non-small cell lung cancer (NSCLC) patients were used to evaluate the proposed method. Tumor segmentations were performed on the fused images with the proposed method and manually by an experienced radiation oncologist. Segmentation results obtained with the two methods were similar and Dice's similarity coefficient (DSC) was 0.85 ± 0.013. It has been shown that effective and automatic segmentations can be achieved with this method for lung tumors located near other organs with similar intensities in PET and CT images, such as when the tumors extend into the chest wall or mediastinum.

  15. Analysis of composite laminates with multiple fasteners by boundary collocation technique

    NASA Astrophysics Data System (ADS)

    Sergeev, Boris Anatolievich

    Mechanical fasteners remain the primary means of load transfer between structural components made of composite laminates. As, in pursuit of increasing efficiency of the structure, the operational load continues to grow, the load carried by each fastener increases accordingly. This accelerates the initiation of fatigue-related cracks near the fastener holes and increases the probability of failure. Therefore, the assessment of the stresses around the fastener holes and the stress intensity factors associated with edge cracks becomes critical for damage-tolerant design. Because of the presence of unknown contact stresses and the contact region between the fastener and the laminate, the analysis of a pin-loaded hole becomes considerably more complex than that of a traction-free hole. The accurate prediction of the contact stress distribution along the hole boundary is critical for determining the stress intensity factors and is essential for reliable strength evaluation and failure prediction. This study concerns the development of an analytical methodology, based on the boundary collocation technique, to determine the contact stresses and stress intensity factors required for strength and life prediction of bolted joints with many fasteners. It provides an analytical capability for determining the non-linear contact stresses in mechanically fastened composite laminates while capturing the effects of finite geometry, presence of edge cracks, interaction among fasteners, material anisotropy, fastener flexibility, fastener-hole clearance, friction between the pin and the laminate, and by-pass loading. Also, the proposed approach permits the determination of the fastener load distribution, which significantly influences the failure load of a multi-fastener joint. The well-known influence of the fastener tightening torque (clamping force) on the load distribution among the fasteners in a multi-fastener joint is taken into account by means of a bilinear representation of the elastic fastener deflection. Finally, two different failure criteria, the maximum strains averaged over characteristic distances and the Tsai-Wu criterion, were used to predict the failure load and failure mode in two composite-aluminum joints. The comparison of the present predictions with the published experimental results shows good agreement.

  16. Fishnet statistics for probabilistic strength and scaling of nacreous imbricated lamellar materials

    NASA Astrophysics Data System (ADS)

    Luo, Wen; Bažant, Zdeněk P.

    2017-12-01

    Similar to nacre (or brick masonry), imbricated (or staggered) lamellar structures are widely found in nature and man-made materials, and are of interest for biomimetics. They can achieve high defect insensitivity and fracture toughness, as demonstrated in previous studies. But the probability distribution of strength with a realistic far-left tail is apparently unknown. Here, strictly for statistical purposes, the microstructure of nacre is approximated by a diagonally pulled fishnet with quasibrittle links representing the shear bonds between parallel lamellae (or platelets). The probability distribution of fishnet strength is calculated as a sum of a rapidly convergent series of the failure probabilities after the rupture of one, two, three, etc., links. Each of them represents a combination of joint probabilities and of additive probabilities of disjoint events, modified near the zone of failed links by the stress redistributions caused by previously failed links. Based on previous nano- and multi-scale studies at Northwestern, the strength distribution of each link, characterizing the interlamellar shear bond, is assumed to be a Gauss-Weibull graft, but with a deeper Weibull tail than in Type 1 failure of non-imbricated quasibrittle materials. The autocorrelation length is considered equal to the link length. The size of the zone of failed links at maximum load increases with the coefficient of variation (CoV) of link strength, and also with fishnet size. With an increasing width-to-length aspect ratio, a rectangular fishnet gradually transitions from the weakest-link chain to the fiber bundle, which are the limit cases. The fishnet strength at failure probability 10^-6 grows with the width-to-length ratio. For a square fishnet boundary, the strength at 10^-6 failure probability is about 11% higher, while at fixed load the failure probability is about 25 times lower than for the non-imbricated case. This is a major safety advantage of the fishnet architecture over particulate or fiber reinforced materials. There is also a strong size effect, partly similar to that of Type 1, although the curves of log-strength versus log-size could cross each other. The predicted behavior is verified by about a million Monte Carlo simulations for each of many fishnet geometries, sizes and CoVs of link strength. In addition to the weakest-link chain and the fiber bundle, the fishnet becomes the third analytically tractable statistical model of structural strength, and has the former two as limit cases.

  17. Discussion on joint operation of wind farm and pumped-storage hydroplant

    NASA Astrophysics Data System (ADS)

    Li, Caifang; Wu, Yichun; Liang, Hao; Li, Miao

    2017-12-01

    Due to the random fluctuations in wind power, large-scale grid integration will have a negative impact on grid operation and on consumers. Joint operation with a pumped-storage hydroplant, which has good peak-shaving performance, can effectively reduce the negative impact on the safe and economic operation of the power grid and improve the utilization of wind power. In addition, joint operation can optimize the use of green power and improve the overall economic benefits. A rational profit distribution for the joint operation is the premise of sustainable and stable cooperation. This paper focuses on the profit distribution of joint operation and applies an improved Shapley value method, which takes the investments and contributions of each participant in the cooperation into account, to determine the profit distribution. The resulting distribution scheme can provide an effective reference for the actual joint operation of a wind farm and a pumped-storage hydroplant.
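
    For the plain (unweighted) Shapley value that the improved method builds on, a two-player sketch is enough to show the mechanics; the characteristic-function values below are invented, and the paper's investment-based weighting is not reproduced.

    ```python
    # Shapley-value profit split for a two-player wind / pumped-storage coalition (sketch).
    from itertools import permutations

    v = {frozenset(): 0.0,
         frozenset({"wind"}): 100.0,           # profit of the wind farm operating alone
         frozenset({"hydro"}): 40.0,           # profit of the pumped-storage plant alone
         frozenset({"wind", "hydro"}): 180.0}  # joint-operation profit (synergy of 40)

    players = ["wind", "hydro"]
    shapley = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:                       # average marginal contribution over orderings
        coalition = set()
        for p in order:
            shapley[p] += v[frozenset(coalition | {p})] - v[frozenset(coalition)]
            coalition.add(p)
    for p in players:
        shapley[p] /= len(orders)

    print(shapley)   # {'wind': 120.0, 'hydro': 60.0} -> the synergy is split equally
    ```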

  18. Water shortage risk assessment considering large-scale regional transfers: a copula-based uncertainty case study in Lunan, China.

    PubMed

    Gao, Xueping; Liu, Yinzhu; Sun, Bowen

    2018-06-05

    The risk of water shortage caused by uncertainties, such as frequent drought, varied precipitation, multiple water resources, and different water demands, brings new challenges to water transfer projects. Uncertainties exist for both the transferred water and the local surface water; therefore, the relationship between them should be thoroughly studied to prevent water shortage. For more effective water management, an uncertainty-based water shortage risk assessment model (UWSRAM) is developed to study the combined effect of multiple water resources and analyze the shortage degree under uncertainty. The UWSRAM combines copula-based Monte Carlo stochastic simulation with a chance-constrained programming-stochastic multiobjective optimization model, using the Lunan water-receiving area in China as an example. Statistical copula functions are employed to estimate the joint probability of available transferred water and local surface water and to sample from the multivariate probability distribution, which are used as inputs for the optimization model. The approach reveals the distribution of water shortage, emphasizes the importance of improving and updating the management of transferred water and local surface water, and examines their combined influence on water shortage risk assessment. The possible available water and shortages can be calculated by applying the UWSRAM, together with the corresponding allocation measures under different water availability levels and violation probabilities. The UWSRAM is valuable for understanding the overall multi-source water availability and the degree of water shortage, adapting to the uncertainty surrounding water resources, establishing effective water resource planning policies for managers, and achieving sustainable development.
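
    One standard way to draw correlated (transferred water, local surface water) scenarios of the kind used here is conditional inversion of a bivariate Archimedean copula; the sketch below does this for a Clayton copula with gamma marginals. The Clayton parameter and the marginal parameters are placeholders, not fitted values.

    ```python
    # Conditional-inversion sampling from a bivariate Clayton copula (sketch).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    theta = 2.0                          # Clayton dependence parameter (> 0)
    n = 100_000

    u1 = rng.uniform(size=n)
    v = rng.uniform(size=n)
    # Invert the conditional copula C(u2 | u1) = v  (closed form for theta > 0)
    u2 = (u1 ** (-theta) * (v ** (-theta / (1.0 + theta)) - 1.0) + 1.0) ** (-1.0 / theta)

    # Map the dependent uniforms through illustrative marginal distributions
    transfer = stats.gamma.ppf(u1, a=4.0, scale=25.0)     # transferred water volume
    local = stats.gamma.ppf(u2, a=3.0, scale=30.0)        # local surface water volume

    total = transfer + local
    print("P(total supply < 120) ~", np.mean(total < 120.0))
    ```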

  19. First-passage problems: A probabilistic dynamic analysis for degraded structures

    NASA Technical Reports Server (NTRS)

    Shiao, Michael C.; Chamis, Christos C.

    1990-01-01

    Structures subjected to random excitations with uncertain system parameters degraded by surrounding environments (a random time history) are studied. Methods are developed to determine the statistics of dynamic responses, such as the time-varying mean, the standard deviation, the autocorrelation functions, and the joint probability density function of any response and its derivative. Moreover, the first-passage problems with deterministic and stationary/evolutionary random barriers are evaluated. The time-varying (joint) mean crossing rate and the probability density function of the first-passage time for various random barriers are derived.

  20. Characterizing the topology of probabilistic biological networks.

    PubMed

    Todor, Andrei; Dobra, Alin; Kahveci, Tamer

    2013-01-01

    Biological interactions are often uncertain events that may or may not take place with some probability. This uncertainty leads to a massive number of alternative interaction topologies for each such network. The existing studies analyze the degree distribution of biological networks by assuming that all the given interactions take place under all circumstances. This strong and often incorrect assumption can lead to misleading results. In this paper, we address this problem and develop a sound mathematical basis to characterize networks in the presence of uncertain interactions. Using our mathematical representation, we develop a method that can accurately describe the degree distribution of such networks. We also take one more step and extend our method to accurately compute the joint-degree distributions of node pairs connected by edges. The number of possible network topologies grows exponentially with the number of uncertain interactions. However, the mathematical model we develop allows us to compute these degree distributions in polynomial time in the number of interactions. Our method works quickly even for entire protein-protein interaction (PPI) networks. It also helps us find an adequate mathematical model using maximum likelihood estimation (MLE). We perform a comparative study of node-degree and joint-degree distributions in two types of biological networks: the classical deterministic networks and the more flexible probabilistic networks. Our results confirm that power-law and log-normal models best describe degree distributions for both probabilistic and deterministic networks. Moreover, the inverse correlation of degrees of neighboring nodes shows that, in probabilistic networks, nodes with a large number of interactions prefer to interact with those with a small number of interactions more frequently than expected. We also show that probabilistic networks are more robust for node-degree distribution computation than the deterministic ones. All the data sets used, the software implemented and the alignments found in this paper are available at http://bioinformatics.cise.ufl.edu/projects/probNet/.
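
    The polynomial-time computation rests on the fact that, for independent uncertain edges, a node's degree follows a Poisson-binomial distribution, which can be built up by convolving one edge probability at a time instead of enumerating 2^m topologies; a sketch with toy edge probabilities:

    ```python
    # Exact node-degree distribution under independent uncertain edges (sketch).
    import numpy as np

    def degree_distribution(edge_probs):
        """edge_probs: probabilities of the edges incident to one node.
        Returns P(degree = k) for k = 0..len(edge_probs)."""
        dist = np.array([1.0])                       # start with degree 0 w.p. 1
        for p in edge_probs:
            new = np.zeros(len(dist) + 1)
            new[:-1] += dist * (1.0 - p)             # edge absent: degree unchanged
            new[1:] += dist * p                      # edge present: degree + 1
            dist = new
        return dist

    probs = [0.9, 0.5, 0.5, 0.1, 0.05]               # uncertain interactions of one protein
    dist = degree_distribution(probs)
    print(dist, "expected degree:", np.dot(np.arange(len(dist)), dist))
    ```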

  1. Cellular Automata Generalized To An Inferential System

    NASA Astrophysics Data System (ADS)

    Blower, David J.

    2007-11-01

    Stephen Wolfram popularized elementary one-dimensional cellular automata in his book, A New Kind of Science. Among many remarkable things, he proved that one of these cellular automata was a Universal Turing Machine. Such cellular automata can be interpreted in a different way by viewing them within the context of the formal manipulation rules from probability theory. Bayes's Theorem is the most famous of such formal rules. As a prelude, we recapitulate Jaynes's presentation of how probability theory generalizes classical logic using modus ponens as the canonical example. We emphasize the important conceptual standing of Boolean Algebra for the formal rules of probability manipulation and give an alternative demonstration augmenting and complementing Jaynes's derivation. We show the complementary roles played in arguments of this kind by Bayes's Theorem and joint probability tables. A good explanation for all of this is afforded by the expansion of any particular logic function via the disjunctive normal form (DNF). The DNF expansion is a useful heuristic emphasized in this exposition because such expansions point out where relevant 0s should be placed in the joint probability tables for logic functions involving any number of variables. It then becomes a straightforward exercise to rely on Boolean Algebra, Bayes's Theorem, and joint probability tables in extrapolating to Wolfram's cellular automata. Cellular automata are seen as purely deductive systems, just like classical logic, which probability theory is then able to generalize. Thus, any uncertainties which we might like to introduce into the discussion about cellular automata are handled with ease via the familiar inferential path. Most importantly, the difficult problem of predicting what cellular automata will do in the far future is treated like any inferential prediction problem.
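
    A small sketch of this viewpoint: an elementary CA update rule written as a degenerate (0/1) conditional probability table, so that uncertainty about the current cells propagates to the next cell by ordinary marginalisation. Rule 110 and the input probabilities are illustrative choices.

    ```python
    # An elementary CA rule as a conditional probability table, with probabilistic inference (sketch).
    import itertools

    rule = 110
    # P(next centre cell = 1 | left, centre, right): deterministic 0/1 entries
    cpt = {nbhd: (rule >> int("".join(map(str, nbhd)), 2)) & 1
           for nbhd in itertools.product([1, 0], repeat=3)}

    def next_cell_prob(p_left, p_centre, p_right):
        """Marginal probability that the next cell is 1, assuming independent input marginals."""
        total = 0.0
        for nbhd, out in cpt.items():
            w = 1.0
            for p, state in zip((p_left, p_centre, p_right), nbhd):
                w *= p if state == 1 else 1.0 - p
            total += w * out
        return total

    print(cpt)                                  # the usual Rule 110 truth table
    print(next_cell_prob(0.2, 0.9, 0.5))        # inference under uncertain inputs
    ```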

  2. Extreme-value statistics of fractional Brownian motion bridges.

    PubMed

    Delorme, Mathieu; Wiese, Kay Jörg

    2016-11-01

    Fractional Brownian motion is a self-affine, non-Markovian, and translationally invariant generalization of Brownian motion, depending on the Hurst exponent H. Here we investigate fractional Brownian motion where both the starting and the end point are zero, commonly referred to as bridge processes. Observables are the time t_{+} the process is positive, the maximum m it achieves, and the time t_{max} when this maximum is taken. Using a perturbative expansion around Brownian motion (H=1/2), we give the first-order result for the probability distribution of these three variables and the joint distribution of m and t_{max}. Our analytical results are tested and found to be in excellent agreement with extensive numerical simulations for both H>1/2 and H<1/2. This precision is achieved by sampling processes with a free end point and then converting each realization to a bridge process, generalizing what is usually done for Brownian motion.
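
    A discretised sketch of that sampling strategy: free-end-point fBm paths are drawn from the exact covariance, then conditioned on X(1) = 0 (for a Gaussian path this amounts to subtracting C(t,1)/C(1,1) times the end value), after which t_+, the maximum m and t_max are read off each bridge. The grid size, Hurst exponent and sample count are arbitrary.

    ```python
    # Fractional Brownian bridge samples via Cholesky sampling and Gaussian conditioning (sketch).
    import numpy as np

    def fbm_bridge_samples(H, n_steps=200, n_samples=5000, seed=0):
        rng = np.random.default_rng(seed)
        t = np.linspace(0, 1, n_steps + 1)[1:]                  # exclude t = 0
        # fBm covariance C(s, t) = (s^2H + t^2H - |t - s|^2H) / 2
        C = 0.5 * (t[:, None] ** (2 * H) + t[None, :] ** (2 * H)
                   - np.abs(t[:, None] - t[None, :]) ** (2 * H))
        L = np.linalg.cholesky(C + 1e-12 * np.eye(len(t)))
        X = rng.standard_normal((n_samples, len(t))) @ L.T       # free fBm paths
        # Condition on X(1) = 0: subtract C(t,1)/C(1,1) * X(1) (exact for Gaussian paths)
        X_bridge = X - np.outer(X[:, -1], C[:, -1] / C[-1, -1])
        return t, X_bridge

    t, B = fbm_bridge_samples(H=0.75)
    t_plus = (B > 0).mean(axis=1)                 # fraction of time the bridge is positive
    m = B.max(axis=1)                             # maximum of each bridge
    t_max = t[B.argmax(axis=1)]                   # time at which the maximum is attained
    print(t_plus.mean(), m.mean(), t_max.mean())
    ```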

  3. Microwave inversion of leaf area and inclination angle distributions from backscattered data

    NASA Technical Reports Server (NTRS)

    Lang, R. H.; Saleh, H. A.

    1985-01-01

    The backscattering coefficient from a slab of thin randomly oriented dielectric disks over a flat lossy ground is used to reconstruct the inclination angle and area distributions of the disks. The disks are employed to model a leafy agricultural crop, such as soybeans, in the L-band microwave region of the spectrum. The distorted Born approximation, along with a thin disk approximation, is used to obtain a relationship between the horizontal-like polarized backscattering coefficient and the joint probability density of disk inclination angle and disk radius. Assuming large skin depth reduces the relationship to a linear Fredholm integral equation of the first kind. Due to the ill-posed nature of this equation, a Phillips-Twomey regularization method with a second difference smoothing condition is used to find the inversion. Results are obtained in the presence of 1 and 10 percent noise for both leaf inclination angle and leaf radius densities.
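
    A generic sketch of the Phillips-Twomey step (Tikhonov regularisation with a second-difference smoothing matrix) applied to a discretised Fredholm integral equation of the first kind; the kernel, noise level and regularisation parameter below are placeholders, not the scattering kernel of the study.

    ```python
    # Phillips-Twomey regularised inversion of a discretised first-kind Fredholm equation (sketch).
    import numpy as np

    n = 80
    x = np.linspace(0, 1, n)
    f_true = np.exp(-0.5 * ((x - 0.4) / 0.1) ** 2)            # "true" density to recover

    # Discretised smoothing kernel K and noisy data g = K f + noise
    K = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 0.05 ** 2)) * (x[1] - x[0])
    rng = np.random.default_rng(6)
    g = K @ f_true + 0.01 * rng.standard_normal(n)

    # Second-difference operator L (the smoothing constraint of Phillips-Twomey)
    L = np.diff(np.eye(n), n=2, axis=0)

    lam = 1e-3                                                # regularisation parameter
    f_hat = np.linalg.solve(K.T @ K + lam * L.T @ L, K.T @ g)
    print("relative error:", np.linalg.norm(f_hat - f_true) / np.linalg.norm(f_true))
    ```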

  4. Eigenvalue statistics for the sum of two complex Wishart matrices

    NASA Astrophysics Data System (ADS)

    Kumar, Santosh

    2014-09-01

    The sum of independent Wishart matrices, taken from distributions with unequal covariance matrices, plays a crucial role in multivariate statistics, and has applications in the fields of quantitative finance and telecommunication. However, analytical results concerning the corresponding eigenvalue statistics have remained unavailable, even for the sum of two Wishart matrices. This can be attributed to the complicated and rotationally noninvariant nature of the matrix distribution that makes extracting the information about eigenvalues a nontrivial task. Using a generalization of the Harish-Chandra-Itzykson-Zuber integral, we find exact solution to this problem for the complex Wishart case when one of the covariance matrices is proportional to the identity matrix, while the other is arbitrary. We derive exact and compact expressions for the joint probability density and marginal density of eigenvalues. The analytical results are compared with numerical simulations and we find perfect agreement.
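
    A quick Monte Carlo cross-check of the setting treated analytically above: eigenvalues of the sum of two complex Wishart matrices, one with identity covariance and one with an arbitrary covariance. Dimensions, sample counts and the covariance values are illustrative.

    ```python
    # Monte Carlo eigenvalue statistics for the sum of two complex Wishart matrices (sketch).
    import numpy as np

    rng = np.random.default_rng(7)
    d, n1, n2, trials = 4, 8, 10, 20_000
    Sigma2 = np.diag([0.5, 1.0, 2.0, 4.0])                 # arbitrary covariance of the 2nd matrix
    A2 = np.linalg.cholesky(Sigma2)

    eigs = []
    for _ in range(trials):
        # Complex Gaussian data matrices (columns are samples, unit-variance entries)
        G1 = (rng.standard_normal((d, n1)) + 1j * rng.standard_normal((d, n1))) / np.sqrt(2)
        G2 = A2 @ (rng.standard_normal((d, n2)) + 1j * rng.standard_normal((d, n2))) / np.sqrt(2)
        W = G1 @ G1.conj().T + G2 @ G2.conj().T            # sum of the two Wishart matrices
        eigs.append(np.linalg.eigvalsh(W))

    eigs = np.concatenate(eigs)
    hist, edges = np.histogram(eigs, bins=60, density=True)   # empirical eigenvalue density
    print("mean eigenvalue:", eigs.mean())
    ```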

  5. Improved statistical fluctuation analysis for measurement-device-independent quantum key distribution with four-intensity decoy-state method.

    PubMed

    Mao, Chen-Chen; Zhou, Xing-Yu; Zhu, Jian-Rong; Zhang, Chun-Hui; Zhang, Chun-Mei; Wang, Qin

    2018-05-14

    Recently, Zhang et al. [Phys. Rev. A 95, 012333 (2017)] developed a new approach to estimating the failure probability of the decoy-state BB84 QKD system when the finite-size key effect is taken into account; it offers security comparable to the Chernoff bound while yielding an improved key rate and transmission distance. Building on Zhang et al.'s work, we extend this approach to measurement-device-independent quantum key distribution (MDI-QKD) and, for the first time, implement it in the four-intensity decoy-state MDI-QKD system. Moreover, by utilizing joint constraints and collective error-estimation techniques, we markedly improve the performance of practical MDI-QKD systems compared with either three- or four-intensity decoy-state MDI-QKD using Chernoff bound analysis, and achieve a much higher level of security than analyses applying the Gaussian approximation.

  6. The perception of probability.

    PubMed

    Gallistel, C R; Krishan, Monika; Liu, Ye; Miller, Reilly; Latham, Peter E

    2014-01-01

    We present a computational model to explain the results from experiments in which subjects estimate the hidden probability parameter of a stepwise nonstationary Bernoulli process outcome by outcome. The model captures the following results qualitatively and quantitatively, with only 2 free parameters: (a) Subjects do not update their estimate after each outcome; they step from one estimate to another at irregular intervals. (b) The joint distribution of step widths and heights cannot be explained on the assumption that a threshold amount of change must be exceeded in order for them to indicate a change in their perception. (c) The mapping of observed probability to the median perceived probability is the identity function over the full range of probabilities. (d) Precision (how close estimates are to the best possible estimate) is good and constant over the full range. (e) Subjects quickly detect substantial changes in the hidden probability parameter. (f) The perceived probability sometimes changes dramatically from one observation to the next. (g) Subjects sometimes have second thoughts about a previous change perception, after observing further outcomes. (h) The frequency with which they perceive changes moves in the direction of the true frequency over sessions. (Explaining this finding requires 2 additional parametric assumptions.) The model treats the perception of the current probability as a by-product of the construction of a compact encoding of the experienced sequence in terms of its change points. It illustrates the why and the how of intermittent Bayesian belief updating and retrospective revision in simple perception. It suggests a reinterpretation of findings in the recent literature on the neurobiology of decision making.

  7. Effect of stress concentration on the fatigue strength of A7N01S-T5 welded joints

    NASA Astrophysics Data System (ADS)

    Zhang, Mingyue; Gou, Guoqing; Hang, Zongqiu; Chen, Hui

    2017-07-01

    Stress concentration is a key factor that affects the fatigue strength of welded joints. In this study, the fatigue strengths of butt joints with and without the weld reinforcement were tested to quantify the effect of stress concentration. The fatigue strength of the welded joints was measured with a high-frequency fatigue machine. The P-S-N curves were drawn under different confidence levels and failure probabilities. The results show that butt joints with the weld reinforcement have much lower fatigue strength than joints without the weld reinforcement. Therefore, stress concentration introduced by the weld reinforcement should be controlled.

  8. Bayesian source tracking via focalization and marginalization in an uncertain Mediterranean Sea environment.

    PubMed

    Dosso, Stan E; Wilmut, Michael J; Nielsen, Peter L

    2010-07-01

    This paper applies Bayesian source tracking in an uncertain environment to Mediterranean Sea data, and investigates the resulting tracks and track uncertainties as a function of data information content (number of data time-segments, number of frequencies, and signal-to-noise ratio) and of prior information (environmental uncertainties and source-velocity constraints). To track low-level sources, acoustic data recorded for multiple time segments (corresponding to multiple source positions along the track) are inverted simultaneously. Environmental uncertainty is addressed by including unknown water-column and seabed properties as nuisance parameters in an augmented inversion. Two approaches are considered: Focalization-tracking maximizes the posterior probability density (PPD) over the unknown source and environmental parameters. Marginalization-tracking integrates the PPD over environmental parameters to obtain a sequence of joint marginal probability distributions over source coordinates, from which the most-probable track and track uncertainties can be extracted. Both approaches apply track constraints on the maximum allowable vertical and radial source velocity. The two approaches are applied for towed-source acoustic data recorded at a vertical line array at a shallow-water test site in the Mediterranean Sea where previous geoacoustic studies have been carried out.

  9. Spatial capture-recapture models for jointly estimating population density and landscape connectivity

    USGS Publications Warehouse

    Royle, J. Andrew; Chandler, Richard B.; Gazenski, Kimberly D.; Graves, Tabitha A.

    2013-01-01

    Population size and landscape connectivity are key determinants of population viability, yet no methods exist for simultaneously estimating density and connectivity parameters. Recently developed spatial capture–recapture (SCR) models provide a framework for estimating density of animal populations but thus far have not been used to study connectivity. Rather, all applications of SCR models have used encounter probability models based on the Euclidean distance between traps and animal activity centers, which implies that home ranges are stationary, symmetric, and unaffected by landscape structure. In this paper we devise encounter probability models based on “ecological distance,” i.e., the least-cost path between traps and activity centers, which is a function of both Euclidean distance and animal movement behavior in resistant landscapes. We integrate least-cost path models into a likelihood-based estimation scheme for spatial capture–recapture models in order to estimate population density and parameters of the least-cost encounter probability model. Therefore, it is possible to make explicit inferences about animal density, distribution, and landscape connectivity as it relates to animal movement from standard capture–recapture data. Furthermore, a simulation study demonstrated that ignoring landscape connectivity can result in negatively biased density estimators under the naive SCR model.

  10. Spatial capture--recapture models for jointly estimating population density and landscape connectivity.

    PubMed

    Royle, J Andrew; Chandler, Richard B; Gazenski, Kimberly D; Graves, Tabitha A

    2013-02-01

    Population size and landscape connectivity are key determinants of population viability, yet no methods exist for simultaneously estimating density and connectivity parameters. Recently developed spatial capture--recapture (SCR) models provide a framework for estimating density of animal populations but thus far have not been used to study connectivity. Rather, all applications of SCR models have used encounter probability models based on the Euclidean distance between traps and animal activity centers, which implies that home ranges are stationary, symmetric, and unaffected by landscape structure. In this paper we devise encounter probability models based on "ecological distance," i.e., the least-cost path between traps and activity centers, which is a function of both Euclidean distance and animal movement behavior in resistant landscapes. We integrate least-cost path models into a likelihood-based estimation scheme for spatial capture-recapture models in order to estimate population density and parameters of the least-cost encounter probability model. Therefore, it is possible to make explicit inferences about animal density, distribution, and landscape connectivity as it relates to animal movement from standard capture-recapture data. Furthermore, a simulation study demonstrated that ignoring landscape connectivity can result in negatively biased density estimators under the naive SCR model.

  11. Modeling workplace contact networks: The effects of organizational structure, architecture, and reporting errors on epidemic predictions.

    PubMed

    Potter, Gail E; Smieszek, Timo; Sailer, Kerstin

    2015-09-01

    Face-to-face social contacts are potentially important transmission routes for acute respiratory infections, and understanding the contact network can improve our ability to predict, contain, and control epidemics. Although workplaces are important settings for infectious disease transmission, few studies have collected workplace contact data and estimated workplace contact networks. We use contact diaries, architectural distance measures, and institutional structures to estimate social contact networks within a Swiss research institute. Some contact reports were inconsistent, indicating reporting errors. We adjust for this with a latent variable model, jointly estimating the true (unobserved) network of contacts and duration-specific reporting probabilities. We find that contact probability decreases with distance, and that research group membership, role, and shared projects are strongly predictive of contact patterns. Estimated reporting probabilities were low only for 0-5 min contacts. Adjusting for reporting error changed the estimate of the duration distribution, but did not change the estimates of covariate effects and had little effect on epidemic predictions. Our epidemic simulation study indicates that inclusion of network structure based on architectural and organizational structure data can improve the accuracy of epidemic forecasting models.

  12. Modeling workplace contact networks: The effects of organizational structure, architecture, and reporting errors on epidemic predictions

    PubMed Central

    Potter, Gail E.; Smieszek, Timo; Sailer, Kerstin

    2015-01-01

    Face-to-face social contacts are potentially important transmission routes for acute respiratory infections, and understanding the contact network can improve our ability to predict, contain, and control epidemics. Although workplaces are important settings for infectious disease transmission, few studies have collected workplace contact data and estimated workplace contact networks. We use contact diaries, architectural distance measures, and institutional structures to estimate social contact networks within a Swiss research institute. Some contact reports were inconsistent, indicating reporting errors. We adjust for this with a latent variable model, jointly estimating the true (unobserved) network of contacts and duration-specific reporting probabilities. We find that contact probability decreases with distance, and that research group membership, role, and shared projects are strongly predictive of contact patterns. Estimated reporting probabilities were low only for 0–5 min contacts. Adjusting for reporting error changed the estimate of the duration distribution, but did not change the estimates of covariate effects and had little effect on epidemic predictions. Our epidemic simulation study indicates that inclusion of network structure based on architectural and organizational structure data can improve the accuracy of epidemic forecasting models. PMID:26634122

  13. Back to Normal! Gaussianizing posterior distributions for cosmological probes

    NASA Astrophysics Data System (ADS)

    Schuhmann, Robert L.; Joachimi, Benjamin; Peiris, Hiranya V.

    2014-05-01

    We present a method to map multivariate non-Gaussian posterior probability densities into Gaussian ones via nonlinear Box-Cox transformations, and generalizations thereof. This is analogous to the search for normal parameters in the CMB, but can in principle be applied to any probability density that is continuous and unimodal. The search for the optimally Gaussianizing transformation amongst the Box-Cox family is performed via a maximum likelihood formalism. We can judge the quality of the found transformation a posteriori: qualitatively via statistical tests of Gaussianity, and more illustratively by how well it reproduces the credible regions. The method permits an analytical reconstruction of the posterior from a sample, e.g. a Markov chain, and simplifies the subsequent joint analysis with other experiments. Furthermore, it permits the characterization of a non-Gaussian posterior in a compact and efficient way. The expression for the non-Gaussian posterior can be employed to find analytic formulae for the Bayesian evidence, and consequently be used for model comparison.
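
    The core operation described above, a maximum-likelihood Box-Cox transformation, is available in SciPy for one-dimensional samples. The sketch below is a minimal illustration (not the authors' implementation) applied to a skewed stand-in "posterior" sample; since Box-Cox requires strictly positive data, a shift is applied first, and the quality of the Gaussianization is judged a posteriori via skewness and a normality test.

    ```python
    # Sketch: Gaussianize a 1D posterior sample with a maximum-likelihood Box-Cox transform.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    sample = rng.gamma(shape=2.0, scale=1.5, size=50000)   # skewed stand-in "posterior"

    shift = 1e-6 - min(0.0, sample.min())                  # ensure strict positivity
    transformed, lam = stats.boxcox(sample + shift)        # ML estimate of lambda

    print("lambda =", lam)
    print("skewness before/after:", stats.skew(sample), stats.skew(transformed))
    print("normaltest p-value:", stats.normaltest(transformed[:5000]).pvalue)
    ```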

  14. Characterizing Topology of Probabilistic Biological Networks.

    PubMed

    Todor, Andrei; Dobra, Alin; Kahveci, Tamer

    2013-09-06

    Biological interactions are often uncertain events that may or may not take place with some probability. Existing studies analyze the degree distribution of biological networks by assuming that all the given interactions take place under all circumstances. This strong and often incorrect assumption can lead to misleading results. Here, we address this problem and develop a sound mathematical basis to characterize networks in the presence of uncertain interactions. We develop a method that accurately describes the degree distribution of such networks. We also extend our method to accurately compute the joint degree distributions of node pairs connected by edges. The number of possible network topologies grows exponentially with the number of uncertain interactions. However, the mathematical model we develop allows us to compute these degree distributions in polynomial time in the number of interactions. It also helps us find an adequate mathematical model using maximum likelihood estimation. Our results demonstrate that power law and log-normal models best describe degree distributions for probabilistic networks. The inverse correlation of degrees of neighboring nodes shows that, in probabilistic networks, nodes with a large number of interactions prefer to interact with those with a small number of interactions more frequently than expected.

  15. Probability of spacesuit-induced fingernail trauma is associated with hand circumference.

    PubMed

    Opperman, Roedolph A; Waldie, James M A; Natapoff, Alan; Newman, Dava J; Jones, Jeffrey A

    2010-10-01

    A significant number of astronauts sustain hand injuries during extravehicular activity training and operations. These hand injuries have been known to cause fingernail delamination (onycholysis) that requires medical intervention. This study investigated correlations between the anthropometrics of the hand and susceptibility to injury. The analysis explored the hypothesis that crewmembers with a high finger-to-hand size ratio are more likely to experience injuries. A database of 232 crewmembers' injury records and anthropometrics was sourced from NASA Johnson Space Center. No significant effect of finger-to-hand size was found on the probability of injury, but circumference and width of the metacarpophalangeal (MCP) joint were found to be significantly associated with injuries by the Kruskal-Wallis test. A multivariate logistic regression showed that hand circumference is the dominant effect on the likelihood of onycholysis. Male crewmembers with a hand circumference > 22.86 cm (9") have a 19.6% probability of finger injury, but those with hand circumferences ≤ 22.86 cm (9") only have a 5.6% chance of injury. Findings were similar for female crewmembers. This increased probability may be due to constriction at large MCP joints by the current NASA Phase VI glove. Constriction may lead to occlusion of vascular flow to the fingers that may increase the chances of onycholysis. Injury rates are lower on gloves such as the superseded series 4000 and the Russian Orlan that provide more volume for the MCP joint. This suggests that we can reduce onycholysis by modifying the design of the current gloves at the MCP joint.

  16. Transport of passive scalars in a turbulent channel flow

    NASA Technical Reports Server (NTRS)

    Kim, John; Moin, Parviz

    1987-01-01

    A direct numerical simulation of a turbulent channel flow with three passive scalars at different molecular Prandtl numbers is performed. Computed statistics including the turbulent Prandtl numbers are compared with existing experimental data. The computed fields are also examined to investigate the spatial structure of the scalar fields. The scalar fields are highly correlated with the streamwise velocity; the correlation coefficient between the temperature and the streamwise velocity is as high as 0.95 in the wall region. The joint probability distributions between the temperature and velocity fluctuations are also examined; they suggest that it might be possible to model the scalar fluxes in the wall region in a manner similar to the Reynolds stresses.

  17. Mathematical and physical meaning of the Bell inequalities

    NASA Astrophysics Data System (ADS)

    Santos, Emilio

    2016-09-01

    It is shown that the Bell inequalities are closely related to the triangle inequalities involving distance functions amongst pairs of random variables with values {0,1}. A hidden variables model may be defined as a mapping between a set of quantum projection operators and a set of random variables. The model is noncontextual if there is a joint probability distribution. The Bell inequalities are necessary conditions for its existence. The inequalities are most relevant when measurements are performed at space-like separation, thus showing a conflict between quantum mechanics and local realism (Bell's theorem). The relations of the Bell inequalities with contextuality, the Kochen-Specker theorem, and quantum entanglement are briefly discussed.

  18. Land-Surface Subsidence and Open Bedrock Fractures in the Tully Valley, Onondaga County, New York

    USGS Publications Warehouse

    Hackett, William R.; Gleason, Gayle C.; Kappel, William M.

    2009-01-01

    Open bedrock fractures were mapped in and near two brine field areas in Tully Valley, New York. More than 400 open fractures and closed joints were mapped for dimension, orientation, and distribution along the east and west valley walls adjacent to two former brine fields. The bedrock fractures are as much as 2 feet wide and over 50 feet deep, while linear depressions in the soil, which are 3 to 10 feet wide and 3 to 6 feet deep, indicate the presence of open bedrock fractures below the soil. The fractures are probably the result of solution mining of halite deposits about 1,200 feet below the land surface.

  19. Minimal Distance to Approximating Noncontextual System as a Measure of Contextuality

    NASA Astrophysics Data System (ADS)

    Kujala, Janne V.

    2017-07-01

    Let random vectors R^c = {R_p^c : p ∈ P_c} represent joint measurements of certain subsets P_c ⊂ P of properties p ∈ P in different contexts c ∈ C. Such a system is traditionally called noncontextual if there exists a jointly distributed set {Q_p : p ∈ P} of random variables such that R^c has the same distribution as {Q_p : p ∈ P_c} for all c ∈ C. A trivial necessary condition for noncontextuality, and a precondition for many measures of contextuality, is that the system is consistently connected, i.e., all R_p^c, R_p^{c'}, ... measuring the same property p ∈ P have the same distribution. The contextuality-by-default (CbD) approach allows defining more general measures of contextuality that apply to inconsistently connected systems as well, but at the price of a higher computational cost. In this paper we propose a novel measure of contextuality that shares the generality of the CbD approach and the computational benefits of the previously proposed negative probability (NP) approach. The present approach differs from CbD in that, instead of considering all possible joints of the double-indexed random variables R_p^c, it considers all possible approximating single-indexed systems {Q_p : p ∈ P}. The degree of contextuality is defined based on the minimum possible probabilistic distance of the actual measurements R^c from {Q_p : p ∈ P_c}. We show that this measure, called the optimal approximation (OA) measure, agrees with a certain measure of contextuality of the CbD approach for all systems where each property enters in exactly two contexts. The OA measure can be calculated far more efficiently than the CbD measure, and even more efficiently than the NP measure for sufficiently large systems. We also define a variant, the OA-NP measure of contextuality, that agrees with the NP measure for consistently connected (non-signaling) systems while extending it to inconsistently connected systems.

  20. Joint coverage probability in a simulation study on Continuous-Time Markov Chain parameter estimation.

    PubMed

    Benoit, Julia S; Chan, Wenyaw; Doody, Rachelle S

    2015-01-01

    Parameter dependency within data sets in simulation studies is common, especially in models such as Continuous-Time Markov Chains (CTMC). Additionally, the literature lacks a comprehensive examination of estimation performance for the likelihood-based general multi-state CTMC. Among studies attempting to assess the estimation, none have accounted for dependency among parameter estimates. The purpose of this research is twofold: 1) to develop a multivariate approach for assessing accuracy and precision in simulation studies, and 2) to add to the literature a comprehensive examination of the estimation of a general 3-state CTMC model. Simulation studies are conducted to analyze longitudinal data with a trinomial outcome using a CTMC with and without covariates. Measures of performance including bias, component-wise coverage probabilities, and joint coverage probabilities are calculated. An application is presented using Alzheimer's disease caregiver stress levels. Comparisons of joint and component-wise parameter estimates yield conflicting inferential results in simulations from models with and without covariates. In conclusion, caution should be taken when conducting simulation studies aiming to assess performance, and the choice of inference should properly reflect the purpose of the simulation.
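
    The distinction between component-wise and joint coverage can be illustrated with a short schematic sketch (not the paper's code): when parameter estimates are correlated, each marginal 95% interval can achieve nominal coverage while the probability that all intervals cover simultaneously falls below 95%. The true parameter values and covariance below are arbitrary.

    ```python
    # Sketch: component-wise vs. joint coverage with correlated parameter estimates.
    import numpy as np

    rng = np.random.default_rng(3)
    theta_true = np.array([0.5, -1.0])
    cov = np.array([[0.04, 0.03],
                    [0.03, 0.04]])          # strongly correlated estimation errors
    n_rep = 100000

    est = rng.multivariate_normal(theta_true, cov, size=n_rep)
    se = np.sqrt(np.diag(cov))
    lower, upper = est - 1.96 * se, est + 1.96 * se

    covers = (lower <= theta_true) & (theta_true <= upper)
    componentwise = covers.mean(axis=0)     # ~0.95 for each parameter separately
    joint = covers.all(axis=1).mean()       # < 0.95: both intervals must cover at once

    print("component-wise coverage:", componentwise)
    print("joint coverage:", joint)
    ```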

  1. Correlations between polarisation states of W particles in the reaction e⁻e⁺ → W⁻W⁺ at LEP2 energies 189-209 GeV

    NASA Astrophysics Data System (ADS)

    Abdallah, J.; Abreu, P.; Adam, W.; Adzic, P.; Albrecht, T.; Alemany-Fernandez, R.; Allmendinger, T.; Allport, P. P.; Amaldi, U.; Amapane, N.; Amato, S.; Anashkin, E.; Andreazza, A.; Andringa, S.; Anjos, N.; Antilogus, P.; Apel, W.-D.; Arnoud, Y.; Ask, S.; Asman, B.; Augustin, J. E.; Augustinus, A.; Baillon, P.; Ballestrero, A.; Bambade, P.; Barbier, R.; Bardin, D.; Barker, G. J.; Baroncelli, A.; Battaglia, M.; Baubillier, M.; Becks, K.-H.; Begalli, M.; Behrmann, A.; Ben-Haim, E.; Benekos, N.; Benvenuti, A.; Berat, C.; Berggren, M.; Bertrand, D.; Besancon, M.; Besson, N.; Bloch, D.; Blom, M.; Bluj, M.; Bonesini, M.; Boonekamp, M.; Booth, P. S. L.; Borisov, G.; Botner, O.; Bouquet, B.; Bowcock, T. J. V.; Boyko, I.; Bracko, M.; Brenner, R.; Brodet, E.; Bruckman, P.; Brunet, J. M.; Buschbeck, B.; Buschmann, P.; Calvi, M.; Camporesi, T.; Canale, V.; Carena, F.; Castro, N.; Cavallo, F.; Chapkin, M.; Charpentier, Ph.; Checchia, P.; Chierici, R.; Chliapnikov, P.; Chudoba, J.; Chung, S. U.; Cieslik, K.; Collins, P.; Contri, R.; Cosme, G.; Cossutti, F.; Costa, M. J.; Crennell, D.; Cuevas, J.; D'Hondt, J.; da Silva, T.; da Silva, W.; Della Ricca, G.; de Angelis, A.; de Boer, W.; de Clercq, C.; de Lotto, B.; de Maria, N.; de Min, A.; de Paula, L.; di Ciaccio, L.; di Simone, A.; Doroba, K.; Drees, J.; Eigen, G.; Ekelof, T.; Ellert, M.; Elsing, M.; Espirito Santo, M. C.; Fanourakis, G.; Fassouliotis, D.; Feindt, M.; Fernandez, J.; Ferrer, A.; Ferro, F.; Flagmeyer, U.; Foeth, H.; Fokitis, E.; Fulda-Quenzer, F.; Fuster, J.; Gandelman, M.; Garcia, C.; Gavillet, Ph.; Gazis, E.; Gokieli, R.; Golob, B.; Gomez-Ceballos, G.; Goncalves, P.; Graziani, E.; Grosdidier, G.; Grzelak, K.; Guy, J.; Haag, C.; Hallgren, A.; Hamacher, K.; Hamilton, K.; Haug, S.; Hauler, F.; Hedberg, V.; Hennecke, M.; Hoffman, J.; Holmgren, S.-O.; Holt, P. J.; Houlden, M. A.; Jackson, J. N.; Jarlskog, G.; Jarry, P.; Jeans, D.; Johansson, E. K.; Jonsson, P.; Joram, C.; Jungermann, L.; Kapusta, F.; Katsanevas, S.; Katsoufis, E.; Kernel, G.; Kersevan, B. P.; Kerzel, U.; King, B. T.; Kjaer, N. J.; Kluit, P.; Kokkinias, P.; Kourkoumelis, C.; Kouznetsov, O.; Krumstein, Z.; Kucharczyk, M.; Lamsa, J.; Leder, G.; Ledroit, F.; Leinonen, L.; Leitner, R.; Lemonne, J.; Lepeltier, V.; Lesiak, T.; Liebig, W.; Liko, D.; Lipniacka, A.; Lopes, J. H.; Lopez, J. M.; Loukas, D.; Lutz, P.; Lyons, L.; MacNaughton, J.; Malek, A.; Maltezos, S.; Mandl, F.; Marco, J.; Marco, R.; Marechal, B.; Margoni, M.; Marin, J.-C.; Mariotti, C.; Markou, A.; Martinez-Rivero, C.; Masik, J.; Mastroyiannopoulos, N.; Matorras, F.; Matteuzzi, C.; Mazzucato, F.; Mazzucato, M.; McNulty, R.; Meroni, C.; Migliore, E.; Mitaroff, W.; Mjoernmark, U.; Moa, T.; Moch, M.; Moenig, K.; Monge, R.; Montenegro, J.; Moraes, D.; Moreno, S.; Morettini, P.; Mueller, U.; Muenich, K.; Mulders, M.; Mundim, L.; Murray, W.; Muryn, B.; Myatt, G.; Myklebust, T.; Nassiakou, M.; Navarria, F.; Nawrocki, K.; Nemecek, S.; Nicolaidou, R.; Nikolenko, M.; Oblakowska-Mucha, A.; Obraztsov, V.; Olshevski, A.; Onofre, A.; Orava, R.; Osterberg, K.; Ouraou, A.; Oyanguren, A.; Paganoni, M.; Paiano, S.; Palacios, J. P.; Palka, H.; Papadopoulou, Th. D.; Pape, L.; Parkes, C.; Parodi, F.; Parzefall, U.; Passeri, A.; Passon, O.; Peralta, L.; Perepelitsa, V.; Perrotta, A.; Petrolini, A.; Piedra, J.; Pieri, L.; Pierre, F.; Pimenta, M.; Piotto, E.; Podobnik, T.; Poireau, V.; Pol, M. 
E.; Polok, G.; Pozdniakov, V.; Pukhaeva, N.; Pullia, A.; Radojicic, D.; Rebecchi, P.; Rehn, J.; Reid, D.; Reinhardt, R.; Renton, P.; Richard, F.; Ridky, J.; Rivero, M.; Rodriguez, D.; Romero, A.; Ronchese, P.; Roudeau, P.; Rovelli, T.; Ruhlmann-Kleider, V.; Ryabtchikov, D.; Sadovsky, A.; Salmi, L.; Salt, J.; Sander, C.; Savoy-Navarro, A.; Schwickerath, U.; Sekulin, R.; Siebel, M.; Sisakian, A.; Smadja, G.; Smirnova, O.; Sokolov, A.; Sopczak, A.; Sosnowski, R.; Spassov, T.; Stanitzki, M.; Stocchi, A.; Strauss, J.; Stugu, B.; Szczekowski, M.; Szeptycka, M.; Szumlak, T.; Tabarelli, T.; Tegenfeldt, F.; Timmermans, J.; Tkatchev, L.; Tobin, M.; Todorovova, S.; Tome, B.; Tonazzo, A.; Tortosa, P.; Travnicek, P.; Treille, D.; Tristram, G.; Trochimczuk, M.; Troncon, C.; Turluer, M.-L.; Tyapkin, I. A.; Tyapkin, P.; Tzamarias, S.; Uvarov, V.; Valenti, G.; van Dam, P.; van Eldik, J.; van Remortel, N.; van Vulpen, I.; Vegni, G.; Veloso, F.; Venus, W.; Verdier, P.; Verzi, V.; Vilanova, D.; Vitale, L.; Vrba, V.; Wahlen, H.; Washbrook, A. J.; Weiser, C.; Wicke, D.; Wickens, J.; Wilkinson, G.; Winter, M.; Witek, M.; Yushchenko, O.; Zalewska, A.; Zalewski, P.; Zavrtanik, D.; Zhuravlov, V.; Zimin, N. I.; Zintchenko, A.; Zupan, M.

    2009-10-01

    In a study of the reaction e⁻e⁺ → W⁻W⁺ with the DELPHI detector, the probabilities of the two W particles occurring in the joint polarisation states transverse-transverse (TT), longitudinal-transverse plus transverse-longitudinal (LT), and longitudinal-longitudinal (LL) have been determined using the final states WW → ℓν qq̄ (ℓ = e, μ). The two-particle joint polarisation probabilities, i.e. the spin density matrix elements ρ_TT, ρ_LT, ρ_LL, are measured as functions of the W⁻ production angle, θ_{W⁻}, at an average reaction energy of 198.2 GeV. Averaged over all cos θ_{W⁻}, the following joint probabilities are obtained: ρ̄_TT = (67±8)%, ρ̄_LT = (30±8)%, ρ̄_LL = (3±7)%. These results are in agreement with the Standard Model predictions of 63.0%, 28.9% and 8.1%, respectively. The related polarisation cross-sections σ_TT, σ_LT and σ_LL are also presented.

  2. Approximate Bayesian evaluations of measurement uncertainty

    NASA Astrophysics Data System (ADS)

    Possolo, Antonio; Bodnar, Olha

    2018-04-01

    The Guide to the Expression of Uncertainty in Measurement (GUM) includes formulas that produce an estimate of a scalar output quantity that is a function of several input quantities, and an approximate evaluation of the associated standard uncertainty. This contribution presents approximate, Bayesian counterparts of those formulas for the case where the output quantity is a parameter of the joint probability distribution of the input quantities, also taking into account any information about the value of the output quantity available prior to measurement expressed in the form of a probability distribution on the set of possible values for the measurand. The approximate Bayesian estimates and uncertainty evaluations that we present have a long history and illustrious pedigree, and provide sufficiently accurate approximations in many applications, yet are very easy to implement in practice. Differently from exact Bayesian estimates, which involve either (analytical or numerical) integrations, or Markov Chain Monte Carlo sampling, the approximations that we describe involve only numerical optimization and simple algebra. Therefore, they make Bayesian methods widely accessible to metrologists. We illustrate the application of the proposed techniques in several instances of measurement: isotopic ratio of silver in a commercial silver nitrate; odds of cryptosporidiosis in AIDS patients; height of a manometer column; mass fraction of chromium in a reference material; and potential-difference in a Zener voltage standard.
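
    As a generic illustration of the kind of computation described above (numerical optimization plus simple algebra, rather than integration or MCMC), the sketch below obtains a Laplace-type approximate Bayesian estimate: the log-posterior is maximized numerically and the standard uncertainty is taken from its curvature at the mode. The observations, known standard deviation, and prior are illustrative placeholders, not the GUM's or the authors' exact formulas.

    ```python
    # Sketch: approximate Bayesian estimate via numerical optimization and curvature.
    import numpy as np
    from scipy.optimize import minimize_scalar

    observations = np.array([10.12, 10.07, 10.15, 10.05, 10.11])   # repeated indications
    sigma_obs = 0.05                    # assumed known measurement standard deviation
    prior_mean, prior_sd = 10.0, 0.2    # prior knowledge about the measurand

    def neg_log_posterior(theta):
        log_lik = -0.5 * np.sum(((observations - theta) / sigma_obs) ** 2)
        log_prior = -0.5 * ((theta - prior_mean) / prior_sd) ** 2
        return -(log_lik + log_prior)

    res = minimize_scalar(neg_log_posterior, bounds=(9.0, 11.0), method="bounded")
    theta_hat = res.x

    # curvature of the negative log-posterior at the mode -> approximate variance
    h = 1e-4
    curv = (neg_log_posterior(theta_hat + h) - 2 * neg_log_posterior(theta_hat)
            + neg_log_posterior(theta_hat - h)) / h ** 2
    u_theta = 1.0 / np.sqrt(curv)

    print("estimate:", theta_hat, "standard uncertainty:", u_theta)
    ```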

  3. Periprosthetic Joint Infections: Clinical and Bench Research

    PubMed Central

    Legout, Laurence; Senneville, Eric

    2013-01-01

    Prosthetic joint infection is a devastating complication with high morbidity and substantial cost. The incidence is low but probably underestimated. Despite significant basic and clinical research in this field, many questions concerning the definition of prosthetic joint infection, as well as the diagnosis and management of these infections, remain unanswered. We review the current literature on new diagnostic methods and on the management and prevention of prosthetic joint infections. PMID:24288493

  4. Lower extremity control during turns initiated with and without hip external rotation.

    PubMed

    Zaferiou, Antonia M; Flashner, Henryk; Wilcox, Rand R; McNitt-Gray, Jill L

    2017-02-08

    The pirouette turn is often initiated in neutral and externally rotated hip positions by dancers. This provides an opportunity to investigate how dancers satisfy the same mechanical objectives at the whole-body level when using different leg kinematics. The purpose of this study was to compare lower extremity control strategies during the turn initiation phase of pirouettes performed with and without hip external rotation. Skilled dancers (n=5) performed pirouette turns with and without hip external rotation. Joint kinetics during turn initiation were determined for both legs using ground reaction forces (GRFs) and segment kinematics. Hip muscle activations were monitored using electromyography. Using probability-based statistical methods, variables were compared across turn conditions as a group and within-dancer. Despite differences in GRFs and impulse generation between turn conditions, at least 90% of each GRF was aligned with the respective leg plane. A majority of the net joint moments at the ankle, knee, and hip acted about an axis perpendicular to the leg plane. However, differences in shank alignment relative to the leg plane affected the distribution of the knee net joint moment when represented with respect to the shank versus the thigh. During the initiation of both turns, most participants used ankle plantar flexor moments, knee extensor moments, flexor and abductor moments at the push leg's hip, and extensor and abductor moments at the turn leg's hip. Representation of joint kinetics using multiple reference systems assisted in understanding control priorities.

  5. Spatial extent of branching Brownian motion.

    PubMed

    Ramola, Kabir; Majumdar, Satya N; Schehr, Grégory

    2015-04-01

    We study the one-dimensional branching Brownian motion starting at the origin and investigate the correlation between the rightmost (X_max ≥ 0) and leftmost (X_min ≤ 0) visited sites up to time t. At each time step the existing particles in the system either diffuse (with diffusion constant D), die (with rate a), or split into two particles (with rate b). We focus on the regime b ≤ a where these two extreme values X_max and X_min are strongly correlated. We show that at large time t, the joint probability distribution function (PDF) of the two extreme points becomes stationary P(X,Y,t→∞) → p(X,Y). Our exact results for p(X,Y) demonstrate that the correlation between X_max and X_min is nonzero, even in the stationary state. From this joint PDF, we compute exactly the stationary PDF p(ζ) of the (dimensionless) span ζ = (X_max − X_min)/√(D/b), which is the distance between the rightmost and leftmost visited sites. This span distribution is characterized by a linear behavior p(ζ) ~ (1/2)(1+Δ)ζ for small spans, with Δ = (a/b − 1). In the critical case (Δ = 0) this distribution has a nontrivial power law tail p(ζ) ~ 8π√3/ζ³ for large spans. On the other hand, in the subcritical case (Δ > 0), we show that the span distribution decays exponentially as p(ζ) ~ (A²/2) ζ exp(−√Δ ζ) for large spans, where A is a nontrivial function of Δ, which we compute exactly. We show that these asymptotic behaviors carry the signatures of the correlation between X_max and X_min. Finally we verify our results via direct Monte Carlo simulations.

  6. Spatial extent of branching Brownian motion

    NASA Astrophysics Data System (ADS)

    Ramola, Kabir; Majumdar, Satya N.; Schehr, Grégory

    2015-04-01

    We study the one-dimensional branching Brownian motion starting at the origin and investigate the correlation between the rightmost (X_max ≥ 0) and leftmost (X_min ≤ 0) visited sites up to time t. At each time step the existing particles in the system either diffuse (with diffusion constant D), die (with rate a), or split into two particles (with rate b). We focus on the regime b ≤ a where these two extreme values X_max and X_min are strongly correlated. We show that at large time t, the joint probability distribution function (PDF) of the two extreme points becomes stationary P(X,Y,t→∞) → p(X,Y). Our exact results for p(X,Y) demonstrate that the correlation between X_max and X_min is nonzero, even in the stationary state. From this joint PDF, we compute exactly the stationary PDF p(ζ) of the (dimensionless) span ζ = (X_max − X_min)/√(D/b), which is the distance between the rightmost and leftmost visited sites. This span distribution is characterized by a linear behavior p(ζ) ~ (1/2)(1+Δ)ζ for small spans, with Δ = (a/b − 1). In the critical case (Δ = 0) this distribution has a nontrivial power law tail p(ζ) ~ 8π√3/ζ³ for large spans. On the other hand, in the subcritical case (Δ > 0), we show that the span distribution decays exponentially as p(ζ) ~ (A²/2) ζ exp(−√Δ ζ) for large spans, where A is a nontrivial function of Δ, which we compute exactly. We show that these asymptotic behaviors carry the signatures of the correlation between X_max and X_min. Finally we verify our results via direct Monte Carlo simulations.
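
    A direct Monte Carlo check of the kind mentioned in the abstract can be sketched in a few lines (this is an illustrative simulation, not the authors' code): particles diffuse, die, or branch at the stated rates, and the dimensionless span is recorded at a late time. The time step, rates, and sample counts below are arbitrary choices placing the process in the subcritical regime a > b.

    ```python
    # Rough Monte Carlo sketch of subcritical branching Brownian motion and its span.
    import numpy as np

    rng = np.random.default_rng(4)
    D, b = 0.5, 1.0
    a = 1.2                      # death rate; a > b gives the subcritical regime
    dt, t_final = 0.01, 30.0
    n_runs = 2000

    spans = []
    for _ in range(n_runs):
        x = np.array([0.0])                          # single particle at the origin
        x_min = x_max = 0.0
        t = 0.0
        while t < t_final and x.size > 0:
            x = x + rng.normal(0.0, np.sqrt(2.0 * D * dt), x.size)
            x_min, x_max = min(x_min, x.min()), max(x_max, x.max())
            u = rng.random(x.size)
            die = u < a * dt
            split = (u >= a * dt) & (u < (a + b) * dt)
            x = np.concatenate([x[~die], x[split]])  # remove dead, duplicate split ones
            t += dt
        spans.append((x_max - x_min) / np.sqrt(D / b))   # dimensionless span zeta

    # in the subcritical case the span PDF decays as ~ zeta * exp(-sqrt(Delta) * zeta)
    print("mean dimensionless span:", np.mean(spans))
    ```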

  7. Joint analysis of epistemic and aleatory uncertainty in stability analysis for geo-hazard assessments

    NASA Astrophysics Data System (ADS)

    Rohmer, Jeremy; Verdel, Thierry

    2017-04-01

    Uncertainty analysis is an unavoidable task in the stability analysis of any geotechnical system. Such analysis usually relies on the safety factor SF (if SF is below some specified threshold, failure is possible). The objective of the stability analysis is then to estimate the failure probability P for SF to be below the specified threshold. When dealing with uncertainties, two facets should be considered, as outlined by several authors in the domain of geotechnics, namely "aleatoric uncertainty" (also named "randomness" or "intrinsic variability") and "epistemic uncertainty" (i.e. when facing "vague, incomplete or imprecise information" such as limited databases and observations or "imperfect" modelling). The benefits of separating both facets of uncertainty can be seen from a risk management perspective because: - Aleatoric uncertainty, being a property of the system under study, cannot be reduced. However, practical actions can be taken to circumvent the potentially dangerous effects of such variability; - Epistemic uncertainty, being due to the incomplete/imprecise nature of available information, can be reduced by, e.g., increasing the number of tests (lab or in situ surveys), improving the measurement methods, evaluating the calculation procedure with model tests, or confronting more information sources (expert opinions, data from the literature, etc.). Uncertainty treatment in stability analysis is usually restricted to the probabilistic framework to represent both facets of uncertainty. Yet, in the domain of geo-hazard assessments (like landslides, mine pillar collapse, rockfalls, etc.), the validity of this approach is debatable. In the present communication, we propose to review the major criticisms available in the literature against the systematic use of probability in situations with a high degree of uncertainty. On this basis, the feasibility of using a more flexible uncertainty representation tool, namely possibility distributions (e.g., Baudrit et al., 2007), is then investigated for geo-hazard assessments. A graphical tool is then developed to explore: 1. the contribution of both types of uncertainty, aleatoric and epistemic; 2. the regions of the imprecise or random parameters which contribute the most to the imprecision on the failure probability P. The method is applied to two case studies (a mine pillar and a steep slope stability analysis, Rohmer and Verdel, 2014) to investigate the necessity for extra data acquisition on parameters whose imprecision can hardly be modelled by probabilities due to the scarcity of the available information (respectively the extraction ratio and the cliff geometry). References Baudrit, C., Couso, I., & Dubois, D. (2007). Joint propagation of probability and possibility in risk analysis: Towards a formal framework. International Journal of Approximate Reasoning, 45(1), 82-105. Rohmer, J., & Verdel, T. (2014). Joint exploration of regional importance of possibilistic and probabilistic uncertainty in stability analysis. Computers and Geotechnics, 61, 308-315.

  8. Time- and temperature-dependent failures of a bonded joint

    NASA Astrophysics Data System (ADS)

    Sihn, Sangwook

    This dissertation summarizes my study of the time- and temperature-dependent behavior of a tubular lap bonded joint to provide a design methodology for windmill blade structures. The bonded joint is between a cast-iron rod and a GFRP composite pipe. The adhesive material is an epoxy containing chopped glass fibers. We proposed a new fabrication method to make concentric and void-less specimens of the tubular joint with a thick adhesive bondline to simulate the root bond of a blade. The thick bondline facilitates the joint assembly of actual blades. For a better understanding of the behavior of the bonded joint, we studied the viscoelastic behavior of the adhesive material by measuring creep compliance at several temperatures during the loading period. We observed that the creep compliance depends strongly on the period of loading and the temperature. We applied time-temperature equivalence to the creep compliance of the adhesive material to obtain time-temperature shift factors. We also performed constant-rate, monotonically increasing uniaxial tensile tests to measure the static strength of the tubular lap joint at several temperatures and strain rates. We observed two failure modes from load-deflection curves and failed specimens. One is the brittle mode, caused by weakness of the interfacial strength at low temperature and short loading periods. The other is the ductile mode, caused by weakness of the adhesive material at high temperature and long loading periods. The transition from the brittle to the ductile mode appeared as the temperature or the loading period increased. We also performed tests under uniaxial tensile-tensile cyclic loading to measure the fatigue strength of the bonded joint at several temperatures, frequencies, and stress ratios. The fatigue data are analyzed statistically by applying the residual strength degradation model to calculate the statistical distribution of the fatigue life. Combining the time-temperature equivalence and the residual strength degradation model enables us to estimate the fatigue life of the bonded joint at different load levels, frequencies, and temperatures with a certain probability. A numerical example shows how to apply the life estimation method to a structure subjected to a random load history by rainflow cycle counting.

  9. Orthopedic surgery in rheumatoid arthritis in the era of biologic therapy.

    PubMed

    Leon, Leticia; Abasolo, Lydia; Carmona, Loreto; Rodriguez-Rodriguez, Luis; Lamas, Jose Ramon; Hernandez-Garcia, Cesar; Jover, Juan Angel

    2013-11-01

    To analyze sociodemographic and clinic-related factors associated with the use of orthopedic surgical procedures in rheumatoid arthritis (RA), focusing on the potential role of new biologic therapies. A retrospective medical record review was performed in a probability sample of 1272 patients with RA from 47 units distributed in 19 Spanish regions. Sociodemographic and clinical features, use of drugs, and arthritis-related joint surgeries were recorded following a standardized protocol. A total of 94 patients (7.4%) underwent any orthopedic surgery during their disease course, with a total of 114 surgeries; 47 (41.2%) of these surgeries were total joint replacement (TJR). The median time to first orthopedic procedure was 7.9 years from the onset of RA symptoms, and the rate of orthopedic surgery (excluding TJR) was 4.5 procedures per 100 person-years from the beginning of RA, while the rate of TJR was 2.25 interventions per 100 person-years. A higher risk of undergoing an orthopedic surgical procedure was associated with taking nonsteroidal antiinflammatory drugs (NSAID) in the previous 2 years, female sex, longterm disease, and the presence of extraarticular complications. The risk factors for undergoing TJR were older age, longterm disease, and treatment with biologic therapies. In the era of biologics, our national audit found a low percentage of patients who underwent orthopedic surgery, probably reflecting thorough management of the RA. Sociodemographic factors, longterm RA, extraarticular complications, and NSAID use were associated with orthopedic surgery.

  10. Fatigue Damage Monitoring of a Composite Step Lap Joint Using Distributed Optical Fibre Sensors

    PubMed Central

    Wong, Leslie; Chowdhury, Nabil; Wang, John; Chiu, Wing Kong; Kodikara, Jayantha

    2016-01-01

    Over the past few decades, there has been considerable interest in the use of distributed optical fibre sensors (DOFS) for structural health monitoring of composite structures. In aerospace-related work, health monitoring of adhesive joints in composites has become more significant, as these joints can suffer from cracking and delamination, which can significantly affect joint integrity. In this paper, a swept-wavelength interferometry (SWI) based DOFS technique is used to monitor fatigue in a flush step lap joint composite structure. The presented results show the potential of distributed optical fibre sensors for damage detection, as well as for monitoring fatigue crack growth along the bondline of a step lap joint composite structure. The results confirmed that a distributed optical fibre sensor is able to enhance the detection of localised damage in a structure. PMID:28773496

  11. Lamb wave-based damage quantification and probability of detection modeling for fatigue life assessment of riveted lap joint

    NASA Astrophysics Data System (ADS)

    He, Jingjing; Wang, Dengjiang; Zhang, Weifang

    2015-03-01

    This study presents experimental and modeling work on damage detection and quantification in riveted lap joints. Embedded lead zirconate titanate piezoelectric (PZT) ceramic wafer-type sensors are employed to perform in-situ non-destructive testing during cyclic fatigue loading. A multi-feature integration method is developed to quantify the crack size using the signal features of correlation coefficient, amplitude change, and phase change. In addition, a probability of detection (POD) model is constructed to quantify the reliability of the developed sizing method. Using the developed crack size quantification method and the resulting POD curve, probabilistic fatigue life prediction can be performed to provide comprehensive information for decision-making. The effectiveness of the overall methodology is demonstrated and validated using several aircraft lap joint specimens from different manufacturers and under different loading conditions.

  12. Analyzing multicomponent receptive fields from neural responses to natural stimuli

    PubMed Central

    Rowekamp, Ryan; Sharpee, Tatyana O

    2011-01-01

    The challenge of building increasingly better models of neural responses to natural stimuli is to accurately estimate the multiple stimulus features that may jointly affect the neural spike probability. The selectivity for combinations of features is thought to be crucial for achieving classical properties of neural responses such as contrast invariance. The joint search for these multiple stimulus features is difficult because estimating spike probability as a multidimensional function of stimulus projections onto candidate relevant dimensions is subject to the curse of dimensionality. An attractive alternative is to search for relevant dimensions sequentially, as in projection pursuit regression. Here we demonstrate using analytic arguments and simulations of model cells that different types of sequential search strategies exhibit systematic biases when used with natural stimuli. Simulations show that joint optimization is feasible for up to three dimensions with current algorithms. When applied to the responses of V1 neurons to natural scenes, models based on three jointly optimized dimensions had better predictive power in a majority of cases compared to dimensions optimized sequentially, with different sequential methods yielding comparable results. Thus, although the curse of dimensionality remains, at least several relevant dimensions can be estimated by joint information maximization. PMID:21780916

  13. JDINAC: joint density-based non-parametric differential interaction network analysis and classification using high-dimensional sparse omics data.

    PubMed

    Ji, Jiadong; He, Di; Feng, Yang; He, Yong; Xue, Fuzhong; Xie, Lei

    2017-10-01

    A complex disease is usually driven by a number of genes interwoven into networks, rather than a single gene product. Network comparison or differential network analysis has become an important means of revealing the underlying mechanism of pathogenesis and identifying clinical biomarkers for disease classification. Most studies, however, are limited to network correlations that mainly capture the linear relationship among genes, or rely on the assumption of a parametric probability distribution of gene measurements. These restrictions limit their use in real applications. We propose a new Joint density based non-parametric Differential Interaction Network Analysis and Classification (JDINAC) method to identify differential interaction patterns of network activation between two groups. At the same time, JDINAC uses the network biomarkers to build a classification model. The novelty of JDINAC lies in its potential to capture non-linear relations between molecular interactions using high-dimensional sparse data, as well as to adjust for confounding factors, without assuming a parametric probability distribution of gene measurements. Simulation studies demonstrate that JDINAC provides more accurate differential network estimation and lower classification error than other state-of-the-art methods. We apply JDINAC to a Breast Invasive Carcinoma dataset, which includes 114 patients who have both tumor and matched normal samples. The hub genes and differential interaction patterns identified were consistent with existing experimental studies. Furthermore, JDINAC discriminated tumor from normal samples with high accuracy by virtue of the identified biomarkers. JDINAC provides a general framework for feature selection and classification using high-dimensional sparse omics data. R scripts are available at https://github.com/jijiadong/JDINAC.

  14. An information-theoretic approach to the modeling and analysis of whole-genome bisulfite sequencing data.

    PubMed

    Jenkinson, Garrett; Abante, Jordi; Feinberg, Andrew P; Goutsias, John

    2018-03-07

    DNA methylation is a stable form of epigenetic memory used by cells to control gene expression. Whole genome bisulfite sequencing (WGBS) has emerged as a gold-standard experimental technique for studying DNA methylation by producing high resolution genome-wide methylation profiles. Statistical modeling and analysis are employed to computationally extract and quantify information from these profiles in an effort to identify regions of the genome that demonstrate crucial or aberrant epigenetic behavior. However, the performance of most currently available methods for methylation analysis is hampered by their inability to directly account for statistical dependencies between neighboring methylation sites, thus ignoring significant information available in WGBS reads. We present a powerful information-theoretic approach for genome-wide modeling and analysis of WGBS data based on the 1D Ising model of statistical physics. This approach takes into account correlations in methylation by utilizing a joint probability model that encapsulates all information available in WGBS methylation reads and produces accurate results even when applied to single WGBS samples with low coverage. Using the Shannon entropy, our approach provides a rigorous quantification of methylation stochasticity in individual WGBS samples genome-wide. Furthermore, it utilizes the Jensen-Shannon distance to evaluate differences in methylation distributions between a test and a reference sample. Differential performance assessment using simulated and real human lung normal/cancer data demonstrates clear superiority of our approach over DSS, a recently proposed method for WGBS data analysis. Critically, these results demonstrate that marginal methods become statistically invalid when correlations are present in the data. This contribution demonstrates clear benefits and the necessity of modeling joint probability distributions of methylation using the 1D Ising model of statistical physics and of quantifying methylation stochasticity using concepts from information theory. By employing this methodology, substantial improvement of DNA methylation analysis can be achieved by effectively taking into account the massive amount of statistical information available in WGBS data, which is largely ignored by existing methods.
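
    The two information-theoretic quantities named above, Shannon entropy and Jensen-Shannon distance, are readily computed for discrete methylation-level distributions. The sketch below uses hypothetical probabilities over four methylation levels as illustrative inputs; it is not the paper's pipeline.

    ```python
    # Sketch: Shannon entropy and Jensen-Shannon distance between methylation distributions.
    import numpy as np
    from scipy.spatial.distance import jensenshannon
    from scipy.stats import entropy

    # hypothetical probabilities of methylation levels {0, 1/3, 2/3, 1} within a region
    p_reference = np.array([0.70, 0.10, 0.05, 0.15])
    p_test      = np.array([0.20, 0.15, 0.15, 0.50])

    print("entropy of reference (bits):", entropy(p_reference, base=2))
    print("entropy of test (bits):     ", entropy(p_test, base=2))

    # Jensen-Shannon distance (square root of the JS divergence), bounded by 1 in base 2
    print("JS distance:", jensenshannon(p_reference, p_test, base=2))
    ```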

  15. Modeling Compound Flood Hazards in Coastal Embayments

    NASA Astrophysics Data System (ADS)

    Moftakhari, H.; Schubert, J. E.; AghaKouchak, A.; Luke, A.; Matthew, R.; Sanders, B. F.

    2017-12-01

    Coastal cities around the world are built on lowland topography adjacent to coastal embayments and river estuaries, where multiple factors (e.g., sea level rise and river flooding) threaten to increase flood hazards. Quantitative risk assessment is required for the administration of flood insurance programs and the design of cost-effective flood risk reduction measures. This demands a characterization of extreme water levels, such as 100- and 500-year return period events. Furthermore, hydrodynamic flood models are routinely used to characterize localized flood level intensities (i.e., local depth and velocity) based on boundary forcing sampled from extreme value distributions. For example, extreme flood discharges in the U.S. are estimated from measured flood peaks using the Log-Pearson Type III distribution. However, configuring hydrodynamic models for coastal embayments is challenging because of compound extreme flood events: events caused by a combination of extreme sea levels, extreme river discharges, and possibly other factors such as extreme waves and precipitation causing pluvial flooding in urban developments. Here, we present an approach for flood risk assessment that coordinates multivariate extreme analysis with hydrodynamic modeling of coastal embayments. First, we evaluate the significance of the correlation structure between terrestrial freshwater inflow and oceanic variables; second, this correlation structure is described using copula functions in the unit joint probability domain; and third, we choose a series of compound design scenarios for hydrodynamic modeling based on their occurrence likelihood. The design scenarios include the most likely compound event (with the highest joint probability density), the preferred marginal scenario, and reproduced time series of ensembles based on Monte Carlo sampling of the bivariate hazard domain. The comparison of the resulting extreme water dynamics under these compound hazard scenarios provides insight into the strengths and weaknesses of each approach and helps modelers choose the scenario that best fits the needs of their project. The proposed risk assessment approach can help flood hazard modeling practitioners achieve a more reliable estimate of risk by cautiously reducing the dimensionality of the hazard analysis.
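
    The copula step described above can be sketched with a short example (illustrative only, not the study's implementation): paired river discharge and coastal water level data are mapped to the unit joint probability domain via their ranks, a Gumbel copula parameter is estimated from Kendall's tau, and the joint exceedance probability of a compound design event is evaluated. The synthetic data and the choice of a Gumbel copula are assumptions made for illustration.

    ```python
    # Sketch: rank transform to the unit domain, Gumbel copula fit via Kendall's tau,
    # and joint exceedance probability of a compound design event.
    import numpy as np
    from scipy.stats import kendalltau

    rng = np.random.default_rng(5)
    n = 3000
    z = rng.normal(size=n)
    discharge   = np.exp(1.0 + 0.6 * z + 0.4 * rng.normal(size=n))   # correlated pair
    water_level = 1.0 + 0.3 * z + 0.2 * rng.normal(size=n)

    # empirical probability-integral transform to the unit joint probability domain
    u = (np.argsort(np.argsort(discharge)) + 0.5) / n
    v = (np.argsort(np.argsort(water_level)) + 0.5) / n

    tau, _ = kendalltau(u, v)
    theta = 1.0 / (1.0 - tau)                 # Gumbel copula parameter from tau

    def gumbel_cdf(u, v, theta):
        return np.exp(-(((-np.log(u)) ** theta + (-np.log(v)) ** theta) ** (1.0 / theta)))

    # joint exceedance probability of, e.g., the marginal 99th percentiles
    uq, vq = 0.99, 0.99
    p_exceed = 1.0 - uq - vq + gumbel_cdf(uq, vq, theta)
    print("theta:", theta)
    print("P(both marginals exceeded):", p_exceed,
          "vs independence:", (1 - uq) * (1 - vq))
    ```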

  16. Joint inversion of NMR and SIP data to estimate pore size distribution of geomaterials

    NASA Astrophysics Data System (ADS)

    Niu, Qifei; Zhang, Chi

    2018-03-01

    There is growing interest in using geophysical tools to characterize the microstructure of geomaterials because of their non-invasive nature and applicability in the field. In these applications, multiple types of geophysical data sets are usually processed separately, which may be inadequate to constrain the key features of the target variables. Therefore, simultaneous processing of multiple data sets could potentially improve the resolution. In this study, we propose a method to estimate pore size distribution by joint inversion of nuclear magnetic resonance (NMR) T2 relaxation and spectral induced polarization (SIP) spectra. The petrophysical relation between NMR T2 relaxation time and SIP relaxation time is incorporated in a nonlinear least squares problem formulation, which is solved using the Gauss-Newton method. The joint inversion scheme is applied to a synthetic sample and a Berea sandstone sample. The jointly estimated pore size distributions are very close to the true model and to results from other experimental methods. Even when the knowledge of the petrophysical models of the sample is incomplete, the joint inversion can still capture the main features of the pore size distribution of the samples, including the general shape and relative peak positions of the distribution curves. It is also found from the numerical example that the surface relaxivity of the sample can be extracted with the joint inversion of NMR and SIP data if the diffusion coefficient of the ions in the electrical double layer is known. Compared to individual inversions, the joint inversion can improve the resolution of the estimated pore size distribution because of the addition of extra data sets. The proposed approach might constitute a first step towards a comprehensive joint inversion that can extract the full pore geometry information of a geomaterial from NMR and SIP data.

  17. Response statistics of rotating shaft with non-linear elastic restoring forces by path integration

    NASA Astrophysics Data System (ADS)

    Gaidai, Oleg; Naess, Arvid; Dimentberg, Michael

    2017-07-01

    Extreme statistics of random vibrations is studied for a Jeffcott rotor under uniaxial white noise excitation. The restoring force is modelled as non-linear elastic; a comparison is made with a linearized restoring force to assess the effect of force non-linearity on the response statistics. While analytical solutions and stability conditions are available for the linear model, this is generally not the case for the non-linear system except in some special cases. The statistics of the non-linear case are studied by applying the path integration (PI) method, which is based on the Markov property of the coupled dynamic system. The Jeffcott rotor response statistics can be obtained by solving the Fokker-Planck (FP) equation of the 4D dynamic system. An efficient implementation of the PI algorithm is applied: the fast Fourier transform (FFT) is used to simulate the additive noise of the dynamic system, which significantly reduces computational time compared to classical PI. Excitation is modelled as Gaussian white noise; however, white noise with any distribution can be implemented with the same PI technique. Multidirectional Markov noise can also be modelled with PI in the same way as unidirectional noise. PI is accelerated by using a Monte Carlo (MC) estimate of the joint probability density function (PDF) as the initial input. The symmetry of the dynamic system was utilized to afford higher mesh resolution. Both internal (rotating) and external damping are included in the mechanical model of the rotor. The main advantage of using PI rather than MC is that PI offers high accuracy in the tail of the probability distribution. The latter is of critical importance for, e.g., extreme value statistics, system reliability, and first passage probability.

  18. Probabilistic assessment of landslide tsunami hazard for the northern Gulf of Mexico

    NASA Astrophysics Data System (ADS)

    Pampell-Manis, A.; Horrillo, J.; Shigihara, Y.; Parambath, L.

    2016-01-01

    The devastating consequences of recent tsunamis affecting Indonesia and Japan have prompted a scientific response to better assess unexpected tsunami hazards. Although much uncertainty exists regarding the recurrence of large-scale tsunami events in the Gulf of Mexico (GoM), geological evidence indicates that a tsunami is possible and would most likely come from a submarine landslide triggered by an earthquake. This study customizes for the GoM a first-order probabilistic landslide tsunami hazard assessment. Monte Carlo Simulation (MCS) is employed to determine landslide configurations based on distributions obtained from observational submarine mass failure (SMF) data. Our MCS approach incorporates a Cholesky decomposition method for correlated landslide size parameters to capture correlations seen in the data as well as uncertainty inherent in these events. Slope stability analyses are performed using landslide and sediment properties and regional seismic loading to determine landslide configurations which fail and produce a tsunami. The probability of each tsunamigenic failure is calculated based on the joint probability of slope failure and probability of the triggering earthquake. We are thus able to estimate sizes and return periods for probabilistic maximum credible landslide scenarios. We find that the Cholesky decomposition approach generates landslide parameter distributions that retain the trends seen in observational data, improving the statistical validity and relevancy of the MCS technique in the context of landslide tsunami hazard assessment. Estimated return periods suggest that probabilistic maximum credible SMF events in the north and northwest GoM have a recurrence of 5000-8000 years, in agreement with age dates of observed deposits.
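
    The Cholesky step mentioned above can be sketched in a few lines: the Cholesky factor of a target correlation matrix maps independent standard normal samples to correlated ones, which are then back-transformed to the physical (here assumed lognormal) landslide dimensions. The correlation matrix, means, and standard deviations below are hypothetical placeholders, not values fitted to the observational SMF data.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative landslide size parameters in log-space: length, width, thickness (km)
mu = np.array([np.log(5.0), np.log(1.5), np.log(0.05)])
sigma = np.array([0.8, 0.6, 0.5])
corr = np.array([[1.0, 0.7, 0.5],
                 [0.7, 1.0, 0.4],
                 [0.5, 0.4, 1.0]])

# The Cholesky factor of the correlation matrix turns independent standard normals
# into correlated ones: z_corr = L @ z, so that cov(z_corr) = corr.
L = np.linalg.cholesky(corr)
z = rng.standard_normal((3, 10_000))
z_corr = L @ z

# Back-transform to lognormal landslide dimensions
samples = np.exp(mu[:, None] + sigma[:, None] * z_corr)
print("sample correlation of log-dimensions:\n", np.corrcoef(np.log(samples)))
```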

  19. Meta-analysis of studies with bivariate binary outcomes: a marginal beta-binomial model approach

    PubMed Central

    Chen, Yong; Hong, Chuan; Ning, Yang; Su, Xiao

    2018-01-01

    When conducting a meta-analysis of studies with bivariate binary outcomes, challenges arise because the within-study correlation and between-study heterogeneity should be taken into account. In this paper, we propose a marginal beta-binomial model for the meta-analysis of studies with binary outcomes. This model is based on the composite likelihood approach and has several attractive features compared to existing models such as the bivariate generalized linear mixed model (Chu and Cole, 2006) and the Sarmanov beta-binomial model (Chen et al., 2012). The advantages of the proposed marginal model include modeling the probabilities on the original scale, not requiring any transformation of probabilities or any link function, having a closed-form expression for the likelihood function, and imposing no constraints on the correlation parameter. More importantly, since the marginal beta-binomial model is based only on the marginal distributions, it does not suffer from potential misspecification of the joint distribution of the bivariate study-specific probabilities. Such misspecification is difficult to detect and can lead to biased inference using current methods. We compare the performance of the marginal beta-binomial model with the bivariate generalized linear mixed model and the Sarmanov beta-binomial model by simulation studies. Interestingly, the results show that the marginal beta-binomial model performs better than the Sarmanov beta-binomial model, whether or not the true model is Sarmanov beta-binomial, and the marginal beta-binomial model is more robust than the bivariate generalized linear mixed model under model misspecifications. Two meta-analyses of diagnostic accuracy studies and a meta-analysis of case-control studies are conducted for illustration. PMID:26303591
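
    A minimal sketch of one margin of such a model is given below: the beta-binomial log-likelihood has a closed form in terms of beta functions, so the pooled probability and overdispersion can be estimated by direct numerical maximization. The parametrization, data, and starting values are illustrative; the paper's full model couples two such margins through a correlation parameter via composite likelihood.

```python
import numpy as np
from scipy.special import betaln, gammaln
from scipy.optimize import minimize

def betabinom_loglik(params, x, n):
    """Log-likelihood of a beta-binomial margin with mean p and overdispersion rho.

    Parametrization: a = p * (1 - rho) / rho, b = (1 - p) * (1 - rho) / rho.
    """
    p, rho = params
    a = p * (1.0 - rho) / rho
    b = (1.0 - p) * (1.0 - rho) / rho
    log_binom = gammaln(n + 1) - gammaln(x + 1) - gammaln(n - x + 1)
    return np.sum(log_binom + betaln(x + a, n - x + b) - betaln(a, b))

# Hypothetical meta-analysis data: x events out of n in each study (one margin,
# e.g. sensitivity); the paper models two such margins plus a correlation term.
x = np.array([18, 25, 40, 9, 30])
n = np.array([20, 30, 45, 12, 33])

res = minimize(lambda th: -betabinom_loglik(th, x, n),
               x0=[0.8, 0.1], bounds=[(1e-3, 1 - 1e-3), (1e-3, 0.99)])
print("pooled probability, overdispersion:", res.x)
```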

  20. ADS-B within a Multi-Aircraft Simulation for Distributed Air-Ground Traffic Management

    NASA Technical Reports Server (NTRS)

    Barhydt, Richard; Palmer, Michael T.; Chung, William W.; Loveness, Ghyrn W.

    2004-01-01

    Automatic Dependent Surveillance Broadcast (ADS-B) is an enabling technology for NASA's Distributed Air-Ground Traffic Management (DAG-TM) concept. DAG-TM has the goal of significantly increasing capacity within the National Airspace System, while maintaining or improving safety. Under DAG-TM, aircraft exchange state and intent information over ADS-B with other aircraft and ground stations. This information supports various surveillance functions including conflict detection and resolution, scheduling, and conformance monitoring. To conduct more rigorous concept feasibility studies, NASA Langley Research Center's PC-based Air Traffic Operations Simulation models a 1090 MHz ADS-B communication structure, based on industry standards for message content, range, and reception probability. The current ADS-B model reflects a mature operating environment, and message interference effects are limited to Mode S transponder replies and ADS-B squitters. This model was recently evaluated in a Joint DAG-TM Air/Ground Coordination Experiment with NASA Ames Research Center. Message probability of reception vs. range was lower at higher traffic levels. The highest message collision probability occurred near the meter fix serving as the confluence for two arrival streams. Even the highest traffic level encountered in the experiment was significantly less than the industry standard "LA Basin 2020" scenario. Future studies will account for Mode A and C message interference (a major effect in several industry studies) and will include Mode A and C aircraft in the simulation, thereby increasing the total traffic level. These changes will support ongoing enhancements to separation assurance functions that focus on accommodating longer ADS-B information update intervals.

  1. Bayesian network representing system dynamics in risk analysis of nuclear systems

    NASA Astrophysics Data System (ADS)

    Varuttamaseni, Athi

    2011-12-01

    A dynamic Bayesian network (DBN) model is used in conjunction with the alternating conditional expectation (ACE) regression method to analyze the risk associated with the loss of feedwater accident coupled with a subsequent initiation of the feed and bleed operation in the Zion-1 nuclear power plant. The use of the DBN allows the joint probability distribution to be factorized, enabling the analysis to be done on many simpler network structures rather than on one complicated structure. The construction of the DBN model assumes conditional independence relations among certain key reactor parameters. The choice of parameters to model is based on considerations of the macroscopic balance statements governing the behavior of the reactor under a quasi-static assumption. The DBN is used to relate the peak clad temperature to a set of independent variables that are known to be important in determining the success of the feed and bleed operation. A simple linear relationship is then used to relate the clad temperature to the core damage probability. To obtain a quantitative relationship among different nodes in the DBN, surrogates of the RELAP5 reactor transient analysis code are used. These surrogates are generated by applying the ACE algorithm to output data obtained from about 50 RELAP5 cases covering a wide range of the selected independent variables. These surrogates allow important safety parameters such as the fuel clad temperature to be expressed as a function of key reactor parameters such as the coolant temperature and pressure, together with important independent variables such as the scram delay time. The time-dependent core damage probability is calculated by sampling the independent variables from their probability distributions and propagating the information up through the Bayesian network to give the clad temperature. With knowledge of the clad temperature and the assumption that the core damage probability has a one-to-one relationship to it, we calculate the core damage probability as a function of transient time. The use of the DBN model in combination with ACE allows risk analysis to be performed with much less effort than if the analysis were done using standard techniques.

  2. On the Statistical Analysis of X-ray Polarization Measurements

    NASA Technical Reports Server (NTRS)

    Strohmayer, T. E.; Kallman, T. R.

    2013-01-01

    In many polarimetry applications, including observations in the X-ray band, the measurement of a polarization signal can be reduced to the detection and quantification of a deviation from uniformity of a distribution of measured angles of the form α + β cos²(φ − φ₀), with 0 < φ < π. We explore the statistics of such polarization measurements using both Monte Carlo simulations and analytic calculations based on the appropriate probability distributions. We derive relations for the number of counts required to reach a given detection level (parameterized by β, the "number of sigmas" of the measurement) appropriate for measuring the modulation amplitude α by itself (single interesting parameter case) or jointly with the position angle φ (two interesting parameters case). We show that, for the former case, when the intrinsic amplitude is equal to the well-known minimum detectable polarization (MDP) it is, on average, detected at the 3σ level. For the latter case, when one requires a joint measurement at the same confidence level, more counts are needed, by a factor of approximately 2.2, than are required to achieve the MDP level. We find that the position angle uncertainty at 1σ confidence is well described by the relation σ_pa = 28.5° / β.
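
    A small Monte Carlo in the spirit of the simulations described, using the equivalent modulation form f(φ) ∝ 1 + a·cos 2(φ − φ₀) (since cos²x = (1 + cos 2x)/2), is sketched below: angles are rejection-sampled from the modulation curve and the amplitude and position angle are recovered with standard Fourier-type estimators. The true amplitude, position angle, and count levels are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

def draw_angles(n, a, phi0, rng):
    """Rejection-sample n angles from f(phi) proportional to 1 + a*cos(2*(phi - phi0)) on (0, pi)."""
    out = np.empty(0)
    while out.size < n:
        phi = rng.uniform(0.0, np.pi, size=2 * n)
        accept = rng.uniform(0.0, 1.0 + a, size=phi.size) < 1.0 + a * np.cos(2 * (phi - phi0))
        out = np.concatenate([out, phi[accept]])
    return out[:n]

def estimate(phi):
    """Amplitude and position-angle estimators from the first Fourier moments in 2*phi."""
    c, s = np.mean(np.cos(2 * phi)), np.mean(np.sin(2 * phi))
    return 2.0 * np.hypot(c, s), 0.5 * np.arctan2(s, c)

# Many simulated observations with N counts each; look at the spread of the estimates
a_true, phi0_true, N = 0.1, np.deg2rad(30.0), 20_000
a_hat, phi_hat = np.transpose([estimate(draw_angles(N, a_true, phi0_true, rng))
                               for _ in range(200)])
print("mean amplitude:", a_hat.mean(), " std:", a_hat.std())
print("position-angle std (deg):", np.rad2deg(phi_hat.std()))
```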

  3. PARTS: Probabilistic Alignment for RNA joinT Secondary structure prediction

    PubMed Central

    Harmanci, Arif Ozgun; Sharma, Gaurav; Mathews, David H.

    2008-01-01

    A novel method is presented for joint prediction of alignment and common secondary structures of two RNA sequences. The joint consideration of common secondary structures and alignment is accomplished by structural alignment over a search space defined by the newly introduced motif called matched helical regions. The matched helical region formulation generalizes previously employed constraints for structural alignment and thereby better accommodates the structural variability within RNA families. A probabilistic model based on pseudo free energies obtained from precomputed base pairing and alignment probabilities is utilized for scoring structural alignments. Maximum a posteriori (MAP) common secondary structures, sequence alignment and joint posterior probabilities of base pairing are obtained from the model via a dynamic programming algorithm called PARTS. The advantage of the more general structural alignment of PARTS is seen in secondary structure predictions for the RNase P family. For this family, the PARTS MAP predictions of secondary structures and alignment perform significantly better than prior methods that utilize a more restrictive structural alignment model. For the tRNA and 5S rRNA families, the richer structural alignment model of PARTS does not offer a benefit and the method therefore performs comparably with existing alternatives. For all RNA families studied, the posterior probability estimates obtained from PARTS offer an improvement over posterior probability estimates from a single sequence prediction. When considering the base pairings predicted over a threshold value of confidence, the combination of sensitivity and positive predictive value is superior for PARTS than for the single sequence prediction. PARTS source code is available for download under the GNU public license at http://rna.urmc.rochester.edu. PMID:18304945

  4. Effects on Subtalar Joint Stress Distribution After Cannulated Screw Insertion at Different Positions and Directions.

    PubMed

    Yuan, Cheng-song; Chen, Wan; Chen, Chen; Yang, Guang-hua; Hu, Chao; Tang, Kang-lai

    2015-01-01

    We investigated the effects on subtalar joint stress distribution after cannulated screw insertion at different positions and directions. After establishing a 3-dimensional geometric model of a normal subtalar joint, we analyzed the most ideal cannulated screw insertion position and approach for subtalar joint stress distribution and compared the differences in loading stress, antirotary strength, and anti-inversion/eversion strength among lateral-medial antiparallel screw insertion, traditional screw insertion, and ideal cannulated screw insertion. The screw insertion approach allowing the most uniform subtalar joint loading stress distribution was lateral screw insertion near the border of the talar neck plus medial screw insertion close to the ankle joint. For stress distribution uniformity, antirotary strength, and anti-inversion/eversion strength, lateral-medial antiparallel screw insertion was superior to traditional double-screw insertion. Compared with ideal cannulated screw insertion, slightly poorer stress distribution uniformity and better antirotary strength and anti-inversion/eversion strength were observed for lateral-medial antiparallel screw insertion. Traditional single-screw insertion was better than double-screw insertion for stress distribution uniformity but worse for anti-rotary strength and anti-inversion/eversion strength. Lateral-medial antiparallel screw insertion was slightly worse for stress distribution uniformity than was ideal cannulated screw insertion but superior to traditional screw insertion. It was better than both ideal cannulated screw insertion and traditional screw insertion for anti-rotary strength and anti-inversion/eversion strength. Lateral-medial antiparallel screw insertion is an approach with simple localization, convenient operation, and good safety. Copyright © 2015 American College of Foot and Ankle Surgeons. Published by Elsevier Inc. All rights reserved.

  5. Marginal and joint distributions of S100, HMB-45, and Melan-A across a large series of cutaneous melanomas.

    PubMed

    Viray, Hollis; Bradley, William R; Schalper, Kurt A; Rimm, David L; Gould Rothberg, Bonnie E

    2013-08-01

    The distribution of the standard melanoma antibodies S100, HMB-45, and Melan-A has been extensively studied. Yet, the overlap in their expression is less well characterized. To determine the joint distributions of the classic melanoma markers and to determine if classification according to joint antigen expression has prognostic relevance. S100, HMB-45, and Melan-A were assayed by immunofluorescence-based immunohistochemistry on a large tissue microarray of 212 cutaneous melanoma primary tumors and 341 metastases. Positive expression for each antigen required display of immunoreactivity for at least 25% of melanoma cells. Marginal and joint distributions were determined across all markers. Bivariate associations with established clinicopathologic covariates and melanoma-specific survival analyses were conducted. Of 322 assayable melanomas, 295 (91.6%), 203 (63.0%), and 236 (73.3%) stained with S100, HMB-45, and Melan-A, respectively. Twenty-seven melanomas, representing a diverse set of histopathologic profiles, were S100 negative. Coexpression of all 3 antibodies was observed in 160 melanomas (49.7%). Intensity of endogenous melanin pigment did not confound immunolabeling. Among primary tumors, associations with clinicopathologic parameters revealed a significant relationship only between HMB-45 and microsatellitosis (P = .02). No significant differences among clinicopathologic criteria were observed across the HMB-45/Melan-A joint distribution categories. Neither marginal HMB-45 (P = .56) nor Melan-A (P = .81), or their joint distributions (P = .88), was associated with melanoma-specific survival. Comprehensive characterization of the marginal and joint distributions for S100, HMB-45, and Melan-A across a large series of cutaneous melanomas revealed diversity of expression across this group of antigens. However, these immunohistochemically defined subclasses of melanomas do not significantly differ according to clinicopathologic correlates or outcome.

  6. Probabilistic image modeling with an extended chain graph for human activity recognition and image segmentation.

    PubMed

    Zhang, Lei; Zeng, Zhi; Ji, Qiang

    2011-09-01

    Chain graph (CG) is a hybrid probabilistic graphical model (PGM) capable of modeling heterogeneous relationships among random variables. So far, however, its application in image and video analysis has been very limited due to the lack of principled learning and inference methods for a CG of general topology. To overcome this limitation, we introduce methods to extend the conventional chain-like CG model to a CG model with more general topology, along with the associated methods for learning and inference in such a general CG model. Specifically, we propose techniques to systematically construct a generally structured CG, to parameterize this model, to derive its joint probability distribution, to perform joint parameter learning, and to perform probabilistic inference in this model. To demonstrate the utility of such an extended CG, we apply it to two challenging image and video analysis problems: human activity recognition and image segmentation. The experimental results show improved performance of the extended CG model over the conventional directed or undirected PGMs. This study demonstrates the promise of the extended CG for effective modeling and inference of complex real-world problems.

  7. Markov Chain Monte Carlo Joint Analysis of Chandra X-Ray Imaging Spectroscopy and Sunyaev-Zel'dovich Effect Data

    NASA Technical Reports Server (NTRS)

    Bonamente, Massimillano; Joy, Marshall K.; Carlstrom, John E.; Reese, Erik D.; LaRoque, Samuel J.

    2004-01-01

    X-ray and Sunyaev-Zel'dovich effect data can be combined to determine the distance to galaxy clusters. High-resolution X-ray data are now available from Chandra, which provides both spatial and spectral information, and Sunyaev-Zel'dovich effect data were obtained from the BIMA and Owens Valley Radio Observatory (OVRO) arrays. We introduce a Markov Chain Monte Carlo procedure for the joint analysis of X-ray and Sunyaev-Zel'dovich effect data. The advantages of this method are its high computational efficiency and the ability to measure simultaneously the probability distribution of all parameters of interest, such as the spatial and spectral properties of the cluster gas, as well as derived quantities such as the distance to the cluster. We demonstrate this technique by applying it to the Chandra X-ray data and the OVRO radio data for the galaxy cluster A611. Comparisons with traditional likelihood ratio methods reveal the robustness of the method. This method will be used in a follow-up paper to determine the distances to a large sample of galaxy clusters.

  8. Seabed roughness parameters from joint backscatter and reflection inversion at the Malta Plateau.

    PubMed

    Steininger, Gavin; Holland, Charles W; Dosso, Stan E; Dettmer, Jan

    2013-09-01

    This paper presents estimates of seabed roughness and geoacoustic parameters and uncertainties on the Malta Plateau, Mediterranean Sea, by joint Bayesian inversion of mono-static backscatter and spherical wave reflection-coefficient data. The data are modeled using homogeneous fluid sediment layers overlying an elastic basement. The scattering model assumes a randomly rough water-sediment interface with a von Karman roughness power spectrum. Scattering and reflection data are inverted simultaneously using a population of interacting Markov chains to sample roughness and geoacoustic parameters as well as residual error parameters. Trans-dimensional sampling is applied to treat the number of sediment layers and the order (zeroth or first) of an autoregressive error model (to represent potential residual correlation) as unknowns. Results are considered in terms of marginal posterior probability profiles and distributions, which quantify the effective data information content to resolve scattering/geoacoustic structure. Results indicate well-defined scattering (roughness) parameters in good agreement with existing measurements, and a multi-layer sediment profile over a high-speed (elastic) basement, consistent with independent knowledge of sand layers over limestone.

  9. Order statistics applied to the most massive and most distant galaxy clusters

    NASA Astrophysics Data System (ADS)

    Waizmann, J.-C.; Ettori, S.; Bartelmann, M.

    2013-06-01

    In this work, we present an analytic framework for calculating the individual and joint distributions of the nth most massive or nth highest redshift galaxy cluster for a given survey characteristic, allowing us to formulate Λ cold dark matter (ΛCDM) exclusion criteria. We show that the cumulative distribution functions steepen with increasing order, giving them a higher constraining power with respect to extreme value statistics. Additionally, we find that the order statistics in mass (being dominated by clusters at lower redshifts) is sensitive to the matter density and the normalization of the matter fluctuations, whereas the order statistics in redshift is particularly sensitive to the geometric evolution of the Universe. For a fixed cosmology, both order statistics are efficient probes of the functional shape of the mass function at the high-mass end. To allow a quick assessment of both order statistics, we provide fits as a function of the survey area that allow percentile estimation with an accuracy better than 2 per cent. Furthermore, we discuss the joint distributions in the two-dimensional case and find that, for the combination of the largest and the second largest observation, it is most likely to find them realized with similar values with a broadly peaked distribution. When combining the largest observation with higher orders, it is more likely to find a larger gap between the observations, and when combining higher orders in general, the joint probability density function peaks more strongly. Having introduced the theory, we apply the order statistical analysis to the South Pole Telescope (SPT) massive cluster sample and the meta-catalogue of X-ray detected clusters of galaxies, and find that the 10 most massive clusters in the sample are consistent with ΛCDM and the Tinker mass function. For the order statistics in redshift, we find a discrepancy between the data and the theoretical distributions, which could in principle indicate a deviation from the standard cosmology. However, we attribute this deviation to the uncertainty in the modelling of the SPT survey selection function. In turn, by assuming the ΛCDM reference cosmology, order statistics can also be utilized for consistency checks of the completeness of the observed sample and of the modelling of the survey selection function.
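
    The distribution of the nth largest observation used in such an analysis follows directly from the parent cumulative distribution function: the nth largest of N independent draws is below x exactly when at most n − 1 draws exceed x. The sketch below evaluates this with a binomial sum; the Gaussian parent used for illustration stands in for the actual cluster mass function.

```python
import numpy as np
from scipy.stats import binom, norm

def cdf_nth_largest(x, n, N, parent_cdf):
    """P(the n-th largest of N i.i.d. draws <= x).

    This is the probability that at most n-1 draws exceed x:
    sum_{k=0}^{n-1} C(N, k) * (1 - F(x))^k * F(x)^(N - k).
    """
    p_exceed = 1.0 - parent_cdf(x)
    return binom.cdf(n - 1, N, p_exceed)

# Illustrative parent distribution for (say) log10 cluster mass; not the actual
# mass function used in the paper.
parent = norm(loc=14.0, scale=0.3).cdf

x = np.linspace(14.0, 15.5, 7)
for n in (1, 2, 5):
    print(f"n = {n}:", np.round(cdf_nth_largest(x, n, N=1000, parent_cdf=parent), 3))
```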

  10. An Improved WiFi Indoor Positioning Algorithm by Weighted Fusion

    PubMed Central

    Ma, Rui; Guo, Qiang; Hu, Changzhen; Xue, Jingfeng

    2015-01-01

    The rapid development of mobile Internet has offered the opportunity for WiFi indoor positioning to come under the spotlight due to its low cost. However, nowadays the accuracy of WiFi indoor positioning cannot meet the demands of practical applications. To solve this problem, this paper proposes an improved WiFi indoor positioning algorithm by weighted fusion. The proposed algorithm is based on traditional location fingerprinting algorithms and consists of two stages: the offline acquisition and the online positioning. The offline acquisition process selects optimal parameters to complete the signal acquisition, and it forms a database of fingerprints by error classification and handling. To further improve the accuracy of positioning, the online positioning process first uses a pre-match method to select the candidate fingerprints to shorten the positioning time. After that, it uses the improved Euclidean distance and the improved joint probability to calculate two intermediate results, and further calculates the final result from these two intermediate results by weighted fusion. The improved Euclidean distance introduces the standard deviation of WiFi signal strength to smooth the WiFi signal fluctuation and the improved joint probability introduces the logarithmic calculation to reduce the difference between probability values. Comparing the proposed algorithm, the Euclidean distance based WKNN algorithm and the joint probability algorithm, the experimental results indicate that the proposed algorithm has higher positioning accuracy. PMID:26334278
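
    A compact sketch of the two intermediate estimates and their weighted fusion, as described above, is given below. The fingerprint database, standard deviations, weighting schemes, and the fusion coefficient are invented for illustration; the paper's exact formulas for the improved Euclidean distance and improved joint probability differ in detail.

```python
import numpy as np

def std_weighted_distance(rss, fp_mean, fp_std):
    """Euclidean distance with each access point weighted by the fingerprint's signal std."""
    return np.sqrt(np.sum(((rss - fp_mean) / fp_std) ** 2, axis=1))

def log_joint_probability(rss, fp_mean, fp_std):
    """Log of the joint (per-AP independent Gaussian) likelihood of the observation."""
    return np.sum(-0.5 * ((rss - fp_mean) / fp_std) ** 2
                  - np.log(fp_std * np.sqrt(2 * np.pi)), axis=1)

# Hypothetical fingerprint database: 4 reference points, 3 access points (dBm)
fp_xy   = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0], [5.0, 5.0]])
fp_mean = np.array([[-40, -60, -70], [-60, -45, -72], [-65, -70, -42], [-70, -60, -55]], float)
fp_std  = np.full_like(fp_mean, 3.0)
rss     = np.array([-46.0, -58.0, -66.0])          # one online measurement

# Intermediate estimates: inverse-distance weighting and probability weighting
d = std_weighted_distance(rss, fp_mean, fp_std)
w_dist = (1.0 / d) / np.sum(1.0 / d)
w_prob = np.exp(log_joint_probability(rss, fp_mean, fp_std))
w_prob /= w_prob.sum()

xy_dist = w_dist @ fp_xy
xy_prob = w_prob @ fp_xy

# Final position by weighted fusion of the two intermediate results
alpha = 0.5
print("fused position estimate:", alpha * xy_dist + (1 - alpha) * xy_prob)
```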

  11. An Improved WiFi Indoor Positioning Algorithm by Weighted Fusion.

    PubMed

    Ma, Rui; Guo, Qiang; Hu, Changzhen; Xue, Jingfeng

    2015-08-31

    The rapid development of mobile Internet has offered the opportunity for WiFi indoor positioning to come under the spotlight due to its low cost. However, nowadays the accuracy of WiFi indoor positioning cannot meet the demands of practical applications. To solve this problem, this paper proposes an improved WiFi indoor positioning algorithm by weighted fusion. The proposed algorithm is based on traditional location fingerprinting algorithms and consists of two stages: the offline acquisition and the online positioning. The offline acquisition process selects optimal parameters to complete the signal acquisition, and it forms a database of fingerprints by error classification and handling. To further improve the accuracy of positioning, the online positioning process first uses a pre-match method to select the candidate fingerprints to shorten the positioning time. After that, it uses the improved Euclidean distance and the improved joint probability to calculate two intermediate results, and further calculates the final result from these two intermediate results by weighted fusion. The improved Euclidean distance introduces the standard deviation of WiFi signal strength to smooth the WiFi signal fluctuation and the improved joint probability introduces the logarithmic calculation to reduce the difference between probability values. Comparing the proposed algorithm, the Euclidean distance based WKNN algorithm and the joint probability algorithm, the experimental results indicate that the proposed algorithm has higher positioning accuracy.

  12. Probabilistic Simulation of Progressive Fracture in Bolted-Joint Composite Laminates

    NASA Technical Reports Server (NTRS)

    Minnetyan, L.; Singhal, S. N.; Chamis, C. C.

    1996-01-01

    This report describes computational methods to probabilistically simulate fracture in bolted composite structures. An innovative approach that is independent of stress intensity factors and fracture toughness was used to simulate progressive fracture. The effect of design variable uncertainties on structural damage was also quantified. A fast probability integrator assessed the scatter in the composite structure response before and after damage. Then the sensitivity of the response to design variables was computed. General-purpose methods, which are applicable to bolted joints in all types of structures and in all fracture processes (from damage initiation to unstable propagation and global structure collapse), were used. These methods were demonstrated for a bolted joint of a polymer matrix composite panel under edge loads. The effects of the fabrication process were included in the simulation of damage in the bolted panel. Results showed that the most effective way to reduce end displacement at fracture is to control both the load and the ply thickness. The cumulative probability for longitudinal stress in all plies was most sensitive to the load; in the 0 deg. plies it was very sensitive to ply thickness. The cumulative probability for transverse stress was most sensitive to the matrix coefficient of thermal expansion. In addition, fiber volume ratio and fiber transverse modulus both contributed significantly to the cumulative probability for the transverse stresses in all the plies.

  13. The impact of joint responses of devices in an airport security system.

    PubMed

    Nie, Xiaofeng; Batta, Rajan; Drury, Colin G; Lin, Li

    2009-02-01

    In this article, we consider a model for an airport security system in which the declaration of a threat is based on the joint responses of inspection devices. This is in contrast to the typical system in which each check station independently declares a passenger as having a threat or not having a threat. In our framework the declaration of threat/no-threat is based upon the passenger scores at the check stations he/she goes through. To do this we use concepts from classification theory in the field of multivariate statistics analysis and focus on the main objective of minimizing the expected cost of misclassification. The corresponding correct classification and misclassification probabilities can be obtained by using a simulation-based method. After computing the overall false alarm and false clear probabilities, we compare our joint response system with two other independently operated systems. A model that groups passengers in a manner that minimizes the false alarm probability while maintaining the false clear probability within specifications set by a security authority is considered. We also analyze the staffing needs at each check station for such an inspection scheme. An illustrative example is provided along with sensitivity analysis on key model parameters. A discussion is provided on some implementation issues, on the various assumptions made in the analysis, and on potential drawbacks of the approach.

  14. The distribution of the intervals between neural impulses in the maintained discharges of retinal ganglion cells.

    PubMed

    Levine, M W

    1991-01-01

    Simulated neural impulse trains were generated by a digital realization of the integrate-and-fire model. The variability in these impulse trains had as its origin a random noise of specified distribution. Three different distributions were used: the normal (Gaussian) distribution (no skew, normokurtic), a first-order gamma distribution (positive skew, leptokurtic), and a uniform distribution (no skew, platykurtic). Despite these differences in the distribution of the variability, the distributions of the intervals between impulses were nearly indistinguishable. These inter-impulse distributions were better fit with a hyperbolic gamma distribution than a hyperbolic normal distribution, although one might expect a better approximation for normally distributed inverse intervals. Consideration of why the inter-impulse distribution is independent of the distribution of the causative noise suggests two putative interval distributions that do not depend on the assumed noise distribution: the log normal distribution, which is predicated on the assumption that long intervals occur with the joint probability of small input values, and the random walk equation, which is the diffusion equation applied to a random walk model of the impulse generating process. Either of these equations provides a more satisfactory fit to the simulated impulse trains than the hyperbolic normal or hyperbolic gamma distributions. These equations also provide better fits to impulse trains derived from the maintained discharges of ganglion cells in the retinae of cats or goldfish. It is noted that both equations are free from the constraint that the coefficient of variation (CV) have a maximum of unity.(ABSTRACT TRUNCATED AT 250 WORDS)
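
    A minimal discrete-time sketch of the simulation described, using hypothetical drift, threshold, and noise parameters, is given below: three noise distributions with matched mean and standard deviation drive the same integrate-and-fire accumulator, and the resulting inter-impulse interval statistics can be compared.

```python
import numpy as np

rng = np.random.default_rng(3)

def isi_distribution(noise_sampler, n_spikes=5_000, drift=0.02, threshold=1.0):
    """Simple integrate-and-fire: accumulate drift + noise each step until the
    threshold is crossed, record the interval length, then reset."""
    intervals, v, count = [], 0.0, 0
    while len(intervals) < n_spikes:
        v += drift + noise_sampler()
        count += 1
        if v >= threshold:
            intervals.append(count)
            v, count = 0.0, 0
    return np.array(intervals)

# Three noise distributions with the same mean (0) and standard deviation (0.05)
sd = 0.05
samplers = {
    "normal":  lambda: rng.normal(0.0, sd),
    "gamma":   lambda: rng.gamma(1.0, sd) - sd,               # first-order gamma, centred
    "uniform": lambda: rng.uniform(-np.sqrt(3) * sd, np.sqrt(3) * sd),
}

for name, sampler in samplers.items():
    isi = isi_distribution(sampler)
    print(f"{name:8s} mean ISI = {isi.mean():6.2f}  CV = {isi.std() / isi.mean():.3f}")
```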

  15. Multivariate probability distribution for sewer system vulnerability assessment under data-limited conditions.

    PubMed

    Del Giudice, G; Padulano, R; Siciliano, D

    2016-01-01

    The lack of geometrical and hydraulic information about sewer networks often excludes the adoption of in-depth modeling tools to obtain prioritization strategies for fund management. The present paper describes a novel statistical procedure for defining the prioritization scheme for preventive maintenance strategies based on a small sample of failure data collected by the Sewer Office of the Municipality of Naples (IT). Novelty issues involve, among others, considering sewer parameters as continuous statistical variables and accounting for their interdependencies. After a statistical analysis of maintenance interventions, the most important available factors affecting the process are selected and their mutual correlations identified. Then, after a Box-Cox transformation of the original variables, a methodology is provided for the evaluation of a vulnerability map of the sewer network by adopting a joint multivariate normal distribution with different parameter sets. The goodness-of-fit is eventually tested for each distribution by means of a multivariate plotting position. The developed methodology is expected to assist municipal engineers in identifying critical sewers, prioritizing sewer inspections in order to fulfill rehabilitation requirements.
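
    The core of the procedure (Box-Cox transformation of each variable followed by fitting a joint multivariate normal distribution) can be sketched as below. The synthetic "failure" attributes, their units, and the scoring of a new pipe by its joint density are illustrative assumptions, not the variables or parameter sets of the Naples data set.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)

# Hypothetical attributes of sewer pipes with recorded failures:
# age (years), diameter (mm), burial depth (m) -- all positive and skewed.
failures = np.column_stack([
    rng.gamma(4.0, 10.0, 200),
    rng.gamma(3.0, 150.0, 200),
    rng.gamma(2.0, 1.2, 200),
])

# Box-Cox transform each variable toward normality, then fit a joint
# multivariate normal distribution to the transformed failure sample.
transformed, lambdas = [], []
for j in range(failures.shape[1]):
    z, lam = stats.boxcox(failures[:, j])
    transformed.append(z)
    lambdas.append(lam)
transformed = np.column_stack(transformed)

mvn = stats.multivariate_normal(mean=transformed.mean(axis=0),
                                cov=np.cov(transformed, rowvar=False))

# Vulnerability score of a new pipe = joint density at its transformed attributes
# (manual Box-Cox with the fitted lambdas; assumes each fitted lambda is non-zero).
new_pipe = np.array([35.0, 400.0, 2.0])
new_z = [(new_pipe[j] ** lambdas[j] - 1.0) / lambdas[j] for j in range(3)]
print("relative vulnerability score:", mvn.pdf(new_z))
```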

  16. Overview of DAN/MSL water and chlorine measurements acquired in Gale area for four years of surface observations

    NASA Astrophysics Data System (ADS)

    Litvak, Maxim

    2017-04-01

    For more than four years, the MSL Curiosity rover (which landed in Gale crater in August 2012) has been traveling toward a layered sedimentary mound containing phyllosilicate and hematite hydrated minerals. Curiosity has already traversed more than 14 km and identified lacustrine deposits left by ancient lakes that filled the Gale area early in the history of Mars. Along the traverse, the Curiosity rover discovered unique signatures of how the Martian environment changed from ancient warm and wet conditions, and a probably habitable environment, to the modern cold and dry climate. We have summarized numerous measurements from the Dynamic Albedo of Neutrons (DAN) instrument on the Curiosity rover to overview variations in the subsurface bound water distribution from wet to dry locations, compared them with other MSL measurements, and related them to the possible distribution of hydrated minerals and the sequence of geological units traversed by Curiosity. We have also performed a joint analysis of the water and chlorine distributions and compared bulk (down to 0.5 m depth) equivalent chlorine concentrations measured by DAN throughout the Gale area with APXS observations of corresponding local surface targets and drill fines.

  17. Mean-field equations for neuronal networks with arbitrary degree distributions.

    PubMed

    Nykamp, Duane Q; Friedman, Daniel; Shaker, Sammy; Shinn, Maxwell; Vella, Michael; Compte, Albert; Roxin, Alex

    2017-04-01

    The emergent dynamics in networks of recurrently coupled spiking neurons depends on the interplay between single-cell dynamics and network topology. Most theoretical studies on network dynamics have assumed simple topologies, such as connections that are made randomly and independently with a fixed probability (Erdös-Rényi network) (ER) or all-to-all connected networks. However, recent findings from slice experiments suggest that the actual patterns of connectivity between cortical neurons are more structured than in the ER random network. Here we explore how introducing additional higher-order statistical structure into the connectivity can affect the dynamics in neuronal networks. Specifically, we consider networks in which the number of presynaptic and postsynaptic contacts for each neuron, the degrees, are drawn from a joint degree distribution. We derive mean-field equations for a single population of homogeneous neurons and for a network of excitatory and inhibitory neurons, where the neurons can have arbitrary degree distributions. Through analysis of the mean-field equations and simulation of networks of integrate-and-fire neurons, we show that such networks have potentially much richer dynamics than an equivalent ER network. Finally, we relate the degree distributions to so-called cortical motifs.

  18. Non-Gaussian probabilistic MEG source localisation based on kernel density estimation☆

    PubMed Central

    Mohseni, Hamid R.; Kringelbach, Morten L.; Woolrich, Mark W.; Baker, Adam; Aziz, Tipu Z.; Probert-Smith, Penny

    2014-01-01

    There is strong evidence to suggest that data recorded from magnetoencephalography (MEG) follows a non-Gaussian distribution. However, existing standard methods for source localisation model the data using only second order statistics, and therefore use the inherent assumption of a Gaussian distribution. In this paper, we present a new general method for non-Gaussian source estimation of stationary signals for localising brain activity from MEG data. By providing a Bayesian formulation for MEG source localisation, we show that the source probability density function (pdf), which is not necessarily Gaussian, can be estimated using multivariate kernel density estimators. In the case of Gaussian data, the solution of the method is equivalent to that of widely used linearly constrained minimum variance (LCMV) beamformer. The method is also extended to handle data with highly correlated sources using the marginal distribution of the estimated joint distribution, which, in the case of Gaussian measurements, corresponds to the null-beamformer. The proposed non-Gaussian source localisation approach is shown to give better spatial estimates than the LCMV beamformer, both in simulations incorporating non-Gaussian signals, and in real MEG measurements of auditory and visual evoked responses, where the highly correlated sources are known to be difficult to estimate. PMID:24055702

  19. New scaling model for variables and increments with heavy-tailed distributions

    NASA Astrophysics Data System (ADS)

    Riva, Monica; Neuman, Shlomo P.; Guadagnini, Alberto

    2015-06-01

    Many hydrological (as well as diverse earth, environmental, ecological, biological, physical, social, financial and other) variables, Y, exhibit frequency distributions that are difficult to reconcile with those of their spatial or temporal increments, ΔY. Whereas distributions of Y (or its logarithm) are at times slightly asymmetric with relatively mild peaks and tails, those of ΔY tend to be symmetric with peaks that grow sharper, and tails that become heavier, as the separation distance (lag) between pairs of Y values decreases. No statistical model known to us captures these behaviors of Y and ΔY in a unified and consistent manner. We propose a new, generalized sub-Gaussian model that does so. We derive analytical expressions for probability distribution functions (pdfs) of Y and ΔY as well as corresponding lead statistical moments. In our model the peak and tails of the ΔY pdf scale with lag in line with observed behavior. The model allows one to estimate, accurately and efficiently, all relevant parameters by analyzing jointly sample moments of Y and ΔY. We illustrate key features of our new model and method of inference on synthetically generated samples and neutron porosity data from a deep borehole.

  20. Mean-field equations for neuronal networks with arbitrary degree distributions

    NASA Astrophysics Data System (ADS)

    Nykamp, Duane Q.; Friedman, Daniel; Shaker, Sammy; Shinn, Maxwell; Vella, Michael; Compte, Albert; Roxin, Alex

    2017-04-01

    The emergent dynamics in networks of recurrently coupled spiking neurons depends on the interplay between single-cell dynamics and network topology. Most theoretical studies on network dynamics have assumed simple topologies, such as connections that are made randomly and independently with a fixed probability (Erdös-Rényi network) (ER) or all-to-all connected networks. However, recent findings from slice experiments suggest that the actual patterns of connectivity between cortical neurons are more structured than in the ER random network. Here we explore how introducing additional higher-order statistical structure into the connectivity can affect the dynamics in neuronal networks. Specifically, we consider networks in which the number of presynaptic and postsynaptic contacts for each neuron, the degrees, are drawn from a joint degree distribution. We derive mean-field equations for a single population of homogeneous neurons and for a network of excitatory and inhibitory neurons, where the neurons can have arbitrary degree distributions. Through analysis of the mean-field equations and simulation of networks of integrate-and-fire neurons, we show that such networks have potentially much richer dynamics than an equivalent ER network. Finally, we relate the degree distributions to so-called cortical motifs.

  1. FDR doesn't Tell the Whole Story: Joint Influence of Effect Size and Covariance Structure on the Distribution of the False Discovery Proportions

    NASA Technical Reports Server (NTRS)

    Feiveson, Alan H.; Ploutz-Snyder, Robert; Fiedler, James

    2011-01-01

    As part of a 2009 Annals of Statistics paper, Gavrilov, Benjamini, and Sarkar report results of simulations that estimated the false discovery rate (FDR) for equally correlated test statistics using a well-known multiple-test procedure. In our study we estimate the distribution of the false discovery proportion (FDP) for the same procedure under a variety of correlation structures among multiple dependent variables in a MANOVA context. Specifically, we study the mean (the FDR), skewness, kurtosis, and percentiles of the FDP distribution in the case of multiple comparisons that give rise to correlated non-central t-statistics when results at several time periods are being compared to baseline. Even if the FDR achieves its nominal value, other aspects of the distribution of the FDP depend on the interaction between signed effect sizes and correlations among variables, proportion of true nulls, and number of dependent variables. We show examples where the mean FDP (the FDR) is 10% as designed, yet there is a surprising probability of having 30% or more false discoveries. Thus, in a real experiment, the proportion of false discoveries could be quite different from the stipulated FDR.
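
    A small simulation in the spirit of the study is sketched below: equicorrelated z-statistics with a fraction of true nulls are screened with the Benjamini-Hochberg procedure, and the whole distribution of the false discovery proportion, not just its mean, is examined. The number of tests, correlation, effect size, and nominal level are arbitrary; the paper's MANOVA setting with correlated non-central t-statistics and the specific multiple-test procedure of Gavrilov, Benjamini, and Sarkar differ.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2024)

def fdp_one_run(m=100, m0=80, effect=3.0, rho=0.5, q=0.10, rng=rng):
    """One simulated experiment: equicorrelated z-statistics, BH procedure, FDP."""
    shared = rng.standard_normal()
    z = np.sqrt(rho) * shared + np.sqrt(1 - rho) * rng.standard_normal(m)
    z[m0:] += effect                             # non-null effects
    p = 2 * norm.sf(np.abs(z))
    # Benjamini-Hochberg step-up procedure at level q
    order = np.argsort(p)
    thresh = q * np.arange(1, m + 1) / m
    below = np.nonzero(p[order] <= thresh)[0]
    if below.size == 0:
        return 0.0
    rejected = order[: below.max() + 1]
    false = np.sum(rejected < m0)                # rejected true nulls
    return false / rejected.size

fdp = np.array([fdp_one_run() for _ in range(5000)])
print("mean FDP (the FDR):", fdp.mean())
print("P(FDP >= 0.30):    ", np.mean(fdp >= 0.30))
```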

  2. HABITAT ASSESSMENT USING A RANDOM PROBABILITY BASED SAMPLING DESIGN: ESCAMBIA RIVER DELTA, FLORIDA

    EPA Science Inventory

    Smith, Lisa M., Darrin D. Dantin and Steve Jordan. In press. Habitat Assessment Using a Random Probability Based Sampling Design: Escambia River Delta, Florida (Abstract). To be presented at the SWS/GERS Fall Joint Society Meeting: Communication and Collaboration: Coastal Systems...

  3. Interleaved Training and Training-Based Transmission Design for Hybrid Massive Antenna Downlink

    NASA Astrophysics Data System (ADS)

    Zhang, Cheng; Jing, Yindi; Huang, Yongming; Yang, Luxi

    2018-06-01

    In this paper, we study the beam-based training design jointly with the transmission design for hybrid massive antenna single-user (SU) and multiple-user (MU) systems where outage probability is adopted as the performance measure. For SU systems, we propose an interleaved training design to concatenate the feedback and training procedures, thus making the training length adaptive to the channel realization. Exact analytical expressions are derived for the average training length and the outage probability of the proposed interleaved training. For MU systems, we propose a joint design for the beam-based interleaved training, beam assignment, and MU data transmissions. Two solutions for the beam assignment are provided with different complexity-performance tradeoff. Analytical results and simulations show that for both SU and MU systems, the proposed joint training and transmission designs achieve the same outage performance as the traditional full-training scheme but with significant saving in the training overhead.

  4. Models of multidimensional discrete distribution of probabilities of random variables in information systems

    NASA Astrophysics Data System (ADS)

    Gromov, Yu Yu; Minin, Yu V.; Ivanova, O. G.; Morozova, O. N.

    2018-03-01

    Multidimensional discrete probability distributions of independent random variables were derived. Their one-dimensional counterparts are widely used in probability theory. Generating functions of these multidimensional distributions were also obtained.

  5. Push-off tests and strength evaluation of joints combining shrink fitting with bonding

    NASA Astrophysics Data System (ADS)

    Yoneno, Masahiro; Sawa, Toshiyuki; Shimotakahara, Ken; Motegi, Yoichi

    1997-03-01

    Shrink-fitted joints have been used in mechanical structures. Recently, joints combining shrink fitting with anaerobic adhesives bonded between the shrink-fitted surfaces have appeared, in order to increase the joint strength. In this paper, push-off tests on the strength of joints combining shrink fitting with bonding were carried out using a material testing machine. In addition, the push-off strength of shrink-fitted joints without an anaerobic adhesive was also measured. In the experiments, the effects of the shrinking allowance and the outer diameter of the rings on the joint strength are examined. The interface stress distribution in bonded shrink-fitted joints subjected to a push-off load is analyzed using the axisymmetric theory of elasticity as a four-body contact problem. Using the interface stress distribution, a method for estimating joint strength is proposed. The experimental results are in fairly good agreement with the numerical results. It is found that the strength of the combination joints is greater than that of shrink-fitted joints.

  6. The World According to de Finetti: On de Finetti's Theory of Probability and Its Application to Quantum Mechanics

    NASA Astrophysics Data System (ADS)

    Berkovitz, Joseph

    Bruno de Finetti is one of the founding fathers of the subjectivist school of probability, where probabilities are interpreted as rational degrees of belief. His work on the relation between the theorems of probability and rationality is among the corner stones of modern subjective probability theory. De Finetti maintained that rationality requires that degrees of belief be coherent, and he argued that the whole of probability theory could be derived from these coherence conditions. De Finetti's interpretation of probability has been highly influential in science. This paper focuses on the application of this interpretation to quantum mechanics. We argue that de Finetti held that the coherence conditions of degrees of belief in events depend on their verifiability. Accordingly, the standard coherence conditions of degrees of belief that are familiar from the literature on subjective probability only apply to degrees of belief in events which could (in principle) be jointly verified; and the coherence conditions of degrees of belief in events that cannot be jointly verified are weaker. While the most obvious explanation of de Finetti's verificationism is the influence of positivism, we argue that it could be motivated by the radical subjectivist and instrumental nature of probability in his interpretation; for as it turns out, in this interpretation it is difficult to make sense of the idea of coherent degrees of belief in, and accordingly probabilities of unverifiable events. We then consider the application of this interpretation to quantum mechanics, concentrating on the Einstein-Podolsky-Rosen experiment and Bell's theorem.

  7. Independent Events in Elementary Probability Theory

    ERIC Educational Resources Information Center

    Csenki, Attila

    2011-01-01

    In Probability and Statistics taught to mathematicians as a first introduction or to a non-mathematical audience, joint independence of events is introduced by requiring that the multiplication rule is satisfied. The following statement is usually tacitly assumed to hold (and, at best, intuitively motivated): If the n events E₁,…

  8. "Combined Diagnostic Tool" APPlication to a Retrospective Series of Patients Undergoing Total Joint Revision Surgery.

    PubMed

    Gallazzi, Enrico; Drago, Lorenzo; Baldini, Andrea; Stockley, Ian; George, David A; Scarponi, Sara; Romanò, Carlo L

    2017-01-01

    Background: Differentiating between septic and aseptic joint prostheses may be challenging, since no single test is able to confirm or rule out infection. The choice and interpretation of the panel of tests performed in any case often relies on empirical evaluation and poorly validated scores. The "Combined Diagnostic Tool (CDT)" App, a smartphone application for iOS, was developed to automatically calculate the probability of a periprosthetic joint infection on the basis of the relative sensitivity and specificity of the positive and negative diagnostic tests performed in any given patient. Objective: The aim of the present study was to apply the CDT software to investigate the ability of the tests routinely performed in three high-volume European centers to diagnose a periprosthetic infection. Methods: This three-center retrospective study included 120 consecutive patients undergoing total hip or knee revision, comprising 65 infected patients (Group A) and 55 patients without infection (Group B). The following parameters were evaluated: the number and type of positive and negative diagnostic tests performed pre-, intra- and post-operatively, and the resulting probability calculated by the CDT App of having a peri-prosthetic joint infection, based on pre-, intra- and post-operative combined tests. Results: Serological tests were the most commonly performed, with an average of 2.7 tests per patient for Group A and 2.2 for Group B, followed by joint aspiration (0.9 and 0.8 tests per patient, respectively) and imaging techniques (0.5 and 0.2 tests per patient). The mean CDT App calculated probability of having an infection based on pre-operative tests was 79.4% for patients in Group A and 35.7% in Group B. Twenty-nine patients in Group A had a > 10% chance of not having an infection, and 29 of Group B had a > 10% chance of having an infection. Conclusion: This is the first retrospective study focused on investigating the number and type of tests commonly performed prior to joint revision surgery and aimed at evaluating their combined ability to diagnose a peri-prosthetic infection. The CDT App allowed us to demonstrate that, on average, the routine combination of commonly used tests is unable to diagnose a peri-prosthetic infection pre-operatively with a probability higher than 90%.
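
    The general idea of combining tests of known sensitivity and specificity into a single infection probability can be sketched with a sequential Bayes update on the pre-test odds, as below. The prior, the test list, and the accuracy figures are hypothetical, and the CDT App's actual calculation may differ in detail.

```python
def update_infection_probability(prior, test_results):
    """Sequentially update P(infection) with Bayes' rule, assuming conditionally
    independent tests with known sensitivity and specificity (illustrative values)."""
    p = prior
    for positive, sens, spec in test_results:
        if positive:
            likelihood_ratio = sens / (1.0 - spec)
        else:
            likelihood_ratio = (1.0 - sens) / spec
        odds = p / (1.0 - p) * likelihood_ratio
        p = odds / (1.0 + odds)
    return p

# Hypothetical pre-operative work-up: (result, sensitivity, specificity)
tests = [
    (True,  0.75, 0.70),   # e.g. an elevated inflammatory marker
    (True,  0.85, 0.90),   # e.g. a positive joint aspiration culture
    (False, 0.65, 0.95),   # e.g. a negative imaging study
]
print("post-test probability of infection:",
      round(update_infection_probability(prior=0.20, test_results=tests), 3))
```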

  9. On the detection of very high redshift gamma-ray bursts with Swift

    NASA Astrophysics Data System (ADS)

    Salvaterra, R.; Campana, S.; Chincarini, G.; Tagliaferri, G.; Covino, S.

    2007-09-01

    We compute the probability of detecting long gamma-ray bursts (GRBs) at z >= 5 with Swift, assuming that GRBs form preferentially in low-metallicity environments. The model fits both the observed Burst and Transient Source Experiment (BATSE) and Swift GRB differential peak flux distributions well and is consistent with the number of z >= 2.5 detections in the 2-yr Swift data. We find that the probability of observing a burst at z >= 5 becomes larger than 10 per cent for photon fluxes P < 1 ph s-1 cm-2, consistent with the number of confirmed detections. The corresponding fraction of z >= 5 bursts in the Swift catalogue is ~10-30 per cent depending on the adopted metallicity threshold for GRB formation. We propose to use the computed probability as a tool to identify high-redshift GRBs. By jointly considering promptly available information provided by Swift and model results, we can select reliable z >= 5 candidates in a few hours from the BAT detection. We test the procedure against last year Swift data: only three bursts match all our requirements, two being confirmed at z >= 5. Another three possible candidates are picked up by slightly relaxing the adopted criteria. No low-z interloper is found among the six candidates.

  10. Mechanical and interfacial characterization of laser welded Co-Cr alloy with different joint configurations

    PubMed Central

    Kokolis, John; Chakmakchi, Makdad; Theocharopoulos, Antonios; Prombonas, Anthony

    2015-01-01

    PURPOSE The mechanical and interfacial characterization of laser-welded Co-Cr alloy with two different joint designs. MATERIALS AND METHODS Dumbbell cast specimens (n=30) were divided into 3 groups (R, I, K; n=10 each). Group R consisted of intact specimens, group I of specimens sectioned with a straight cut, and group K of specimens with a 45° bevel made at one welding edge. The microstructure and the elemental distributions of the alloy and welding regions were examined by SEM/EDX analysis, and the specimens were then loaded in tension up to fracture. The tensile strength (TS) and elongation (ε) were determined and statistically compared among groups employing one-way ANOVA, the SNK multiple comparison test (α=.05), and Weibull analysis, in which the Weibull modulus m and characteristic strength σ0 were identified. Fractured surfaces were imaged by SEM. RESULTS SEM/EDX analysis showed that the cast alloy consists of two phases with differences in mean atomic number contrast, while no such contrast was identified in the welded regions. EDX analysis revealed an increased Cr and Mo content at the alloy-joint interface. All mechanical properties of group I (TS, ε, m and σ0) were found to be inferior to those of group R, while group K showed intermediate values, without significant differences from R and I apart from elongation relative to group R. The fractured surfaces of all groups showed an extensive dendritic pattern, although with a finer structure in the case of the welded groups. CONCLUSION The K-shape joint configuration should be preferred over the I, as it demonstrates improved mechanical strength and survival probability. PMID:25722836
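
    The Weibull analysis mentioned in the methods can be sketched as follows: the two-parameter Weibull CDF is linearised, median-rank plotting positions are assigned to the sorted strengths, and the Weibull modulus m and characteristic strength σ0 follow from a linear fit. The strength values below are invented for illustration.

```python
import numpy as np

def weibull_fit(strengths):
    """Estimate Weibull modulus m and characteristic strength sigma_0 by linear
    regression on the linearised Weibull CDF with median-rank plotting positions."""
    s = np.sort(np.asarray(strengths, float))
    n = s.size
    F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)      # median-rank estimator
    x = np.log(s)
    y = np.log(-np.log(1.0 - F))
    m, c = np.polyfit(x, y, 1)                        # y = m*ln(sigma) - m*ln(sigma_0)
    sigma_0 = np.exp(-c / m)
    return m, sigma_0

# Hypothetical tensile strengths (MPa) for one specimen group
ts = [610, 655, 670, 690, 705, 720, 735, 760, 790, 820]
m, sigma_0 = weibull_fit(ts)
print(f"Weibull modulus m = {m:.1f}, characteristic strength = {sigma_0:.0f} MPa")
```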

  11. Colonization and extinction in dynamic habitats: an occupancy approach for a Great Plains stream fish assemblage.

    PubMed

    Falke, Jeffrey A; Bailey, Larissa L; Fausch, Kurt D; Bestgen, Kevin R

    2012-04-01

    Despite the importance of habitat in determining species distribution and persistence, habitat dynamics are rarely modeled in studies of metapopulations. We used an integrated habitat-occupancy model to simultaneously quantify habitat change, site fidelity, and local colonization and extinction rates for larvae of a suite of Great Plains stream fishes in the Arikaree River, eastern Colorado, USA, across three years. Sites were located along a gradient of flow intermittency and groundwater connectivity. Hydrology varied across years: the first and third being relatively wet and the second dry. Despite hydrologic variation, our results indicated that site suitability was random from one year to the next. Occupancy probabilities were also independent of previous habitat and occupancy state for most species, indicating little site fidelity. Climate and groundwater connectivity were important drivers of local extinction and colonization, but the importance of groundwater differed between periods. Across species, site extinction probabilities were highest during the transition from wet to dry conditions (range: 0.52-0.98), and the effect of groundwater was apparent with higher extinction probabilities for sites not fed by groundwater. Colonization probabilities during this period were relatively low for both previously dry sites (range: 0.02-0.38) and previously wet sites (range: 0.02-0.43). In contrast, no sites dried or remained dry during the transition from dry to wet conditions, yielding lower but still substantial extinction probabilities (range: 0.16-0.63) and higher colonization probabilities (range: 0.06-0.86), with little difference among sites with and without groundwater. This approach of jointly modeling both habitat change and species occupancy will likely be useful to incorporate effects of dynamic habitat on metapopulation processes and to better inform appropriate conservation actions.

  12. Statistical tests for whether a given set of independent, identically distributed draws comes from a specified probability density.

    PubMed

    Tygert, Mark

    2010-09-21

    We discuss several tests for determining whether a given set of independent and identically distributed (i.i.d.) draws does not come from a specified probability density function. The most commonly used are Kolmogorov-Smirnov tests, particularly Kuiper's variant, which focus on discrepancies between the cumulative distribution function for the specified probability density and the empirical cumulative distribution function for the given set of i.i.d. draws. Unfortunately, variations in the probability density function often get smoothed over in the cumulative distribution function, making it difficult to detect discrepancies in regions where the probability density is small in comparison with its values in surrounding regions. We discuss tests without this deficiency, complementing the classical methods. The tests of the present paper are based on the plain fact that it is unlikely to draw a random number whose probability is small, provided that the draw is taken from the same distribution used in calculating the probability (thus, if we draw a random number whose probability is small, then we can be confident that we did not draw the number from the same distribution used in calculating the probability).
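    A rough numerical illustration of the idea behind these tests (not the paper's exact statistic): under the specified density f, it is unlikely to draw a point where f is small, so one can estimate P_f( f(X) <= f(x_obs) ) by Monte Carlo for an observed draw. The density and observed value below are assumptions.

    ```python
    import numpy as np
    from scipy import stats

    f = stats.norm(loc=0.0, scale=1.0)            # specified probability density
    x_obs = 3.2                                   # a single observed draw (hypothetical)

    sims = f.rvs(size=200_000, random_state=0)    # draws from the specified density
    p_small_density = np.mean(f.pdf(sims) <= f.pdf(x_obs))
    print(f"P(f(X) <= f(x_obs)) ~= {p_small_density:.4f}")
    ```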

  13. Deterministic joint remote preparation of an equatorial hybrid state via high-dimensional Einstein-Podolsky-Rosen pairs: active versus passive receiver

    NASA Astrophysics Data System (ADS)

    Bich, Cao Thi; Dat, Le Thanh; Van Hop, Nguyen; An, Nguyen Ba

    2018-04-01

    Entanglement plays a vital and in many cases non-replaceable role in the quantum network communication. Here, we propose two new protocols to jointly and remotely prepare a special so-called bipartite equatorial state which is hybrid in the sense that it entangles two Hilbert spaces with arbitrary different dimensions D and N (i.e., a type of entanglement between a quDit and a quNit). The quantum channels required to do that are however not necessarily hybrid. In fact, we utilize four high-dimensional Einstein-Podolsky-Rosen pairs, two of which are quDit-quDit entanglements, while the other two are quNit-quNit ones. In the first protocol the receiver has to be involved actively in the process of remote state preparation, while in the second protocol the receiver is passive as he/she needs to participate only in the final step for reconstructing the target hybrid state. Each protocol meets a specific circumstance that may be encountered in practice and both can be performed with unit success probability. Moreover, the concerned equatorial hybrid entangled state can also be jointly prepared for two receivers at two separated locations by slightly modifying the initial particles' distribution, thereby establishing between them an entangled channel ready for a later use.

  14. Topological characterization of antireflective and hydrophobic rough surfaces: are random process theory and fractal modeling applicable?

    NASA Astrophysics Data System (ADS)

    Borri, Claudia; Paggi, Marco

    2015-02-01

    The random process theory (RPT) has been widely applied to predict the joint probability distribution functions (PDFs) of asperity heights and curvatures of rough surfaces. A check of the predictions of RPT against the actual statistics of numerically generated random fractal surfaces and of real rough surfaces has been only partially undertaken. The present experimental and numerical study provides a deep critical comparison on this matter, providing some insight into the capabilities and limitations in applying RPT and fractal modeling to antireflective and hydrophobic rough surfaces, two important types of textured surfaces. A multi-resolution experimental campaign using a confocal profilometer with different lenses is carried out and a comprehensive software for the statistical description of rough surfaces is developed. It is found that the topology of the analyzed textured surfaces cannot be fully described according to RPT and fractal modeling. The following complexities emerge: (i) the presence of cut-offs or bi-fractality in the power-law power-spectral density (PSD) functions; (ii) a more pronounced shift of the PSD by changing resolution as compared to what was expected from fractal modeling; (iii) inaccuracy of the RPT in describing the joint PDFs of asperity heights and curvatures of textured surfaces; (iv) lack of resolution-invariance of joint PDFs of textured surfaces in case of special surface treatments, not accounted for by fractal modeling.

  15. Practical quantum private query with better performance in resisting joint-measurement attack

    NASA Astrophysics Data System (ADS)

    Wei, Chun-Yan; Wang, Tian-Yin; Gao, Fei

    2016-04-01

    As a kind of practical protocol, quantum-key-distribution (QKD)-based quantum private queries (QPQs) have drawn lots of attention. However, joint-measurement (JM) attack poses a noticeable threat to the database security in such protocols. That is, by JM attack a malicious user can illegally elicit many more items from the database than the average amount an honest one can obtain. Taking Jacobi et al.'s protocol as an example, by JM attack a malicious user can obtain as many as 500 bits, instead of the expected 2.44 bits, from a 10^4-bit database in one query. It is a noticeable security flaw in theory, and would also arise in application with the development of quantum memories. To solve this problem, we propose a QPQ protocol based on a two-way QKD scheme, which behaves much better in resisting JM attack. Concretely, the user Alice cannot get more database items by conducting JM attack on the qubits because she has to send them back to Bob (the database holder) before knowing which of them should be jointly measured. Furthermore, JM attack by both Alice and Bob would be detected with certain probability, which is quite different from previous protocols. Moreover, our protocol retains the good characters of QKD-based QPQs, e.g., it is loss tolerant and robust against quantum memory attack.

  16. Strength and Mechanics of Bonded Scarf Joints for Repair of Composite Materials

    NASA Technical Reports Server (NTRS)

    Pipes, R. B.; Adkins, D. W.

    1982-01-01

    Experimental and analytical investigations of scarf joints indicate that slight bluntness of adherend tips induces adhesive stress concentrations which significantly reduce joint strength, and the stress distribution through the adhesive thickness is non-uniform and has significant stress concentrations at the ends of the joint. The laminate stacking sequence can have important effects on the adhesive stress distribution. A significant improvement in joint strength is possible by increasing overlap at the expense of raising the repair slightly above the original surface. Although a surface grinder was used to make most experimental specimens, a hand-held rotary bur can make a surprisingly good scarf. Scarf joints with doublers on one side, such as might be used for repair, bend under tensile loads and may actually be weaker than joints without doublers.

  17. Joint search and sensor management for geosynchronous satellites

    NASA Astrophysics Data System (ADS)

    Zatezalo, A.; El-Fallah, A.; Mahler, R.; Mehra, R. K.; Pham, K.

    2008-04-01

    Joint search and sensor management for space situational awareness presents daunting scientific and practical challenges, as it requires simultaneously searching for new space objects and updating the catalog of current ones. We demonstrate a new approach to joint search and sensor management by utilizing the Posterior Expected Number of Targets (PENT) as the objective function, an observation model for a space-based EO/IR sensor, and a Probability Hypothesis Density Particle Filter (PHD-PF) tracker. Simulations and results using actual geosynchronous satellites are presented.

  18. The difference between two random mixed quantum states: exact and asymptotic spectral analysis

    NASA Astrophysics Data System (ADS)

    Mejía, José; Zapata, Camilo; Botero, Alonso

    2017-01-01

    We investigate the spectral statistics of the difference of two density matrices, each of which is independently obtained by partially tracing a random bipartite pure quantum state. We first show how a closed-form expression for the exact joint eigenvalue probability density function for arbitrary dimensions can be obtained from the joint probability density function of the diagonal elements of the difference matrix, which is straightforward to compute. Subsequently, we use standard results from free probability theory to derive a relatively simple analytic expression for the asymptotic eigenvalue density (AED) of the difference matrix ensemble, and using Carlson’s theorem, we obtain an expression for its absolute moments. These results allow us to quantify the typical asymptotic distance between the two random mixed states using various distance measures; in particular, we obtain the almost sure asymptotic behavior of the operator norm distance and the trace distance.
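    A small numerical companion to the setting described, assuming arbitrary dimensions of my own choosing: draw two independent induced random mixed states by partially tracing random bipartite pure states, then compute the trace distance and operator-norm distance of their difference.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    dA, dB = 8, 64                                  # system / environment dimensions (assumed)

    def random_mixed_state(dA, dB):
        # A random pure state on C^dA (x) C^dB written as a dA x dB coefficient matrix
        psi = rng.normal(size=(dA, dB)) + 1j * rng.normal(size=(dA, dB))
        psi /= np.linalg.norm(psi)
        return psi @ psi.conj().T                   # partial trace over the second factor

    delta = random_mixed_state(dA, dB) - random_mixed_state(dA, dB)
    eigs = np.linalg.eigvalsh(delta)                # delta is Hermitian and traceless
    print(f"trace distance = {0.5 * np.abs(eigs).sum():.4f}, "
          f"operator norm = {np.abs(eigs).max():.4f}")
    ```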

  19. Birth/birth-death processes and their computable transition probabilities with biological applications.

    PubMed

    Ho, Lam Si Tung; Xu, Jason; Crawford, Forrest W; Minin, Vladimir N; Suchard, Marc A

    2018-03-01

    Birth-death processes track the size of a univariate population, but many biological systems involve interaction between populations, necessitating models for two or more populations simultaneously. A lack of efficient methods for evaluating finite-time transition probabilities of bivariate processes, however, has restricted statistical inference in these models. Researchers rely on computationally expensive methods such as matrix exponentiation or Monte Carlo approximation, restricting likelihood-based inference to small systems, or indirect methods such as approximate Bayesian computation. In this paper, we introduce the birth/birth-death process, a tractable bivariate extension of the birth-death process, where rates are allowed to be nonlinear. We develop an efficient algorithm to calculate its transition probabilities using a continued fraction representation of their Laplace transforms. Next, we identify several exemplary models arising in molecular epidemiology, macro-parasite evolution, and infectious disease modeling that fall within this class, and demonstrate advantages of our proposed method over existing approaches to inference in these models. Notably, the ubiquitous stochastic susceptible-infectious-removed (SIR) model falls within this class, and we emphasize that computable transition probabilities newly enable direct inference of parameters in the SIR model. We also propose a very fast method for approximating the transition probabilities under the SIR model via a novel branching process simplification, and compare it to the continued fraction representation method with application to the 17th century plague in Eyam. Although the two methods produce similar maximum a posteriori estimates, the branching process approximation fails to capture the correlation structure in the joint posterior distribution.

  20. Effect of joint spacing and joint dip on the stress distribution around tunnels using different numerical methods

    NASA Astrophysics Data System (ADS)

    Nikadat, Nooraddin; Fatehi Marji, Mohammad; Rahmannejad, Reza; Yarahmadi Bafghi, Alireza

    2016-11-01

    The geometry (spacing and orientation) of joints in the surrounding rock mass may affect the stability of tunnels under different conditions. In this study, by comparing the results obtained by three numerical methods, i.e. the finite element method (Phase2), the discrete element method (UDEC) and the indirect boundary element method (TFSDDM), the effects of joint spacing and joint dip on the stress distribution around rock tunnels are numerically studied. These comparisons indicate the validity of the stress analyses around circular rock tunnels. The analyses also reveal that, for a semi-continuous environment, the boundary element method gives more accurate results than the finite element and distinct element methods. In the indirect boundary element method, the displacements due to joints of different spacings and dips are estimated using displacement discontinuity (DD) formulations, and the total stress distribution around the tunnel is obtained using fictitious stress (FS) formulations.

  1. Evaluation of the joint distribution at disease presentation of patients with rheumatoid arthritis: a large study across continents.

    PubMed

    Bergstra, Sytske Anne; Chopra, Arvind; Saluja, Manjit; Vega-Morales, David; Govind, Nimmisha; Huizinga, Tom W J; van der Helm-van Mil, Annette

    2017-01-01

    Genetic and environmental risk factors for rheumatoid arthritis (RA) are population dependent and may affect disease expression. Therefore, we studied tender and swollen joint involvement in patients newly diagnosed with RA in four countries and performed a subanalysis within countries to assess whether autoantibody positivity affected disease expression. Patients with symptom duration <2 years fulfilling the American College of Rheumatology/European League Against Rheumatism 2010 RA classification criteria were selected from METEOR (Measurement of Efficacy of Treatment in the Era of Outcome in Rheumatology), an international observational database, and the Dutch Leiden Early Arthritis Clinic. Indian (n=947), Mexican (n=141), South African (n=164) and Dutch (n=947) autoantibody-positive and negative patients with RA, matched by symptom duration, were studied for swollen and tender joint distribution. Between countries, the reported distribution of swollen joints differed, with more knee synovitis in Mexico, South Africa and India compared with the Netherlands (37%, 36%, 30% and 13%) and more elbow (29%, 23%, 7%, 7%) and shoulder synovitis (21%, 11%, 0%, 1%) in Mexico and South Africa compared with India and the Netherlands. Since the number of autoantibody-negative patients in Mexico and South Africa was limited, Indian and Dutch autoantibody-positive and negative patients with RA were compared. The number of swollen and tender joints was higher in autoantibody-negative patients, but the overall distribution of involved joints was similar. Joint involvement at diagnosis does not differ between autoantibody-positive and negative patients with RA in India and the Netherlands. However, joint involvement is reported differently across countries. More research is needed to determine whether these differences are cultural and/or pathogenetic.

  2. Joint modelling of annual maximum drought severity and corresponding duration

    NASA Astrophysics Data System (ADS)

    Tosunoglu, Fatih; Kisi, Ozgur

    2016-12-01

    In recent years, the joint distribution properties of drought characteristics (e.g. severity, duration and intensity) have been widely evaluated using copulas. However, history of copulas in modelling drought characteristics obtained from streamflow data is still short, especially in semi-arid regions, such as Turkey. In this study, unlike previous studies, drought events are characterized by annual maximum severity (AMS) and corresponding duration (CD) which are extracted from daily streamflow of the seven gauge stations located in Çoruh Basin, Turkey. On evaluation of the various univariate distributions, the Exponential, Weibull and Logistic distributions are identified as marginal distributions for the AMS and CD series. Archimedean copulas, namely Ali-Mikhail-Haq, Clayton, Frank and Gumbel-Hougaard, are then employed to model joint distribution of the AMS and CD series. With respect to the Anderson Darling and Cramér-von Mises statistical tests and the tail dependence assessment, Gumbel-Hougaard copula is identified as the most suitable model for joint modelling of the AMS and CD series at each station. Furthermore, the developed Gumbel-Hougaard copulas are used to derive the conditional and joint return periods of the AMS and CD series which can be useful for designing and management of reservoirs in the basin.
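    A minimal sketch of how joint "OR" and "AND" return periods follow from a Gumbel-Hougaard copula once the marginals are fixed, as in the approach described above; the marginal distributions, copula parameter and design pair below are illustrative assumptions, not the fitted values for the Çoruh Basin stations.

    ```python
    import numpy as np
    from scipy import stats

    def gumbel_hougaard(u, v, theta):
        return np.exp(-(((-np.log(u)) ** theta + (-np.log(v)) ** theta) ** (1.0 / theta)))

    ams = stats.weibull_min(c=1.8, scale=120.0)    # annual maximum severity marginal (assumed)
    cd = stats.expon(scale=45.0)                   # corresponding duration marginal (assumed)
    theta = 2.5                                    # copula dependence parameter (assumed)

    s, d = 250.0, 90.0                             # a candidate severity/duration design pair
    u, v = ams.cdf(s), cd.cdf(d)
    C = gumbel_hougaard(u, v, theta)
    T_or = 1.0 / (1.0 - C)                         # return period of {S > s or D > d}
    T_and = 1.0 / (1.0 - u - v + C)                # return period of {S > s and D > d}
    print(f"T_or = {T_or:.1f} yr, T_and = {T_and:.1f} yr")
    ```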

  3. Copula-based assessment of the relationship between flood peaks and flood volumes using information on historical floods by Bayesian Monte Carlo Markov Chain simulations

    NASA Astrophysics Data System (ADS)

    Gaál, Ladislav; Szolgay, Ján; Bacigál, Tomáš; Kohnová, Silvia

    2010-05-01

    Copula-based estimation methods of hydro-climatological extremes have increasingly been gaining the attention of researchers and practitioners in the last couple of years. Unlike the traditional estimation methods which are based on bivariate cumulative distribution functions (CDFs), copulas are a relatively flexible tool of statistics that allow for modelling dependencies between two or more variables such as flood peaks and flood volumes without making strict assumptions on the marginal distributions. The dependence structure and the reliability of the joint estimates of hydro-climatological extremes, mainly in the right tail of the joint CDF, depend not only on the particular copula adopted but also on the data available for the estimation of the marginal distributions of the individual variables. Generally, data samples for frequency modelling have limited temporal extent, which is a considerable drawback of frequency analyses in practice. Therefore, it is advised to deal with statistical methods that improve any part of the process of copula construction and result in more reliable design values of hydrological variables. The scarcity of the data sample mostly in the extreme tail of the joint CDF can be bypassed, e.g., by using a considerably larger amount of simulated data from rainfall-runoff analysis or by including historical information on the variables under study. The latter approach of data extension is used here to make the quantile estimates of the individual marginals of the copula more reliable. In the presented paper it is proposed to use historical information in the frequency analysis of the marginal distributions in the framework of Bayesian Monte Carlo Markov Chain (MCMC) simulations. Generally, a Bayesian approach allows for a straightforward combination of different sources of information on floods (e.g. flood data from systematic measurements and historical flood records, respectively) in terms of a product of the corresponding likelihood functions. On the other hand, the MCMC algorithm is a numerical approach for sampling from the likelihood distributions. The Bayesian MCMC methods therefore provide an attractive way to estimate the uncertainty in parameters and quantile metrics of frequency distributions. The applicability of the method is demonstrated in a case study of the hydroelectric power station Orlík on the Vltava River. This site has a key role in the flood prevention of Prague, the capital city of the Czech Republic. The record length of the available flood data is 126 years from the period 1877-2002, while the flood event observed in 2002 that caused extensive damage and numerous casualties is treated as a historic one. To estimate the joint probabilities of flood peaks and volumes, different copulas are fitted and their goodness-of-fit is evaluated by bootstrap simulations. Finally, selected quantiles of flood volumes conditioned on given flood peaks are derived and compared with those obtained by the traditional method used in the practice of water management specialists of the Vltava River.
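    A compact Metropolis sketch of the Bayesian idea described above: the posterior of the marginal distribution's parameters combines the likelihood of the systematically gauged annual maxima with a censored (binomial) likelihood for a pre-gauging historical period in which a known number of floods exceeded a perception threshold. The Gumbel marginal, flat prior and all numbers are assumptions for illustration, not the Orlík data.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    systematic = stats.gumbel_r(loc=800, scale=250).rvs(126, random_state=rng)  # synthetic gauged record
    h_years, k_exceed, threshold = 200, 2, 2500.0    # historical period information (hypothetical)

    def log_post(params):
        loc, scale = params
        if scale <= 0:
            return -np.inf
        ll = stats.gumbel_r.logpdf(systematic, loc=loc, scale=scale).sum()
        p_exc = stats.gumbel_r.sf(threshold, loc=loc, scale=scale)
        ll += stats.binom.logpmf(k_exceed, h_years, p_exc)   # censored historical term
        return ll                                            # flat prior

    theta, samples = np.array([800.0, 250.0]), []
    for _ in range(20_000):
        proposal = theta + rng.normal(scale=[20.0, 10.0])
        if np.log(rng.random()) < log_post(proposal) - log_post(theta):
            theta = proposal
        samples.append(theta.copy())
    loc_med, scale_med = np.median(samples[5000:], axis=0)
    print(f"posterior medians: loc ~ {loc_med:.0f}, scale ~ {scale_med:.0f}")
    ```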

  4. Characterization of the spatial variability of channel morphology

    USGS Publications Warehouse

    Moody, J.A.; Troutman, B.M.

    2002-01-01

    The spatial variability of two fundamental morphological variables is investigated for rivers having a wide range of discharge (five orders of magnitude). The variables, water-surface width and average depth, were measured at 58 to 888 equally spaced cross-sections in channel links (river reaches between major tributaries). These measurements provide data to characterize the two-dimensional structure of a channel link which is the fundamental unit of a channel network. The morphological variables have nearly log-normal probability distributions. A general relation was determined which relates the means of the log-transformed variables to the logarithm of discharge similar to previously published downstream hydraulic geometry relations. The spatial variability of the variables is described by two properties: (1) the coefficient of variation which was nearly constant (0.13-0.42) over a wide range of discharge; and (2) the integral length scale in the downstream direction which was approximately equal to one to two mean channel widths. The joint probability distribution of the morphological variables in the downstream direction was modelled as a first-order, bivariate autoregressive process. This model accounted for up to 76 per cent of the total variance. The two-dimensional morphological variables can be scaled such that the channel width-depth process is independent of discharge. The scaling properties will be valuable to modellers of both basin and channel dynamics. Published in 2002 John Wiley and Sons, Ltd.
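    A minimal sketch of the first-order bivariate autoregressive idea used above, simulating downstream series of log width and log depth; the coefficient matrix, innovation covariance and means are illustrative assumptions, not the fitted values of the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    A = np.array([[0.7, 0.1],
                  [0.1, 0.6]])                  # lag-1 coefficient matrix (assumed)
    noise_cov = np.array([[0.020, 0.005],
                          [0.005, 0.030]])      # innovation covariance (assumed)
    mean = np.array([3.0, 0.0])                 # means of log-width and log-depth (assumed)

    n_sections = 500
    x = np.zeros((n_sections, 2))               # zero-mean fluctuations along the channel link
    for i in range(1, n_sections):
        x[i] = A @ x[i - 1] + rng.multivariate_normal([0.0, 0.0], noise_cov)
    log_w, log_d = (x + mean).T

    r1 = np.corrcoef(log_w[:-1], log_w[1:])[0, 1]
    print(f"lag-1 autocorrelation of log-width ~ {r1:.2f}")
    ```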

  5. A hybrid probabilistic/spectral model of scalar mixing

    NASA Astrophysics Data System (ADS)

    Vaithianathan, T.; Collins, Lance

    2002-11-01

    In the probability density function (PDF) description of a turbulent reacting flow, the local temperature and species concentration are replaced by a high-dimensional joint probability that describes the distribution of states in the fluid. The PDF has the great advantage of rendering the chemical reaction source terms closed, independent of their complexity. However, molecular mixing, which involves two-point information, must be modeled. Indeed, the qualitative shape of the PDF is sensitive to this modeling, hence the reliability of the model to predict even the closed chemical source terms rests heavily on the mixing model. We will present a new closure for the mixing based on a spectral representation of the scalar field. The model is implemented as an ensemble of stochastic particles, each carrying scalar concentrations at different wavenumbers. Scalar exchanges within a given particle represent "transfer", while scalar exchanges between particles represent "mixing". The equations governing the scalar concentrations at each wavenumber are derived from the eddy damped quasi-normal Markovian (or EDQNM) theory. The model correctly predicts the evolution of an initial double delta function PDF into a Gaussian as seen in the numerical study by Eswaran & Pope (1988). Furthermore, the model predicts that the scalar gradient distribution (which is available in this representation) approaches log-normal at long times. Comparisons of the model with data derived from direct numerical simulations will be shown.

  6. Gaussian closure technique applied to the hysteretic Bouc model with non-zero mean white noise excitation

    NASA Astrophysics Data System (ADS)

    Waubke, Holger; Kasess, Christian H.

    2016-11-01

    Devices that emit structure-borne sound are commonly decoupled by elastic components to shield the environment from acoustical noise and vibrations. The elastic elements often have a hysteretic behavior that is typically neglected. In order to take hysteretic behavior into account, Bouc developed a differential equation for such materials, especially joints made of rubber or equipped with dampers. In this work, the Bouc model is solved by means of the Gaussian closure technique based on the Kolmogorov equation. Kolmogorov developed a method to derive probability density functions for arbitrary explicit first-order vector differential equations under white noise excitation using a partial differential equation of a multivariate conditional probability distribution. Up to now no analytical solution of the Kolmogorov equation in conjunction with the Bouc model exists. Therefore a wide range of approximate solutions, especially statistical linearization, was developed. Using the Gaussian closure technique, which approximates the Kolmogorov equation by assuming a multivariate Gaussian distribution, an analytic solution is derived in this paper for the Bouc model. For the stationary case the two methods yield equivalent results; however, in contrast to statistical linearization, the presented solution allows the transient behavior to be calculated explicitly. Further, the stationary case leads to an implicit set of equations that can be solved iteratively with a small number of iterations and without instabilities for specific parameter sets.

  7. The influence of coordinated defects on inhomogeneous broadening in cubic lattices

    NASA Astrophysics Data System (ADS)

    Matheson, P. L.; Sullivan, Francis P.; Evenson, William E.

    2016-12-01

    The joint probability distribution function (JPDF) of electric field gradient (EFG) tensor components in cubic materials is dominated by coordinated pairings of defects in shells near probe nuclei. The contributions from these inner shell combinations and their surrounding structures contain the essential physics that determine the PAC-relevant quantities derived from them. The JPDF can be used to predict the nature of inhomogeneous broadening (IHB) in perturbed angular correlation (PAC) experiments by modeling the G2 spectrum and finding expectation values for Vzz and η. The ease with which this can be done depends upon the representation of the JPDF. Expanding on an earlier work by Czjzek et al. (Hyperfine Interact. 14, 189-194, 1983), Evenson et al. (Hyperfine Interact. 237, 119, 2016) provide a set of coordinates constructed from the EFG tensor invariants they named W1 and W2. Using this parameterization, the JPDF in cubic structures was constructed using a point charge model in which a single trapped defect (TD) is the nearest neighbor to a probe nucleus. Individual defects on nearby lattice sites pair with the TD to provide a locus of points in the W1-W2 plane around which an amorphous-like distribution of probability density grows. Interestingly, however, marginal, separable PDFs appear adequate to model IHB-relevant cases. We present cases from simulations in cubic materials illustrating the importance of these near-shell coordinations.

  8. Simultaneous modeling of habitat suitability, occupancy, and relative abundance: African elephants in Zimbabwe

    USGS Publications Warehouse

    Martin, Julien; Chamaille-Jammes, Simon; Nichols, James D.; Fritz, Herve; Hines, James E.; Fonnesbeck, Christopher J.; MacKenzie, Darryl I.; Bailey, Larissa L.

    2010-01-01

    The recent development of statistical models such as dynamic site occupancy models provides the opportunity to address fairly complex management and conservation problems with relatively simple models. However, surprisingly few empirical studies have simultaneously modeled habitat suitability and occupancy status of organisms over large landscapes for management purposes. Joint modeling of these components is particularly important in the context of management of wild populations, as it provides a more coherent framework to investigate the population dynamics of organisms in space and time for the application of management decision tools. We applied such an approach to the study of water hole use by African elephants in Hwange National Park, Zimbabwe. Here we show how such methodology may be implemented and derive estimates of annual transition probabilities among three dry-season states for water holes: (1) unsuitable state (dry water holes with no elephants); (2) suitable state (water hole with water) with low abundance of elephants; and (3) suitable state with high abundance of elephants. We found that annual rainfall and the number of neighboring water holes influenced the transition probabilities among these three states. Because of an increase in elephant densities in the park during the study period, we also found that transition probabilities from low abundance to high abundance states increased over time. The application of the joint habitat–occupancy models provides a coherent framework to examine how habitat suitability and factors that affect habitat suitability influence the distribution and abundance of organisms. We discuss how these simple models can further be used to apply structured decision-making tools in order to derive decisions that are optimal relative to specified management objectives. The modeling framework presented in this paper should be applicable to a wide range of existing data sets and should help to address important ecological, conservation, and management problems that deal with occupancy, relative abundance, and habitat suitability.

  9. Numerical simulation of artificial hip joint motion based on human age factor

    NASA Astrophysics Data System (ADS)

    Ramdhani, Safarudin; Saputra, Eko; Jamari, J.

    2018-05-01

    An artificial hip joint is a prosthesis (synthetic body part) which usually consists of two or more components. Replacement of the hip joint is ordinarily due to arthritis and is typically performed in older patients. Numerical simulation models are used to observe the range of motion of the artificial hip joint, with human age used as the basis for the range of motion of the joints. Finite-element analysis (FEA) is used to calculate the von Mises stress during motion and to observe the probability of prosthetic impingement. FEA uses a three-dimensional nonlinear model and considers position variations of the acetabular liner cup. The results of the numerical simulation show that the FEA method can analyze the performance of the artificial hip joint more accurately than the conventional method.

  10. Heterogeneity-induced large deviations in activity and (in some cases) entropy production

    NASA Astrophysics Data System (ADS)

    Gingrich, Todd R.; Vaikuntanathan, Suriyanarayanan; Geissler, Phillip L.

    2014-10-01

    We solve a simple model that supports a dynamic phase transition and show conditions for the existence of the transition. Using methods of large deviation theory we analytically compute the probability distribution for activity and entropy production rates of the trajectories on a large ring with a single heterogeneous link. The corresponding joint rate function demonstrates two dynamical phases—one localized and the other delocalized, but the marginal rate functions do not always exhibit the underlying transition. Symmetries in dynamic order parameters influence the observation of a transition, such that distributions for certain dynamic order parameters need not reveal an underlying dynamical bistability. Solution of our model system furthermore yields the form of the effective Markov transition matrices that generate dynamics in which the two dynamical phases are at coexistence. We discuss the implications of the transition for the response of bacterial cells to antibiotic treatment, arguing that even simple models of a cell cycle lacking an explicit bistability in configuration space will exhibit a bistability of dynamical phases.

  11. Annealed scaling for a charged polymer in dimensions two and higher

    NASA Astrophysics Data System (ADS)

    Berger, Q.; den Hollander, F.; Poisat, J.

    2018-02-01

    This paper considers an undirected polymer chain on Z^d, d ≥ 2, with i.i.d. random charges attached to its constituent monomers. Each self-intersection of the polymer chain contributes an energy to the interaction Hamiltonian that is equal to the product of the charges of the two monomers that meet. The joint probability distribution for the polymer chain and the charges is given by the Gibbs distribution associated with the interaction Hamiltonian. The object of interest is the annealed free energy per monomer in the limit as the length n of the polymer chain tends to infinity. We show that there is a critical curve in the parameter plane spanned by the charge bias and the inverse temperature separating an extended phase from a collapsed phase. We derive the scaling of the critical curve for small and for large charge bias and the scaling of the annealed free energy for small inverse temperature. We argue that in the collapsed phase the polymer chain is subdiffusive, namely, on scale …

  12. Signatures of combinatorial regulation in intrinsic biological noise

    PubMed Central

    Warmflash, Aryeh; Dinner, Aaron R.

    2008-01-01

    Gene expression is controlled by the action of transcription factors that bind to DNA and influence the rate at which a gene is transcribed. The quantitative mapping between the regulator concentrations and the output of the gene is known as the cis-regulatory input function (CRIF). Here, we show how the CRIF shapes the form of the joint probability distribution of molecular copy numbers of the regulators and the product of a gene. Namely, we derive a class of fluctuation-based relations that relate the moments of the distribution to the derivatives of the CRIF. These relations are useful because they enable statistics of naturally arising cell-to-cell variations in molecular copy numbers to substitute for traditional manipulations for probing regulatory mechanisms. We demonstrate that these relations can distinguish super- and subadditive gene regulatory scenarios (molecular analogs of AND and OR logic operations) in simulations that faithfully represent bacterial gene expression. Applications and extensions to other regulatory scenarios are discussed. PMID:18981421

  13. Mutation-selection balance in mixed mating populations.

    PubMed

    Kelly, John K

    2007-05-21

    An approximation to the average number of deleterious mutations per gamete, Q, is derived from a model allowing selection on both zygotes and male gametes. Progeny are produced by either outcrossing or self-fertilization with fixed probabilities. The genetic model is a standard in evolutionary biology: mutations occur at unlinked loci, have equivalent effects, and combine multiplicatively to determine fitness. The approximation developed here treats individual mutation counts with a generalized Poisson model conditioned on the distribution of selfing histories in the population. The approximation is accurate across the range of parameter sets considered and provides both analytical insights and greatly increased computational speed. Model predictions are discussed in relation to several outstanding problems, including the estimation of the genomic deleterious mutation rates (U), the generality of "selective interference" among loci, and the consequences of gametic selection for the joint distribution of inbreeding depression and mating system across species. Finally, conflicting results from previous analytical treatments of mutation-selection balance are resolved to assumptions about the life-cycle and the initial fate of mutations.

  14. Prevalence of different temporomandibular joint sounds, with emphasis on disc-displacement, in patients with temporomandibular disorders and controls.

    PubMed

    Elfving, Lars; Helkimo, Martti; Magnusson, Tomas

    2002-01-01

    Temporomandibular joint (TMJ) sounds are very common among patients with temporomandibular disorders (TMD), but also in non-patient populations. A variety of different causes of TMJ sounds have been suggested, e.g. arthrotic changes in the TMJs, anatomical variations, muscular incoordination and disc displacement. In the present investigation, the prevalence and type of different joint sounds were registered in 125 consecutive patients with suspected TMD and in 125 matched controls. Some kind of joint sound was recorded in 56% of the TMD patients and in 36% of the controls. The awareness of joint sounds was higher among TMD patients as compared to controls (88% and 60%, respectively). The most common sound recorded in both groups was reciprocal clicking indicative of disc displacement, while not a single case fulfilling the criteria for clicking due to muscular incoordination was found. In the TMD group, women with disc displacement reported sleeping on the stomach significantly more often than women without disc displacement did. An increased general joint laxity was found in 39% of the TMD patients with disc displacement, while this was found in only 9% of the patients with disc displacement in the control group. To conclude, disc displacement is probably the most common cause of TMJ sounds, while the existence of TMJ sounds due to muscular incoordination can be questioned. Furthermore, sleeping on the stomach might be associated with disc displacement, while general joint laxity is probably not a causative factor, but rather a care-seeking factor in patients with disc displacement.

  15. Bayesian Networks for enterprise risk assessment

    NASA Astrophysics Data System (ADS)

    Bonafede, C. E.; Giudici, P.

    2007-08-01

    According to different typologies of activity and priority, risks can assume diverse meanings and can be assessed in different ways. Risk, in general, is measured as a combination of the probability of an event (frequency) and its consequence (impact). To estimate the frequency and the impact (severity), historical data or expert opinions (either qualitative or quantitative data) are used. Moreover, qualitative data must be converted into numerical values or bounds to be used in the model. In the case of enterprise risk assessment the considered risks are, for instance, strategic, operational, legal and image risks, which are often difficult to quantify. So in most cases only expert data, gathered by scorecard approaches, are available for risk analysis. Bayesian Networks (BNs) are a useful tool to integrate different information and, in particular, to study the risk's joint distribution by using data collected from experts. In this paper we show a possible approach for building a BN in the particular case in which only prior probabilities of node states and marginal correlations between nodes are available, and when the variables have only two states.
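    A tiny sketch of the setting described (two binary risk nodes whose joint distribution must be reconstructed from expert-supplied marginal probabilities and a correlation); the numbers are illustrative expert inputs, not the paper's data, and the feasibility check simply guards against an inconsistent marginal/correlation combination.

    ```python
    import numpy as np

    p, q, rho = 0.20, 0.10, 0.35                       # P(A=1), P(B=1), corr(A, B) from experts (assumed)
    cov = rho * np.sqrt(p * (1 - p) * q * (1 - q))

    p11 = p * q + cov                                  # joint probabilities of the four states
    p10, p01 = p - p11, q - p11
    p00 = 1.0 - p11 - p10 - p01
    assert min(p11, p10, p01, p00) >= 0, "marginals and correlation imply an infeasible joint"

    print(f"P(B=1 | A=1) = {p11 / p:.3f}, P(B=1 | A=0) = {p01 / (1 - p):.3f}")
    ```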

  16. Bivariate frequency analysis of rainfall intensity and duration for urban stormwater infrastructure design

    NASA Astrophysics Data System (ADS)

    Jun, Changhyun; Qin, Xiaosheng; Gan, Thian Yew; Tung, Yeou-Koung; De Michele, Carlo

    2017-10-01

    This study presents a storm-event-based bivariate frequency analysis approach to determine design rainfalls in which the number, intensity and duration of actual rainstorm events are considered. To derive more realistic design storms, the occurrence probability of an individual rainstorm event was determined from the joint distribution of storm intensity and duration through a copula model. Hourly rainfall data were used at three climate stations located in Singapore, South Korea and Canada, respectively. It was found that the proposed approach could give a more realistic description of the rainfall characteristics of rainstorm events and design rainfalls. As a result, the design rainfall quantities from actual rainstorm events at the three studied sites are consistently lower than those obtained from the conventional rainfall depth-duration-frequency (DDF) method, especially for short-duration storms (such as 1-h). This stems from accounting for the occurrence probability of each rainstorm event and from taking a different angle on rainfall frequency analysis; the approach could offer an alternative way of describing extreme rainfall properties and potentially help improve the hydrologic design of stormwater management facilities in urban areas.
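    A brief sketch of the event-based logic: with an average of lambda rainstorm events per year, the annual return period of events exceeding a given intensity and duration follows from the per-event joint exceedance probability obtained through a copula. A Clayton copula, gamma marginals and all numbers are assumptions chosen only to illustrate the calculation, not the study's fitted model.

    ```python
    import numpy as np
    from scipy import stats

    lam = 60.0                                   # mean number of rainstorm events per year (assumed)
    intensity = stats.gamma(a=1.2, scale=4.0)    # event mean intensity marginal, mm/h (assumed)
    duration = stats.gamma(a=1.5, scale=3.0)     # event duration marginal, h (assumed)
    theta = 1.4                                  # Clayton dependence parameter (assumed)

    def clayton(u, v, t):
        return (u ** (-t) + v ** (-t) - 1.0) ** (-1.0 / t)

    i0, d0 = 20.0, 6.0                           # candidate design intensity and duration
    u, v = intensity.cdf(i0), duration.cdf(d0)
    p_event = 1.0 - u - v + clayton(u, v, theta) # P(I > i0 and D > d0) for one event
    T_annual = 1.0 / (lam * p_event)             # average years between such events
    print(f"per-event exceedance = {p_event:.4f}, return period ~ {T_annual:.1f} yr")
    ```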

  17. Classification of resistance to passive motion using minimum probability of error criterion.

    PubMed

    Chan, H C; Manry, M T; Kondraske, G V

    1987-01-01

    Neurologists diagnose many muscular and nerve disorders by classifying the resistance to passive motion of patients' limbs. Over the past several years, a computer-based instrument has been developed for automated measurement and parameterization of this resistance. In the device, a voluntarily relaxed lower extremity is moved at constant velocity by a motorized driver. The torque exerted on the extremity by the machine is sampled, along with the angle of the extremity. In this paper a computerized technique is described for classifying a patient's condition as 'Normal' or 'Parkinson disease' (rigidity), from the torque versus angle curve for the knee joint. A Legendre polynomial, fit to the curve, is used to calculate a set of eight normally distributed features of the curve. The minimum probability of error approach is used to classify the curve as being from a normal or Parkinson disease patient. Data collected from 44 different subjects were processed and the results were compared with an independent physician's subjective assessment of rigidity. There is agreement in better than 95% of the cases when all of the features are used.
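    A minimal sketch of a minimum-probability-of-error (Bayes) rule for normally distributed features, as used above: assign the class whose prior-weighted likelihood of the feature vector is larger. The feature means, standard deviations and test vector are hypothetical, not the clinical data.

    ```python
    import numpy as np
    from scipy import stats

    mu = {"normal": np.array([0.2, 1.1]), "parkinson": np.array([0.9, 2.3])}
    sd = {"normal": np.array([0.3, 0.5]), "parkinson": np.array([0.4, 0.6])}
    prior = {"normal": 0.5, "parkinson": 0.5}

    def classify(x):
        # log posterior (up to a constant), assuming independent Gaussian features
        scores = {c: np.log(prior[c]) + stats.norm.logpdf(x, mu[c], sd[c]).sum() for c in prior}
        return max(scores, key=scores.get)

    print(classify(np.array([0.8, 2.0])))        # -> "parkinson" for this made-up feature vector
    ```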

  18. Aquatic predicted no-effect concentrations of 16 polycyclic aromatic hydrocarbons and their ecological risks in surface seawater of Liaodong Bay, China.

    PubMed

    Wang, Ying; Wang, Juying; Mu, Jingli; Wang, Zhen; Cong, Yi; Yao, Ziwei; Lin, Zhongsheng

    2016-06-01

    Polycyclic aromatic hydrocarbons (PAHs), a class of ubiquitous pollutants in marine environments, exhibit moderate to high adverse effects on aquatic organisms and humans. However, the lack of PAH toxicity data for aquatic organisms has limited evaluation of their ecological risks. In the present study, aquatic predicted no-effect concentrations (PNECs) of 16 priority PAHs were derived based on species sensitivity distribution models, and their probabilistic ecological risks in seawater of Liaodong Bay, Bohai Sea, China, were assessed. A quantitative structure-activity relationship method was adopted to achieve the predicted chronic toxicity data for the PNEC derivation. Good agreement for aquatic PNECs of 8 PAHs based on predicted and experimental chronic toxicity data was observed (R(2) = 0.746), and the calculated PNECs ranged from 0.011 µg/L to 205.3 µg/L. A significant log-linear relationship also existed between the octanol-water partition coefficient and PNECs derived from experimental toxicity data (R(2) = 0.757). A similar order of ecological risks for the 16 PAH species in seawater of Liaodong Bay was found by probabilistic risk quotient and joint probability curve methods. The individual high ecological risk of benzo[a]pyrene, benzo[b]fluoranthene, and benz[a]anthracene needs to be determined. The combined ecological risk of PAHs in seawater of Liaodong Bay calculated by the joint probability curve method was 13.9%, indicating a high risk as a result of co-exposure to PAHs. Environ Toxicol Chem 2016;35:1587-1593. © 2015 SETAC.
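    A short sketch of the species sensitivity distribution (SSD) step: fit a log-normal SSD to chronic toxicity values and take its 5th percentile (HC5) as the basis for a PNEC. The toxicity values, assessment factor and exposure concentration are hypothetical and only illustrate the calculation, not the paper's derived PNECs.

    ```python
    import numpy as np
    from scipy import stats

    tox = np.array([3.2, 7.5, 12.0, 18.5, 40.0, 95.0, 140.0, 210.0])   # chronic NOECs, ug/L (assumed)
    mu, sigma = np.mean(np.log10(tox)), np.std(np.log10(tox), ddof=1)

    hc5 = 10 ** stats.norm.ppf(0.05, mu, sigma)     # concentration hazardous to 5% of species
    pnec = hc5 / 5.0                                # assessment factor of 5 (assumed)
    exposure = 0.8                                  # measured concentration, ug/L (hypothetical)
    print(f"HC5 = {hc5:.2f} ug/L, PNEC = {pnec:.2f} ug/L, risk quotient = {exposure / pnec:.2f}")
    ```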

  19. Flood Risk Due to Hurricane Flooding

    NASA Astrophysics Data System (ADS)

    Olivera, Francisco; Hsu, Chih-Hung; Irish, Jennifer

    2015-04-01

    In this study, we evaluated the expected economic losses caused by hurricane inundation. We used surge response functions, which are physics-based dimensionless scaling laws that give surge elevation as a function of the hurricane's parameters (i.e., central pressure, radius, forward speed, approach angle and landfall location) at specified locations along the coast. These locations were close enough to avoid significant changes in surge elevations between consecutive points, and distant enough to minimize calculations. The probability of occurrence of a surge elevation value at a given location was estimated using a joint probability distribution of the hurricane parameters. The surge elevation, at the shoreline, was assumed to project horizontally inland within a polygon of influence. Individual parcel damage was calculated based on flood water depth and damage vs. depth curves available for different building types from the HAZUS computer application developed by the Federal Emergency Management Agency (FEMA). Parcel data, including property value and building type, were obtained from the county appraisal district offices. The expected economic losses were calculated as the sum of the products of the estimated parcel damages and their probability of occurrence for the different storms considered. Anticipated changes for future climate scenarios were considered by accounting for projected hurricane intensification, as indicated by sea surface temperature rise, and sea level rise, which modify the probability distribution of hurricane central pressure and change the baseline of the damage calculation, respectively. Maps of expected economic losses have been developed for Corpus Christi in Texas, Gulfport in Mississippi and Panama City in Florida. Specifically, for Port Aransas, in the Corpus Christi area, it was found that the expected economic losses were in the range of 1% to 4% of the property value for current climate conditions, of 1% to 8% for the 2030's and of 1% to 14% for the 2080's.
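    A toy sketch of the expected-loss bookkeeping described above for a single parcel: sum, over the storm set, the product of the damage fraction implied by the flood depth and the annual occurrence probability of that storm. The depth-damage curve, depths and probabilities are hypothetical, not HAZUS values.

    ```python
    import numpy as np

    property_value = 250_000.0
    depth_damage = lambda depth_m: np.clip(0.25 * depth_m, 0.0, 1.0)   # assumed damage curve

    storms = [           # (flood depth at the parcel in m, annual occurrence probability)
        (0.0, 0.950),
        (0.3, 0.030),
        (0.9, 0.015),
        (1.8, 0.004),
        (3.0, 0.001),
    ]
    expected_loss = sum(p * depth_damage(d) * property_value for d, p in storms)
    print(f"expected annual loss ~ ${expected_loss:,.0f} ({expected_loss / property_value:.1%} of value)")
    ```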

  20. Cooperative Adaptive Output Regulation for Second-Order Nonlinear Multiagent Systems With Jointly Connected Switching Networks.

    PubMed

    Liu, Wei; Huang, Jie

    2018-03-01

    This paper studies the cooperative global robust output regulation problem for a class of heterogeneous second-order nonlinear uncertain multiagent systems with jointly connected switching networks. The main contributions consist of the following three aspects. First, we generalize the result of the adaptive distributed observer from undirected jointly connected switching networks to directed jointly connected switching networks. Second, by performing a new coordinate and input transformation, we convert our problem into the cooperative global robust stabilization problem of a more complex augmented system via the distributed internal model principle. Third, we solve the stabilization problem by a distributed state feedback control law. Our result is illustrated by the leader-following consensus problem for a group of Van der Pol oscillators.

  1. Thermographic Analysis of Stress Distribution in Welded Joints

    NASA Astrophysics Data System (ADS)

    Piršić, T.; Krstulović Opara, L.; Domazet, Ž.

    2010-06-01

    The fatigue life prediction of welded joints based on S-N curves in conjunction with nominal stresses is generally not reliable. The stress distribution in the welded area, affected by geometrical inhomogeneity, the irregular welded surface and the weld toe radius, is quite complex, so the local (structural) stress concept has been adopted in recent papers. The aim of this paper is to determine the stress distribution in plate-type aluminum welded joints, to analyze the reliability of TSA (Thermal Stress Analysis) in this kind of investigation, and to obtain numerical values of stress concentration factors for practical use. The stress distribution in aluminum butt and fillet welded joints is determined using three different methods: strain gauge measurement, thermal stress analysis and FEM. The obtained results show good agreement: TSA confirmed both the FEM model and the stresses measured by strain gauges. Based on the obtained results, it may be stated that TSA, as a relatively new measurement technique, may in the future become a standard tool for the experimental investigation of stress concentration and fatigue in welded joints, which can help to develop more accurate numerical tools for fatigue life prediction.

  2. Statistical inference of the generation probability of T-cell receptors from sequence repertoires.

    PubMed

    Murugan, Anand; Mora, Thierry; Walczak, Aleksandra M; Callan, Curtis G

    2012-10-02

    Stochastic rearrangement of germline V-, D-, and J-genes to create variable coding sequence for certain cell surface receptors is at the origin of immune system diversity. This process, known as "VDJ recombination", is implemented via a series of stochastic molecular events involving gene choices and random nucleotide insertions between, and deletions from, genes. We use large sequence repertoires of the variable CDR3 region of human CD4+ T-cell receptor beta chains to infer the statistical properties of these basic biochemical events. Because any given CDR3 sequence can be produced in multiple ways, the probability distribution of hidden recombination events cannot be inferred directly from the observed sequences; we therefore develop a maximum likelihood inference method to achieve this end. To separate the properties of the molecular rearrangement mechanism from the effects of selection, we focus on nonproductive CDR3 sequences in T-cell DNA. We infer the joint distribution of the various generative events that occur when a new T-cell receptor gene is created. We find a rich picture of correlation (and absence thereof), providing insight into the molecular mechanisms involved. The generative event statistics are consistent between individuals, suggesting a universal biochemical process. Our probabilistic model predicts the generation probability of any specific CDR3 sequence by the primitive recombination process, allowing us to quantify the potential diversity of the T-cell repertoire and to understand why some sequences are shared between individuals. We argue that the use of formal statistical inference methods, of the kind presented in this paper, will be essential for quantitative understanding of the generation and evolution of diversity in the adaptive immune system.

  3. qPR: An adaptive partial-report procedure based on Bayesian inference.

    PubMed

    Baek, Jongsoo; Lesmes, Luis Andres; Lu, Zhong-Lin

    2016-08-01

    Iconic memory is best assessed with the partial report procedure in which an array of letters appears briefly on the screen and a poststimulus cue directs the observer to report the identity of the cued letter(s). Typically, 6-8 cue delays or 600-800 trials are tested to measure the iconic memory decay function. Here we develop a quick partial report, or qPR, procedure based on a Bayesian adaptive framework to estimate the iconic memory decay function with much reduced testing time. The iconic memory decay function is characterized by an exponential function and a joint probability distribution of its three parameters. Starting with a prior of the parameters, the method selects the stimulus to maximize the expected information gain in the next test trial. It then updates the posterior probability distribution of the parameters based on the observer's response using Bayesian inference. The procedure is reiterated until either the total number of trials or the precision of the parameter estimates reaches a certain criterion. Simulation studies showed that only 100 trials were necessary to reach an average absolute bias of 0.026 and a precision of 0.070 (both in terms of probability correct). A psychophysical validation experiment showed that estimates of the iconic memory decay function obtained with 100 qPR trials exhibited good precision (the half width of the 68.2% credible interval = 0.055) and excellent agreement with those obtained with 1,600 trials of the conventional method of constant stimuli procedure (RMSE = 0.063). Quick partial-report relieves the data collection burden in characterizing iconic memory and makes it possible to assess iconic memory in clinical populations.
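    A compact sketch of the Bayesian adaptive loop described above, assuming a simplified one-parameter decay function: keep a grid posterior over the decay constant, on each trial choose the cue delay that maximizes the expected information gain (the mutual information between the next response and the parameter), then update the posterior with Bayes' rule. The decay form, grids and numbers are illustrative, not the qPR implementation.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    delays = np.linspace(0.0, 1.0, 9)                  # candidate cue delays (s)
    taus = np.linspace(0.05, 1.0, 60)                  # grid over the decay constant
    posterior = np.full(taus.size, 1.0 / taus.size)

    def p_correct(tau, t):                             # assumed decay of report accuracy
        return 0.3 + 0.6 * np.exp(-t / tau)

    true_tau = 0.25
    for trial in range(100):
        P = p_correct(taus[:, None], delays[None, :])  # accuracy for each (tau, delay) pair
        p_resp = posterior @ P                         # predictive P(correct | delay)
        H_pred = -(p_resp * np.log(p_resp) + (1 - p_resp) * np.log(1 - p_resp))
        H_cond = -(posterior[:, None] * (P * np.log(P) + (1 - P) * np.log(1 - P))).sum(0)
        t = delays[np.argmax(H_pred - H_cond)]         # maximize expected information gain

        correct = rng.random() < p_correct(true_tau, t)            # simulated observer
        like = p_correct(taus, t) if correct else 1 - p_correct(taus, t)
        posterior = posterior * like
        posterior /= posterior.sum()

    print(f"posterior mean of tau ~ {posterior @ taus:.3f} (true value {true_tau})")
    ```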

  5. Meta-analysis of studies with bivariate binary outcomes: a marginal beta-binomial model approach.

    PubMed

    Chen, Yong; Hong, Chuan; Ning, Yang; Su, Xiao

    2016-01-15

    When conducting a meta-analysis of studies with bivariate binary outcomes, challenges arise when the within-study correlation and between-study heterogeneity should be taken into account. In this paper, we propose a marginal beta-binomial model for the meta-analysis of studies with binary outcomes. This model is based on the composite likelihood approach and has several attractive features compared with the existing models such as the bivariate generalized linear mixed model (Chu and Cole, 2006) and the Sarmanov beta-binomial model (Chen et al., 2012). The advantages of the proposed marginal model include modeling the probabilities in the original scale, not requiring any transformation of probabilities or any link function, having a closed-form expression of the likelihood function, and no constraints on the correlation parameter. More importantly, because the marginal beta-binomial model is only based on the marginal distributions, it does not suffer from potential misspecification of the joint distribution of bivariate study-specific probabilities. Such misspecification is difficult to detect and can lead to biased inference using current methods. We compare the performance of the marginal beta-binomial model with the bivariate generalized linear mixed model and the Sarmanov beta-binomial model by simulation studies. Interestingly, the results show that the marginal beta-binomial model performs better than the Sarmanov beta-binomial model, whether or not the true model is Sarmanov beta-binomial, and the marginal beta-binomial model is more robust than the bivariate generalized linear mixed model under model misspecifications. Two meta-analyses of diagnostic accuracy studies and a meta-analysis of case-control studies are conducted for illustration. Copyright © 2015 John Wiley & Sons, Ltd.
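    A small sketch of the composite (marginal) likelihood idea: model each study's true-positive and true-negative counts with separate beta-binomial marginals and simply add the two marginal log-likelihoods, so no joint distribution of the study-specific probabilities is ever specified. The study counts below are made up for illustration.

    ```python
    import numpy as np
    from scipy import stats
    from scipy.optimize import minimize

    # per study: (true positives, n diseased, true negatives, n healthy); hypothetical counts
    studies = np.array([[18, 22, 40, 47], [35, 41, 60, 72], [12, 15, 25, 31], [50, 60, 88, 100]])

    def neg_composite_loglik(params):
        a1, b1, a2, b2 = np.exp(params)                # keep beta parameters positive
        tp, n1, tn, n2 = studies.T
        return -(stats.betabinom.logpmf(tp, n1, a1, b1).sum()
                 + stats.betabinom.logpmf(tn, n2, a2, b2).sum())

    fit = minimize(neg_composite_loglik, x0=np.zeros(4), method="Nelder-Mead")
    a1, b1, a2, b2 = np.exp(fit.x)
    print(f"pooled sensitivity ~ {a1 / (a1 + b1):.3f}, pooled specificity ~ {a2 / (a2 + b2):.3f}")
    ```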

  6. Derivation of a Multiparameter Gamma Model for Analyzing the Residence-Time Distribution Function for Nonideal Flow Systems as an Alternative to the Advection-Dispersion Equation

    DOE PAGES

    Embry, Irucka; Roland, Victor; Agbaje, Oluropo; ...

    2013-01-01

    A new residence-time distribution (RTD) function has been developed and applied to quantitative dye studies as an alternative to the traditional advection-dispersion equation (AdDE). The new method is based on a jointly combined four-parameter gamma probability density function (PDF). The gamma residence-time distribution (RTD) function and its first and second moments are derived from the individual two-parameter gamma distributions of randomly distributed variables, tracer travel distance, and linear velocity, which are based on their relationship with time. The gamma RTD function was used on a steady-state, nonideal system modeled as a plug-flow reactor (PFR) in the laboratory to validate the effectiveness of the model. The normalized forms of the gamma RTD and the advection-dispersion equation RTD were compared with the normalized tracer RTD. The normalized gamma RTD had a lower mean-absolute deviation (MAD) (0.16) than the normalized form of the advection-dispersion equation (0.26) when compared to the normalized tracer RTD. The gamma RTD function is tied back to the actual physical site due to its randomly distributed variables. The results validate using the gamma RTD as a suitable alternative to the advection-dispersion equation for quantitative tracer studies of non-ideal flow systems.
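    A minimal sketch in the spirit of the gamma residence-time distribution described above: represent the RTD with a gamma probability density whose parameters are recovered from the first two moments of a tracer curve. The synthetic tracer curve and the moment-matching step are assumptions for illustration, not the paper's four-parameter derivation.

    ```python
    import numpy as np
    from scipy import stats

    t = np.linspace(0.0, 30.0, 301)                   # time, minutes
    c = stats.gamma(a=4.0, scale=2.0).pdf(t)          # synthetic "tracer" response
    c /= np.trapz(c, t)                               # normalize to unit area (an RTD)

    t_mean = np.trapz(t * c, t)                       # first moment: mean residence time
    var = np.trapz((t - t_mean) ** 2 * c, t)          # second central moment
    shape, scale = t_mean ** 2 / var, var / t_mean    # gamma parameters by moment matching
    rtd = stats.gamma(a=shape, scale=scale).pdf(t)    # fitted gamma RTD on the same time grid
    print(f"mean residence time ~ {t_mean:.2f} min, shape ~ {shape:.2f}, scale ~ {scale:.2f}")
    ```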

  7. Adaptive, Distributed Control of Constrained Multi-Agent Systems

    NASA Technical Reports Server (NTRS)

    Bieniawski, Stefan; Wolpert, David H.

    2004-01-01

    Product Distribution (PD) theory was recently developed as a broad framework for analyzing and optimizing distributed systems. Here we demonstrate its use for adaptive distributed control of Multi-Agent Systems (MASs), i.e., for distributed stochastic optimization using MASs. First we review one motivation of PD theory, as the information-theoretic extension of conventional full-rationality game theory to the case of bounded rational agents. In this extension the equilibrium of the game is the optimizer of a Lagrangian of the probability distribution on the joint state of the agents. When the game in question is a team game with constraints, that equilibrium optimizes the expected value of the team game utility, subject to those constraints. One common way to find that equilibrium is to have each agent run a Reinforcement Learning (RL) algorithm. PD theory reveals this to be a particular type of search algorithm for minimizing the Lagrangian. Typically that algorithm is quite inefficient. A more principled alternative is to use a variant of Newton's method to minimize the Lagrangian. Here we compare this alternative to RL-based search in three sets of computer experiments. These are the N Queens problem and bin-packing problem from the optimization literature, and the Bar problem from the distributed RL literature. Our results confirm that the PD-theory-based approach outperforms the RL-based scheme in all three domains.

  8. Comparison of the effects of forefoot joint-preserving arthroplasty and resection-replacement arthroplasty on walking plantar pressure distribution and patient-based outcomes in patients with rheumatoid arthritis.

    PubMed

    Ebina, Kosuke; Hirao, Makoto; Takagi, Keishi; Ueno, Sachi; Morimoto, Tokimitsu; Matsuoka, Hozo; Kitaguchi, Kazuma; Iwahashi, Toru; Hashimoto, Jun; Yoshikawa, Hideki

    2017-01-01

    The purpose of this retrospective study is to clarify the difference in plantar pressure distribution during walking and related patient-based outcomes between forefoot joint-preserving arthroplasty and resection-replacement arthroplasty in patients with rheumatoid arthritis (RA). Four groups of patients were recruited. Group1 included 22 feet of 11 healthy controls (age 48.6 years), Group2 included 36 feet of 28 RA patients with deformed non-operated feet (age 64.8 years, Disease activity score assessing 28 joints with CRP [DAS28-CRP] 2.3), Group3 included 27 feet of 20 RA patients with metatarsal head resection-replacement arthroplasty (age 60.7 years, post-operative duration 5.6 years, DAS28-CRP 2.4), and Group4 included 34 feet of 29 RA patients with metatarsophalangeal (MTP) joint-preserving arthroplasty (age 64.6 years, post-operative duration 3.2 years, DAS28-CRP 2.3). Patients were cross-sectionally examined with the F-SCAN II system to evaluate walking plantar pressure and completed the self-administered foot evaluation questionnaire (SAFE-Q). Twenty joint-preserving arthroplasty feet were longitudinally examined both pre- and post-operation. In the 1st MTP joint, Group4 showed higher pressure distribution (13.7%) than Group2 (8.0%) and Group3 (6.7%) (P<0.001). In the 2nd-3rd MTP joint, Group4 showed lower pressure distribution (9.0%) than Group2 (14.5%) (P<0.001) and Group3 (11.5%) (P<0.05). On longitudinal analysis, Group4 showed increased 1st MTP joint pressure (8.5% vs. 14.7%; P<0.001) and decreased 2nd-3rd MTP joint pressure (15.2% vs. 10.7%; P<0.01) distribution. In the SAFE-Q subscale scores, Group4 showed higher scores than Group3 in pain and pain-related scores (84.1 vs. 71.7; P<0.01) and in shoe-related scores (62.5 vs. 43.1; P<0.01). Joint-preserving arthroplasty resulted in higher 1st MTP joint and lower 2nd-3rd MTP joint pressures than resection-replacement arthroplasty, which were associated with better patient-based outcomes.

  9. Investigation of Dielectric Breakdown Characteristics for Double-break Vacuum Interrupter and Dielectric Breakdown Probability Distribution in Vacuum Interrupter

    NASA Astrophysics Data System (ADS)

    Shioiri, Tetsu; Asari, Naoki; Sato, Junichi; Sasage, Kosuke; Yokokura, Kunio; Homma, Mitsutaka; Suzuki, Katsumi

    To investigate the reliability of vacuum insulation equipment, a study was carried out to clarify breakdown probability distributions in the vacuum gap. Further, a double-break vacuum circuit breaker was investigated for its breakdown probability distribution. The test results show that the breakdown probability distribution of the vacuum gap can be represented by a Weibull distribution using a location parameter, which corresponds to the voltage at which the breakdown probability is zero. The location parameter obtained from the Weibull plot depends on electrode area. The shape parameter obtained from the Weibull plot of the vacuum gap was 10∼14, and is constant irrespective of the non-uniform field factor. The breakdown probability distribution after no-load switching can also be represented by a Weibull distribution using a location parameter. The shape parameter after no-load switching was 6∼8.5, and is constant irrespective of gap length. This indicates that the scatter of breakdown voltage is increased by no-load switching. If the vacuum circuit breaker uses a double break, the breakdown probability at low voltage becomes lower than the single-break probability. Although potential distribution is a concern in the double-break vacuum circuit breaker, its insulation reliability is better than that of the single-break vacuum interrupter even if the bias of the vacuum interrupters' voltage sharing is taken into account.
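
    A minimal numerical sketch of the location-shifted (three-parameter) Weibull description reported above, using scipy.stats.weibull_min; the shape, location, and scale values are placeholders, not the measured ones, and the double-break estimate assumes equal voltage sharing, ignoring the sharing bias discussed in the abstract.

```python
import numpy as np
from scipy.stats import weibull_min

shape = 12.0   # within the 10-14 range quoted for the vacuum gap (placeholder)
loc = 60.0     # kV, location parameter: voltage with zero breakdown probability
scale = 25.0   # kV, placeholder scale

voltages = np.linspace(50.0, 120.0, 8)
p_breakdown = weibull_min.cdf(voltages, shape, loc=loc, scale=scale)
for u, p in zip(voltages, p_breakdown):
    print(f"{u:6.1f} kV -> breakdown probability {p:.3f}")

# Crude series estimate for a double break: two identical gaps, each holding
# half of the total voltage U, and breakdown if either gap breaks down.
U = 150.0
p_half = weibull_min.cdf(U / 2.0, shape, loc=loc, scale=scale)
print("double-break estimate at %.0f kV: %.3f" % (U, 1 - (1 - p_half) ** 2))
```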

  10. Using a Betabinomial distribution to estimate the prevalence of adherence to physical activity guidelines among children and youth.

    PubMed

    Garriguet, Didier

    2016-04-01

    Estimates of the prevalence of adherence to physical activity guidelines in the population are generally the result of averaging individual probabilities of adherence based on the number of days people meet the guidelines and the number of days they are assessed. Given the number of active and inactive days (days assessed minus days active), the conditional probability of meeting the guidelines that has been used in the past is a Beta(1 + active days, 1 + inactive days) distribution, assuming the probability p of a day being active is bounded by 0 and 1 and averages 50%. A change in the assumption about the distribution of p is required to better match the discrete nature of the data and to better assess the probability of adherence when the percentage of active days in the population differs from 50%. Using accelerometry data from the Canadian Health Measures Survey, the probability of adherence to physical activity guidelines is estimated using a conditional probability, given the number of active and inactive days, distributed as a Betabinomial(n, α + active days, β + inactive days), assuming that p is randomly distributed as Beta(α, β), where the parameters α and β are estimated by maximum likelihood. The resulting Betabinomial distribution is discrete. For children aged 6 or older, the probability of meeting physical activity guidelines 7 out of 7 days is similar to published estimates. For pre-schoolers, the Betabinomial distribution yields higher estimates of adherence to the guidelines than the Beta distribution, in line with the probability of being active on any given day. In estimating the probability of adherence to physical activity guidelines, the Betabinomial distribution has several advantages over the previously used Beta distribution. It is a discrete distribution and maximizes the richness of accelerometer data.
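
    A small sketch of the adherence calculation described above, assuming scipy.stats.betabinom (SciPy 1.4+) and using invented counts of active days; in practice α and β would be estimated by maximum likelihood from the full survey data.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import betabinom

# Invented accelerometer data: days assessed and days active per child
n_days = np.array([7, 7, 6, 7, 5, 7, 7])
active = np.array([7, 5, 6, 4, 5, 7, 6])
inactive = n_days - active

def negloglik(params):
    a, b = params
    return -betabinom.logpmf(active, n_days, a, b).sum()

a_hat, b_hat = minimize(negloglik, x0=[1.0, 1.0],
                        bounds=[(1e-3, None)] * 2).x

# Conditional probability of a fully active week (7 of 7 days) for each child,
# using Betabinomial(7, alpha + active days, beta + inactive days)
p_meet = betabinom.pmf(7, 7, a_hat + active, b_hat + inactive)
print("estimated alpha, beta:", a_hat, b_hat)
print("mean probability of meeting the guidelines 7/7 days:", p_meet.mean())
```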

  11. Lithostratigraphic interpretation from joint analysis of seismic tomography and magnetotelluric resistivity models using self-organizing map techniques

    NASA Astrophysics Data System (ADS)

    Bauer, K.; Muñoz, G.; Moeck, I.

    2012-12-01

    The combined interpretation of different models as derived from seismic tomography and magnetotelluric (MT) inversion represents a more efficient approach to determine the lithology of the subsurface compared with the separate treatment of each discipline. Such models can be developed independently or by application of joint inversion strategies. After the step of model generation using different geophysical methodologies, a joint interpretation work flow includes the following steps: (1) adjustment of a joint earth model based on the adapted, identical model geometry for the different methods, (2) classification of the model components (e.g. model blocks described by a set of geophysical parameters), and (3) re-mapping of the classified rock types to visualise their distribution within the earth model, and petrophysical characterization and interpretation. One possible approach for the classification of multi-parameter models is based on statistical pattern recognition, where different models are combined and translated into probability density functions. Classes of rock types are identified in these methods as isolated clusters with high probability density function values. Such techniques are well-established for the analysis of two-parameter models. Alternatively we apply self-organizing map (SOM) techniques, which have no limitations in the number of parameters to be analysed in the joint interpretation. Our SOM work flow includes (1) generation of a joint earth model described by so-called data vectors, (2) unsupervised learning or training, (3) analysis of the feature map by adopting image processing techniques, and (4) application of the knowledge to derive a lithological model which is based on the different geophysical parameters. We show the usage of the SOM work flow for a synthetic and a real data case study. Both tests rely on three geophysical properties: P velocity and vertical velocity gradient from seismic tomography, and electrical resistivity from MT inversion. The synthetic data are used as a benchmark test to demonstrate the performance of the SOM method. The real data were collected along a 40 km profile across parts of the NE German basin. The lithostratigraphic model from the joint SOM interpretation consists of eight litho-types and covers Cenozoic, Mesozoic and Paleozoic sediments down to 5 km depth. There is a remarkable agreement between the SOM based model and regional marker horizons interpolated from surrounding 2D industrial seismic data. The most interesting results include (1) distinct properties of the Jurassic (low P velocity gradients, low resistivities) interpreted as the signature of shaly clastics, and (2) a pattern within the Upper Permian Zechstein with decreased resistivities and increased P velocities within the salt depressions on the one hand, and increased resistivities and decreased P velocities in the salt pillows on the other hand. In our interpretation this pattern is related with flow of less dense salt matrix components into the pillows and remaining brittle evaporites within the depressions.
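
    A bare-bones self-organizing map sketch in NumPy for three-parameter data vectors (P velocity, vertical velocity gradient, log resistivity), meant only to illustrate the unsupervised-training step of the work flow described above; it is not the authors' implementation, and the training data are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder joint earth model: data vectors of (Vp, dVp/dz, log10 resistivity)
X = rng.normal(size=(5000, 3))
X = (X - X.mean(0)) / X.std(0)        # standardize each geophysical parameter

rows, cols, dim = 10, 10, X.shape[1]
W = rng.normal(scale=0.1, size=(rows, cols, dim))      # SOM weight vectors
grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)

n_iter, sigma0, lr0 = 20_000, 4.0, 0.5
for t in range(n_iter):
    x = X[rng.integers(len(X))]
    d2 = ((W - x) ** 2).sum(-1)                        # distances to all map nodes
    bmu = np.unravel_index(d2.argmin(), d2.shape)      # best-matching unit
    frac = t / n_iter
    sigma = sigma0 * (1 - frac) + 0.5                  # shrinking neighborhood radius
    lr = lr0 * (1 - frac) + 0.01                       # decaying learning rate
    h = np.exp(-((grid - np.array(bmu)) ** 2).sum(-1) / (2 * sigma ** 2))
    W += lr * h[..., None] * (x - W)

# Map each data vector to its best-matching node; clusters of nodes on the
# feature map would then be interpreted as litho-types.
bmu_idx = np.array([((W - x) ** 2).sum(-1).argmin() for x in X])
print("node occupancy (first 10 nodes):",
      np.bincount(bmu_idx, minlength=rows * cols)[:10])
```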

  12. Beyond-laboratory-scale prediction for channeling flows through subsurface rock fractures with heterogeneous aperture distributions revealed by laboratory evaluation

    NASA Astrophysics Data System (ADS)

    Ishibashi, Takuya; Watanabe, Noriaki; Hirano, Nobuo; Okamoto, Atsushi; Tsuchiya, Noriyoshi

    2015-01-01

    The present study evaluates aperture distributions and fluid flow characteristics for variously sized laboratory-scale granite fractures under confining stress. As a significant result of the laboratory investigation, the contact area in the fracture plane was found to be virtually independent of scale. By combining this characteristic with the self-affine fractal nature of fracture surfaces, a novel method for predicting fracture aperture distributions beyond laboratory scale is developed. Validity of this method is revealed through reproduction of the results of the laboratory investigation and of the maximum aperture-fracture length relations reported in the literature for natural fractures. The present study finally predicts conceivable scale dependencies of fluid flows through joints (fractures without shear displacement) and faults (fractures with shear displacement). Both joint and fault aperture distributions are characterized by a scale-independent contact area, a scale-dependent geometric mean, and a scale-independent geometric standard deviation of aperture. The contact areas for joints and faults are approximately 60% and 40%, respectively. Changes in the geometric means of joint and fault apertures (µm), e_m,joint and e_m,fault, with fracture length (m), l, are approximated by e_m,joint = 1 × 10^2 · l^0.1 and e_m,fault = 1 × 10^3 · l^0.7, whereas the geometric standard deviations of both joint and fault apertures are approximately 3. Fluid flows through both joints and faults are characterized by the formation of preferential flow paths (i.e., channeling flows) with scale-independent flow areas of approximately 10%, whereas the joint and fault permeabilities (m^2), k_joint and k_fault, are scale dependent and are approximated as k_joint = 1 × 10^-12 · l^0.2 and k_fault = 1 × 10^-8 · l^1.1.
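
    The scale dependencies quoted above can be collected into a small helper for quick estimates; this simply restates the fitted power laws from the abstract, so it is a numerical convenience rather than a predictive tool.

```python
def fracture_scaling(length_m, kind="joint"):
    """Geometric-mean aperture (micrometers) and permeability (m^2) versus
    fracture length, using the power laws quoted in the abstract."""
    if kind == "joint":
        return 1e2 * length_m ** 0.1, 1e-12 * length_m ** 0.2
    if kind == "fault":
        return 1e3 * length_m ** 0.7, 1e-8 * length_m ** 1.1
    raise ValueError("kind must be 'joint' or 'fault'")

for L in (0.1, 1.0, 10.0, 100.0):
    print(L, "m:", fracture_scaling(L, "joint"), fracture_scaling(L, "fault"))
```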

  13. Interservice Availability of Multiservice Used Items.

    DTIC Science & Technology

    1999-05-14

    Assistant Deputy Under Secretary of Defense (Materiel and Distribution Management ) and the Defense Logistics Agency concurred or partially concurred with...Secretary of Defense (Materiel and Distribution Management ) Comments 19 Joint Logistics Commanders Joint Secretariat Comments 22 Defense Logistics Agency...Secretary of Defense (Materiel and Distribution Management ) Comments. The Acting Assistant Deputy Under Secretary partially concurred, stating that disposal

  14. Qualitative and Quantitative Proofs of Security Properties

    DTIC Science & Technology

    2013-04-01

    Naples, Italy (September 2012) – Australasian Joint Conference on Artifical Intelligence (December 2012). • Causality, Responsibility, and Blame...realistic solution concept, Proceedings of the 21st International Joint Conference on Artificial Intelligence (IJCAI 2009), 2009, pp. 153–158. 17. J...Conference on Artificial Intelligence (AAAI-12), 2012, pp. 1917-1923. 29. J. Y. Halpern and S. Leung, Weighted sets of probabilities and minimax

  15. Infant Joint Attention, Neural Networks and Social Cognition

    PubMed Central

    Mundy, Peter; Jarrold, William

    2010-01-01

    Neural network models of attention can provide a unifying approach to the study of human cognitive and emotional development (Posner & Rothbart, 2007). In this paper we argue that a neural networks approach to the infant development of joint attention can inform our understanding of the nature of human social learning, symbolic thought processes and social cognition. At its most basic, joint attention involves the capacity to coordinate one’s own visual attention with that of another person. We propose that joint attention development involves increments in the capacity to engage in simultaneous or parallel processing of information about one’s own attention and the attention of other people. Infant practice with joint attention is both a consequence and an organizer of the development of a distributed and integrated brain network involving frontal and parietal cortical systems. This executive distributed network first serves to regulate the capacity of infants to respond to and direct the overt behavior of other people in order to share experience with others through the social coordination of visual attention. In this paper we describe this parallel and distributed neural network model of joint attention development and discuss two hypotheses that stem from this model. One is that activation of this distributed network during coordinated attention enhances the depth of information processing and encoding beginning in the first year of life. We also propose that with development joint attention becomes internalized as the capacity to socially coordinate mental attention to internal representations. As this occurs the executive joint attention network makes vital contributions to the development of human symbolic thinking and social cognition. PMID:20884172

  16. Three-Dimensional Geometric Nonlinear Contact Stress Analysis of Riveted Joints

    NASA Technical Reports Server (NTRS)

    Shivakumar, Kunigal N.; Ramanujapuram, Vivek

    1998-01-01

    The problems associated with fatigue were brought to the forefront of research by the explosive decompression and structural failure of Aloha Airlines Flight 243 in 1988. The structural failure of this airplane has been attributed to debonding and multiple cracking along the longitudinal lap splice riveted joint in the fuselage. This crash created what may be termed a minor "Structural Integrity Revolution" in the commercial transport industry. Major steps have been taken by the manufacturers, operators and authorities to improve the structural airworthiness of the aging fleet of airplanes. Notwithstanding this considerable effort, there are still outstanding issues and concerns related to the formulation of Widespread Fatigue Damage, which is believed to have been a contributing factor in the probable cause of the Aloha accident. The lesson from this accident was that Multiple-Site Damage (MSD) in "aging" aircraft can lead to extensive aircraft damage. A strong candidate location in which MSD is highly likely to occur is the riveted lap joint.

  17. Preliminary results of fisheries investigation associated with Skylab-3

    NASA Technical Reports Server (NTRS)

    Savastano, K.; Pastula, E., Jr.; Woods, G.; Faller, K.

    1974-01-01

    The purpose of the 15-month investigation now in the analysis phase is to establish the feasibility of utilizing remotely sensed data acquired from aircraft and satellite platforms to provide information concerning the distribution and abundance of oceanic gamefish. Data from the test area in the northeastern Gulf of Mexico, jointly acquired by private and professional fishermen and by NASA and NOAA/NMFS elements, have made possible the identification of significant environmental parameters for white marlin. Predictive models based on catch data and surface truth information have been developed and have demonstrated potential for reducing search significantly by identifying areas which have a high probability of being productive. Three of the parameters utilized by the model, chlorophyll-a, sea surface temperature, and turbidity, have been inferred from aircraft sensor data.

  18. Impact of meteorological inflow uncertainty on tracer transport and source estimation in urban atmospheres

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lucas, Donald D.; Gowardhan, Akshay; Cameron-Smith, Philip

    2015-08-08

    Here, a computational Bayesian inverse technique is used to quantify the effects of meteorological inflow uncertainty on tracer transport and source estimation in a complex urban environment. We estimate a probability distribution of meteorological inflow by comparing wind observations to Monte Carlo simulations from the Aeolus model. Aeolus is a computational fluid dynamics model that simulates atmospheric and tracer flow around buildings and structures at meter-scale resolution. Uncertainty in the inflow is propagated through forward and backward Lagrangian dispersion calculations to determine the impact on tracer transport and the ability to estimate the release location of an unknown source. Our uncertainty methods are compared against measurements from an intensive observation period during the Joint Urban 2003 tracer release experiment conducted in Oklahoma City.
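
    A toy sketch of one generic way to obtain a probability distribution over inflow from a Monte Carlo ensemble: weight each candidate inflow by a Gaussian likelihood of the mismatch between its simulated winds and the observations. This is only a schematic Bayesian-ensemble illustration, not the Aeolus-based workflow, and all numbers and the surrogate wind model are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

obs = np.array([3.2, 2.7, 4.1])   # invented wind observations at three sensors (m/s)
obs_sigma = 0.5                   # assumed observation error (m/s)

# Monte Carlo ensemble of inflow candidates: (speed, direction in degrees)
speeds = rng.uniform(1.0, 6.0, size=500)
directions = rng.uniform(0.0, 360.0, size=500)

def simulate_winds(speed, direction):
    # Placeholder surrogate for a CFD run; in the real workflow these sensor
    # winds would come from building-resolving Aeolus simulations.
    factors = np.array([0.9, 0.8, 1.1])
    return speed * factors * (1.0 + 0.05 * np.cos(np.radians(direction)))

loglik = np.array([-0.5 * np.sum((simulate_winds(s, d) - obs) ** 2) / obs_sigma ** 2
                   for s, d in zip(speeds, directions)])
w = np.exp(loglik - loglik.max())
w /= w.sum()                      # posterior weights over the ensemble

post_mean = np.sum(w * speeds)
post_std = np.sqrt(np.sum(w * (speeds - post_mean) ** 2))
print("posterior inflow speed: %.2f +/- %.2f m/s" % (post_mean, post_std))
```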

  19. Investigation of non-linear contact for a clearance-fit bolt in a graphite/epoxy laminate

    NASA Technical Reports Server (NTRS)

    Prabhakaran, R.; Naik, R. A.

    1986-01-01

    Numerous analytical studies have been published for the nonlinear load-contact variations in clearance-fit bolted joints. In these studies, stress distributions have been obtained and failure predictions have been made. However, very little experimental work has been reported regarding the contact or the stresses. This paper describes a fiber-optic technique for measuring the angle of contact in a clearance-fit bolt-loaded hole. Measurements of the contact angle have been made in a quasi-isotropic graphite-epoxy laminate by the optical as well as an electrical technique, and the results have been compared with those obtained from a finite-element analysis. The results from the two experimental techniques show excellent agreement; the finite-element results show some discrepancy, probably due to interfacial friction.

  20. Joint-probability Analysis of the Natural Variability of Tropical Oceanic Precipitation

    NASA Technical Reports Server (NTRS)

    Yuter, Sandra E.

    2004-01-01

    Data projects pertaining to KWAJEX are described. Data sets delivered to the Goddard Distributed Active Archive Center (DAAC): 1) Kwajalein Experiment (KWAJEX) S-band calibrated, quality-controlled radar data, 12211 files of 3D volume data and 6832 files of 2D low-level reflectivity. 2) Raw and quality-control-processed versions of University of Washington Joss-Waldvogel disdrometer measurements obtained during KWAJEX. 3) A time series of synoptic-scale GIF images of the Geostationary Meteorological Satellite (GMS) IR data for the KWAJEX period. The GMS satellite data set for the KWAJEX period was obtained from the University of Wisconsin and reprocessed into a format amenable for comparison with radar data. Aircraft microphysics flight-leg definitions for all aircraft and all missions during KWAJEX were completed to facilitate microphysics data processing.

  1. Attributing Asymmetric Productivity Responses to Internal Ecosystem Dynamics and External Drivers Using Probabilistic Models

    NASA Astrophysics Data System (ADS)

    Parolari, A.; Goulden, M.

    2017-12-01

    A major challenge to interpreting asymmetric changes in ecosystem productivity is the attribution of these changes to external climate forcing or to internal ecophysiological processes that respond to these drivers (e.g., photosynthesis response to drying soil). For example, positive asymmetry in productivity can result from either positive skewness in the distribution of annual rainfall amount or from negative curvature in the productivity response to annual rainfall. To analyze the relative influences of climate and ecosystem dynamics on both positive and negative asymmetry in multi-year ANPP experiments, we use a multi-scale coupled ecosystem water-carbon model to interpret field experimental results that span gradients of rainfall skewness and ANPP response curvature. The model integrates rainfall variability, soil moisture dynamics, and net carbon assimilation from the daily to inter-annual scales. From the underlying physical basis of the model, we compute the joint probability distribution of the minimum and maximum ANPP for an annual ANPP experiment of N years. The distribution is used to estimate the likelihood that either positive or negative asymmetry will be observed in an experiment, given the annual rainfall distribution and the ANPP response curve. We estimate the total asymmetry as the mode of this joint distribution and the relative contribution attributable to rainfall skewness as the mode for a linear ANPP response curve. Applied to data from several long-term ANPP experiments, we find that there is a wide range of observed ANPP asymmetry (positive and negative) and a spectrum of contributions from internal and external factors. We identify the soil water holding capacity relative to the mean rain event depth as a critical ecosystem characteristic that controls the non-linearity of the ANPP response and positive curvature at high rainfall. Further, the seasonal distribution of rainfall is shown to control the presence or absence of negative curvature at low rainfall. Therefore, a combination of rooting depth, soil texture, and climate seasonality contribute to ANPP response curvature and its contribution to overall observed asymmetry.
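
    A schematic Monte Carlo version of the attribution idea (invented parameter values and a simple saturating ANPP response): simulate many N-year experiments from a positively skewed annual rainfall distribution, record each experiment's minimum and maximum ANPP, and compare the resulting asymmetry with that of a linear response, which isolates the contribution of rainfall skewness alone.

```python
import numpy as np

rng = np.random.default_rng(3)

def mean_asymmetry(response, n_years=10, n_experiments=20_000):
    # Positively skewed annual rainfall (mm), here a gamma distribution
    rain = rng.gamma(shape=2.0, scale=200.0, size=(n_experiments, n_years))
    anpp = response(rain)
    mean = anpp.mean(axis=1)
    # Positive deviation of the maximum minus negative deviation of the minimum
    return ((anpp.max(axis=1) - mean) - (mean - anpp.min(axis=1))).mean()

saturating = lambda r: 600.0 * r / (r + 300.0)   # negative curvature (g/m^2)
linear = lambda r: 0.5 * r                       # linear reference response

print("asymmetry, saturating response:", round(mean_asymmetry(saturating), 1))
print("asymmetry, linear response (rainfall skewness only):",
      round(mean_asymmetry(linear), 1))
```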

  2. Geochemical Characterization Using Geophysical Data and Markov Chain Monte Carlo Methods

    NASA Astrophysics Data System (ADS)

    Chen, J.; Hubbard, S.; Rubin, Y.; Murray, C.; Roden, E.; Majer, E.

    2002-12-01

    Although the spatial distribution of geochemical parameters is extremely important for many subsurface remediation approaches, traditional characterization of those parameters is invasive and laborious, and thus is rarely performed sufficiently to describe natural hydrogeological variability at the field-scale. This study is an effort to jointly use multiple sources of information, including noninvasive geophysical data, for geochemical characterization of the saturated and anaerobic portion of the DOE South Oyster Bacterial Transport Site in Virginia. Our data set includes hydrogeological and geochemical measurements from five boreholes and ground-penetrating radar (GPR) and seismic tomographic data along two profiles that traverse the boreholes. The primary geochemical parameters are the concentrations of extractable ferrous iron Fe(II) and ferric iron Fe(III). Since iron-reducing bacteria can reduce Fe(III) to Fe(II) under certain conditions, information about the spatial distributions of Fe(II) and Fe(III) may indicate both where microbial iron reduction has occurred and in which zone it is likely to occur in the future. In addition, as geochemical heterogeneity influences bacterial transport and activity, estimates of the geochemical parameters provide important input to numerical flow and contaminant transport models geared toward bioremediation. Motivated by our previous research, which demonstrated that crosshole geophysical data could be very useful for estimating hydrogeological parameters, we hypothesize in this study that geochemical and geophysical parameters may be linked through their mutual dependence on hydrogeological parameters such as lithofacies. We attempt to estimate geochemical parameters using both hydrogeological and geophysical measurements in a Bayesian framework. Within the two-dimensional study domain (12m x 6m vertical cross section divided into 0.25m x 0.25m pixels), geochemical and hydrogeological parameters were considered as data if they were available from direct measurements or as variables otherwise. To estimate the geochemical parameters, we first assigned a prior model for each variable and a likelihood model for each type of data, which together define posterior probability distributions for each variable on the domain. Since the posterior probability distribution may involve hundreds of variables, we used a Markov Chain Monte Carlo (MCMC) method to explore each variable by generating and subsequently evaluating hundreds of realizations. Results from this case study showed that although geophysical attributes are not necessarily directly related to geochemical parameters, geophysical data could be very useful for providing accurate and high-resolution information about geochemical parameter distribution through their joint and indirect connections with hydrogeological properties such as lithofacies. This case study also demonstrated that MCMC methods were particularly useful for geochemical parameter estimation using geophysical data because they allow incorporation into the procedure of spatial correlation information, measurement errors, and cross correlations among different types of parameters.

  3. Joint Center for Operational Analysis Journal. Volume 12, Issue 2, Summer 2010

    DTIC Science & Technology

    2010-01-01

    Peixoto. In 19X7, then-Major Keen attended Brazil's Command and General Staff Course in Rio de Janeiro, Brazil. In 1988, then-Captain Floriano...controlling DoD office). • DISTRIBUTION STATEMENT E. Distribution authorized to DoD Components only (fill in reason) (date of determination). Other... basic joint functions that integrate, synchronize, and direct joint operations, which are: command and control, intelligence, fires, movement and

  4. Bayesian Inference of High-Dimensional Dynamical Ocean Models

    NASA Astrophysics Data System (ADS)

    Lin, J.; Lermusiaux, P. F. J.; Lolla, S. V. T.; Gupta, A.; Haley, P. J., Jr.

    2015-12-01

    This presentation addresses a holistic set of challenges in high-dimension ocean Bayesian nonlinear estimation: (i) predict the probability distribution functions (pdfs) of large nonlinear dynamical systems using stochastic partial differential equations (PDEs); (ii) assimilate data using Bayes' law with these pdfs; (iii) predict the future data that optimally reduce uncertainties; and (iv) rank the known and learn the new model formulations themselves. Overall, we allow the joint inference of the state, equations, geometry, boundary conditions and initial conditions of dynamical models. Examples are provided for time-dependent fluid and ocean flows, including cavity, double-gyre and Strait flows with jets and eddies. The Bayesian model inference, based on limited observations, is illustrated first by the estimation of obstacle shapes and positions in fluid flows. Next, the Bayesian inference of biogeochemical reaction equations and of their states and parameters is presented, illustrating how PDE-based machine learning can rigorously guide the selection and discovery of complex ecosystem models. Finally, the inference of multiscale bottom gravity current dynamics is illustrated, motivated in part by classic overflows and dense water formation sites and their relevance to climate monitoring and dynamics. This is joint work with our MSEAS group at MIT.

  5. Joint modeling and registration of cell populations in cohorts of high-dimensional flow cytometric data.

    PubMed

    Pyne, Saumyadipta; Lee, Sharon X; Wang, Kui; Irish, Jonathan; Tamayo, Pablo; Nazaire, Marc-Danie; Duong, Tarn; Ng, Shu-Kay; Hafler, David; Levy, Ronald; Nolan, Garry P; Mesirov, Jill; McLachlan, Geoffrey J

    2014-01-01

    In biomedical applications, an experimenter encounters different potential sources of variation in data, such as individual samples, multiple experimental conditions, and multivariate responses of a panel of markers such as from a signaling network. In multiparametric cytometry, which is often used for analyzing patient samples, such issues are critical. While computational methods can identify cell populations in individual samples, without the ability to automatically match them across samples, it is difficult to compare and characterize the populations in typical experiments, such as those responding to various stimulations or distinctive of particular patients or time-points, especially when there are many samples. Joint Clustering and Matching (JCM) is a multi-level framework for simultaneous modeling and registration of populations across a cohort. JCM models every population with a robust multivariate probability distribution. Simultaneously, JCM fits a random-effects model to construct an overall batch template, which is used for registering populations across samples and classifying new samples. By tackling systems-level variation, JCM supports practical biomedical applications involving large cohorts. Software for fitting the JCM models has been implemented in an R package, EMMIX-JCM, available from http://www.maths.uq.edu.au/~gjm/mix_soft/EMMIX-JCM/.

  6. Risk-Constrained Dynamic Programming for Optimal Mars Entry, Descent, and Landing

    NASA Technical Reports Server (NTRS)

    Ono, Masahiro; Kuwata, Yoshiaki

    2013-01-01

    A chance-constrained dynamic programming algorithm was developed that is capable of making optimal sequential decisions within a user-specified risk bound. This work handles stochastic uncertainties over multiple stages in the CEMAT (Combined EDL-Mobility Analyses Tool) framework. It was demonstrated by a simulation of Mars entry, descent, and landing (EDL) using real landscape data obtained from the Mars Reconnaissance Orbiter. Although standard dynamic programming (DP) provides a general framework for optimal sequential decisionmaking under uncertainty, it typically achieves risk aversion by imposing an arbitrary penalty on failure states. Such a penalty-based approach cannot explicitly bound the probability of mission failure. A key idea behind the new approach is called risk allocation, which decomposes a joint chance constraint into a set of individual chance constraints and distributes risk over them. The joint chance constraint was reformulated into a constraint on an expectation over a sum of an indicator function, which can be incorporated into the cost function by dualizing the optimization problem. As a result, the chance-constraint optimization problem can be turned into an unconstrained optimization over a Lagrangian, which can be solved efficiently using a standard DP approach.

  7. [Study on the effect of vertebrae semi-dislocation on the stress distribution in facet joint and intervertebral disc of patients with cervical syndrome based on the three dimensional finite element model].

    PubMed

    Zhang, Ming-cai; Lü, Si-zhe; Cheng, Ying-wu; Gu, Li-xu; Zhan, Hong-sheng; Shi, Yin-yu; Wang, Xiang; Huang, Shi-rong

    2011-02-01

    To study the effect of vertebrae semi-dislocation on the stress distribution in the facet joint and intervertebral disc of patients with cervical syndrome using a three dimensional finite element model. A patient with cervical spondylosis (male, 28 years old), diagnosed with cervical vertebra semidislocation by dynamic and static palpation and X-ray, was randomly chosen and scanned from C(1) to C(7) by CT at 0.75 mm slice thickness. Based on the CT data, software was used to construct the three dimensional finite element model of cervical vertebra semidislocation (C(4)-C(6)). Based on the model, virtual manipulation was used to correct the vertebra semidislocation, and the stress distribution was analyzed. The result of the finite element analysis showed that the stress distribution of the C(5-6) facet joint and intervertebral disc changed after virtual manipulation. The vertebra semidislocation leads to abnormal stress distribution of the facet joint and intervertebral disc.

  8. Numerical analysis of the accuracy of bivariate quantile distributions utilizing copulas compared to the GUM supplement 2 for oil pressure balance uncertainties

    NASA Astrophysics Data System (ADS)

    Ramnath, Vishal

    2017-11-01

    In the field of pressure metrology the effective area is Ae = A0(1 + λP), where A0 is the zero-pressure area and λ is the distortion coefficient, and the conventional practice is to construct univariate probability density functions (PDFs) for A0 and λ. As a result, analytical generalized non-Gaussian bivariate joint PDFs have not featured prominently in pressure metrology. Recently, extended lambda distribution based quantile functions have been successfully utilized for summarizing univariate arbitrary PDF distributions of gas pressure balances. Motivated by this development, we investigate the feasibility and utility of extending and applying quantile functions to systems which naturally exhibit bivariate PDFs. Our approach is to utilize the GUM Supplement 1 methodology to solve and generate Monte Carlo based multivariate uncertainty data for an oil based pressure balance laboratory standard that is used to generate known high pressures, and which is in turn cross-floated against another pressure balance transfer standard in order to deduce the transfer standard's respective area. We then numerically analyse the uncertainty data by formulating and constructing an approximate bivariate quantile distribution that directly couples A0 and λ in order to compare and contrast its accuracy with an exact GUM Supplement 2 based uncertainty quantification analysis.
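
    A GUM Supplement 1 style Monte Carlo sketch for the effective-area model Ae = A0(1 + λP): draw correlated (A0, λ) samples, propagate them to Ae at a chosen pressure, and read off empirical coverage intervals. The nominal values, uncertainties, and correlation below are placeholders, not real laboratory data.

```python
import numpy as np

rng = np.random.default_rng(4)

A0, u_A0 = 1.96105e-5, 4.0e-10   # zero-pressure area and its uncertainty (m^2)
lam, u_lam = 4.5e-7, 5.0e-8      # distortion coefficient and uncertainty (1/MPa)
rho = -0.3                       # assumed correlation between A0 and lambda

cov = np.array([[u_A0 ** 2, rho * u_A0 * u_lam],
                [rho * u_A0 * u_lam, u_lam ** 2]])
samples = rng.multivariate_normal([A0, lam], cov, size=200_000)
A0_s, lam_s = samples[:, 0], samples[:, 1]

P = 100.0                        # MPa
Ae = A0_s * (1.0 + lam_s * P)    # propagated effective area

print("A_e mean: %.6e m^2" % Ae.mean())
print("standard uncertainty: %.2e m^2" % Ae.std(ddof=1))
print("95% coverage interval:", np.quantile(Ae, [0.025, 0.975]))
```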

  9. Probability Distributome: A Web Computational Infrastructure for Exploring the Properties, Interrelations, and Applications of Probability Distributions.

    PubMed

    Dinov, Ivo D; Siegrist, Kyle; Pearl, Dennis K; Kalinin, Alexandr; Christou, Nicolas

    2016-06-01

    Probability distributions are useful for modeling, simulation, analysis, and inference on varieties of natural processes and physical phenomena. There are uncountably many probability distributions. However, a few dozen families of distributions are commonly defined and are frequently used in practice for problem solving, experimental applications, and theoretical studies. In this paper, we present a new computational and graphical infrastructure, the Distributome , which facilitates the discovery, exploration and application of diverse spectra of probability distributions. The extensible Distributome infrastructure provides interfaces for (human and machine) traversal, search, and navigation of all common probability distributions. It also enables distribution modeling, applications, investigation of inter-distribution relations, as well as their analytical representations and computational utilization. The entire Distributome framework is designed and implemented as an open-source, community-built, and Internet-accessible infrastructure. It is portable, extensible and compatible with HTML5 and Web2.0 standards (http://Distributome.org). We demonstrate two types of applications of the probability Distributome resources: computational research and science education. The Distributome tools may be employed to address five complementary computational modeling applications (simulation, data-analysis and inference, model-fitting, examination of the analytical, mathematical and computational properties of specific probability distributions, and exploration of the inter-distributional relations). Many high school and college science, technology, engineering and mathematics (STEM) courses may be enriched by the use of modern pedagogical approaches and technology-enhanced methods. The Distributome resources provide enhancements for blended STEM education by improving student motivation, augmenting the classical curriculum with interactive webapps, and overhauling the learning assessment protocols.

  10. Probability Distributome: A Web Computational Infrastructure for Exploring the Properties, Interrelations, and Applications of Probability Distributions

    PubMed Central

    Dinov, Ivo D.; Siegrist, Kyle; Pearl, Dennis K.; Kalinin, Alexandr; Christou, Nicolas

    2015-01-01

    Probability distributions are useful for modeling, simulation, analysis, and inference on varieties of natural processes and physical phenomena. There are uncountably many probability distributions. However, a few dozen families of distributions are commonly defined and are frequently used in practice for problem solving, experimental applications, and theoretical studies. In this paper, we present a new computational and graphical infrastructure, the Distributome, which facilitates the discovery, exploration and application of diverse spectra of probability distributions. The extensible Distributome infrastructure provides interfaces for (human and machine) traversal, search, and navigation of all common probability distributions. It also enables distribution modeling, applications, investigation of inter-distribution relations, as well as their analytical representations and computational utilization. The entire Distributome framework is designed and implemented as an open-source, community-built, and Internet-accessible infrastructure. It is portable, extensible and compatible with HTML5 and Web2.0 standards (http://Distributome.org). We demonstrate two types of applications of the probability Distributome resources: computational research and science education. The Distributome tools may be employed to address five complementary computational modeling applications (simulation, data-analysis and inference, model-fitting, examination of the analytical, mathematical and computational properties of specific probability distributions, and exploration of the inter-distributional relations). Many high school and college science, technology, engineering and mathematics (STEM) courses may be enriched by the use of modern pedagogical approaches and technology-enhanced methods. The Distributome resources provide enhancements for blended STEM education by improving student motivation, augmenting the classical curriculum with interactive webapps, and overhauling the learning assessment protocols. PMID:27158191

  11. Random Partition Distribution Indexed by Pairwise Information

    PubMed Central

    Dahl, David B.; Day, Ryan; Tsai, Jerry W.

    2017-01-01

    We propose a random partition distribution indexed by pairwise similarity information such that partitions compatible with the similarities are given more probability. The use of pairwise similarities, in the form of distances, is common in some clustering algorithms (e.g., hierarchical clustering), but we show how to use this type of information to define a prior partition distribution for flexible Bayesian modeling. A defining feature of the distribution is that it allocates probability among partitions within a given number of subsets, but it does not shift probability among sets of partitions with different numbers of subsets. Our distribution places more probability on partitions that group similar items yet keeps the total probability of partitions with a given number of subsets constant. The distribution of the number of subsets (and its moments) is available in closed-form and is not a function of the similarities. Our formulation has an explicit probability mass function (with a tractable normalizing constant) so the full suite of MCMC methods may be used for posterior inference. We compare our distribution with several existing partition distributions, showing that our formulation has attractive properties. We provide three demonstrations to highlight the features and relative performance of our distribution. PMID:29276318

  12. A bayesian approach to classification criteria for spectacled eiders

    USGS Publications Warehouse

    Taylor, B.L.; Wade, P.R.; Stehn, R.A.; Cochrane, J.F.

    1996-01-01

    To facilitate decisions to classify species according to risk of extinction, we used Bayesian methods to analyze trend data for the Spectacled Eider, an arctic sea duck. Trend data from three independent surveys of the Yukon-Kuskokwim Delta were analyzed individually and in combination to yield posterior distributions for population growth rates. We used classification criteria developed by the recovery team for Spectacled Eiders that seek to equalize errors of under- or overprotecting the species. We conducted both a Bayesian decision analysis and a frequentist (classical statistical inference) decision analysis. Bayesian decision analyses are computationally easier, yield basically the same results, and yield results that are easier to explain to nonscientists. With the exception of the aerial survey analysis of the 10 most recent years, both Bayesian and frequentist methods indicated that an endangered classification is warranted. The discrepancy between surveys warrants further research. Although the trend data are abundance indices, we used a preliminary estimate of absolute abundance to demonstrate how to calculate extinction distributions using the joint probability distributions for population growth rate and variance in growth rate generated by the Bayesian analysis. Recent apparent increases in abundance highlight the need for models that apply to declining and then recovering species.

  13. Fingerprint multicast in secure video streaming.

    PubMed

    Zhao, H Vicky; Liu, K J Ray

    2006-01-01

    Digital fingerprinting is an emerging technology to protect multimedia content from illegal redistribution, where each distributed copy is labeled with unique identification information. In video streaming, huge amount of data have to be transmitted to a large number of users under stringent latency constraints, so the bandwidth-efficient distribution of uniquely fingerprinted copies is crucial. This paper investigates the secure multicast of anticollusion fingerprinted video in streaming applications and analyzes their performance. We first propose a general fingerprint multicast scheme that can be used with most spread spectrum embedding-based multimedia fingerprinting systems. To further improve the bandwidth efficiency, we explore the special structure of the fingerprint design and propose a joint fingerprint design and distribution scheme. From our simulations, the two proposed schemes can reduce the bandwidth requirement by 48% to 87%, depending on the number of users, the characteristics of video sequences, and the network and computation constraints. We also show that under the constraint that all colluders have the same probability of detection, the embedded fingerprints in the two schemes have approximately the same collusion resistance. Finally, we propose a fingerprint drift compensation scheme to improve the quality of the reconstructed sequences at the decoder's side without introducing extra communication overhead.

  14. Stimulus-dependent Maximum Entropy Models of Neural Population Codes

    PubMed Central

    Segev, Ronen; Schneidman, Elad

    2013-01-01

    Neural populations encode information about their stimulus in a collective fashion, by joint activity patterns of spiking and silence. A full account of this mapping from stimulus to neural activity is given by the conditional probability distribution over neural codewords given the sensory input. For large populations, direct sampling of these distributions is impossible, and so we must rely on constructing appropriate models. We show here that in a population of 100 retinal ganglion cells in the salamander retina responding to temporal white-noise stimuli, dependencies between cells play an important encoding role. We introduce the stimulus-dependent maximum entropy (SDME) model—a minimal extension of the canonical linear-nonlinear model of a single neuron, to a pairwise-coupled neural population. We find that the SDME model gives a more accurate account of single cell responses and in particular significantly outperforms uncoupled models in reproducing the distributions of population codewords emitted in response to a stimulus. We show how the SDME model, in conjunction with static maximum entropy models of population vocabulary, can be used to estimate information-theoretic quantities like average surprise and information transmission in a neural population. PMID:23516339
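
    A toy sketch of fitting a static pairwise maximum entropy (Ising-like) model to binary population activity by gradient ascent on the log-likelihood. This is the stimulus-independent building block rather than the full stimulus-dependent SDME model, the "recorded" data are synthetic, and exact gradients are computed by enumerating all states, so the sketch only scales to small populations.

```python
import itertools
import numpy as np

rng = np.random.default_rng(5)
n = 5                            # number of neurons (small enough to enumerate states)
states = np.array(list(itertools.product([0, 1], repeat=n)), dtype=float)

def energies(h, J):
    return states @ h + np.einsum("si,ij,sj->s", states, J, states)

# Synthetic "recorded" activity drawn from a model with pairwise structure
true_h = rng.normal(-1.0, 0.5, n)
true_J = np.triu(rng.normal(0.0, 0.6, (n, n)), 1)
p_true = np.exp(energies(true_h, true_J)); p_true /= p_true.sum()
data = states[rng.choice(len(states), size=20_000, p=p_true)]

# Fit biases h and couplings J by matching means and pairwise correlations
emp_mean = data.mean(0)
emp_corr = data.T @ data / len(data)
h, J = np.zeros(n), np.zeros((n, n))
for _ in range(2000):
    p = np.exp(energies(h, J)); p /= p.sum()
    model_mean = p @ states
    model_corr = states.T @ (p[:, None] * states)
    h += 0.1 * (emp_mean - model_mean)                 # log-likelihood gradient in h
    J += 0.1 * np.triu(emp_corr - model_corr, 1)       # log-likelihood gradient in J

p = np.exp(energies(h, J)); p /= p.sum()
print("largest mismatch in mean firing rates:", np.abs(emp_mean - p @ states).max())
```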

  15. Gibbs Ensembles for Nearly Compatible and Incompatible Conditional Models

    PubMed Central

    Chen, Shyh-Huei; Wang, Yuchung J.

    2010-01-01

    The Gibbs sampler has been used exclusively for compatible conditionals that converge to a unique invariant joint distribution. However, conditional models are not always compatible. In this paper, a Gibbs sampling-based approach, the Gibbs ensemble, is proposed to search for a joint distribution that deviates least from a prescribed set of conditional distributions. The algorithm is easily scalable, so that it can handle large data sets of high dimensionality. Using simulated data, we show that the proposed approach provides joint distributions that are less discrepant from the incompatible conditionals than those obtained by other methods discussed in the literature. The ensemble approach is also applied to a data set regarding geno-polymorphism and response to chemotherapy in patients with metastatic colorectal cancer. PMID:21286232
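
    A minimal illustration of Gibbs sampling from a prescribed pair of conditionals for two binary variables, followed by a check of how far the implied joint is from those conditionals. The ensemble machinery of the paper (averaging over scan orders and searching for the least-discrepant joint) is not reproduced here, and the conditional tables are invented and deliberately slightly incompatible.

```python
import numpy as np

rng = np.random.default_rng(6)

# Invented conditional tables for binary X and Y:
# p_x_given_y[y] = P(X = 1 | Y = y), p_y_given_x[x] = P(Y = 1 | X = x)
p_x_given_y = np.array([0.30, 0.70])
p_y_given_x = np.array([0.25, 0.80])   # slightly incompatible with the table above

def gibbs(n_iter=200_000, burn=10_000):
    x, y = 0, 0
    counts = np.zeros((2, 2))
    for t in range(n_iter):
        x = int(rng.random() < p_x_given_y[y])
        y = int(rng.random() < p_y_given_x[x])
        if t >= burn:
            counts[x, y] += 1
    return counts / counts.sum()

joint = gibbs()
print("empirical joint P(X, Y):\n", joint)
# Conditionals implied by the sampled joint, for comparison with the prescribed ones
print("implied P(X=1 | Y=y):", joint[1, :] / joint.sum(axis=0))
print("implied P(Y=1 | X=x):", joint[:, 1] / joint.sum(axis=1))
```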

  16. Dependent Neyman type A processes based on common shock Poisson approach

    NASA Astrophysics Data System (ADS)

    Kadilar, Gamze Özel; Kadilar, Cem

    2016-04-01

    The Neyman type A process is used for describing clustered data, since the Poisson process is insufficient for clustering of events. In a multivariate setting, there may be dependencies between multivariate Neyman type A processes. In this study, a dependent form of the Neyman type A process is considered under the common shock approach. The joint probability function is then derived for the dependent Neyman type A Poisson processes, and an application based on forest fires in Turkey is given. The results show that the joint probability function of the dependent Neyman type A processes, which is obtained in this study, can be a good tool for probabilistic modeling of the total number of burned trees in Turkey.
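
    A simulation sketch of two Neyman type A processes made dependent through a common shock Poisson component (all rates invented): each margin counts a Poisson number of clusters, every cluster contributes a Poisson number of events, and dependence enters because part of the cluster count is shared between the two processes.

```python
import numpy as np

rng = np.random.default_rng(7)

lam_common, lam1, lam2 = 2.0, 1.5, 1.0   # cluster-rate parameters (invented)
nu1, nu2 = 3.0, 4.0                      # mean cluster sizes (invented)

def dependent_neyman_a(size=100_000):
    m0 = rng.poisson(lam_common, size)   # shared (common shock) cluster counts
    m1 = m0 + rng.poisson(lam1, size)    # cluster counts of process 1
    m2 = m0 + rng.poisson(lam2, size)    # cluster counts of process 2
    # The sum of m independent Poisson(nu) cluster sizes is Poisson(m * nu)
    return rng.poisson(m1 * nu1), rng.poisson(m2 * nu2)

n1, n2 = dependent_neyman_a()
print("marginal means:", n1.mean(), n2.mean())
print("correlation induced by the common shock:", np.corrcoef(n1, n2)[0, 1])
```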

  17. Traumatic synovitis in a classical guitarist: a study of joint laxity.

    PubMed

    Bird, H A; Wright, V

    1981-04-01

    A classical guitarist performing for at least 5 hours each day developed a traumatic synovitis at the left wrist joint that was first erroneously considered to be rheumatoid arthritis. Comparison with members of the same guitar class suggested that unusual joint laxity of the fingers and wrist, probably inherited from the patient's father, was of more importance in the aetiology of the synovitis than a wide range of movement acquired by regular practice. Hyperextension of the metacarpophalangeal joint of the left index finger, quantified by the hyperextensometer, was less marked in the guitarists than in 100 normal individuals. This may be attributed to greater muscular control of the fingers. Lateral instability in the loaded joint may be the most important factor in the aetiology of traumatic synovitis.

  18. Traumatic synovitis in a classical guitarist: a study of joint laxity.

    PubMed Central

    Bird, H A; Wright, V

    1981-01-01

    A classical guitarist performing for at least 5 hours each day developed a traumatic synovitis at the left wrist joint that was first erroneously considered to be rheumatoid arthritis. Comparison with members of the same guitar class suggested that unusual joint laxity of the fingers and wrist, probably inherited from the patient's father, was of more importance in the aetiology of the synovitis than a wide range of movement acquired by regular practice. Hyperextension of the metacarpophalangeal joint of the left index finger, quantified by the hyperextensometer, was less marked in the guitarists than in 100 normal individuals. This may be attributed to greater muscular control of the fingers. Lateral instability in the loaded joint may be the most important factor in the aetiology of traumatic synovitis. PMID:7224687

  19. A brief introduction to probability.

    PubMed

    Di Paola, Gioacchino; Bertani, Alessandro; De Monte, Lavinia; Tuzzolino, Fabio

    2018-02-01

    The theory of probability has been debated for centuries: back in the 1600s, French mathematicians used the rules of probability to place and win bets. Subsequently, the knowledge of probability has significantly evolved and is now an essential tool for statistics. In this paper, the basic theoretical principles of probability will be reviewed, with the aim of facilitating the comprehension of statistical inference. After a brief general introduction on probability, we will review the concept of the "probability distribution", that is, a function providing the probabilities of occurrence of the different possible outcomes of a categorical or continuous variable. Specific attention will be focused on the normal distribution, which is the most relevant distribution applied to statistical analysis.

  20. New technique of skin embedded wire double-sided laser beam welding

    NASA Astrophysics Data System (ADS)

    Han, Bing; Tao, Wang; Chen, Yanbin

    2017-06-01

    In the aircraft industry, double-sided laser beam welding is an approved method for producing skin-stringer T-joints on aircraft fuselage panels. As for the welding of new generation aluminum-lithium alloys, however, this technique is limited because of high hot cracking susceptibility and strengthening elements' uneven distributions within weld. In the present study, a new technique of skin embedded wire double-sided laser beam welding (LBW) has been developed to fabricate T-joints consisting of 2.0 mm thick 2060-T8/2099-T83 aluminum-lithium alloys using eutectic alloy AA4047 filler wire. Necessary dimension parameters of the novel groove were reasonably designed for achieving crack-free welds. Comparisons were made between the new technique welded T-joint and conventional T-joint mainly on microstructure, hot crack, elements distribution features and mechanical properties within weld. Excellent crack-free microstructure, uniform distribution of silicon and superior tensile properties within weld were found in the new skin embedded wire double-sided LBW T-joints.

  1. Background for Joint Systems Aspects of AIR 6000

    DTIC Science & Technology

    2000-04-01

    Checkland’s Soft Systems Methodology [7, 8, 9]. The analytical techniques that are proposed for joint systems work are based on calculating probability...Supporting Global Interests 21 DSTO-CR-0155 SLMP Structural Life Management Plan SOW Stand-Off Weapon SSM Soft Systems Methodology UAV Uninhabited Aerial... Systems Methodology in Action, John Wiley & Sons, Chichester, 1990. [10] Pearl, Judea, Probabilistic Reasoning in Intelligent Systems: Networks of Plausible

  2. Arthroscopic Management of Scaphoid-Trapezium-Trapezoid Joint Arthritis.

    PubMed

    Pegoli, Loris; Pozzi, Alessandro

    2017-11-01

    Scaphoid-trapezium-trapezoid (STT) joint arthritis is a common condition consisting of pain on the radial side of the wrist and base of the thumb, swelling, and tenderness over the STT joint. Common symptoms are loss of grip strength and thumb function. There are several treatments, from symptomatic conservative treatment to surgical solutions, such as arthrodesis, arthroplasties, and prosthesis implants. The role of arthroscopy has grown, and it is probably the best treatment for this condition. Advantages of arthroscopic management of STT arthritis are faster recovery, a better view of the joint during surgery, and the possibility of creating less damage to the capsular and ligamentous structures. Copyright © 2017 Elsevier Inc. All rights reserved.

  3. Distribution of lactate dehydrogenase in healthy and degenerative canine stifle joint cartilage.

    PubMed

    Walter, Eveline L C; Spreng, David; Schmöckel, Hugo; Schawalder, Peter; Tschudi, Peter; Friess, Armin E; Stoffel, Michael H

    2007-07-01

    In dogs, degenerative joint diseases (DJD) have been shown to be associated with increased lactate dehydrogenase (LDH) activity in the synovial fluid. The goal of this study was to examine healthy and degenerative stifle joints in order to clarify the origin of LDH in synovial fluid. In order to assess the distribution of LDH, cartilage samples from healthy and degenerative knee joints were investigated by means of light and transmission electron microscopy in conjunction with immunolabeling and enzyme cytochemistry. Morphological analysis confirmed DJD. All techniques used corroborated the presence of LDH in chondrocytes and in the interterritorial matrix of healthy and degenerative stifle joints. Although enzymatic activity of LDH was clearly demonstrated in the territorial matrix by means of the tetrazolium-formazan reaction, immunolabeling for LDH was missing in this region. With respect to the distribution of LDH in the interterritorial matrix, a striking decrease from superficial to deeper layers was present in healthy dogs but was missing in affected joints. These results support the contention that LDH in synovial fluid of degenerative joints originates from cartilage. Therefore, we suggest that (1) LDH is transferred from chondrocytes to ECM in both healthy dogs and dogs with degenerative joint disease and that (2) in degenerative joints, LDH is released from chondrocytes and the ECM into synovial fluid through abrasion of cartilage as well as through enhanced diffusion as a result of increased water content and degradation of collagen.

  4. optBINS: Optimal Binning for histograms

    NASA Astrophysics Data System (ADS)

    Knuth, Kevin H.

    2018-03-01

    optBINS (optimal binning) determines the optimal number of bins in a uniform bin-width histogram by deriving the posterior probability for the number of bins in a piecewise-constant density model after assigning a multinomial likelihood and a non-informative prior. The maximum of the posterior probability occurs at a point where the prior probability and the joint likelihood are balanced. The interplay between these opposing factors effectively implements Occam's razor by selecting the simplest model that best describes the data.
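
    A compact sketch of the bin-number selection, evaluating a relative log posterior over candidate bin counts for a piecewise-constant density with a multinomial likelihood. The expression follows the form published for optBINS, but this snippet is an independent reimplementation rather than the released code (a maintained version of the rule is also exposed as astropy.stats.knuth_bin_width).

```python
import numpy as np
from scipy.special import gammaln

def log_posterior_bins(data, m):
    """Relative log posterior probability of m equal-width bins."""
    n = len(data)
    counts, _ = np.histogram(data, bins=m)
    return (n * np.log(m)
            + gammaln(m / 2.0) - m * gammaln(0.5) - gammaln(n + m / 2.0)
            + gammaln(counts + 0.5).sum())

rng = np.random.default_rng(8)
data = rng.normal(size=1000)

candidates = np.arange(2, 150)
logp = np.array([log_posterior_bins(data, m) for m in candidates])
print("optimal number of bins:", candidates[logp.argmax()])
```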

  5. Nonstationary envelope process and first excursion probability.

    NASA Technical Reports Server (NTRS)

    Yang, J.-N.

    1972-01-01

    The definition of the stationary random envelope proposed by Cramer and Leadbetter is extended to the envelope of nonstationary random processes possessing evolutionary power spectral densities. The density function, the joint density function, the moment function, and the crossing rate of a level of the nonstationary envelope process are derived. Based on the envelope statistics, approximate solutions to the first excursion probability of nonstationary random processes are obtained. In particular, applications of the first excursion probability to earthquake engineering problems are demonstrated in detail.

  6. Income-related health transfers principles and orderings of joint distributions of income and health.

    PubMed

    Khaled, Mohamad A; Makdissi, Paul; Yazbeck, Myra

    2018-01-01

    The objective of this article is to provide the analyst with the necessary tools that allow for a robust ordering of joint distributions of health and income. We contribute to the literature on the measurement and inference of socioeconomic health inequality in three distinct but complementary ways. First, we provide a formalization of the socioeconomic health inequality-specific ethical principle introduced by Erreygers et al. (2012). Second, we propose new graphical tools and dominance tests for the identification of robust orderings of joint distributions of income and health associated with this new ethical principle. Finally, based on both pro-poor and pro-extreme-ranks ethical principles, we address an important aspect of the dominance literature: inference. To illustrate the empirical relevance of the proposed approach, we compare joint distributions of income and a health-related behavior in the United States in 1997 and 2014. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Height probabilities in the Abelian sandpile model on the generalized finite Bethe lattice

    NASA Astrophysics Data System (ADS)

    Chen, Haiyan; Zhang, Fuji

    2013-08-01

    In this paper, we study the sandpile model on the generalized finite Bethe lattice with a particular boundary condition. Using a combinatorial method, we give the exact expressions for all single-site probabilities and some two-site joint probabilities. As a by-product, we prove that the height probabilities of bulk vertices are all the same for the Bethe lattice with a certain given boundary condition, which was found from numerical evidence by Grassberger and Manna ["Some more sandpiles," J. Phys. (France) 51, 1077-1098 (1990), 10.1051/jphys:0199000510110107700] but without a proof.

  8. A Parallel and Distributed Processing Model of Joint Attention, Social-Cognition and Autism

    PubMed Central

    Mundy, Peter; Sullivan, Lisa; Mastergeorge, Ann M.

    2009-01-01

    The impaired development of joint attention is a cardinal feature of autism. Therefore, understanding the nature of joint attention is central to research on this disorder. Joint attention may be best defined in terms of an information processing system that begins to develop by 4–6 months of age. This system integrates the parallel processing of internal information about one’s own visual attention with external information about the visual attention of other people. This type of joint encoding of information about self and other attention requires the activation of a distributed anterior and posterior cortical attention network. Genetic regulation, in conjunction with self-organizing behavioral activity, guides the development of functional connectivity in this network. With practice in infancy the joint processing of self-other attention becomes automatically engaged as an executive function. It can be argued that this executive joint attention is fundamental to human learning, as well as the development of symbolic thought, social-cognition and social-competence throughout the life span. One advantage of this parallel and distributed processing model of joint attention (PDPM) is that it directly connects theory on social pathology to a range of phenomena in autism associated with neural connectivity, constructivist and connectionist models of cognitive development, early intervention, activity-dependent gene expression, and atypical ocular motor control. PMID:19358304

  9. Digital simulation of two-dimensional random fields with arbitrary power spectra and non-Gaussian probability distribution functions.

    PubMed

    Yura, Harold T; Hanson, Steen G

    2012-04-01

    Methods for simulation of two-dimensional signals with arbitrary power spectral densities and signal amplitude probability density functions are disclosed. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In most cases the method provides satisfactory results and can thus be considered an engineering approach. Several illustrative examples with relevance for optics are given.
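    As a rough illustration of the two-step procedure described above, the sketch below first colors a white Gaussian field with a chosen power spectrum and then maps it point-wise through its CDF and the inverse CDF of a target distribution. The Gaussian-shaped spectrum and the gamma marginal are assumptions made for the example, not choices taken from the paper.

```python
import numpy as np
from scipy.stats import norm, gamma

def correlated_nongaussian_field(n, corr_length, shape_k=2.0, seed=0):
    """Two-step simulation: (1) spectrally color white Gaussian noise,
    (2) transform point-wise to the target amplitude distribution."""
    rng = np.random.default_rng(seed)
    white = rng.normal(size=(n, n))

    # Step 1: impose a (Gaussian-shaped) power spectrum in Fourier space.
    k = np.fft.fftfreq(n)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    spectrum = np.exp(-(kx**2 + ky**2) * corr_length**2)
    colored = np.fft.ifft2(np.fft.fft2(white) * np.sqrt(spectrum)).real
    colored = (colored - colored.mean()) / colored.std()

    # Step 2: memoryless transform to the desired (gamma) marginal distribution.
    u = norm.cdf(colored)            # uniform marginals
    return gamma.ppf(u, a=shape_k)   # gamma-distributed amplitudes

field = correlated_nongaussian_field(256, corr_length=10.0)
print(field.shape, round(field.mean(), 3), round(field.std(), 3))
```

    Note that the point-wise transform slightly distorts the imposed spectrum, which is consistent with the paper's characterization of the method as an engineering approach that is satisfactory in most cases.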

  10. Patient and implant survival following joint replacement because of metastatic bone disease

    PubMed Central

    2013-01-01

    Background Patients suffering from a pathological fracture or painful bony lesion because of metastatic bone disease often benefit from a total joint replacement. However, these are large operations in patients who are often weak. We examined the patient survival and complication rates after total joint replacement as the treatment for bone metastasis or hematological diseases of the extremities. Patients and methods 130 patients (mean age 64 (30–85) years, 76 females) received 140 joint replacements due to skeletal metastases (n = 114) or hematological disease (n = 16) during the period 2003–2008. 21 replaced joints were located in the upper extremities and 119 in the lower extremities. Clinical and survival data were extracted from patient files and various registers. Results The probability of patient survival was 51% (95% CI: 42–59) after 6 months, 39% (CI: 31–48) after 12 months, and 29% (CI: 21–37) after 24 months. The following surgical complications were seen (8 of which led to additional surgery): 2–5 hip dislocations (n = 8), deep infection (n = 3), peroneal palsy (n = 2), a shoulder prosthesis penetrating the skin (n = 1), and disassembly of an elbow prosthesis (n = 1). The probability of avoiding all kinds of surgery related to the implanted prosthesis was 94% (CI: 89–99) after 1 year and 92% (CI: 85–98) after 2 years. Conclusion Joint replacement operations because of metastatic bone disease do not appear to have given a poorer rate of patient survival than other types of surgical treatment, and the reoperation rate was low. PMID:23530874

  11. Infant joint attention, neural networks and social cognition.

    PubMed

    Mundy, Peter; Jarrold, William

    2010-01-01

    Neural network models of attention can provide a unifying approach to the study of human cognitive and emotional development (Posner & Rothbart, 2007). In this paper we argue that a neural network approach to the infant development of joint attention can inform our understanding of the nature of human social learning, symbolic thought process and social cognition. At its most basic, joint attention involves the capacity to coordinate one's own visual attention with that of another person. We propose that joint attention development involves increments in the capacity to engage in simultaneous or parallel processing of information about one's own attention and the attention of other people. Infant practice with joint attention is both a consequence and an organizer of the development of a distributed and integrated brain network involving frontal and parietal cortical systems. This executive distributed network first serves to regulate the capacity of infants to respond to and direct the overt behavior of other people in order to share experience with others through the social coordination of visual attention. In this paper we describe this parallel and distributed neural network model of joint attention development and discuss two hypotheses that stem from this model. One is that activation of this distributed network during coordinated attention enhances the depth of information processing and encoding beginning in the first year of life. We also propose that with development, joint attention becomes internalized as the capacity to socially coordinate mental attention to internal representations. As this occurs the executive joint attention network makes vital contributions to the development of human symbolic thinking and social cognition. Copyright © 2010 Elsevier Ltd. All rights reserved.

  12. On Statistics of Bi-Orthogonal Eigenvectors in Real and Complex Ginibre Ensembles: Combining Partial Schur Decomposition with Supersymmetry

    NASA Astrophysics Data System (ADS)

    Fyodorov, Yan V.

    2018-06-01

    We suggest a method of studying the joint probability density (JPD) of an eigenvalue and the associated `non-orthogonality overlap factor' (also known as the `eigenvalue condition number') of the left and right eigenvectors for non-selfadjoint Gaussian random matrices of size N × N. First we derive the general finite N expression for the JPD of a real eigenvalue λ and the associated non-orthogonality factor in the real Ginibre ensemble, and then analyze its `bulk' and `edge' scaling limits. The ensuing distribution is maximally heavy-tailed, so that all integer moments beyond normalization are divergent. A similar calculation for a complex eigenvalue z and the associated non-orthogonality factor in the complex Ginibre ensemble is presented as well and yields a distribution with a finite first moment. Its `bulk' scaling limit yields a distribution whose first moment reproduces the well-known result of Chalker and Mehlig (Phys Rev Lett 81(16):3367-3370, 1998), and we provide the `edge' scaling distribution for this case as well. Our method involves evaluating the ensemble average of products and ratios of integer and half-integer powers of characteristic polynomials for Ginibre matrices, which we perform in the framework of a supersymmetry approach. Our paper complements recent studies by Bourgade and Dubach (The distribution of overlaps between eigenvectors of Ginibre matrices, 2018. arXiv:1801.01219).

  13. Non-Fickian dispersion of groundwater age

    PubMed Central

    Engdahl, Nicholas B.; Ginn, Timothy R.; Fogg, Graham E.

    2014-01-01

    We expand the governing equation of groundwater age to account for non-Fickian dispersive fluxes using continuous random walks. Groundwater age is included as an additional (fifth) dimension on which the volumetric mass density of water is distributed and we follow the classical random walk derivation now in five dimensions. The general solution of the random walk recovers the previous conventional model of age when the low order moments of the transition density functions remain finite at their limits and describes non-Fickian age distributions when the transition densities diverge. Previously published transition densities are then used to show how the added dimension in age affects the governing differential equations. Depending on which transition densities diverge, the resulting models may be nonlocal in time, space, or age and can describe asymptotic or pre-asymptotic dispersion. A joint distribution function of time and age transitions is developed as a conditional probability and a natural result of this is that time and age must always have identical transition rate functions. This implies that a transition density defined for age can substitute for a density in time and this has implications for transport model parameter estimation. We present examples of simulated age distributions from a geologically based, heterogeneous domain that exhibit non-Fickian behavior and show that the non-Fickian model provides better descriptions of the distributions than the Fickian model. PMID:24976651

  14. Fracture network of the Ferron Sandstone Member of the Mancos Shale, east-central Utah, USA

    USGS Publications Warehouse

    Condon, S.M.

    2003-01-01

    The fracture network at the outcrop of the Ferron Sandstone Member of the Mancos Shale was studied to gain an understanding of the tectonic history of the region and to contribute data to studies of gas and water transmissivity related to the occurrence and production of coal-bed methane. About 1900 fracture readings were made at 40 coal outcrops and 62 sandstone outcrops in the area from Willow Springs Wash in the south to Farnham dome in the north of the study area in east-central Utah. Two sets of regional, vertical to nearly vertical, systematic face cleats were identified in Ferron coals. A northwest-striking set trends at a mean azimuth of 321°, and a northeast-striking set has a mean azimuth of 55°. Cleats were observed in all coal outcrops examined and are closely spaced and commonly coated with thin films of iron oxide. Two sets of regional, systematic joint sets in sandstone were also identified and have mean azimuths of 321° and 34°. The joints of each set are planar, long, and extend vertically to nearly vertically through multiple beds; the northeast-striking set is more prevalent than the northwest-striking set. In some places, joints of the northeast-striking set occur in closely spaced clusters, or joint zones, flanked by unjointed rock. Both sets are mineralized with iron oxide and calcite, and the northwest-striking set is commonly tightly cemented, which allowed the northeast-striking set to propagate across it. All cleats and joints of these sets are interpreted as opening-mode (mode I) fractures. Abutting relations indicate that the northwest-striking cleats and joints formed first and were later overprinted by the northeast-striking cleats and joints. Burial curves constructed for the Ferron indicate rapid initial burial after deposition. The Ferron reached a depth of 3000 ft (1000 m) within 5.2 million years (m.y.), and this is considered a minimum depth and time for development of cleats and joints. The Sevier orogeny produced southeast-directed compressional stress at this time and is thought to be the likely mechanism for the northwest-striking systematic cleats and joints. The onset of the Laramide orogeny occurred at about 75 Ma, within 13.7 m.y. of burial, and is thought to be the probable mechanism for development of the northeast-striking systematic cleats and joints. Uplift of the Ferron in the late Tertiary contributed to development of butt cleats and secondary cross-joints and probably enhanced previously formed fracture sets. Using a study of the younger Blackhawk Formation as an analogy, the fracture pattern of the Ferron in the subsurface is probably similar to that at the surface, at least as far west as the Paradise fault and Joe's Valley graben. Farther to the west, on the Wasatch Plateau, the orientations of Ferron fractures may diverge from those measured at the outcrop. © 2003 Elsevier B.V. All rights reserved.

  15. The global impact distribution of Near-Earth objects

    NASA Astrophysics Data System (ADS)

    Rumpf, Clemens; Lewis, Hugh G.; Atkinson, Peter M.

    2016-02-01

    Asteroids that could collide with the Earth are listed on the publicly available Near-Earth object (NEO) hazard web sites maintained by the National Aeronautics and Space Administration (NASA) and the European Space Agency (ESA). The impact probability distributions of 69 potentially threatening NEOs from these lists, which produce 261 dynamically distinct impact instances, or Virtual Impactors (VIs), were calculated using the Asteroid Risk Mitigation and Optimization Research (ARMOR) tool in conjunction with OrbFit. ARMOR projected the impact probability of each VI onto the surface of the Earth as a spatial probability distribution. The projection considers orbit solution accuracy and the global impact probability. The method of ARMOR is introduced and the tool is validated against two asteroid-Earth collision cases with objects 2008 TC3 and 2014 AA. In the analysis, the natural distribution of impact corridors is contrasted against the impact probability distribution to evaluate the distributions' conformity with the uniform impact distribution assumption. The distribution of impact corridors is based on the NEO population and orbital mechanics. The analysis shows that the distribution of impact corridors matches the common assumption of uniform impact distribution, and the result extends the evidence base for the uniform assumption from qualitative analysis of historic impact events into the future in a quantitative way. This finding is confirmed in a parallel analysis of impact points belonging to a synthetic population of 10,006 VIs. Taking the impact probabilities into account introduced significant variation into the results, and the impact probability distribution consequently deviates markedly from uniformity. The concept of impact probabilities is a product of the asteroid observation and orbit determination technique and, thus, represents a man-made component that is largely disconnected from natural processes. It is important to consider impact probabilities because such information represents the best estimate of where an impact might occur.

  16. A method for the automated construction of the joint system of equations to solve the problem of the flow distribution in hydraulic networks

    NASA Astrophysics Data System (ADS)

    Novikov, A. E.

    1993-10-01

    There are several methods for solving the problem of flow distribution in hydraulic networks, but none of them provides mathematical tools for forming the joint system of equations needed to solve this problem. This paper suggests a method for constructing joint systems of equations to calculate hydraulic circuits of arbitrary form. The graph concept, following Kirchhoff, is introduced.
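    A minimal sketch of one common way to assemble and solve Kirchhoff-type equations for a small network, assuming linear (laminar) head losses so that the node-continuity and loop-energy conditions reduce to a single linear system in the nodal heads; the network data are illustrative and this is not the specific construction proposed in the paper.

```python
import numpy as np

# Pipes as (node_i, node_j, hydraulic resistance r); flow on a pipe is q = (h_i - h_j) / r.
edges = [(0, 1, 2.0), (1, 2, 1.0), (0, 2, 4.0), (2, 3, 1.5)]
n_nodes = 4
injection = np.array([1.0, 0.0, 0.0, -1.0])   # net inflow (+) / outflow (-) at each node

# Continuity at every node gives a weighted graph-Laplacian system L h = injection.
L = np.zeros((n_nodes, n_nodes))
for i, j, r in edges:
    g = 1.0 / r
    L[i, i] += g; L[j, j] += g
    L[i, j] -= g; L[j, i] -= g

# Fix the head at a reference node to remove the Laplacian's null space.
L[0, :] = 0.0; L[0, 0] = 1.0
rhs = injection.copy(); rhs[0] = 0.0

heads = np.linalg.solve(L, rhs)
flows = {(i, j): (heads[i] - heads[j]) / r for i, j, r in edges}
print("nodal heads:", heads)
print("pipe flows:", flows)
```

    For turbulent (nonlinear) head losses the same graph structure carries over, but the resulting joint system must be solved iteratively, for example by Newton linearization.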

  17. More than the sum of the parts: forest climate response from joint species distribution models

    Treesearch

    James S. Clark; Alan E. Gelfand; Christopher W. Woodall; Kai Zhu

    2014-01-01

    The perceived threat of climate change is often evaluated from species distribution models that are fitted to many species independently and then added together. This approach ignores the fact that species are jointly distributed and limit one another. Species respond to the same underlying climatic variables, and the abundance of any one species can be constrained by...

  18. Joint Experimentation on Scalable Parallel Processors (JESPP)

    DTIC Science & Technology

    2006-04-01

    made use of local embedded relational databases, implemented using sqlite on each node of an SPP to execute queries and return results via an ad hoc ... Experimentation Directorate (J9) required expansion of its joint semi-automated forces (JSAF) code capabilities, including number of entities and behavior complexity

  19. Kinematic adaptations to tripedal locomotion in dogs.

    PubMed

    Goldner, B; Fuchs, A; Nolte, I; Schilling, N

    2015-05-01

    Limb amputation often represents the only treatment option for canine patients with certain diseases or injuries of the appendicular system. Previous studies have investigated adaptations to tripedal locomotion in dogs but there is a lack of understanding of biomechanical compensatory mechanisms. This study evaluated the kinematic differences between quadrupedal and tripedal locomotion in nine healthy dogs running on a treadmill. The loss of the right pelvic limb was simulated using an Ehmer sling. Kinematic gait analysis included spatio-temporal comparisons of limb, joint and segment angles of the remaining pelvic and both thoracic limbs. The following key parameters were compared between quadrupedal and tripedal conditions: angles at touch-down and lift-off, minimum and maximum joint angles, plus range of motion. Significant differences in angular excursion were identified in several joints of each limb during both stance and swing phases. The most pronounced differences concerned the remaining pelvic limb, followed by the contralateral thoracic limb and, to a lesser degree, the ipsilateral thoracic limb. The thoracic limbs were, in general, more retracted, consistent with pelvic limb unloading and previous observations of bodyweight re-distribution in amputees. Proximal limb segments showed more distinct changes than distal ones. Particularly, the persistently greater anteversion of the pelvis probably affects the axial system. Overall, tripedal locomotion requires concerted kinematic adjustments of both the appendicular and axial systems, and consequently preventive, therapeutic and rehabilitative care of canine amputees should involve the whole musculoskeletal apparatus. Copyright © 2015 Elsevier Ltd. All rights reserved.

  20. Evaluation of the topological characteristics of the turbulent flow in a `box of turbulence' through 2D time-resolved particle image velocimetry

    NASA Astrophysics Data System (ADS)

    Lian, Huan; Soulopoulos, Nikolaos; Hardalupas, Yannis

    2017-09-01

    The experimental evaluation of the topological characteristics of the turbulent flow in a `box' of homogeneous and isotropic turbulence (HIT) with zero mean velocity is presented. This requires an initial evaluation of the effect of signal noise on measurement of velocity invariants. The joint probability distribution functions (pdfs) of experimentally evaluated, noise contaminated, velocity invariants have a different shape than the corresponding noise-free joint pdfs obtained from the DNS data of the Johns Hopkins University (JHU) open resource HIT database. A noise model, based on Gaussian and impulsive Salt and Pepper noise, is established and added artificially to the DNS velocity vector field of the JHU database. Digital filtering methods, based on Median and Wiener Filters, are chosen to eliminate the modeled noise source and their capacity to restore the joint pdfs of velocity invariants to that of the noise-free DNS data is examined. The remaining errors after filtering are quantified by evaluating the global mean velocity, turbulent kinetic energy and global turbulent homogeneity, assessed through the behavior of the ratio of the standard deviation of the velocity fluctuations in two directions, the energy spectrum of the velocity fluctuations and the eigenvalues of the rate-of-strain tensor. A data-filtering method based on median-filtered velocity with different median filter window sizes is used, together with the radial distribution function (RDF) and Voronoï analysis, to quantify the clustering of zero-velocity points of the turbulent field in the 2D time-resolved particle image velocimetry (TR-PIV) velocity measurements. It was found that a median filter with a window size of 3 × 3 vector spacings is an effective and efficient approach to eliminate the experimental noise from PIV-measured velocity images to a satisfactory level and to extract the statistical two-dimensional topological turbulent flow patterns.
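    A brief Python sketch of the post-processing idea described above, assuming a 3 × 3 median filter applied to the velocity components before computing the invariants (trace and determinant) of the 2D velocity-gradient tensor; the synthetic field stands in for a TR-PIV snapshot and is not the study's data.

```python
import numpy as np
from scipy.ndimage import median_filter

def filtered_invariants(u, v, spacing=1.0, window=3):
    """Median-filter a 2D velocity field and return the invariants
    (trace and determinant) of the 2D velocity-gradient tensor."""
    uf = median_filter(u, size=window)
    vf = median_filter(v, size=window)

    # np.gradient returns derivatives along axis 0 (rows, y) then axis 1 (cols, x).
    dudy, dudx = np.gradient(uf, spacing)
    dvdy, dvdx = np.gradient(vf, spacing)

    trace = dudx + dvdy                    # first invariant of the gradient tensor
    det = dudx * dvdy - dudy * dvdx        # second invariant of the gradient tensor
    return trace, det

rng = np.random.default_rng(1)
u = rng.normal(size=(64, 64))              # stand-in for a PIV velocity snapshot
v = rng.normal(size=(64, 64))
p, q = filtered_invariants(u, v)
print(p.shape, round(float(q.mean()), 4))
```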

  1. Distributed Joint Source-Channel Coding in Wireless Sensor Networks

    PubMed Central

    Zhu, Xuqi; Liu, Yu; Zhang, Lin

    2009-01-01

    Considering that sensors are energy-limited and given the wireless channel conditions in wireless sensor networks, there is an urgent need for a low-complexity coding method with a high compression ratio and noise-resistant features. This paper reviews the progress made in distributed joint source-channel coding, which can address this issue. The main existing deployments, from theory to practice, of distributed joint source-channel coding over independent channels, multiple access channels, and broadcast channels are introduced, respectively. To this end, we also present a practical scheme for compressing multiple correlated sources over independent channels. The simulation results demonstrate the desired efficiency. PMID:22408560

  2. Bayesian model averaging using particle filtering and Gaussian mixture modeling: Theory, concepts, and simulation experiments

    NASA Astrophysics Data System (ADS)

    Rings, Joerg; Vrugt, Jasper A.; Schoups, Gerrit; Huisman, Johan A.; Vereecken, Harry

    2012-05-01

    Bayesian model averaging (BMA) is a standard method for combining predictive distributions from different models. In recent years, this method has enjoyed widespread application and use in many fields of study to improve the spread-skill relationship of forecast ensembles. The BMA predictive probability density function (pdf) of any quantity of interest is a weighted average of pdfs centered around the individual (possibly bias-corrected) forecasts, where the weights are equal to posterior probabilities of the models generating the forecasts, and reflect the individual models' skill over a training (calibration) period. The original BMA approach presented by Raftery et al. (2005) assumes that the conditional pdf of each individual model is adequately described with a rather standard Gaussian or Gamma statistical distribution, possibly with a heteroscedastic variance. Here we analyze the advantages of using BMA with a flexible representation of the conditional pdf. A joint particle filtering and Gaussian mixture modeling framework is presented to derive analytically, as closely and consistently as possible, the evolving forecast density (conditional pdf) of each constituent ensemble member. The median forecasts and evolving conditional pdfs of the constituent models are subsequently combined using BMA to derive one overall predictive distribution. This paper introduces the theory and concepts of this new ensemble postprocessing method, and demonstrates its usefulness and applicability by numerical simulation of the rainfall-runoff transformation using discharge data from three different catchments in the contiguous United States. The revised BMA method achieves significantly lower prediction errors than the original default BMA method (due to filtering), with predictive uncertainty intervals that are substantially smaller but still statistically coherent (due to the use of a time-variant conditional pdf).
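    To make the averaging step concrete, here is a minimal sketch of the basic BMA predictive density as described above, with each conditional pdf taken as Gaussian (the default assumption of Raftery et al., 2005); the revised method of this paper would substitute the particle-filter/Gaussian-mixture pdfs for norm.pdf. The numbers below are illustrative.

```python
import numpy as np
from scipy.stats import norm

def bma_predictive_pdf(y, forecasts, weights, sigmas):
    """BMA predictive density: weighted mixture of member pdfs centred on the
    (bias-corrected) forecasts; weights are the posterior model probabilities."""
    y = np.asarray(y, dtype=float)[:, None]          # evaluation points
    f = np.asarray(forecasts, dtype=float)[None, :]  # K member forecasts
    w = np.asarray(weights, dtype=float)[None, :]    # posterior weights (sum to 1)
    s = np.asarray(sigmas, dtype=float)[None, :]     # member spread parameters
    return np.sum(w * norm.pdf(y, loc=f, scale=s), axis=1)

grid = np.linspace(0.0, 10.0, 201)
pdf = bma_predictive_pdf(grid, forecasts=[3.0, 4.5, 5.2],
                         weights=[0.2, 0.5, 0.3], sigmas=[0.8, 0.6, 1.0])
print("integrates to ~1:", round(float(np.sum(pdf) * (grid[1] - grid[0])), 3))
```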

  3. Global Pyrogeography: the Current and Future Distribution of Wildfire

    PubMed Central

    Krawchuk, Meg A.; Moritz, Max A.; Parisien, Marc-André; Van Dorn, Jeff; Hayhoe, Katharine

    2009-01-01

    Climate change is expected to alter the geographic distribution of wildfire, a complex abiotic process that responds to a variety of spatial and environmental gradients. How future climate change may alter global wildfire activity, however, is still largely unknown. As a first step to quantifying potential change in global wildfire, we present a multivariate quantification of environmental drivers for the observed, current distribution of vegetation fires using statistical models of the relationship between fire activity and resources to burn, climate conditions, human influence, and lightning flash rates at a coarse spatiotemporal resolution (100 km, over one decade). We then demonstrate how these statistical models can be used to project future changes in global fire patterns, highlighting regional hotspots of change in fire probabilities under future climate conditions as simulated by a global climate model. Based on current conditions, our results illustrate how the availability of resources to burn and climate conditions conducive to combustion jointly determine why some parts of the world are fire-prone and others are fire-free. In contrast to any expectation that global warming should necessarily result in more fire, we find that regional increases in fire probabilities may be counter-balanced by decreases at other locations, due to the interplay of temperature and precipitation variables. Despite this net balance, our models predict substantial invasion and retreat of fire across large portions of the globe. These changes could have important effects on terrestrial ecosystems since alteration in fire activity may occur quite rapidly, generating ever more complex environmental challenges for species dispersing and adjusting to new climate conditions. Our findings highlight the potential for widespread impacts of climate change on wildfire, suggesting severely altered fire regimes and the need for more explicit inclusion of fire in research on global vegetation-climate change dynamics and conservation planning. PMID:19352494

  4. 2000 Worldwide Joint Lessons Learned Conference. Forging a Future Joint Lessons Learned System. (Joint Center for Lessons Learned Special Bulletin. Volume 3, Special Issue 1, January 2001)

    DTIC Science & Technology

    2001-01-01

    Management System (JTIMS) followed, and generated spirited discussion regarding the respective roles of JTIMS and the JLLP. The discussion concluded...waiting for the Director, Joint Staff's signature and should be in official distribution by January 2001. An update on the Joint Training Information

  5. COSMIC MICROWAVE BACKGROUND LIKELIHOOD APPROXIMATION FOR BANDED PROBABILITY DISTRIBUTIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gjerløw, E.; Mikkelsen, K.; Eriksen, H. K.

    We investigate sets of random variables that can be arranged sequentially such that a given variable only depends conditionally on its immediate predecessor. For such sets, we show that the full joint probability distribution may be expressed exclusively in terms of uni- and bivariate marginals. Under the assumption that the cosmic microwave background (CMB) power spectrum likelihood only exhibits correlations within a banded multipole range, Δl_C, we apply this expression to two outstanding problems in CMB likelihood analysis. First, we derive a statistically well-defined hybrid likelihood estimator, merging two independent (e.g., low- and high-l) likelihoods into a single expression that properly accounts for correlations between the two. Applying this expression to the Wilkinson Microwave Anisotropy Probe (WMAP) likelihood, we verify that the effect of correlations on cosmological parameters in the transition region is negligible for WMAP; the largest relative shift seen for any parameter is 0.06σ. However, because this may not hold for other experimental setups (e.g., for different instrumental noise properties or analysis masks), but must rather be verified on a case-by-case basis, we recommend our new hybridization scheme for future experiments for statistical self-consistency reasons. Second, we use the same expression to improve the convergence rate of the Blackwell-Rao likelihood estimator, reducing the required number of Monte Carlo samples by several orders of magnitude, and thereby extend it to high-l applications.
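    For reference, the factorization described in the first two sentences corresponds to the standard identity for a first-order (Markov-type) dependence chain; the notation below is generic rather than the paper's.

```latex
% Joint pdf of sequentially dependent variables written with uni- and bivariate marginals only:
\begin{equation}
p(x_1,\dots,x_n)
  = p(x_1)\prod_{i=2}^{n} p(x_i \mid x_{i-1})
  = \frac{\prod_{i=2}^{n} p(x_{i-1}, x_i)}{\prod_{i=2}^{n-1} p(x_i)} .
\end{equation}
```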

  6. The Relation Between Reproductive Value and Genetic Contribution

    PubMed Central

    Barton, Nicholas H.; Etheridge, Alison M.

    2011-01-01

    What determines the genetic contribution that an individual makes to future generations? With biparental reproduction, each individual leaves a “pedigree” of descendants, determined by the biparental relationships in the population. The pedigree of an individual constrains the lines of descent of each of its genes. An individual’s reproductive value is the expected number of copies of each of its genes that is passed on to distant generations conditional on its pedigree. For the simplest model of biparental reproduction (analogous to the Wright–Fisher model), an individual’s reproductive value is determined within ∼10 generations, independent of population size. Partial selfing and subdivision do not greatly slow this convergence. Our central result is that the probability that a gene will survive is proportional to the reproductive value of the individual that carries it and that, conditional on survival, after a few tens of generations, the distribution of the number of surviving copies is the same for all individuals, whatever their reproductive value. These results can be generalized to the joint distribution of surviving blocks of the ancestral genome. Selection on unlinked loci in the genetic background may greatly increase the variance in reproductive value, but the above results nevertheless still hold. The almost linear relationship between survival probability and reproductive value also holds for weakly favored alleles. Thus, the influence of the complex pedigree of descendants on an individual’s genetic contribution to the population can be summarized through a single number: its reproductive value. PMID:21624999

  7. Probabilistic Relationships between Ground‐Motion Parameters and Modified Mercalli Intensity in California

    USGS Publications Warehouse

    Worden, C.B.; Wald, David J.; Rhoades, D.A.

    2012-01-01

    We use a database of approximately 200,000 modified Mercalli intensity (MMI) observations of California earthquakes collected from USGS "Did You Feel It?" (DYFI) reports, along with a comparable number of peak ground-motion amplitudes from California seismic networks, to develop probabilistic relationships between MMI and peak ground velocity (PGV), peak ground acceleration (PGA), and 0.3-s, 1-s, and 3-s 5% damped pseudospectral acceleration (PSA). After associating each ground-motion observation with an MMI computed from all the DYFI responses within 2 km of the observation, we derived a joint probability distribution between MMI and ground motion. We then derived reversible relationships between MMI and each ground-motion parameter by using a total least squares regression to fit a bilinear function to the median of the stacked probability distributions. Among the relationships, the fit to peak ground velocity has the smallest errors, though linear combinations of PGA and PGV give nominally better results. We also find that magnitude and distance terms reduce the overall residuals and are justifiable on an information theoretic basis. For intensities MMI≥5, our results are in close agreement with the relations of Wald, Quitoriano, Heaton, and Kanamori (1999); for lower intensities, our results fall midway between Wald, Quitoriano, Heaton, and Kanamori (1999) and those of Atkinson and Kaka (2007). The earthquakes in the study ranged in magnitude from 3.0 to 7.3, and the distances ranged from less than a kilometer to about 400 km from the source.
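    A hedged sketch of the curve-fitting step described above: a continuous bilinear (two-segment) relation between MMI and log10(PGV) fitted to illustrative median points with ordinary least squares (the study used total least squares on the medians of the stacked distributions); the coefficient names and data here are placeholders, not the published values.

```python
import numpy as np
from scipy.optimize import curve_fit

def bilinear(log_pgv, a1, b1, b2, t):
    """Two-segment MMI--log10(PGV) relation, continuous at the break point t."""
    low = a1 + b1 * log_pgv
    high = (a1 + b1 * t) + b2 * (log_pgv - t)
    return np.where(log_pgv <= t, low, high)

# Synthetic median (log10 PGV, MMI) pairs standing in for the stacked distributions.
log_pgv = np.linspace(-1.0, 2.0, 13)
mmi = np.clip(2.9 + 1.7 * log_pgv + 0.8 * np.maximum(log_pgv - 0.5, 0.0), 1.0, 10.0)

params, _ = curve_fit(bilinear, log_pgv, mmi, p0=[3.0, 1.5, 3.0, 0.5])
print("fitted coefficients (a1, b1, b2, t):", params)
```

    Because the fitted relation is monotonic in log10(PGV), it can be inverted to recover a reversible ground-motion/intensity conversion of the kind the study derives.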

  8. Clinical effects of leg length discrepancy through ground and joint reaction force responses: A review

    NASA Astrophysics Data System (ADS)

    Zabri, S. W. K. Ali; Basaruddin, K. S.; Salleh, A. F.; Rusli, W. M. R.; Daud, R.

    2017-09-01

    Leg length discrepancy (LLD) is caused either by a functional disorder or by shortening of the bone structure. This disorder can contribute to significant effects on body weight distribution and, to a certain extent, lumbar scoliosis. Ground reaction force and joint reaction force are the parameters that can be used to analyze the responses in weight distribution and kinetic changes at the body joints, respectively. Hence, the purpose of this paper is to review the studies that focus on the clinical effects of LLD on the lower limb and spine through ground and joint reaction force responses that could lead to orthopedic disorders.

  9. Microstructural Aspects in FSW and TIG Welding of Cast ZE41A Magnesium Alloy

    NASA Astrophysics Data System (ADS)

    Carlone, Pierpaolo; Astarita, Antonello; Rubino, Felice; Pasquino, Nicola

    2016-04-01

    In this paper, magnesium ZE41A alloy plates were butt joined through friction stir welding (FSW) and Tungsten Inert Gas welding processes. Process-induced microstructures were investigated by optical and SEM observations, EDX microanalysis and microhardness measurements. The effect of a post-welded T5 heat treatment on FSW joints was also assessed. Sound joints were produced by means of both techniques. Different elemental distributions and grain sizes were found, whereas microhardness profiles reflect microstructural changes. Post-welding heat treatment did not induce significant alterations in elemental distribution. The FSW-treated joint showed a more homogeneous hardness profile than the as-welded FSW joint.

  10. A linear programming model for protein inference problem in shotgun proteomics.

    PubMed

    Huang, Ting; He, Zengyou

    2012-11-15

    Assembling peptides identified from tandem mass spectra into a list of proteins, referred to as protein inference, is an important issue in shotgun proteomics. The objective of protein inference is to find a subset of proteins that are truly present in the sample. Although many methods have been proposed for protein inference, several issues such as peptide degeneracy still remain unsolved. In this article, we present a linear programming model for protein inference. In this model, we use a transformation of the joint probability that each peptide/protein pair is present in the sample as the variable. Then, both the peptide probability and protein probability can be expressed as a formula in terms of the linear combination of these variables. Based on this simple fact, the protein inference problem is formulated as an optimization problem: minimize the number of proteins with non-zero probabilities under the constraint that the difference between the calculated peptide probability and the peptide probability generated from peptide identification algorithms should be less than some threshold. This model addresses the peptide degeneracy issue by forcing some joint probability variables involving degenerate peptides to be zero in a rigorous manner. The corresponding inference algorithm is named as ProteinLP. We test the performance of ProteinLP on six datasets. Experimental results show that our method is competitive with the state-of-the-art protein inference algorithms. The source code of our algorithm is available at: https://sourceforge.net/projects/prolp/. zyhe@dlut.edu.cn. Supplementary data are available at Bioinformatics Online.
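    The optimization described above can be illustrated with a toy linear-programming relaxation (not the authors' exact ProteinLP formulation): joint presence variables for each protein/peptide pair, protein probabilities bounded below by those joint variables, peptide probabilities constrained to match the search-engine values within a tolerance, and the count of non-zero proteins relaxed to the sum of protein probabilities. All data and names are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

peptide_map = {"pep1": ["protA"], "pep2": ["protA", "protB"], "pep3": ["protB"]}
q = {"pep1": 0.9, "pep2": 0.8, "pep3": 0.1}   # peptide probabilities from identification
eps = 0.05                                     # allowed mismatch per peptide

proteins = sorted({p for ps in peptide_map.values() for p in ps})
pairs = [(prot, pep) for pep, ps in peptide_map.items() for prot in ps]
n_x, n_y = len(pairs), len(proteins)
col_x = {pair: k for k, pair in enumerate(pairs)}
col_y = {prot: n_x + k for k, prot in enumerate(proteins)}

c = np.zeros(n_x + n_y)
c[n_x:] = 1.0                                  # minimize the sum of protein probabilities

A_ub, b_ub = [], []
for pep, ps in peptide_map.items():            # |sum of joint variables - q| <= eps
    row = np.zeros(n_x + n_y)
    for prot in ps:
        row[col_x[(prot, pep)]] = 1.0
    A_ub.append(row);  b_ub.append(q[pep] + eps)
    A_ub.append(-row); b_ub.append(-(q[pep] - eps))
for prot, pep in pairs:                        # joint variable cannot exceed protein probability
    row = np.zeros(n_x + n_y)
    row[col_x[(prot, pep)]] = 1.0
    row[col_y[prot]] = -1.0
    A_ub.append(row); b_ub.append(0.0)

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=[(0, 1)] * (n_x + n_y))
print({prot: round(res.x[col_y[prot]], 3) for prot in proteins})
```

    In this toy instance the degenerate peptide (pep2) is attributed to the protein already required by its unique peptide, so the second protein is assigned only the small probability forced by its own unique evidence.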

  11. Snake River fall Chinook salmon life history investigations, annual report 2008

    USGS Publications Warehouse

    Tiffan, Kenneth F.; Connor, William P.; Bellgraph, Brian J.; Buchanan, Rebecca A.

    2010-01-01

    In 2009, we used radio and acoustic telemetry to evaluate the migratory behavior, survival, mortality, and delay of subyearling fall Chinook salmon in the Clearwater River and Lower Granite Reservoir. We released a total of 1,000 tagged hatchery subyearlings at Cherry Lane on the Clearwater River in mid-August and we monitored them as they passed downstream through various river and reservoir reaches. Survival through the free-flowing river was high (>0.85) for both radio- and acoustic-tagged fish, but dropped substantially as fish delayed in the Transition Zone and Confluence areas. Estimates of the joint probability of migration and survival through the Transition Zone and Confluence reaches combined were similar for both radio- and acoustic-tagged fish, and ranged from about 0.30 to 0.35. Estimates of the joint probability of delaying and surviving in the combined Transition Zone and Confluence peaked at the beginning of the study, ranging from 0.323 (SE = NA; radio-telemetry data) to 0.466 (SE = 0.024; acoustic-telemetry data), and then steadily declined throughout the remainder of the study. By the end of October, no live tagged juvenile salmon were detected in either the Transition Zone or the Confluence. As estimates of the probability of delay decreased throughout the study, estimates of the probability of mortality increased, as evidenced by the survival estimate of 0.650 (SE = 0.025) at the end of October (acoustic-telemetry data). Few fish were detected at Lower Granite Dam during our study and even fewer fish passed the dam before PIT-tag monitoring ended at the end of October. Five acoustic-tagged fish passed Lower Granite Dam in October and 12 passed the dam in November based on detections in the dam tailrace; however, too few detections were available to calculate the joint probabilities of migrating and surviving or delaying and surviving. Estimates of the joint probability of migrating and surviving through the reservoir were less than 0.2 based on acoustic-tagged fish. Migration rates of tagged fish were highest in the free-flowing river (median range = 36 to 43 km/d) but were generally less than 6 km/d in the reservoir reaches. In particular, median migration rates of radio-tagged fish through the Transition Zone and Confluence were 3.4 and 5.2 km/d, respectively. Median migration rate for acoustic-tagged fish through the Transition Zone and Confluence combined was 1 km/d.

  12. Distribution and Joint Fish-Tag Survival of Juvenile Chinook Salmon Migrating through the Sacramento-San Joaquin River Delta, California, 2008

    USGS Publications Warehouse

    Holbrook, Christopher M.; Perry, Russell W.; Adams, Noah S.

    2009-01-01

    Acoustic telemetry was used to obtain the movement histories of 915 juvenile fall-run Chinook salmon (Oncorhynchus tshawytscha) through the lower San Joaquin River and Sacramento-San Joaquin Delta, California, in 2008. Data were analyzed within a release-recapture framework to estimate survival, route distribution, and detection probabilities among three migration pathways through the Delta. The pathways included the primary route through the San Joaquin River and two less direct routes (Old River and Turner Cut). Strong inferences about survival were limited by premature tag failure, but estimates of fish distribution among migration routes should be unaffected by tag failure. Based on tag failure tests (N = 66 tags), we estimated that only 55-78 percent of the tags used in this study were still functioning when the last fish was detected exiting the study area 15 days after release. Due to premature tag failure, our 'survival' estimates represent the joint probability that both the tag and fish survived, not just survival of fish. Low estimates of fish-tag survival could have been caused by fish mortality or fish travel times that exceeded the life of the tag, but we were unable to differentiate between the two. Fish-tag survival through the Delta (from Durham Ferry to Chipps Island by all routes) ranged from 0.05 ± 0.01 (SE) to 0.06 ± 0.01 between the two weekly release groups. Among the three migration routes, fish that remained in the San Joaquin River exhibited the highest joint fish-tag survival (0.09 ± 0.02) in both weeks, but only 22-33 percent of tagged fish used this route, depending on the week of release. Only 4-10 percent (depending on week) of tagged fish traveled through Turner Cut, but no tagged fish that used this route were detected exiting the Delta. Most fish (63-68 percent, depending on week of release) migrated through Old River, but fish-tag survival through this route (0.05 ± 0.01) was only about one-half that of fish that remained in the San Joaquin River. Once tagged fish entered Old River, only fish collected at two large water conveyance projects and transported through the Delta by truck were detected exiting the Delta, suggesting that this route was the only successful migration pathway for fish that entered Old River. The rate of entrainment of tagged juvenile salmon into Old River was similar to the fraction of San Joaquin River discharge flowing into Old River, which averaged 63 percent but varied tidally and ranged from 33 to 100 percent daily. Although improvements in transmitter battery life are clearly needed, this information will help guide the development of future research and monitoring efforts in this system.

  13. 76 FR 60006 - Joint Europe Africa Deployment & Distribution Conference 2011: “Adapting To Challenge and Change”

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-28

    ... DEPARTMENT OF DEFENSE Office of the Secretary Joint Europe Africa Deployment & Distribution Conference 2011: ``Adapting To Challenge and Change'' AGENCY: United States Africa Command, Department of Defense (DoD). ACTION: Notice of conference. SUMMARY: This document announces that U.S. Africa Command...

  14. Experimental joint weak measurement on a photon pair as a probe of Hardy's paradox.

    PubMed

    Lundeen, J S; Steinberg, A M

    2009-01-16

    It has been proposed that the ability to perform joint weak measurements on postselected systems would allow us to study quantum paradoxes. These measurements can investigate the history of those particles that contribute to the paradoxical outcome. Here we experimentally perform weak measurements of joint (i.e., nonlocal) observables. In an implementation of Hardy's paradox, we weakly measure the locations of two photons, the subject of the conflicting statements behind the paradox. Remarkably, the resulting weak probabilities verify all of these statements but, at the same time, resolve the paradox.

  15. Context-invariant quasi hidden variable (qHV) modelling of all joint von Neumann measurements for an arbitrary Hilbert space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loubenets, Elena R.

    We prove the existence for each Hilbert space of the two new quasi hidden variable (qHV) models, statistically noncontextual and context-invariant, reproducing all the von Neumann joint probabilities via non-negative values of real-valued measures and all the quantum product expectations—via the qHV (classical-like) average of the product of the corresponding random variables. In a context-invariant model, a quantum observable X can be represented by a variety of random variables satisfying the functional condition required in quantum foundations but each of these random variables equivalently models X under all joint von Neumann measurements, regardless of their contexts. The proved existence of this model negates the general opinion that, in terms of random variables, the Hilbert space description of all the joint von Neumann measurements for dim H ≥ 3 can be reproduced only contextually. The existence of a statistically noncontextual qHV model, in particular, implies that every N-partite quantum state admits a local quasi hidden variable model introduced in Loubenets [J. Math. Phys. 53, 022201 (2012)]. The new results of the present paper point also to the generality of the quasi-classical probability model proposed in Loubenets [J. Phys. A: Math. Theor. 45, 185306 (2012)].

  16. Probabilistic modelling of drought events in China via 2-dimensional joint copula

    NASA Astrophysics Data System (ADS)

    Ayantobo, Olusola O.; Li, Yi; Song, Songbai; Javed, Tehseen; Yao, Ning

    2018-04-01

    Probabilistic modelling of drought events is a significant aspect of water resources management and planning. In this study, popularly applied and several relatively new bivariate Archimedean copulas were employed to derive regional and spatially based copula models to appraise drought risk in mainland China over 1961-2013. Drought duration (Dd), severity (Ds), and peak (Dp), as indicated by the Standardized Precipitation Evapotranspiration Index (SPEI), were extracted according to run theory and fitted with suitable marginal distributions. The maximum likelihood estimation (MLE) and curve fitting method (CFM) were used to estimate the copula parameters of nineteen bivariate Archimedean copulas. Drought probabilities and return periods were analysed based on the appropriate bivariate copula in sub-regions I-VII and the entire mainland China. The goodness-of-fit tests as indicated by the CFM showed that copula NN19 in sub-regions III, IV, V, VI and mainland China, NN20 in sub-region I, and NN13 in sub-region VII are the best for modeling drought variables. Bivariate drought probability across mainland China is relatively high, and the highest drought probabilities are found mainly in Northwestern and Southwestern China. In addition, the results showed that different sub-regions may face varying drought risks. The drought risks observed in sub-regions III, VI, and VII are significantly greater than in other sub-regions. Higher probability of droughts of longer durations in the sub-regions also corresponds to shorter return periods with greater drought severity. These results may imply tremendous challenges for water resources management in different sub-regions, particularly Northwestern and Southwestern China.
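    As a small illustration of the copula-based quantities discussed above, the sketch below evaluates the Gumbel-Hougaard copula and the corresponding 'AND' joint return period (both drought variables exceed their thresholds); the parameter values are illustrative, not the fitted values from the study.

```python
import numpy as np

def gumbel_copula(u, v, theta):
    """Gumbel-Hougaard copula C(u, v); theta >= 1 controls the dependence strength."""
    return np.exp(-(((-np.log(u)) ** theta + (-np.log(v)) ** theta) ** (1.0 / theta)))

def joint_return_period_and(u, v, theta, mu=1.0):
    """'AND' joint return period: both variables exceed their thresholds.
    u, v are the marginal non-exceedance probabilities of the thresholds and
    mu is the mean interarrival time of drought events (years)."""
    return mu / (1.0 - u - v + gumbel_copula(u, v, theta))

# Thresholds at the 90th percentile of duration and severity, one event every 1.5 years.
print("joint 'AND' return period (years):",
      round(joint_return_period_and(u=0.9, v=0.9, theta=2.0, mu=1.5), 2))
```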

  17. Design, fabrication and test of graphite/polyimide composite joints and attachments for advanced aerospace vehicles

    NASA Technical Reports Server (NTRS)

    Skoumal, D. E.

    1980-01-01

    Bonded and bolted designs are presented for each of four major attachment types. Prepreg processing problems are discussed and quality control data are given for lots 2W4604, 2W4632 and 2W4643. Preliminary design allowables test results for tension tests and compression tests of laminates are included. The final small specimen test matrix is defined and the configuration of symmetric step-lap joint specimens are shown. Finite element modeling studies of a double lap joint were performed to evaluate the number of elements required through the adhesive thickness to assess effects of various joint parameters on stress distributions. Results of finite element analyses assessing the effect of an adhesive fillet on the stress distribution in a double lap joint are examined.

  18. Nonadditive entropies yield probability distributions with biases not warranted by the data.

    PubMed

    Pressé, Steve; Ghosh, Kingshuk; Lee, Julian; Dill, Ken A

    2013-11-01

    Different quantities that go by the name of entropy are used in variational principles to infer probability distributions from limited data. Shore and Johnson showed that maximizing the Boltzmann-Gibbs form of the entropy ensures that probability distributions inferred satisfy the multiplication rule of probability for independent events in the absence of data coupling such events. Other types of entropies that violate the Shore and Johnson axioms, including nonadditive entropies such as the Tsallis entropy, violate this basic consistency requirement. Here we use the axiomatic framework of Shore and Johnson to show how such nonadditive entropy functions generate biases in probability distributions that are not warranted by the underlying data.
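    For context, the textbook maximum-entropy result that underlies the consistency argument (not taken from the paper): maximizing the Boltzmann-Gibbs entropy subject to normalization and a single linear data constraint gives an exponential-family distribution, and with constraints acting on independent subsystems the joint maximizer factorizes, i.e., the multiplication rule holds.

```latex
% Maximize S = -\sum_i p_i \ln p_i subject to \sum_i p_i = 1 and \sum_i p_i f_i = \bar{f}:
\begin{align}
\mathcal{L} &= -\sum_i p_i \ln p_i
              - \lambda_0\Big(\sum_i p_i - 1\Big)
              - \lambda_1\Big(\sum_i p_i f_i - \bar{f}\Big), \\
\frac{\partial \mathcal{L}}{\partial p_i} = 0
  \;&\Rightarrow\; p_i = \frac{e^{-\lambda_1 f_i}}{\sum_j e^{-\lambda_1 f_j}},
  \qquad\text{so that } p_{ij} = p_i\, q_j \text{ for uncoupled subsystems.}
\end{align}
```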

  19. Asymmetric biotic interactions and abiotic niche differences revealed by a dynamic joint species distribution model.

    PubMed

    Lany, Nina K; Zarnetske, Phoebe L; Schliep, Erin M; Schaeffer, Robert N; Orians, Colin M; Orwig, David A; Preisser, Evan L

    2018-05-01

    A species' distribution and abundance are determined by abiotic conditions and biotic interactions with other species in the community. Most species distribution models correlate the occurrence of a single species with environmental variables only, and leave out biotic interactions. To test the importance of biotic interactions on occurrence and abundance, we compared a multivariate spatiotemporal model of the joint abundance of two invasive insects that share a host plant, hemlock woolly adelgid (HWA; Adelges tsugae) and elongate hemlock scale (EHS; Fiorinia externa), to independent models that do not account for dependence among co-occurring species. The joint model revealed that HWA responded more strongly to abiotic conditions than EHS. Additionally, HWA appeared to predispose stands to subsequent increase of EHS, but HWA abundance was not strongly dependent on EHS abundance. This study demonstrates how incorporating spatial and temporal dependence into a species distribution model can reveal the dependence of a species' abundance on other species in the community. Accounting for dependence among co-occurring species with a joint distribution model can also improve estimation of the abiotic niche for species affected by interspecific interactions. © 2018 by the Ecological Society of America.

  20. The association between general practitioner participation in joint teleconsultations and rates of referral: a discrete choice experiment.

    PubMed

    Cravo Oliveira, Tiago; Barlow, James; Bayer, Steffen

    2015-04-21

    Joint consultations - such as teleconsultations - provide opportunities for continuing education of general practitioners (GPs). It has been reported that this form of interactive case-based learning may lead to fewer GP referrals, yet these studies have relied on expert opinion and simple frequencies, without accounting for other factors known to influence referrals. We use a survey-based discrete choice experiment of GPs' referral preferences to estimate how referral rates are associated with participation in joint teleconsultations, explicitly controlling for a number of potentially confounding variables. We distributed questionnaires at two meetings of the Portuguese Association of General Practice. GPs were presented with descriptions of patients with dermatological lesions and asked whether they would refer based on the waiting time, the distance to the appointment, and pressure from patients for a referral. We analysed GPs' responses to multiple combinations of these factors, coupled with information on GP and practice characteristics, using a binary logit model. We estimated the probabilities of referral of different lesions using marginal effects. Questionnaires were returned by 44 GPs, giving a total of 721 referral choices. The average referral rate for the 11 GPs (25%) who had participated in teleconsultations was 68.1% (range 53-88%), compared to 74.4% (range 47-100%) for the remaining physicians. Participation in teleconsultations was associated with reductions in the probabilities of referral of 17.6% for patients presenting with keratosis (p = 0.02), 42.3% for psoriasis (p < 0.001), 8.4% for melanoma (p = 0.14), and 5.4% for naevus (p = 0.19). The results indicate that GP participation in teleconsultations is associated with overall reductions in referral rates and in variation across GPs, and that these effects are robust to the inclusion of other factors known to influence referrals. The reduction in range, coupled with different effects for different clinical presentations, may suggest an educational effect. However, more research is needed to establish whether there are causal relationships between participation in teleconsultations, continuing education, and referral rates.
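    A minimal sketch of the kind of analysis described above: a binary logit of the referral decision on the choice attributes, followed by average marginal effects. The simulated data and coefficients below merely stand in for the 721 observed referral choices; this is not the authors' dataset or exact specification.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 721
wait_weeks = rng.uniform(2, 30, n)
distance_km = rng.uniform(1, 100, n)
patient_pressure = rng.integers(0, 2, n)
teleconsult = rng.integers(0, 2, n)          # GP has participated in teleconsultations

# Simulated latent utility: teleconsultation participation lowers referral propensity.
latent = -1.0 + 0.05 * wait_weeks + 0.01 * distance_km \
         + 0.8 * patient_pressure - 0.6 * teleconsult
refer = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-latent))).astype(int)

X = sm.add_constant(np.column_stack([wait_weeks, distance_km,
                                     patient_pressure, teleconsult]))
model = sm.Logit(refer, X).fit(disp=False)
print(model.get_margeff(at="overall").summary())   # average marginal effects on P(refer)
```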

  1. ProbOnto: ontology and knowledge base of probability distributions.

    PubMed

    Swat, Maciej J; Grenon, Pierre; Wimalaratne, Sarala

    2016-09-01

    Probability distributions play a central role in mathematical and statistical modelling. The encoding, annotation and exchange of such models could be greatly simplified by a resource providing a common reference for the definition of probability distributions. Although some resources exist, no suitably detailed and complex ontology exists, nor any database allowing programmatic access. ProbOnto is an ontology-based knowledge base of probability distributions, featuring more than 80 uni- and multivariate distributions with their defining functions, characteristics, relationships and re-parameterization formulas. It can be used for model annotation and facilitates the encoding of distribution-based models, related functions and quantities. http://probonto.org mjswat@ebi.ac.uk Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.

  2. Hereditary hemochromatosis is characterized by a clinically definable arthropathy that correlates with iron load.

    PubMed

    Carroll, G J; Breidahl, W H; Bulsara, M K; Olynyk, J K

    2011-01-01

    To determine the frequency and character of arthropathy in hereditary hemochromatosis (HH) and to investigate the relationship between this arthropathy, nodal interphalangeal osteoarthritis, and iron load. Participants were recruited from the community by newspaper advertisement and assigned to diagnostic confidence categories for HH (definite/probable or possible/unlikely). Arthropathy was determined by use of a predetermined clinical protocol, radiographs of the hands of all participants, and radiographs of other joints in which clinical criteria were met. An arthropathy considered typical for HH, involving metacarpophalangeal joints 2-5 and bilateral specified large joints, was observed in 10 of 41 patients with definite or probable HH (24%), all of whom were homozygous for the C282Y mutation in the HFE gene, while only 2 of 62 patients with possible/unlikely HH had such an arthropathy (P=0.0024). Arthropathy in definite/probable HH was more common with increasing age and was associated with ferritin concentrations>1,000 μg/liter at the time of diagnosis (odds ratio 14.0 [95% confidence interval 1.30-150.89], P=0.03). A trend toward more episodes requiring phlebotomy was also observed among those with arthropathy, but this was not statistically significant (odds ratio 1.03 [95% confidence interval 0.99-1.06], P=0.097). There was no significant association between arthropathy in definite/probable HH and a history of intensive physical labor (P=0.12). An arthropathy consistent with that commonly attributed to HH was found to occur in 24% of patients with definite/probable HH. The association observed between this arthropathy, homozygosity for C282Y, and serum ferritin concentrations at the time of diagnosis suggests that iron load is likely to be a major determinant of arthropathy in HH and to be more important than occupational factors. Copyright © 2011 by the American College of Rheumatology.

  3. PRODIGEN: visualizing the probability landscape of stochastic gene regulatory networks in state and time space.

    PubMed

    Ma, Chihua; Luciani, Timothy; Terebus, Anna; Liang, Jie; Marai, G Elisabeta

    2017-02-15

    Visualizing the complex probability landscape of stochastic gene regulatory networks can further biologists' understanding of phenotypic behavior associated with specific genes. We present PRODIGEN (PRObability DIstribution of GEne Networks), a web-based visual analysis tool for the systematic exploration of probability distributions over simulation time and state space in such networks. PRODIGEN was designed in collaboration with bioinformaticians who research stochastic gene networks. The analysis tool combines in a novel way existing, expanded, and new visual encodings to capture the time-varying characteristics of probability distributions: spaghetti plots over one dimensional projection, heatmaps of distributions over 2D projections, enhanced with overlaid time curves to display temporal changes, and novel individual glyphs of state information corresponding to particular peaks. We demonstrate the effectiveness of the tool through two case studies on the computed probabilistic landscape of a gene regulatory network and of a toggle-switch network. Domain expert feedback indicates that our visual approach can help biologists: 1) visualize probabilities of stable states, 2) explore the temporal probability distributions, and 3) discover small peaks in the probability landscape that have potential relation to specific diseases.

  4. Comparison of the different probability distributions for earthquake hazard assessment in the North Anatolian Fault Zone

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yilmaz, Şeyda, E-mail: seydayilmaz@ktu.edu.tr; Bayrak, Erdem, E-mail: erdmbyrk@gmail.com; Bayrak, Yusuf, E-mail: bayrak@ktu.edu.tr

    In this study we examined and compared three different probability distribution methods to determine the most suitable model for probabilistic assessment of earthquake hazards. We analyzed a reliable homogeneous earthquake catalogue covering the period 1900-2015 for magnitude M ≥ 6.0 and estimated the probabilistic seismic hazard in the North Anatolian Fault zone (39°-41° N, 30°-40° E) using three distributions, namely the Weibull distribution, the Frechet distribution, and the three-parameter Weibull distribution. The suitability of the distribution parameters was evaluated with the Kolmogorov-Smirnov (K-S) goodness-of-fit test. We also compared the estimated cumulative probability and the conditional probabilities of earthquake occurrence for different elapsed times using these three distributions. We used Easyfit and Matlab software to calculate the distribution parameters and plotted the conditional probability curves. We concluded that the Weibull distribution was more suitable than the other distributions in this region.
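
    A hedged sketch of the model-comparison step described above: fit a two-parameter Weibull, a Frechet (inverse Weibull) and a three-parameter Weibull to a sample and rank them by the Kolmogorov-Smirnov statistic using scipy. The inter-event times below are synthetic placeholders, not the North Anatolian catalogue.

        # Hedged sketch: fit candidate distributions and compare K-S statistics.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        times = rng.weibull(1.4, size=60) * 8.0  # toy inter-event times (years)

        candidates = {
            "Weibull (2-p)": stats.weibull_min,   # location fixed at 0
            "Frechet": stats.invweibull,
            "Weibull (3-p)": stats.weibull_min,   # location left free
        }

        for name, dist in candidates.items():
            if name == "Weibull (2-p)":
                params = dist.fit(times, floc=0.0)
            else:
                params = dist.fit(times)
            ks = stats.kstest(times, dist.cdf, args=params)
            print(f"{name:14s} K-S D = {ks.statistic:.3f}, p = {ks.pvalue:.3f}")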

  5. Characterization of friction stir welded joint of low nickel austenitic stainless steel and modified ferritic stainless steel

    NASA Astrophysics Data System (ADS)

    Mondal, Mounarik; Das, Hrishikesh; Ahn, Eun Yeong; Hong, Sung Tae; Kim, Moon-Jo; Han, Heung Nam; Pal, Tapan Kumar

    2017-09-01

    Friction stir welding (FSW) of dissimilar stainless steels, low nickel austenitic stainless steel and 409M ferritic stainless steel, is experimentally investigated. Process responses during FSW and the microstructures of the resultant dissimilar joints are evaluated. Material flow in the stir zone is investigated in detail by elemental mapping. Elemental mapping of the dissimilar joints clearly indicates that the material flow pattern during FSW depends on the process parameter combination. Dynamic recrystallization and recovery are also observed in the dissimilar joints. Of the two stainless steels selected in the present study, the ferritic stainless steel shows more severe dynamic recrystallization, resulting in a very fine microstructure, probably due to its higher stacking fault energy.

  6. Off-Grid Direction of Arrival Estimation Based on Joint Spatial Sparsity for Distributed Sparse Linear Arrays

    PubMed Central

    Liang, Yujie; Ying, Rendong; Lu, Zhenqi; Liu, Peilin

    2014-01-01

    In the design phase of sensor arrays for array signal processing, the estimation performance and system cost are largely determined by the array aperture size. In this article, we address the problem of joint direction-of-arrival (DOA) estimation with distributed sparse linear arrays (SLAs) and propose an off-grid synchronous approach based on distributed compressed sensing to obtain a larger array aperture. We focus on the complex source distributions found in practical applications and classify the sources into common and innovation parts according to whether a source's signal impinges on all the SLAs or only on a specific one. For each SLA, we construct a corresponding virtual uniform linear array (ULA) to establish a random linear mapping between the signals observed by these two arrays. The signal ensembles, including the common/innovation sources for the different SLAs, are abstracted as a joint spatial sparsity model, and we use minimization of the concatenated atomic norm via semidefinite programming to solve the joint DOA estimation problem. Joint processing of the signals observed by all the SLAs exploits the redundancy caused by the common sources and reduces the required array size. The numerical results illustrate the advantages of the proposed approach. PMID:25420150

  7. Statistical Evaluation and Improvement of Methods for Combining Random and Harmonic Loads

    NASA Technical Reports Server (NTRS)

    Brown, A. M.; McGhee, D. S.

    2003-01-01

    Structures in many environments experience both random and harmonic excitation. A variety of closed-form techniques has been used in the aerospace industry to combine the loads resulting from the two sources. The resulting combined loads are then used to design for both yield/ultimate strength and high-cycle fatigue capability. This Technical Publication examines the cumulative distribution function (CDF) percentiles obtained using each method by integrating the joint probability density function of the sine and random components. A new Microsoft Excel spreadsheet macro that links with the software program Mathematica to calculate the combined value corresponding to any desired percentile is then presented, along with a curve fit to this value. Another Excel macro that calculates the combination using Monte Carlo simulation is shown. Unlike the traditional techniques, these methods quantify the calculated load value with a consistent percentile. Using either of the presented methods can be extremely valuable in probabilistic design, which requires a statistical characterization of the loading. Additionally, since the CDF at high probability levels is very flat, the design value is extremely sensitive to the predetermined percentile; therefore, applying the new techniques can substantially lower the design loading without losing any of the identified structural reliability.
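
    The Monte Carlo idea can be sketched in a few lines of Python: sample a harmonic component with uniformly random phase plus a zero-mean Gaussian random component and read the combined load at a chosen percentile of the empirical CDF. The amplitudes and percentile are arbitrary placeholders, not values from the publication, and this is not the Excel/Mathematica macro itself.

        # Hedged Monte Carlo sketch of combining harmonic and random loads.
        import numpy as np

        rng = np.random.default_rng(1)
        n = 1_000_000
        sine_amplitude = 1.0      # peak harmonic load (placeholder)
        random_sigma = 0.5        # RMS of the random load (placeholder)

        phase = rng.uniform(0.0, 2.0 * np.pi, n)
        combined = sine_amplitude * np.sin(phase) + rng.normal(0.0, random_sigma, n)

        p = 99.87                 # e.g. a "3-sigma"-like one-sided percentile
        design_value = np.percentile(combined, p)
        print(f"{p}th-percentile combined load = {design_value:.3f}")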

  8. Statistical Comparison and Improvement of Methods for Combining Random and Harmonic Loads

    NASA Technical Reports Server (NTRS)

    Brown, Andrew M.; McGhee, David S.

    2004-01-01

    Structures in many environments experience both random and harmonic excitation. A variety of closed-form techniques has been used in the aerospace industry to combine the loads resulting from the two sources. The resulting combined loads are then used to design for both yield/ultimate strength and high-cycle fatigue capability. This paper examines the cumulative distribution function (CDF) percentiles obtained using each method by integrating the joint probability density function of the sine and random components. A new Microsoft Excel spreadsheet macro that links with the software program Mathematica is then used to calculate the combined value corresponding to any desired percentile, along with a curve fit to this value. Another Excel macro is used to calculate the combination using a Monte Carlo simulation. Unlike the traditional techniques, these methods quantify the calculated load value with a consistent percentile. Using either of the presented methods can be extremely valuable in probabilistic design, which requires a statistical characterization of the loading. Also, since the CDF at high probability levels is very flat, the design value is extremely sensitive to the predetermined percentile; therefore, applying the new techniques can lower the design loading substantially without losing any of the identified structural reliability.

  9. Inverse-Probability-Weighted Estimation for Monotone and Nonmonotone Missing Data.

    PubMed

    Sun, BaoLuo; Perkins, Neil J; Cole, Stephen R; Harel, Ofer; Mitchell, Emily M; Schisterman, Enrique F; Tchetgen Tchetgen, Eric J

    2018-03-01

    Missing data is a common occurrence in epidemiologic research. In this paper, 3 data sets with induced missing values from the Collaborative Perinatal Project, a multisite US study conducted from 1959 to 1974, are provided as examples of prototypical epidemiologic studies with missing data. Our goal was to estimate the association of maternal smoking behavior with spontaneous abortion while adjusting for numerous confounders. At the same time, we did not necessarily wish to evaluate the joint distribution among potentially unobserved covariates, which is seldom the subject of substantive scientific interest. The inverse probability weighting (IPW) approach preserves the semiparametric structure of the underlying model of substantive interest and clearly separates the model of substantive interest from the model used to account for the missing data. However, IPW often will not result in valid inference if the missing-data pattern is nonmonotone, even if the data are missing at random. We describe a recently proposed approach to modeling nonmonotone missing-data mechanisms under missingness at random to use in constructing the weights in IPW complete-case estimation, and we illustrate the approach using 3 data sets described in a companion article (Am J Epidemiol. 2018;187(3):568-575).

  10. Inverse-Probability-Weighted Estimation for Monotone and Nonmonotone Missing Data

    PubMed Central

    Sun, BaoLuo; Perkins, Neil J; Cole, Stephen R; Harel, Ofer; Mitchell, Emily M; Schisterman, Enrique F; Tchetgen Tchetgen, Eric J

    2018-01-01

    Abstract Missing data is a common occurrence in epidemiologic research. In this paper, 3 data sets with induced missing values from the Collaborative Perinatal Project, a multisite US study conducted from 1959 to 1974, are provided as examples of prototypical epidemiologic studies with missing data. Our goal was to estimate the association of maternal smoking behavior with spontaneous abortion while adjusting for numerous confounders. At the same time, we did not necessarily wish to evaluate the joint distribution among potentially unobserved covariates, which is seldom the subject of substantive scientific interest. The inverse probability weighting (IPW) approach preserves the semiparametric structure of the underlying model of substantive interest and clearly separates the model of substantive interest from the model used to account for the missing data. However, IPW often will not result in valid inference if the missing-data pattern is nonmonotone, even if the data are missing at random. We describe a recently proposed approach to modeling nonmonotone missing-data mechanisms under missingness at random to use in constructing the weights in IPW complete-case estimation, and we illustrate the approach using 3 data sets described in a companion article (Am J Epidemiol. 2018;187(3):568–575). PMID:29165557

  11. Modeling of waiting times and price changes in currency exchange data

    NASA Astrophysics Data System (ADS)

    Repetowicz, Przemysław; Richmond, Peter

    2004-11-01

    A theory which describes the share price evolution at financial markets as a continuous-time random walk (Physica A 287 (2000) 468, Physica A 314 (2002) 749, Eur. Phys. J. B 27 (2002) 273, Physica A 376 (2000) 284) has been generalized in order to take into account the dependence of waiting times t on price returns x. A joint probability density function (pdf) φ(x,t) which uses the concept of a Lévy stable distribution is worked out. The theory is fitted to high-frequency US $/Japanese Yen exchange rate and low-frequency 19th century Irish stock data. The theory has been fitted both to price return and to waiting time data and the adherence to data, in terms of the χ2 test statistic, has been improved when compared to the old theory.

  12. Stochastic Inversion of 2D Magnetotelluric Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Jinsong

    2010-07-01

    The algorithm is developed to invert 2D magnetotelluric (MT) data based on sharp boundary parametrization using a Bayesian framework. Within the algorithm, we treat the locations of the interfaces and the resistivity of the regions they form as unknowns. We use a parallel, adaptive finite-element algorithm to forward simulate frequency-domain MT responses of the 2D conductivity structure. The unknown parameters are spatially correlated and are described by a geostatistical model. The joint posterior probability distribution function is explored by Markov Chain Monte Carlo (MCMC) sampling methods. The developed stochastic model is effective for estimating the interface locations and resistivity. Most importantly, it provides detailed uncertainty information on each unknown parameter. Hardware requirements: PC, supercomputer, multi-platform, workstation. Software requirements: C and Fortran. Operating systems: Linux/Unix or Windows.
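
    The posterior exploration step can be illustrated with a generic random-walk Metropolis-Hastings sketch on a toy one-dimensional target; the actual algorithm samples a much larger joint posterior over interface locations and resistivities, with finite-element forward modelling inside the likelihood.

        # Generic Metropolis-Hastings sketch, standing in for MCMC exploration
        # of a joint posterior; the toy target here is a 1-D Gaussian posterior.
        import numpy as np

        def log_post(theta):
            return -0.5 * ((theta - 2.0) / 0.7) ** 2  # toy log-posterior

        rng = np.random.default_rng(2)
        theta, samples = 0.0, []
        for _ in range(20_000):
            proposal = theta + rng.normal(0.0, 0.5)   # random-walk proposal
            if np.log(rng.uniform()) < log_post(proposal) - log_post(theta):
                theta = proposal                       # accept
            samples.append(theta)

        burned = np.array(samples[5_000:])
        print("posterior mean =", burned.mean(), " std =", burned.std())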

  13. Verification of operational solar flare forecast: Case of Regional Warning Center Japan

    NASA Astrophysics Data System (ADS)

    Kubo, Yûki; Den, Mitsue; Ishii, Mamoru

    2017-08-01

    In this article, we discuss a verification study of an operational solar flare forecast in the Regional Warning Center (RWC) Japan. The RWC Japan has been issuing four-categorical deterministic solar flare forecasts for a long time. In this forecast verification study, we used solar flare forecast data accumulated over 16 years (from 2000 to 2015). We compiled the forecast data together with solar flare data obtained with the Geostationary Operational Environmental Satellites (GOES). Using the compiled data sets, we estimated some conventional scalar verification measures with 95% confidence intervals. We also estimated a multi-categorical scalar verification measure. These scalar verification measures were compared with those obtained by the persistence method and recurrence method. As solar activity varied during the 16 years, we also applied verification analyses to four subsets of forecast-observation pair data with different solar activity levels. We cannot conclude definitely that there are significant performance differences between the forecasts of RWC Japan and the persistence method, although a slightly significant difference is found for some event definitions. We propose to use a scalar verification measure to assess the judgment skill of the operational solar flare forecast. Finally, we propose a verification strategy for deterministic operational solar flare forecasting. For dichotomous forecast, a set of proposed verification measures is a frequency bias for bias, proportion correct and critical success index for accuracy, probability of detection for discrimination, false alarm ratio for reliability, Peirce skill score for forecast skill, and symmetric extremal dependence index for association. For multi-categorical forecast, we propose a set of verification measures as marginal distributions of forecast and observation for bias, proportion correct for accuracy, correlation coefficient and joint probability distribution for association, the likelihood distribution for discrimination, the calibration distribution for reliability and resolution, and the Gandin-Murphy-Gerrity score and judgment skill score for skill.
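
    For the dichotomous case, the proposed measures follow directly from a 2x2 contingency table; a sketch under assumed placeholder counts (not RWC Japan statistics) is given below.

        # Sketch of the dichotomous measures named above, computed from a 2x2
        # contingency table (hits a, false alarms b, misses c, correct rejections d).
        def dichotomous_scores(a, b, c, d):
            n = a + b + c + d
            pod = a / (a + c)                    # probability of detection
            far = b / (a + b)                    # false alarm ratio
            pofd = b / (b + d)                   # probability of false detection
            return {
                "frequency bias": (a + b) / (a + c),
                "proportion correct": (a + d) / n,
                "critical success index": a / (a + b + c),
                "probability of detection": pod,
                "false alarm ratio": far,
                "Peirce skill score": pod - pofd,
            }

        print(dichotomous_scores(a=42, b=18, c=11, d=300))  # placeholder counts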

  14. Relaxation times and modes of disturbed aggregate distribution in micellar solutions with fusion and fission of micelles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zakharov, Anatoly I.; Adzhemyan, Loran Ts.; Shchekin, Alexander K., E-mail: akshch@list.ru

    2015-09-28

    We have performed direct numerical calculations of the kinetics of relaxation in the system of surfactant spherical micelles under joint action of the molecular mechanism with capture and emission of individual surfactant molecules by molecular aggregates and the mechanism of fusion and fission of the aggregates. As a basis, we have taken the difference equations of aggregation and fragmentation in the form of the generalized kinetic Smoluchowski equations for aggregate concentrations. The calculations have been made using the droplet model of molecular surfactant aggregates and two modified Smoluchowski models for the coefficients of aggregate-monomer and aggregate-aggregate fusions, which take into account the effects of the aggregate size and the presence of hydrophobic spots on the aggregate surface. A full set of relaxation times and corresponding relaxation modes for nonequilibrium aggregate distribution in the aggregation number has been found. The dependencies of these relaxation times and modes on the total concentration of surfactant in the solution and the special parameter controlling the probability of fusion in collisions of micelles with other micelles have been studied.

  15. The feasibility of utilizing remotely sensed data to assess and monitor oceanic gamefish

    NASA Technical Reports Server (NTRS)

    Savastano, K. J.; Leming, T. D.

    1975-01-01

    An investigation was conducted to establish the feasibility of utilizing remotely sensed data acquired from aircraft and satellite platforms to provide information concerning the distribution and abundance of oceanic gamefish. The data from the test area were jointly acquired by NASA, the Navy, the Air Force and NOAA/NMFS elements and private and professional fishermen in the northeastern Gulf of Mexico. The data collected have made it possible to identify fisheries-significant environmental parameters for white marlin. Prediction models, based on catch data and surface truth information, were developed and demonstrated a potential for significantly reducing search by identifying areas that have a high probability of productivity. Three of the parameters utilized by the models, chlorophyll-a, sea surface temperature, and turbidity, were inferred from aircraft sensor data and were tested. Effective use of Skylab data was inhibited by cloud cover and delayed delivery. Initial efforts toward establishing the feasibility of utilizing remotely sensed data to assess and monitor the distribution of oceanic gamefish have successfully identified fisheries-significant oceanographic parameters and demonstrated the capability of remotely measuring most of the parameters.

  16. Self-referenced processing, neurodevelopment and joint attention in autism.

    PubMed

    Mundy, Peter; Gwaltney, Mary; Henderson, Heather

    2010-09-01

    This article describes a parallel and distributed processing model (PDPM) of joint attention, self-referenced processing and autism. According to this model, autism involves early impairments in the capacity for rapid, integrated processing of self-referenced (proprioceptive and interoceptive) and other-referenced (exteroceptive) information. Measures of joint attention have proven useful in research on autism because they are sensitive to the early development of the 'parallel' and integrated processing of self- and other-referenced stimuli. Moreover, joint attention behaviors are a consequence, but also an organizer of the functional development of a distal distributed cortical system involving anterior networks including the prefrontal and insula cortices, as well as posterior neural networks including the temporal and parietal cortices. Measures of joint attention provide early behavioral indicators of atypical development in this parallel and distributed processing system in autism. In addition it is proposed that an early, chronic disturbance in the capacity for integrating self- and other-referenced information may have cascading effects on the development of self awareness in autism. The assumptions, empirical support and future research implications of this model are discussed.

  17. Incorporating Skew into RMS Surface Roughness Probability Distribution

    NASA Technical Reports Server (NTRS)

    Stahl, Mark T.; Stahl, H. Philip.

    2013-01-01

    The standard treatment of RMS surface roughness data is the application of a Gaussian probability distribution. This handling of surface roughness ignores the skew present in the surface and overestimates the most probable RMS of the surface, the mode. Using experimental data, we confirm that the Gaussian distribution overestimates the mode and that applying an asymmetric distribution provides a better fit. Implementing the proposed asymmetric distribution into the optical manufacturing process would reduce the polishing time required to meet surface roughness specifications.
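
    A small sketch of the point being made: for right-skewed roughness data, the mode implied by a symmetric Gaussian fit (its mean) sits above the mode of a skewed fit, here taken as a lognormal purely for illustration. The paper's asymmetric model may differ, and the data below are synthetic.

        # Sketch: Gaussian vs. skewed (lognormal) fit to right-skewed roughness data.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)
        roughness = rng.lognormal(mean=np.log(20.0), sigma=0.35, size=500)  # nm RMS

        gaussian_mode = roughness.mean()                         # Gaussian mode = mean
        shape, loc, scale = stats.lognorm.fit(roughness, floc=0)
        lognormal_mode = loc + scale * np.exp(-shape ** 2)       # mode of the lognormal fit

        print(f"Gaussian fit mode (=mean): {gaussian_mode:.1f} nm")
        print(f"Lognormal fit mode:        {lognormal_mode:.1f} nm  (lower, as expected)")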

  18. Bayesian inference for the distribution of grams of marijuana in a joint.

    PubMed

    Ridgeway, Greg; Kilmer, Beau

    2016-08-01

    The average amount of marijuana in a joint is unknown, yet this figure is a critical quantity for creating credible measures of marijuana consumption. It is essential for projecting tax revenues post-legalization, estimating the size of illicit marijuana markets, and learning about how much marijuana users are consuming in order to understand health and behavioral consequences. Arrestee Drug Abuse Monitoring data collected between 2000 and 2010 contain relevant information on 10,628 marijuana transactions, joints and loose marijuana purchases, including the city in which the purchase occurred and the price paid for the marijuana. Using the Brown-Silverman drug pricing model to link marijuana price and weight, we are able to infer the distribution of grams of marijuana in a joint and provide a Bayesian posterior distribution for the mean weight of marijuana in a joint. We estimate that the mean weight of marijuana in a joint is 0.32g (95% Bayesian posterior interval: 0.30-0.35). Our estimate of the mean weight of marijuana in a joint is lower than figures commonly used to make estimates of marijuana consumption. These estimates can be incorporated into drug policy discussions to produce better understanding about illicit marijuana markets, the size of potential legalized marijuana markets, and health and behavior outcomes. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  19. Joint reconstruction of multiview compressed images.

    PubMed

    Thirumalai, Vijayaraghavan; Frossard, Pascal

    2013-05-01

    Distributed representation of correlated multiview images is an important problem that arises in vision sensor networks. This paper concentrates on the joint reconstruction problem where the distributively compressed images are decoded together in order to take benefit from the image correlation. We consider a scenario where the images captured at different viewpoints are encoded independently using common coding solutions (e.g., JPEG) with a balanced rate distribution among different cameras. A central decoder first estimates the inter-view image correlation from the independently compressed data. The joint reconstruction is then cast as a constrained convex optimization problem that reconstructs total-variation (TV) smooth images, which comply with the estimated correlation model. At the same time, we add constraints that force the reconstructed images to be as close as possible to their compressed versions. We show through experiments that the proposed joint reconstruction scheme outperforms independent reconstruction in terms of image quality, for a given target bit rate. In addition, the decoding performance of our algorithm compares advantageously to state-of-the-art distributed coding schemes based on motion learning and on the DISCOVER algorithm.

  20. Approaches to quantifying long-term continental shelf sediment transport with an example from the Northern California STRESS mid-shelf site

    NASA Astrophysics Data System (ADS)

    Harris, Courtney K.; Wiberg, Patricia L.

    1997-09-01

    Modeling shelf sediment transport rates and bed reworking depths is problematic when the wave and current forcing conditions are not precisely known, as is usually the case when long-term sedimentation patterns are of interest. Two approaches to modeling sediment transport under such circumstances are considered. The first relies on measured or simulated time series of flow conditions to drive model calculations. The second approach uses as model input probability distribution functions of bottom boundary layer flow conditions developed from wave and current measurements. Sediment transport rates, frequency of bed resuspension by waves and currents, and bed reworking calculated using the two methods are compared at the mid-shelf STRESS (Sediment TRansport on Shelves and Slopes) site on the northern California continental shelf. Current, wave and resuspension measurements at the site are used to generate model inputs and test model results. An 11-year record of bottom wave orbital velocity, calculated from surface wave spectra measured by the National Data Buoy Center (NDBC) Buoy 46013 and verified against bottom tripod measurements, is used to characterize the frequency and duration of wave-driven transport events and to estimate the joint probability distribution of wave orbital velocity and period. A 109-day record of hourly current measurements 10 m above bottom is used to estimate the probability distribution of bottom boundary layer current velocity at this site and to develop an auto-regressive model to simulate current velocities for times when direct measurements of currents are not available. Frequency of transport, the maximum volume of suspended sediment, and average flux calculated using measured wave and simulated current time series agree well with values calculated using measured time series. A probabilistic approach is more amenable to calculations over time scales longer than existing wave records, but it tends to underestimate net transport because it does not capture the episodic nature of transport events. Both methods enable estimates to be made of the uncertainty in transport quantities that arise from an incomplete knowledge of the specific timing of wave and current conditions. 1997 Elsevier Science Ltd

  1. The Estimation of Tree Posterior Probabilities Using Conditional Clade Probability Distributions

    PubMed Central

    Larget, Bret

    2013-01-01

    In this article I introduce the idea of conditional independence of separated subtrees as a principle by which to estimate the posterior probability of trees using conditional clade probability distributions rather than simple sample relative frequencies. I describe an algorithm for these calculations and software which implements these ideas. I show that these alternative calculations are very similar to simple sample relative frequencies for high probability trees but are substantially more accurate for relatively low probability trees. The method allows the posterior probability of unsampled trees to be calculated when these trees contain only clades that are in other sampled trees. Furthermore, the method can be used to estimate the total probability of the set of sampled trees which provides a measure of the thoroughness of a posterior sample. [Bayesian phylogenetics; conditional clade distributions; improved accuracy; posterior probabilities of trees.] PMID:23479066

  2. Manipulated Changes in Limb Mass and Rotational Inertia in Trotting Dogs (Canis lupus familiaris) and Their Effect on Limb Kinematics.

    PubMed

    Kilbourne, Brandon M; Carrier, David R

    2016-12-01

    While the mass distribution of limbs is known to influence the metabolic energy consumed during locomotion, it remains unknown how the mass distribution of limbs may influence overall limb kinematics and whether the influence of limb mass distribution on limb kinematics differs between fore- and hindlimbs. To examine limb mass distribution's influence upon fore- and hindlimb kinematics, temporal stride parameters and swing phase joint kinematics were recorded from four dogs trotting on a treadmill with 0.5% and 1.0% body mass added to each limb, forelimbs alone, and hindlimbs alone, as well as with no added mass. Under all loading conditions, stride period did not differ between fore- and hindlimbs; however, forelimbs exhibited greater duty factors and stance durations, whereas hindlimbs exhibited greater swing durations, which may be related to the hindlimb's greater mass. Changes in forelimb joint and hip range of motion (RoM), flexion, and extension were subject to a high amount of kinematic plasticity among dogs. In contrast, for the knee and ankle, distally loading all four limbs or hindlimbs alone substantially increased joint RoM and flexion. Increased flexion of the knee and ankle has the potential to reduce the hindlimb's rotational inertia during swing phase. The differing response of fore- and hindlimbs with regard to joint kinematics is likely due to differences in their mass and mass distribution and differences in the physiological traits of fore- and hindlimb protractors and joint flexors. © 2017 Wiley Periodicals, Inc.

  3. Joint Online Thesis and Research System (JOTARS)

    DTIC Science & Technology

    2006-09-01

    Naval Postgraduate School, Monterey, California. Master's thesis, September 2006. Approved for public release; distribution is unlimited. The prototype website is the Joint Online Thesis and Research System (JOTARS). The specific functional objectives of JOTARS are to establish standard

  4. Vertical changes in the probability distribution of downward irradiance within the near-surface ocean under sunny conditions

    NASA Astrophysics Data System (ADS)

    Gernez, Pierre; Stramski, Dariusz; Darecki, Miroslaw

    2011-07-01

    Time series measurements of fluctuations in underwater downward irradiance, Ed, within the green spectral band (532 nm) show that the probability distribution of instantaneous irradiance varies greatly as a function of depth within the near-surface ocean under sunny conditions. Because of intense light flashes caused by surface wave focusing, the near-surface probability distributions are highly skewed to the right and are heavy tailed. The coefficients of skewness and excess kurtosis at depths smaller than 1 m can exceed 3 and 20, respectively. We tested several probability models, such as lognormal, Gumbel, Fréchet, log-logistic, and Pareto, which are potentially suited to describe the highly skewed heavy-tailed distributions. We found that the models cannot approximate with consistently good accuracy the high irradiance values within the right tail of the experimental distribution where the probability of these values is less than 10%. This portion of the distribution corresponds approximately to light flashes with Ed > 1.5Ēd, where Ēd is the time-averaged downward irradiance. However, the remaining part of the probability distribution covering all irradiance values smaller than the 90th percentile can be described with a reasonable accuracy (i.e., within 20%) with a lognormal model for all 86 measurements from the top 10 m of the ocean included in this analysis. As the intensity of irradiance fluctuations decreases with depth, the probability distribution tends toward a function symmetrical around the mean like the normal distribution. For the examined data set, the skewness and excess kurtosis assumed values very close to zero at a depth of about 10 m.

  5. Three-Dimensional Structural and Hydrologic Evolution of Sant Corneli Anticline, a Fault-Cored Fold in the Central Spanish Pyrenees

    NASA Astrophysics Data System (ADS)

    Shackleton, J. R.; Cooke, M. L.

    2005-12-01

    The Sant Corneli Anticline is a well-exposed example of a fault-cored fold whose hydrologic evolution and structural development are directly linked. The E-W striking anticline is ~ 5 km wide with abrupt westerly plunge, and formed in response to thrusting associated with the upper Cretaceous to Miocene collision of Iberia with Europe. The fold's core of fractured carbonates contains a variety of west dipping normal faults with meter to decameter scale displacement and abundant calcite fill. This carbonate unit is capped by a marl unit with low angle, calcite filled normal faults. The marl unit is overlain by clastic syn-tectonic strata whose sedimentary architecture records limb rotation during the evolution of the fold. The syn-tectonic strata contain a variety of joint sets that record the stresses before, during, and possibly after fold growth. Faulting in the marl and calcite-filled joints in the syn-tectonic strata suggest that normal faults within the carbonate core of the fold eventually breached the overlying marl unit. This breach may have connected the joints of the syn-tectonic strata to the underlying carbonate reservoir and eliminated previous compartmentalization of fluids. Furthermore, breaching of the marl units probably enhanced joint formation in the overlying syn-tectonic strata. Future geochemical studies of calcite compositions in the three units will address this hypothesis. Preliminary mapping of joint sets in the syn-tectonic strata reveal a multistage history of jointing. Early bed-perpendicular joints healed by calcite strike NE-SW, parallel to normal faults in the underlying carbonates, and may be related to an early regional extensional event. Younger healed bed-perpendicular joints cross cut the NE-SW striking set, and are closer to N-S in strike: these joints are interpreted to represent the initial stages of folding. Decameter scale, bed perpendicular, unfilled fractures that are sub-parallel to strike probably represent small joints and faults that formed in response to outer arc extension during folding. Many filled, late stage joints strike sub-parallel to, and increase in frequency near, normal faults and transverse structures observed in the carbonate fold core. This suggests that faulting in the underlying carbonates and marls significantly affected the joint patterns in the syn-tectonic strata. Preliminary three-dimensional finite element restorations using Dynel have allowed us to test our hypotheses and constrain the timing of jointing and marl breach.

  6. Simultaneous dense coding affected by fluctuating massless scalar field

    NASA Astrophysics Data System (ADS)

    Huang, Zhiming; Ye, Yiyong; Luo, Darong

    2018-04-01

    In this paper, we investigate the simultaneous dense coding (SDC) protocol affected by a fluctuating massless scalar field. The noisy model of the SDC protocol is constructed and the master equation that governs the SDC evolution is deduced. The success probabilities of the SDC protocol are discussed for different locking operators under the influence of vacuum fluctuations. We find that the joint success probability is independent of the locking operators, but the other success probabilities are not. For the quantum Fourier transform and double controlled-NOT operators, the success probabilities drop with increasing two-atom distance, but this is not the case for the SWAP operator. Moreover, except under the SWAP operator, the success probabilities of Bob and Charlie differ. For different noisy interval values, different locking operators have different robustness to noise.

  7. Predicting the probability of slip in gait: methodology and distribution study.

    PubMed

    Gragg, Jared; Yang, James

    2016-01-01

    The likelihood of a slip is related to the available and required friction for a certain activity, here gait. Classical slip and fall analysis presumed that a walking surface was safe if the difference between the mean available and required friction coefficients exceeded a certain threshold. Previous research was dedicated to reformulating the classical slip and fall theory to include the stochastic variation of the available and required friction when predicting the probability of slip in gait. However, when predicting the probability of a slip, previous researchers have either ignored the variation in the required friction or assumed the available and required friction to be normally distributed. Also, there are no published results that actually give the probability of slip for various combinations of required and available frictions. This study proposes a modification to the equation for predicting the probability of slip, reducing the previous equation from a double-integral to a more convenient single-integral form. Also, a simple numerical integration technique is provided to predict the probability of slip in gait: the trapezoidal method. The effect of the random variable distributions on the probability of slip is also studied. It is shown that both the required and available friction distributions cannot automatically be assumed as being normally distributed. The proposed methods allow for any combination of distributions for the available and required friction, and numerical results are compared to analytical solutions for an error analysis. The trapezoidal method is shown to be highly accurate and efficient. The probability of slip is also shown to be sensitive to the input distributions of the required and available friction. Lastly, a critical value for the probability of slip is proposed based on the number of steps taken by an average person in a single day.
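
    The single-integral form and its trapezoidal evaluation can be sketched as P(slip) = P(available < required) = ∫ f_required(x) F_available(x) dx. The lognormal/normal choices below are purely illustrative, since the method accepts any pair of distributions.

        # Hedged sketch of the single-integral slip probability, evaluated with
        # the trapezoidal rule; the distributions below are illustrative only.
        import numpy as np
        from scipy import stats

        required = stats.lognorm(s=0.25, scale=0.22)   # required friction coefficient
        available = stats.norm(loc=0.45, scale=0.10)   # available friction coefficient

        x = np.linspace(0.0, 1.5, 20_001)
        integrand = required.pdf(x) * available.cdf(x)  # f_req(x) * F_avail(x)
        p_slip = np.trapz(integrand, x)
        print(f"probability of slip per step = {p_slip:.2e}")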

  8. Integrated-Circuit Pseudorandom-Number Generator

    NASA Technical Reports Server (NTRS)

    Steelman, James E.; Beasley, Jeff; Aragon, Michael; Ramirez, Francisco; Summers, Kenneth L.; Knoebel, Arthur

    1992-01-01

    Integrated circuit produces 8-bit pseudorandom numbers from specified probability distribution, at rate of 10 MHz. Use of Boolean logic, circuit implements pseudorandom-number-generating algorithm. Circuit includes eight 12-bit pseudorandom-number generators, outputs are uniformly distributed. 8-bit pseudorandom numbers satisfying specified nonuniform probability distribution are generated by processing uniformly distributed outputs of eight 12-bit pseudorandom-number generators through "pipeline" of D flip-flops, comparators, and memories implementing conditional probabilities on zeros and ones.
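
    A software analogue of the same idea, not the hardware circuit: uniformly distributed random words are mapped onto 8-bit samples from a specified nonuniform distribution through an inverse-CDF lookup, which the chip realizes with comparators and memories.

        # Software sketch: map uniform random words onto 8-bit samples from a
        # specified distribution via an inverse-CDF lookup table.
        import numpy as np

        rng = np.random.default_rng(4)

        # Any specified probability distribution over the 256 output codes.
        target = np.exp(-0.5 * ((np.arange(256) - 128.0) / 30.0) ** 2)
        target /= target.sum()

        cdf = np.cumsum(target)
        uniform12 = rng.integers(0, 4096, size=100_000)           # 12-bit uniform words
        samples = np.searchsorted(cdf, (uniform12 + 0.5) / 4096)  # 8-bit outputs

        print(samples.min(), samples.max(), samples.mean())       # centred near 128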

  9. A stochastic storm surge generator for the German North Sea and the multivariate statistical assessment of the simulation results

    NASA Astrophysics Data System (ADS)

    Wahl, Thomas; Jensen, Jürgen; Mudersbach, Christoph

    2010-05-01

    Storm surges along the German North Sea coastline led to major damage in the past and the risk of inundation is expected to increase in the course of ongoing climate change. Knowledge of the characteristics of possible storm surges is essential for the performance of integrated risk analyses, e.g. based on the source-pathway-receptor concept. The latter includes the storm surge simulation/analyses (source), modelling of dike/dune breach scenarios (pathway) and the quantification of potential losses (receptor). In subproject 1b of the German joint research project XtremRisK (www.xtremrisk.de), a stochastic storm surge generator for the south-eastern North Sea area is developed. The input data for the multivariate model are high resolution sea level observations from tide gauges during extreme events. Observed storm surge hydrographs consisting of three tides are parameterised using 25 parameters (19 sea level parameters and 6 time parameters). After fitting common parametric probability distributions and running a large number of Monte Carlo simulations, the final reconstruction leads to a set of 100,000 (default) synthetic storm surge events with a one-minute resolution. Such a data set can potentially serve as the basis for a large number of applications. For risk analyses, storm surges with peak water levels exceeding the design water levels are of special interest. The occurrence probabilities of the simulated extreme events are estimated based on multivariate statistics, considering the parameters "peak water level" and "fullness/intensity". In the past, most studies considered only the peak water levels during extreme events, which may not be the most important parameter in all cases. Here, a 2D Archimedean copula model is used for the estimation of the joint probabilities of the selected parameters, capturing the dependence structure independently of the marginal distributions. In coordination with subproject 1a, the results will be used as the input for the XtremRisK subprojects 2 to 4. The project is funded by the German Federal Ministry of Education and Research (BMBF) (Project No. 03 F 0483 B).
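
    The joint-probability step can be sketched with a Gumbel copula, one member of the Archimedean family: the joint exceedance probability follows from the copula by inclusion-exclusion, and the joint return period follows as 1/(event rate x exceedance probability). The copula parameter, marginal probabilities and event rate below are placeholders, not fitted values from the project.

        # Hedged sketch: joint exceedance probability and joint return period
        # from a 2-D Gumbel (Archimedean) copula; all numbers are placeholders.
        import math

        def gumbel_copula(u, v, theta):
            """C(u, v) = exp(-[(-ln u)^theta + (-ln v)^theta]^(1/theta)), theta >= 1."""
            return math.exp(-((-math.log(u)) ** theta
                              + (-math.log(v)) ** theta) ** (1.0 / theta))

        theta = 2.5              # dependence strength (placeholder)
        u = 0.99                 # P(peak water level <= threshold)
        v = 0.98                 # P(fullness/intensity <= threshold)

        # P(U > u and V > v) via inclusion-exclusion on the copula.
        p_joint_exceed = 1.0 - u - v + gumbel_copula(u, v, theta)
        events_per_year = 10.0   # mean number of surge events per year (placeholder)
        print(f"joint exceedance prob per event = {p_joint_exceed:.4f}")
        print(f"joint return period = {1.0 / (events_per_year * p_joint_exceed):.1f} years")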

  10. Assessment of source probabilities for potential tsunamis affecting the U.S. Atlantic coast

    USGS Publications Warehouse

    Geist, E.L.; Parsons, T.

    2009-01-01

    Estimating the likelihood of tsunamis occurring along the U.S. Atlantic coast critically depends on knowledge of tsunami source probability. We review available information on both earthquake and landslide probabilities from potential sources that could generate local and transoceanic tsunamis. Estimating source probability includes defining both size and recurrence distributions for earthquakes and landslides. For the former distribution, source sizes are often distributed according to a truncated or tapered power-law relationship. For the latter distribution, sources are often assumed to occur in time according to a Poisson process, simplifying the way tsunami probabilities from individual sources can be aggregated. For the U.S. Atlantic coast, earthquake tsunami sources primarily occur at transoceanic distances along plate boundary faults. Probabilities for these sources are constrained from previous statistical studies of global seismicity for similar plate boundary types. In contrast, there is presently little information constraining landslide probabilities that may generate local tsunamis. Though there is significant uncertainty in tsunami source probabilities for the Atlantic, results from this study yield a comparative analysis of tsunami source recurrence rates that can form the basis for future probabilistic analyses.
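
    The Poisson simplification mentioned above can be made concrete: if each source produces threshold-exceeding tsunamis at rate lam_i, the aggregated hazard is itself a Poisson process with rate sum(lam_i), so the probability of at least one event in an exposure time T is 1 - exp(-T * sum(lam_i)). The rates in the sketch are placeholders, not estimates from the study.

        # Sketch of Poisson aggregation across tsunami sources; rates are placeholders.
        import math

        rates_per_year = {
            "far-field plate-boundary earthquakes": 1.0e-3,
            "local submarine landslides": 2.0e-4,
        }
        T = 50.0  # exposure time in years

        total_rate = sum(rates_per_year.values())
        p_at_least_one = 1.0 - math.exp(-total_rate * T)
        print(f"P(>=1 event in {T:.0f} yr) = {p_at_least_one:.3f}")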

  11. [Conditional probability analysis between tinnitus and comorbidities in patients attending the National Rehabilitation Institute-LGII in the period 2012-2013].

    PubMed

    Gómez Toledo, Verónica; Gutiérrez Farfán, Ileana; Verduzco-Mendoza, Antonio; Arch-Tirado, Emilio

    Tinnitus is defined as the conscious perception of a sensation of sound that occurs in the absence of an external stimulus. This audiological symptom affects 7% to 19% of the adult population. The aim of this study is to describe the associated comorbidities present in patients with tinnitus using joint and conditional probability analysis. Patients of both genders, diagnosed with unilateral or bilateral tinnitus, aged between 20 and 45 years, and with a full computerised medical record were selected. Study groups were formed on the basis of the following clinical aspects: 1) audiological findings; 2) vestibular findings; 3) comorbidities such as temporomandibular dysfunction, tubal dysfunction, and otosclerosis; and 4) triggering factors of tinnitus, such as noise exposure, respiratory tract infection, and use of ototoxic and/or drugs. Of the patients with tinnitus, 27 (65%) reported hearing loss, 11 (26.19%) temporomandibular dysfunction, and 11 (26.19%) vestibular disorders. When performing the joint probability analysis, it was found that the probability of a patient with tinnitus having hearing loss was 27/42 = 0.65, and 20/42 = 0.47 for the bilateral type. The result for P(A ∩ B) = 30%. Bayes' theorem, P(Ai|B) = P(Ai∩B)/P(B), was used, and various probabilities were calculated. Therefore, in patients with temporomandibular dysfunction and vestibular disorders, a posterior probability of P(Ai|B) = 31.44% was calculated. Consideration should be given to the joint and conditional probability approach as tools for the study of different pathologies. Copyright © 2016 Academia Mexicana de Cirugía A.C. Published by Masson Doyma México S.A. All rights reserved.
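
    The joint and conditional machinery used above reduces to P(A ∩ B) computed from counts and Bayes' rule P(A|B) = P(A ∩ B)/P(B). The sketch below uses the cohort size and comorbidity counts from the abstract but a hypothetical overlap count, so the resulting numbers are illustrative only.

        # Minimal sketch of joint and conditional probabilities from counts.
        n_total = 42                       # patients with tinnitus (per the abstract)
        n_A = 11                           # e.g. temporomandibular dysfunction
        n_B = 11                           # e.g. vestibular disorders
        n_A_and_B = 5                      # hypothetical overlap, for illustration

        p_A_and_B = n_A_and_B / n_total    # joint probability P(A ∩ B)
        p_B = n_B / n_total
        p_A_given_B = p_A_and_B / p_B      # conditional probability P(A|B)
        print(f"P(A ∩ B) = {p_A_and_B:.3f},  P(A|B) = {p_A_given_B:.3f}")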

  12. Multinomial mixture model with heterogeneous classification probabilities

    USGS Publications Warehouse

    Holland, M.D.; Gray, B.R.

    2011-01-01

    Royle and Link (Ecology 86(9):2505-2512, 2005) proposed an analytical method that allowed estimation of multinomial distribution parameters and classification probabilities from categorical data measured with error. While useful, we demonstrate algebraically and by simulations that this method yields biased multinomial parameter estimates when the probabilities of correct category classifications vary among sampling units. We address this shortcoming by treating these probabilities as logit-normal random variables within a Bayesian framework. We use Markov chain Monte Carlo to compute Bayes estimates from a simulated sample from the posterior distribution. Based on simulations, this elaborated Royle-Link model yields nearly unbiased estimates of multinomial and correct classification probabilities when classification probabilities are allowed to vary according to the normal distribution on the logit scale or according to the Beta distribution. The method is illustrated using categorical submersed aquatic vegetation data. © 2010 Springer Science+Business Media, LLC.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Friar, James Lewis; Goldman, Terrance J.; Pérez-Mercader, J.

    In this paper, we apply the Law of Total Probability to the construction of scale-invariant probability distribution functions (pdf's), and require that probability measures be dimensionless and unitless under a continuous change of scales. If the scale-change distribution function is scale invariant then the constructed distribution will also be scale invariant. Repeated application of this construction on an arbitrary set of (normalizable) pdf's results again in scale-invariant distributions. The invariant function of this procedure is given uniquely by the reciprocal distribution, suggesting a kind of universality. Finally, we separately demonstrate that the reciprocal distribution results uniquely from requiring maximum entropy for size-class distributions with uniform bin sizes.
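
    A toy numerical check of the scale-invariance claim (not the paper's derivation): sampling from the reciprocal distribution f(x) ∝ 1/x over one decade, rescaling by an arbitrary factor, and comparing leading-digit frequencies against Benford's law P(d) = log10(1 + 1/d).

        # Toy check: the reciprocal distribution gives Benford-like leading
        # digits, and rescaling by a constant leaves the digit law unchanged.
        import numpy as np

        rng = np.random.default_rng(5)
        u = rng.uniform(size=200_000)
        x = 10.0 ** u                       # reciprocal-distributed on [1, 10)

        def leading_digit_freq(values):
            digits = (values / 10.0 ** np.floor(np.log10(values))).astype(int)
            return np.bincount(digits, minlength=10)[1:] / len(values)

        benford = np.log10(1.0 + 1.0 / np.arange(1, 10))
        print(np.round(leading_digit_freq(x), 3))
        print(np.round(leading_digit_freq(3.7 * x), 3))   # arbitrary rescaling
        print(np.round(benford, 3))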

  14. Proximal Versus Distal Control of Two-Joint Planar Reaching Movements in the Presence of Neuromuscular Noise

    PubMed Central

    Nguyen, Hung P.; Dingwell, Jonathan B.

    2012-01-01

    Determining how the human nervous system contends with neuro-motor noise is vital to understanding how humans achieve accurate goal-directed movements. Experimentally, people learning skilled tasks tend to reduce variability in distal joint movements more than in proximal joint movements. This suggests that they might be imposing greater control over distal joints than proximal joints. However, the reasons for this remain unclear, largely because it is not experimentally possible to directly manipulate either the noise or the control at each joint independently. Therefore, this study used a 2 degree-of-freedom torque driven arm model to determine how different combinations of noise and/or control independently applied at each joint affected the reaching accuracy and the total work required to make the movement. Signal-dependent noise was simultaneously and independently added to the shoulder and elbow torques to induce endpoint errors during planar reaching. Feedback control was then applied, independently and jointly, at each joint to reduce endpoint error due to the added neuromuscular noise. Movement direction and the inertia distribution along the arm were varied to quantify how these biomechanical variations affected the system performance. Endpoint error and total net work were computed as dependent measures. When each joint was independently subjected to noise in the absence of control, endpoint errors were more sensitive to distal (elbow) noise than to proximal (shoulder) noise for nearly all combinations of reaching direction and inertia ratio. The effects of distal noise on endpoint errors were more pronounced when inertia was distributed more toward the forearm. In contrast, the total net work decreased as mass was shifted to the upper arm for reaching movements in all directions. When noise was present at both joints and joint control was implemented, controlling the distal joint alone reduced endpoint errors more than controlling the proximal joint alone for nearly all combinations of reaching direction and inertia ratio. Applying control only at the distal joint was more effective at reducing endpoint errors when more of the mass was more proximally distributed. Likewise, controlling the distal joint alone required less total net work than controlling the proximal joint alone for nearly all combinations of reaching distance and inertia ratio. It is more efficient to reduce endpoint error and energetic cost by selectively applying control to reduce variability in the distal joint than the proximal joint. The reasons for this arise from the biomechanical configuration of the arm itself. PMID:22757504

  15. Proximal versus distal control of two-joint planar reaching movements in the presence of neuromuscular noise.

    PubMed

    Nguyen, Hung P; Dingwell, Jonathan B

    2012-06-01

    Determining how the human nervous system contends with neuro-motor noise is vital to understanding how humans achieve accurate goal-directed movements. Experimentally, people learning skilled tasks tend to reduce variability in distal joint movements more than in proximal joint movements. This suggests that they might be imposing greater control over distal joints than proximal joints. However, the reasons for this remain unclear, largely because it is not experimentally possible to directly manipulate either the noise or the control at each joint independently. Therefore, this study used a 2 degree-of-freedom torque driven arm model to determine how different combinations of noise and/or control independently applied at each joint affected the reaching accuracy and the total work required to make the movement. Signal-dependent noise was simultaneously and independently added to the shoulder and elbow torques to induce endpoint errors during planar reaching. Feedback control was then applied, independently and jointly, at each joint to reduce endpoint error due to the added neuromuscular noise. Movement direction and the inertia distribution along the arm were varied to quantify how these biomechanical variations affected the system performance. Endpoint error and total net work were computed as dependent measures. When each joint was independently subjected to noise in the absence of control, endpoint errors were more sensitive to distal (elbow) noise than to proximal (shoulder) noise for nearly all combinations of reaching direction and inertia ratio. The effects of distal noise on endpoint errors were more pronounced when inertia was distributed more toward the forearm. In contrast, the total net work decreased as mass was shifted to the upper arm for reaching movements in all directions. When noise was present at both joints and joint control was implemented, controlling the distal joint alone reduced endpoint errors more than controlling the proximal joint alone for nearly all combinations of reaching direction and inertia ratio. Applying control only at the distal joint was more effective at reducing endpoint errors when more of the mass was more proximally distributed. Likewise, controlling the distal joint alone required less total net work than controlling the proximal joint alone for nearly all combinations of reaching distance and inertia ratio. It is more efficient to reduce endpoint error and energetic cost by selectively applying control to reduce variability in the distal joint than the proximal joint. The reasons for this arise from the biomechanical configuration of the arm itself.

  16. On joint subtree distributions under two evolutionary models.

    PubMed

    Wu, Taoyang; Choi, Kwok Pui

    2016-04-01

    In population and evolutionary biology, hypotheses about micro-evolutionary and macro-evolutionary processes are commonly tested by comparing the shape indices of empirical evolutionary trees with those predicted by neutral models. A key ingredient in this approach is the ability to compute and quantify distributions of various tree shape indices under random models of interest. As a step to meet this challenge, in this paper we investigate the joint distribution of cherries and pitchforks (that is, subtrees with two and three leaves) under two widely used null models: the Yule-Harding-Kingman (YHK) model and the proportional to distinguishable arrangements (PDA) model. Based on two novel recursive formulae, we propose a dynamic approach to numerically compute the exact joint distribution (and hence the marginal distributions) for trees of any size. We also obtained insights into the statistical properties of trees generated under these two models, including a constant correlation between the cherry and the pitchfork distributions under the YHK model, and the log-concavity and unimodality of the cherry distributions under both models. In addition, we show that there exists a unique change point for the cherry distributions between these two models. Copyright © 2015 Elsevier Inc. All rights reserved.

  17. Modeling coverage gaps in haplotype frequencies via Bayesian inference to improve stem cell donor selection.

    PubMed

    Louzoun, Yoram; Alter, Idan; Gragert, Loren; Albrecht, Mark; Maiers, Martin

    2018-05-01

    Regardless of sampling depth, accurate genotype imputation is limited in regions of high polymorphism which often have a heavy-tailed haplotype frequency distribution. Many rare haplotypes are thus unobserved. Statistical methods to improve imputation by extending reference haplotype distributions using linkage disequilibrium patterns that relate allele and haplotype frequencies have not yet been explored. In the field of unrelated stem cell transplantation, imputation of highly polymorphic human leukocyte antigen (HLA) genes has an important application in identifying the best-matched stem cell donor when searching large registries totaling over 28,000,000 donors worldwide. Despite these large registry sizes, a significant proportion of searched patients present novel HLA haplotypes. Supporting this observation, HLA population genetic models have indicated that many extant HLA haplotypes remain unobserved. The absent haplotypes are a significant cause of error in haplotype matching. We have applied a Bayesian inference methodology for extending haplotype frequency distributions, using a model where new haplotypes are created by recombination of observed alleles. Applications of this joint probability model offer significant improvement in frequency distribution estimates over the best existing alternative methods, as we illustrate using five-locus HLA frequency data from the National Marrow Donor Program registry. Transplant matching algorithms and disease association studies involving phasing and imputation of rare variants may benefit from this statistical inference framework.

  18. Stochastic Computations in Cortical Microcircuit Models

    PubMed Central

    Maass, Wolfgang

    2013-01-01

    Experimental data from neuroscience suggest that a substantial amount of knowledge is stored in the brain in the form of probability distributions over network states and trajectories of network states. We provide a theoretical foundation for this hypothesis by showing that even very detailed models for cortical microcircuits, with data-based diverse nonlinear neurons and synapses, have a stationary distribution of network states and trajectories of network states to which they converge exponentially fast from any initial state. We demonstrate that this convergence holds in spite of the non-reversibility of the stochastic dynamics of cortical microcircuits. We further show that, in the presence of background network oscillations, separate stationary distributions emerge for different phases of the oscillation, in accordance with experimentally reported phase-specific codes. We complement these theoretical results by computer simulations that investigate resulting computation times for typical probabilistic inference tasks on these internally stored distributions, such as marginalization or marginal maximum-a-posteriori estimation. Furthermore, we show that the inherent stochastic dynamics of generic cortical microcircuits enables them to quickly generate approximate solutions to difficult constraint satisfaction problems, where stored knowledge and current inputs jointly constrain possible solutions. This provides a powerful new computing paradigm for networks of spiking neurons, that also throws new light on how networks of neurons in the brain could carry out complex computational tasks such as prediction, imagination, memory recall and problem solving. PMID:24244126

  19. Probabilistic SSME blades structural response under random pulse loading

    NASA Technical Reports Server (NTRS)

    Shiao, Michael; Rubinstein, Robert; Nagpal, Vinod K.

    1987-01-01

    The purpose is to develop models of random impacts on a Space Shuttle Main Engine (SSME) turbopump blade and to predict the probabilistic structural response of the blade to these impacts. The random loading is caused by the impact of debris. The probabilistic structural response is characterized by distribution functions for stress and displacements as functions of the loading parameters which determine the random pulse model. These parameters include pulse arrival, amplitude, and location. The analysis can be extended to predict level crossing rates. This requires knowledge of the joint distribution of the response and its derivative. The model of random impacts chosen allows the pulse arrivals, pulse amplitudes, and pulse locations to be random. Specifically, the pulse arrivals are assumed to be governed by a Poisson process, which is characterized by a mean arrival rate. The pulse intensity is modelled as a normally distributed random variable with a zero mean chosen independently at each arrival. The standard deviation of the distribution is a measure of pulse intensity. Several different models were used for the pulse locations. For example, three points near the blade tip were chosen at which pulses were allowed to arrive with equal probability. Again, the locations were chosen independently at each arrival. The structural response was analyzed both by direct Monte Carlo simulation and by a semi-analytical method.
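    The random pulse model described here is compact enough to state directly: pulse arrivals follow a Poisson process, amplitudes are zero-mean normal, and the impact location is drawn with equal probability from a few candidate points near the blade tip. The sketch below, with placeholder values for the rate, amplitude standard deviation, window length, and locations, generates load realizations of this kind; the structural response calculation itself is not reproduced.

      import numpy as np

      rng = np.random.default_rng(0)

      T = 1.0            # observation window [s] (placeholder)
      rate = 50.0        # mean pulse arrival rate [pulses/s] (placeholder)
      sigma = 2.0        # standard deviation of pulse amplitude (placeholder units)
      locations = [0.90, 0.95, 1.00]  # candidate impact points near the blade tip (normalized span)

      def simulate_pulse_train():
          """One realization of the random impact load: (arrival times, amplitudes, locations)."""
          n_pulses = rng.poisson(rate * T)                 # Poisson number of arrivals in [0, T]
          times = np.sort(rng.uniform(0.0, T, n_pulses))   # arrivals uniform given the count
          amps = rng.normal(0.0, sigma, n_pulses)          # zero-mean normal amplitudes
          locs = rng.choice(locations, n_pulses)           # equal-probability tip locations
          return times, amps, locs

      # Direct Monte Carlo over many realizations, e.g. to accumulate response statistics.
      for _ in range(3):
          t, a, x = simulate_pulse_train()
          print(len(t), "pulses; first few amplitudes:", np.round(a[:3], 2))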

  20. Low-Rank Discriminant Embedding for Multiview Learning.

    PubMed

    Li, Jingjing; Wu, Yue; Zhao, Jidong; Lu, Ke

    2017-11-01

    This paper focuses on the specific problem of multiview learning where samples have the same feature set but different probability distributions, e.g., different viewpoints or different modalities. Since samples lying in different distributions cannot be compared directly, this paper aims to learn a latent subspace shared by multiple views, assuming that the input views are generated from this latent subspace. Previous approaches usually learn the common subspace by either maximizing the empirical likelihood or preserving the geometric structure. However, considering the complementarity between the two objectives, this paper proposes a novel approach, named low-rank discriminant embedding (LRDE), for multiview learning by taking full advantage of both. By further considering the duality between data points and features of a multiview scene, i.e., data points can be grouped based on their distribution on features, while features can be grouped based on their distribution on the data points, LRDE not only deploys low-rank constraints on both the sample level and the feature level to dig out the shared factors across different views, but also preserves geometric information in both the ambient sample space and the embedding feature space by designing a novel graph structure under the framework of graph embedding. Finally, LRDE jointly optimizes low-rank representation and graph embedding in a unified framework. Comprehensive experiments in both multiview and pairwise manners demonstrate that LRDE performs much better than previous approaches proposed in the recent literature.

  1. Ubiquity of Benford's law and emergence of the reciprocal distribution

    DOE PAGES

    Friar, James Lewis; Goldman, Terrance J.; Pérez-Mercader, J.

    2016-04-07

    In this paper, we apply the Law of Total Probability to the construction of scale-invariant probability distribution functions (pdf's), and require that probability measures be dimensionless and unitless under a continuous change of scales. If the scale-change distribution function is scale invariant then the constructed distribution will also be scale invariant. Repeated application of this construction on an arbitrary set of (normalizable) pdf's results again in scale-invariant distributions. The invariant function of this procedure is given uniquely by the reciprocal distribution, suggesting a kind of universality. Finally, we separately demonstrate that the reciprocal distribution results uniquely from requiring maximum entropy for size-class distributions with uniform bin sizes.
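    As a quick numerical illustration of the connection drawn here (our sketch, not the authors' derivation): sampling from the reciprocal distribution p(x) ∝ 1/x over one decade and tabulating leading digits reproduces Benford's law, P(d) = log10(1 + 1/d).

      import numpy as np

      rng = np.random.default_rng(1)

      # Sample from the reciprocal distribution on [1, 10): p(x) = 1 / (x ln 10).
      # Inverse-CDF sampling: x = 10**u with u uniform on [0, 1).
      x = 10.0 ** rng.uniform(0.0, 1.0, 1_000_000)

      first_digit = x.astype(int)              # leading digit for x in [1, 10)
      observed = np.bincount(first_digit, minlength=10)[1:] / x.size
      benford = np.log10(1.0 + 1.0 / np.arange(1, 10))

      for d, (o, b) in enumerate(zip(observed, benford), start=1):
          print(f"digit {d}: sampled {o:.4f}  Benford {b:.4f}")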

  2. Distributed Control by Lagrangian Steepest Descent

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.; Bieniawski, Stefan

    2004-01-01

    Often adaptive, distributed control can be viewed as an iterated game between independent players. The coupling between the players' mixed strategies, arising as the system evolves from one instant to the next, is determined by the system designer. Information theory tells us that the most likely joint strategy of the players, given a value of the expectation of the overall control objective function, is the minimizer of a Lagrangian function of the joint strategy. So the goal of the system designer is to speed evolution of the joint strategy to that Lagrangian-minimizing point, lower the expected value of the control objective function, and repeat. Here we elaborate the theory of algorithms that do this using local descent procedures, and that thereby achieve efficient, adaptive, distributed control.

  3. Joint modelling of longitudinal CEA tumour marker progression and survival data on breast cancer

    NASA Astrophysics Data System (ADS)

    Borges, Ana; Sousa, Inês; Castro, Luis

    2017-06-01

    This work proposes the use of biostatistical methods to study breast cancer in patients of Braga's Hospital Senology Unit, located in Portugal. The primary motivation is to contribute to the understanding of the progression of breast cancer within the Portuguese population, using statistical model assumptions more complex than those of traditional analyses, which take into account the possible existence of a serial correlation structure within the observations of the same subject. We aim to infer which risk factors affect the survival of Braga's Hospital patients diagnosed with a breast tumour, whilst also analysing the risk factors that affect a tumour marker used in the surveillance of disease progression, the carcinoembryonic antigen (CEA). As the survival and longitudinal processes may be associated, it is important to model these two processes together. Hence, a joint model of the two processes was used to infer on their association. A data set of 540 patients, along with 50 variables, was collected from the medical records of the hospital. Two different joint models, with different parameterizations that give different interpretations to the model parameters, were applied to the same data set; these were chosen by convenience as the ones implemented in the R software, and the results from the two models were compared. Results from the joint models showed that the longitudinal CEA values were significantly associated with the survival probability of these patients. A comparison between the parameter estimates obtained in this analysis and previous independent survival [4] and longitudinal analyses [5][6] leads us to conclude that independent analyses yield biased parameter estimates. Hence, an assumption of association between the two processes in a joint model of breast cancer data is necessary.

  4. Optimal Joint Remote State Preparation of Arbitrary Equatorial Multi-qudit States

    NASA Astrophysics Data System (ADS)

    Cai, Tao; Jiang, Min

    2017-03-01

    As an important communication technology, quantum information transmission plays an important role in the future network communication. It involves two kinds of transmission ways: quantum teleportation and remote state preparation. In this paper, we put forward a new scheme for optimal joint remote state preparation (JRSP) of an arbitrary equatorial two-qudit state with hybrid dimensions. Moreover, the receiver can reconstruct the target state with 100 % success probability in a deterministic manner via two spatially separated senders. Based on it, we can extend it to joint remote preparation of arbitrary equatorial multi-qudit states with hybrid dimensions using the same strategy.

  5. Implicit Learning of Predictive Relationships in Three-element Visual Sequences by Young and Old Adults

    PubMed Central

    Howard, James H.; Howard, Darlene V.; Dennis, Nancy A.; Kelly, Andrew J.

    2008-01-01

    Knowledge of sequential relationships enables future events to be anticipated and processed efficiently. Research with the serial reaction time task (SRTT) has shown that sequence learning often occurs implicitly without effort or awareness. Here we report four experiments that use a triplet-learning task (TLT) to investigate sequence learning in young and older adults. In the TLT people respond only to the last target event in a series of discrete, three-event sequences or triplets. Target predictability is manipulated by varying the triplet frequency (joint probability) and/or the statistical relationships (conditional probabilities) among events within the triplets. Results revealed that both groups learned, though older adults showed less learning of both joint and conditional probabilities. Young people used the statistical information in both cues, but older adults relied primarily on information in the second cue alone. We conclude that the TLT complements and extends the SRTT and other tasks by offering flexibility in the kinds of sequential statistical regularities that may be studied as well as by controlling event timing and eliminating motor response sequencing. PMID:18763897
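    The two kinds of regularity manipulated by the TLT are easy to make concrete: the joint probability of a whole triplet versus the conditional probability of the target given the two cues. The counts below are invented purely for illustration.

      from collections import Counter

      # Hypothetical stream of three-event triplets (cue1, cue2, target).
      triplets = ([("A", "B", "C")] * 40 + [("A", "B", "D")] * 10 +
                  [("E", "F", "C")] * 25 + [("E", "F", "D")] * 25)

      counts = Counter(triplets)
      n = len(triplets)

      def joint_probability(t):
          """P(cue1, cue2, target): how often the whole triplet occurs."""
          return counts[t] / n

      def conditional_probability(t):
          """P(target | cue1, cue2): predictiveness of the cue pair for the target."""
          pair_total = sum(c for (c1, c2, _), c in counts.items() if (c1, c2) == t[:2])
          return counts[t] / pair_total

      for t in [("A", "B", "C"), ("E", "F", "C")]:
          print(t, "joint =", round(joint_probability(t), 3),
                "conditional =", round(conditional_probability(t), 3))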

  6. Neuroanatomical distribution of mechanoreceptors in the human cadaveric shoulder capsule and labrum

    PubMed Central

    Witherspoon, Jessica W; Smirnova, Irina V; McIff, Terence E

    2014-01-01

    The distribution, location, and spatial arrangement of mechanoreceptors are important for neural signal conciseness and accuracy in proprioceptive information required to maintain functional joint stability. The glenohumeral joint capsule and labrum are mechanoreceptor-containing tissues for which the distribution of mechanoreceptors has not been determined despite the importance of these tissues in stabilizing the shoulder. More recently, it has been shown that damage to articular mechanoreceptors can result in proprioceptive deficits that may lead to recurrent instability. Awareness of mechanoreceptor distribution in the glenohumeral joint capsule and labrum may allow preservation of the mechanoreceptors during surgical treatment for shoulder instability, and in turn retain the joint's proprioceptive integrity. For this reason, we sought to develop a neuroanatomical map of the mechanoreceptors within the capsule and labrum. We postulated that the mechanoreceptors in these tissues are distributed in a unique pattern, with mechanoreceptor-scarce regions that may be more appropriate for surgical dissection. We determined the neuroanatomical distribution of mechanoreceptors and their associated fascicles in the capsule and labrum from eight human cadaver shoulder pairs using our improved gold chloride staining technique and light microscopy. A distribution pattern was consistently observed in the capsule and labrum from which we derived a neuroanatomical map. Both tissues demonstrated mechanoreceptor-dense and -scarce regions that may be considered during surgical treatment for instability. Capsular fascicles were located in the subsynovial layer, whereas labral fascicles were concentrated in the peri-core zone. The capsular fascicles presented as a lattice network and with a plexiform appearance. Fascicles within the labrum resembled a cable structure with the fascicles running in parallel. Our findings contribute to the neuroanatomical knowledge of the two glenohumeral joint stabilizers, namely, capsule and labrum, primarily involved in the onset of shoulder instability and recurrent instability. Neuroanatomical knowledge of articular mechanoreceptors is important for (i) developing a topographical map that reflects correspondence between the joint and surrounding musculature, (ii) understanding proprioceptive deficits that are only partially restored post surgical and post rehabilitative treatment, and (iii) gaining further knowledge about articular mechanoreceptors. PMID:25040358

  7. Improvement in Fatigue Performance of Aluminium Alloy Welded Joints by Laser Shock Peening in a Dynamic Strain Aging Temperature Regime.

    PubMed

    Su, Chun; Zhou, Jianzhong; Meng, Xiankai; Huang, Shu

    2016-09-26

    As a new treatment process after welding, the process parameters of laser shock peening (LSP) in dynamic strain aging (DSA) temperature regimes can be precisely controlled, and the process is a non-contact one. The effects of LSP at elevated temperatures on the distribution of the surface residual stress of AA6061-T6 welded joints were investigated by using X-ray diffraction technology with the sin² ϕ method and Abaqus software. The fatigue life of the welded joints was estimated by performing tensile fatigue tests. The microstructural evolution in surface and fatigue fractures of the welded joints was presented by means of surface integrity and fracture surface testing. In the DSA temperature regime of AA6061-T6 welded joints, the residual compressive stress was distributed more stably than that of LSP at room temperature. The thermal corrosion resistance and fatigue properties of the welded joints were also improved. The experimental results and numerical analysis were in mutual agreement.

  8. Improvement in Fatigue Performance of Aluminium Alloy Welded Joints by Laser Shock Peening in a Dynamic Strain Aging Temperature Regime

    PubMed Central

    Su, Chun; Zhou, Jianzhong; Meng, Xiankai; Huang, Shu

    2016-01-01

    As a new treatment process after welding, the process parameters of laser shock peening (LSP) in dynamic strain aging (DSA) temperature regimes can be precisely controlled, and the process is a non-contact one. The effects of LSP at elevated temperatures on the distribution of the surface residual stress of AA6061-T6 welded joints were investigated by using X-ray diffraction technology with the sin²ϕ method and Abaqus software. The fatigue life of the welded joints was estimated by performing tensile fatigue tests. The microstructural evolution in surface and fatigue fractures of the welded joints was presented by means of surface integrity and fracture surface testing. In the DSA temperature regime of AA6061-T6 welded joints, the residual compressive stress was distributed more stably than that of LSP at room temperature. The thermal corrosion resistance and fatigue properties of the welded joints were also improved. The experimental results and numerical analysis were in mutual agreement. PMID:28773920

  9. Partially linear mixed-effects joint models for skewed and missing longitudinal competing risks outcomes.

    PubMed

    Lu, Tao; Lu, Minggen; Wang, Min; Zhang, Jun; Dong, Guang-Hui; Xu, Yong

    2017-12-18

    Longitudinal competing risks data frequently arise in clinical studies. Skewness and missingness are commonly observed for these data in practice. However, most joint models do not account for these data features. In this article, we propose partially linear mixed-effects joint models to analyze skew longitudinal competing risks data with missingness. In particular, to account for skewness, we replace the commonly assumed symmetric distributions by asymmetric distribution for model errors. To deal with missingness, we employ an informative missing data model. The joint models that couple the partially linear mixed-effects model for the longitudinal process, the cause-specific proportional hazard model for competing risks process and missing data process are developed. To estimate the parameters in the joint models, we propose a fully Bayesian approach based on the joint likelihood. To illustrate the proposed model and method, we implement them to an AIDS clinical study. Some interesting findings are reported. We also conduct simulation studies to validate the proposed method.

  10. Spatial Probability Distribution of Strata's Lithofacies and its Impacts on Land Subsidence in Huairou Emergency Water Resources Region of Beijing

    NASA Astrophysics Data System (ADS)

    Li, Y.; Gong, H.; Zhu, L.; Guo, L.; Gao, M.; Zhou, C.

    2016-12-01

    Continuous over-exploitation of groundwater causes dramatic drawdown and leads to regional land subsidence in the Huairou Emergency Water Resources region, which is located in the upper-middle part of the Chaobai river basin of Beijing. Owing to the spatial heterogeneity of the strata's lithofacies in the alluvial fan, ground deformation has no significant positive correlation with groundwater drawdown, and one of the challenges ahead is to quantify the spatial distribution of the strata's lithofacies. The transition probability geostatistics approach provides potential for characterizing the distribution of heterogeneous lithofacies in the subsurface. By combining the thickness of the clay layer extracted from the simulation with the deformation field acquired from PS-InSAR technology, the influence of the strata's lithofacies on land subsidence can be analyzed quantitatively. The strata's lithofacies derived from borehole data were generalized into four categories, and their probability distribution in the observed space was estimated using transition probability geostatistics; clay was the predominant compressible material. Geologically plausible realizations of the lithofacies distribution were produced, accounting for the complex heterogeneity of the alluvial plain. At a probability level of more than 40 percent, the volume of clay defined was 55 percent of the total volume of the strata's lithofacies. This level, nearly equaling the volume of compressible clay derived from the geostatistics, was thus chosen to represent the boundary between compressible and uncompressible material. The method incorporates statistical geological information, such as distribution proportions, average lengths, and juxtaposition tendencies of geological types, mainly derived from borehole data and expert knowledge, into the Markov chain model of transition probability. Similarities were indicated between the spatial patterns of the deformation field and the clay layer: in areas with roughly similar water-table decline, subsidence occurs more where the subsurface has a higher probability of containing compressible material than where that probability is lower. Such an estimate of the spatial probability distribution is useful for analyzing the uncertainty of land subsidence.
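    As a minimal sketch of the transition-probability idea (not the authors' geostatistical workflow), a vertical lithofacies succession can be simulated from a Markov transition matrix of the kind estimated from borehole logs; the four categories and the matrix entries below are placeholders.

      import numpy as np

      rng = np.random.default_rng(2)

      facies = ["clay", "silt", "sand", "gravel"]   # generalized categories (placeholder)
      # Row i gives the probability of the next (downward) interval's facies given facies i.
      P = np.array([
          [0.70, 0.15, 0.10, 0.05],
          [0.25, 0.55, 0.15, 0.05],
          [0.10, 0.20, 0.60, 0.10],
          [0.05, 0.10, 0.25, 0.60],
      ])
      assert np.allclose(P.sum(axis=1), 1.0)

      def simulate_column(n_intervals, start=0):
          """Simulate a vertical succession of facies indices from the Markov chain."""
          states = [start]
          for _ in range(n_intervals - 1):
              states.append(rng.choice(4, p=P[states[-1]]))
          return states

      column = simulate_column(30)
      print([facies[s] for s in column])
      # Fraction of this realization occupied by compressible clay:
      print("clay proportion:", column.count(0) / len(column))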

  11. The exact probability distribution of the rank product statistics for replicated experiments.

    PubMed

    Eisinga, Rob; Breitling, Rainer; Heskes, Tom

    2013-03-18

    The rank product method is a widely accepted technique for detecting differentially regulated genes in replicated microarray experiments. To approximate the sampling distribution of the rank product statistic, the original publication proposed a permutation approach, whereas recently an alternative approximation based on the continuous gamma distribution was suggested. However, both approximations are imperfect for estimating small tail probabilities. In this paper we relate the rank product statistic to number theory and provide a derivation of its exact probability distribution and the true tail probabilities. Copyright © 2013 Federation of European Biochemical Societies. Published by Elsevier B.V. All rights reserved.
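    For orientation, the rank product of a gene is the geometric mean of its ranks across the k replicates, with small values indicating consistent up-regulation. The sketch below computes the statistic and a simple permutation p-value on placeholder data; it does not reproduce the exact number-theoretic distribution derived in the paper.

      import numpy as np

      rng = np.random.default_rng(3)

      n_genes, k = 1000, 4
      data = rng.normal(size=(n_genes, k))   # placeholder log-ratios
      data[0] += 3.0                         # make gene 0 artificially "up-regulated"

      def rank_products(values):
          """Geometric mean of per-replicate ranks (rank 1 = largest value)."""
          ranks = (-values).argsort(axis=0).argsort(axis=0) + 1
          return np.exp(np.log(ranks).mean(axis=1))

      rp = rank_products(data)

      # Permutation null: shuffle each replicate's column independently, pool all genes.
      n_perm = 200
      null = np.empty((n_perm, n_genes))
      for b in range(n_perm):
          shuffled = np.column_stack([rng.permutation(data[:, j]) for j in range(k)])
          null[b] = rank_products(shuffled)

      p_value_gene0 = (null <= rp[0]).mean()
      print("rank product of gene 0:", round(rp[0], 2), " permutation p-value:", p_value_gene0)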

  12. Double density dynamics: realizing a joint distribution of a physical system and a parameter system

    NASA Astrophysics Data System (ADS)

    Fukuda, Ikuo; Moritsugu, Kei

    2015-11-01

    To perform a variety of types of molecular dynamics simulations, we created a deterministic method termed ‘double density dynamics’ (DDD), which realizes an arbitrary distribution for both physical variables and their associated parameters simultaneously. Specifically, we constructed an ordinary differential equation that has an invariant density relating to a joint distribution of the physical system and the parameter system. A generalized density function leads to a physical system that develops under nonequilibrium environment-describing superstatistics. The joint distribution density of the physical system and the parameter system appears as the Radon-Nikodym derivative of a distribution that is created by a scaled long-time average, generated from the flow of the differential equation under an ergodic assumption. The general mathematical framework is fully discussed to address the theoretical possibility of our method, and a numerical example representing a 1D harmonic oscillator is provided to validate the method being applied to the temperature parameters.

  13. [Endoprostheses in geriatric traumatology].

    PubMed

    Buecking, B; Eschbach, D; Bliemel, C; Knobe, M; Aigner, R; Ruchholtz, S

    2017-01-01

    Geriatric traumatology is increasing in importance due to the demographic transition. In cases of fractures close to large joints it is questionable whether primary joint replacement is advantageous compared to joint-preserving internal fixation. The aim of this study was to describe the importance of prosthetic joint replacement in the treatment of geriatric patients suffering from frequent periarticular fractures in comparison to osteosynthetic joint reconstruction and conservative methods. A selective search of the literature was carried out to identify studies and recommendations concerned with primary arthroplasty of fractures in the region of the various joints (hip, shoulder, elbow and knee). The importance of primary arthroplasty in geriatric traumatology differs greatly between the various joints. Implantation of a prosthesis has now become the gold standard for displaced fractures of the femoral neck. In addition, reverse shoulder arthroplasty has become an established alternative option to osteosynthesis in the treatment of complex proximal humeral fractures. Due to a lack of large studies definitive recommendations cannot yet be given for fractures around the elbow and the knee. Nowadays, joint replacement for these fractures is recommended only if reconstruction of the joint surface is not possible. The importance of primary joint replacement for geriatric fractures will probably increase in the future. Further studies with larger patient numbers must be conducted to achieve more confidence in decision making between joint replacement and internal fixation especially for shoulder, elbow and knee joints.

  14. Morphine Injection

    MedlinePlus

    ... back or joint pain; widening of the pupils; irritability; anxiety; weakness; stomach cramps; difficulty falling asleep or staying asleep; nausea; loss of appetite; vomiting; diarrhea; fast breathing; or fast heartbeat. Your doctor will probably decrease your dose gradually.

  15. Meperidine Injection

    MedlinePlus

    ... back or joint pain; widening of the pupils; irritability; anxiety; weakness; stomach cramps; difficulty falling asleep or staying asleep; nausea; loss of appetite; vomiting; diarrhea; fast breathing; or fast heartbeat. Your doctor will probably decrease your dose gradually.

  16. Glossary of Foot and Ankle Terms

    MedlinePlus

    ... or she will probably outgrow the condition naturally. Inversion - Twisting in toward the midline of the body. ... with the leg; the subtalar joint, which allows inversion and eversion of the foot with the leg; ...

  17. The Bayesian Approach to Association

    NASA Astrophysics Data System (ADS)

    Arora, N. S.

    2017-12-01

    The Bayesian approach to Association focuses mainly on quantifying the physics of the domain. In the case of seismic association, for instance, let X be the set of all significant events (above some threshold) and their attributes, such as location, time, and magnitude; Y1 be the set of detections that are caused by significant events and their attributes, such as seismic phase, arrival time, and amplitude; Y2 be the set of detections that are not caused by significant events; and finally Y be the set of observed detections. We would now define the joint distribution P(X, Y1, Y2, Y) = P(X) P(Y1 | X) P(Y2) I(Y = Y1 + Y2), where the last term simply states that Y1 and Y2 are a partitioning of Y. Given the above joint distribution, the inference problem is simply to find the X, Y1, and Y2 that maximize the posterior probability P(X, Y1, Y2 | Y), which reduces to maximizing P(X) P(Y1 | X) P(Y2) I(Y = Y1 + Y2). In this expression P(X) captures our prior belief about event locations, P(Y1 | X) captures notions of travel time, residual error distributions, and detection and mis-detection probabilities, while P(Y2) captures the false-detection rate of our seismic network. The elegance of this approach is that all of the assumptions are stated clearly in the models for P(X), P(Y1 | X), and P(Y2); the implementation of the inference is merely a by-product of this model. In contrast, some other methods such as GA hide a number of assumptions in the implementation details of the inference, such as the so-called "driver cells." The other important aspect of this approach is that all seismic knowledge, including knowledge from other domains such as infrasound and hydroacoustics, can be included in the same model, so we do not need to separately account for misdetections or merge seismic and infrasound events as a separate step. Finally, it should be noted that the objective of automatic association is to simplify the job of the humans who publish seismic bulletins based on this output. The error metric for association should accordingly weight errors such as missed events much more heavily than spurious events, because the former require more work from humans. Furthermore, errors need to be weighted more heavily during periods of high seismicity, such as an aftershock sequence, when the human effort tends to increase.
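    The factored posterior above can be read directly as a scoring function: for a candidate partition of the detections into event-caused (Y1, each assigned to an event in X) and noise (Y2), sum log P(X), log P(Y1 | X) and log P(Y2) and prefer the hypothesis with the higher total. The densities below (a flat per-event prior, a Gaussian travel-time residual, constant detection and false-detection terms) are illustrative stand-ins chosen for this sketch, not the model described in the abstract.

      import math

      # Illustrative stand-in densities (assumptions, not the abstract's actual model).
      LOG_PRIOR_PER_EVENT = math.log(1e-3)     # crude prior mass for one event in the search volume
      RESIDUAL_SIGMA = 1.5                     # travel-time residual standard deviation [s]
      LOG_P_DETECT = math.log(0.8)             # detection probability given an event
      LOG_FALSE_DENSITY = math.log(1e-4)       # density contributed by one false detection

      def log_gauss(x, sigma):
          return -0.5 * (x / sigma) ** 2 - math.log(sigma * math.sqrt(2.0 * math.pi))

      def association_score(events, assigned, noise):
          """log P(X) + log P(Y1 | X) + log P(Y2) for one association hypothesis.

          events:   hypothesized event origin times
          assigned: list of (detection_time, predicted_travel_time, event_index)
          noise:    detections labeled as false alarms
          """
          score = len(events) * LOG_PRIOR_PER_EVENT
          for det_time, pred_tt, ev in assigned:
              residual = det_time - (events[ev] + pred_tt)
              score += LOG_P_DETECT + log_gauss(residual, RESIDUAL_SIGMA)
          score += len(noise) * LOG_FALSE_DENSITY
          return score

      # Compare two hypotheses for three detections at a single station.
      detections = [105.2, 131.0, 160.7]
      h1 = association_score(events=[100.0], assigned=[(105.2, 5.0, 0)], noise=[131.0, 160.7])
      h2 = association_score(events=[], assigned=[], noise=detections)
      print("one event:", round(h1, 2), " all noise:", round(h2, 2))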

  18. A joint probability approach for coincidental flood frequency analysis at ungauged basin confluences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Cheng

    2016-03-12

    A reliable and accurate flood frequency analysis at the confluence of streams is of importance. Given that long-term peak flow observations are often unavailable at tributary confluences, at a practical level, this paper presents a joint probability approach (JPA) to address the coincidental flood frequency analysis at the ungauged confluence of two streams based on the flow rate data from the upstream tributaries. One case study is performed for comparison against several traditional approaches, including the position-plotting formula, the univariate flood frequency analysis, and the National Flood Frequency Program developed by US Geological Survey. It shows that the results generated by the JPA approach agree well with the floods estimated by the plotting position and univariate flood frequency analysis based on the observation data.
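    The abstract does not spell out the JPA formulation, so the following is only a minimal Monte Carlo sketch of the underlying idea: sample peak flows for the two upstream tributaries from fitted marginal distributions (assumed lognormal and, for simplicity, independent here), add them, and read design quantiles off the resulting distribution of coincidental flows at the ungauged confluence. A copula would be needed to represent dependence between the tributaries.

      import numpy as np

      rng = np.random.default_rng(4)

      # Assumed lognormal marginals for the two upstream tributaries (placeholder parameters).
      q1 = rng.lognormal(mean=5.0, sigma=0.6, size=1_000_000)   # tributary 1 peak flow [m^3/s]
      q2 = rng.lognormal(mean=4.5, sigma=0.8, size=1_000_000)   # tributary 2 peak flow [m^3/s]

      # Coincidental flood at the confluence (independence assumed only for this sketch).
      q_conf = q1 + q2

      for T in (10, 50, 100):                      # return periods in years
          p_non_exceed = 1.0 - 1.0 / T
          print(f"T = {T:>3} yr design flow ~ {np.quantile(q_conf, p_non_exceed):,.0f} m^3/s")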

  19. Modeling the probability distribution of peak discharge for infiltrating hillslopes

    NASA Astrophysics Data System (ADS)

    Baiamonte, Giorgio; Singh, Vijay P.

    2017-07-01

    Hillslope response plays a fundamental role in the prediction of peak discharge at the basin outlet. The peak discharge for the critical duration of rainfall and its probability distribution are needed for designing urban infrastructure facilities. This study derives the probability distribution, denoted as the GABS model, by coupling three models: (1) the Green-Ampt model for computing infiltration, (2) the kinematic wave model for computing the discharge hydrograph from the hillslope, and (3) the intensity-duration-frequency (IDF) model for computing the design rainfall intensity. The Hortonian mechanism for runoff generation is employed for computing the surface runoff hydrograph. Since the antecedent soil moisture condition (ASMC) significantly affects the rate of infiltration, its effect on the probability distribution of peak discharge is investigated. Application to a watershed in Sicily, Italy, shows that with increasing probability the expected effect of ASMC on increasing the maximum discharge diminishes. Only for low values of probability is the critical duration of rainfall influenced by ASMC, whereas its effect on the peak discharge seems to be small for any probability. For a given set of parameters, the derived probability distribution of peak discharge is fitted well by the gamma distribution. Finally, an application to a small watershed, aimed at testing the possibility of preparing in advance the runoff coefficient tables to be used for the rational method, was carried out, together with a comparison between peak discharges obtained by the GABS model and those measured in an experimental flume for a loamy-sand soil.
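    Of the three coupled components, the Green-Ampt step is the most self-contained: under ponded conditions the cumulative infiltration F(t) satisfies F = K t + ψ Δθ ln(1 + F/(ψ Δθ)) and can be solved by fixed-point iteration. The soil parameters below are placeholder values, and the kinematic-wave and IDF components of the GABS model are not reproduced.

      import math

      # Placeholder Green-Ampt soil parameters (roughly loamy-sand order of magnitude).
      K = 3.0        # saturated hydraulic conductivity [cm/h]
      psi = 6.0      # wetting-front suction head [cm]
      dtheta = 0.30  # moisture deficit [-]

      def green_ampt_F(t, tol=1e-8, max_iter=200):
          """Cumulative infiltration F(t) [cm] under ponded conditions, by fixed-point iteration."""
          F = K * t if t > 0 else 0.0   # initial guess
          for _ in range(max_iter):
              F_new = K * t + psi * dtheta * math.log(1.0 + F / (psi * dtheta))
              if abs(F_new - F) < tol:
                  return F_new
              F = F_new
          return F

      for t in (0.25, 0.5, 1.0, 2.0):       # hours
          F = green_ampt_F(t)
          f = K * (1.0 + psi * dtheta / F)  # infiltration capacity f(t) = K (1 + psi*dtheta / F)
          print(f"t = {t:4.2f} h  F = {F:6.2f} cm  f = {f:5.2f} cm/h")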

  20. Modeling Array Stations in SIG-VISA

    NASA Astrophysics Data System (ADS)

    Ding, N.; Moore, D.; Russell, S.

    2013-12-01

    We add support for array stations to SIG-VISA, a system for nuclear monitoring using probabilistic inference on seismic signals. Array stations comprise a large portion of the IMS network; they can provide increased sensitivity and more accurate directional information compared to single-component stations. Our existing model assumed that signals were independent at each station, which is false when many stations are close together, as in an array. The new model removes that assumption by jointly modeling signals across array elements. This is done by extending our existing Gaussian process (GP) regression models, also known as kriging, from a 3-dimensional single-component space of events to a 6-dimensional space of station-event pairs. For each array and each event attribute (including coda decay, coda height, amplitude transfer and travel time), we model the joint distribution across array elements using a Gaussian process that learns the correlation lengthscale across the array, thereby incorporating information from array stations into the probabilistic inference framework. To evaluate the effectiveness of our model, we perform 'probabilistic beamforming' on new events using our GP model, i.e., we compute the event azimuth having the highest posterior probability under the model, conditioned on the signals at array elements. We compare the results from our probabilistic inference model to the beamforming currently performed by IMS station processing.
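    A toy version of the joint modeling step: treat a per-element quantity (here a travel-time residual) as a draw from a Gaussian process over array-element coordinates with a squared-exponential kernel, so that nearby elements inform the prediction at a held-out element. The coordinates, residuals, and hyperparameters below are invented, and the full 6-dimensional station-event parameterization is not reproduced.

      import numpy as np

      rng = np.random.default_rng(5)

      def rbf_kernel(A, B, lengthscale, variance=1.0):
          """Squared-exponential covariance between two sets of element coordinates."""
          d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
          return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

      # Placeholder array-element coordinates [km] and travel-time residuals [s] for one event.
      X = rng.uniform(0.0, 3.0, size=(9, 2))          # 9 elements of a small-aperture array
      y = 0.2 * X[:, 0] + rng.normal(0.0, 0.05, 9)    # residuals correlated across the array

      lengthscale, noise = 1.5, 0.05                  # assumed hyperparameters (would be learned)
      K = rbf_kernel(X, X, lengthscale) + noise ** 2 * np.eye(len(X))

      # GP posterior mean at a held-out element: joint modeling lets neighbouring elements inform it.
      x_star = np.array([[1.5, 1.5]])
      k_star = rbf_kernel(x_star, X, lengthscale)
      mean_star = k_star @ np.linalg.solve(K, y)
      print("predicted residual at new element:", round(mean_star.item(), 3))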
