Sample records for joint probability model

  1. Metocean design parameter estimation for fixed platform based on copula functions

    NASA Astrophysics Data System (ADS)

    Zhai, Jinjin; Yin, Qilin; Dong, Sheng

    2017-08-01

    Considering the dependence among wave height, wind speed, and current velocity, we construct novel trivariate joint probability distributions via Archimedean copula functions. Thirty years of hindcast wave height, wind speed, and current velocity data for the Bohai Sea are sampled for a case study. Four distributions, namely the Gumbel, lognormal, Weibull, and Pearson Type III distributions, are candidate models for the marginal distributions of wave height, wind speed, and current velocity; the Pearson Type III distribution is selected as the optimal model. Bivariate and trivariate probability distributions of these environmental conditions are established based on four bivariate and trivariate Archimedean copulas, namely the Clayton, Frank, Gumbel-Hougaard, and Ali-Mikhail-Haq copulas. These joint probability models make full use of the marginal information and the dependence among the three variables. The design return values of the three variables can be obtained by three methods: univariate probability, conditional probability, and joint probability. The joint return periods of different load combinations are estimated with the proposed models, and platform responses (base shear, overturning moment, and deck displacement) are further calculated. For the same return period, the design values of wave height, wind speed, and current velocity obtained by the conditional and joint probability models are much smaller than those from univariate probability. By accounting for the dependence among the variables, the multivariate probability distributions provide design parameters close to the actual sea state for ocean platform design.
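
    As an illustration of the workflow this abstract describes, the following minimal sketch fits Pearson Type III marginals and a Gumbel-Hougaard copula, then evaluates a joint "both-exceed" return period. The data values are invented and the copula parameter is estimated from Kendall's tau; this is a sketch of the general technique, not the paper's code.

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical annual-maximum data (the paper uses 30 years of Bohai Sea hindcasts).
    hs = np.array([3.1, 4.2, 2.8, 3.9, 4.6, 3.3, 5.0, 4.1, 3.7, 4.4])  # wave height (m)
    ws = np.array([18., 22., 17., 21., 24., 19., 26., 22., 20., 23.])  # wind speed (m/s)

    # Pearson Type III marginals (the paper's selected model).
    hs_params = stats.pearson3.fit(hs)
    ws_params = stats.pearson3.fit(ws)

    # Gumbel-Hougaard copula: C(u,v) = exp(-[(-ln u)^t + (-ln v)^t]^(1/t)), t >= 1.
    def gumbel_copula(u, v, t):
        return np.exp(-((-np.log(u))**t + (-np.log(v))**t)**(1.0 / t))

    # Estimate t from Kendall's tau (tau = 1 - 1/t for this family).
    tau, _ = stats.kendalltau(hs, ws)
    t = 1.0 / (1.0 - tau)

    # Joint "AND" return period at the 50-year marginal levels of both variables:
    # T_and = 1 / (1 - u - v + C(u, v)) for annual maxima.
    u50 = v50 = 1.0 - 1.0 / 50.0
    T_and = 1.0 / (1.0 - u50 - v50 + gumbel_copula(u50, v50, t))
    hs50 = stats.pearson3.ppf(u50, *hs_params)  # 50-year design wave height
    print(f"theta = {t:.2f}, 50-yr Hs = {hs50:.1f} m, joint AND RP = {T_and:.0f} yr")
    ```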

  2. Bivariate categorical data analysis using normal linear conditional multinomial probability model.

    PubMed

    Sun, Bingrui; Sutradhar, Brajendra

    2015-02-10

    Bivariate multinomial data, such as left- and right-eye retinopathy status data, are analyzed either by using a joint bivariate probability model or by exploiting certain odds ratio-based association models. However, the joint bivariate probability model yields marginal probabilities that are complicated functions of the marginal and association parameters for both variables, and the odds ratio-based association model treats the odds ratios involved in the joint probabilities as 'working' parameters, which are consequently estimated through certain arbitrary 'working' regression models. Also, this latter odds ratio-based model does not provide any easy interpretation of the correlations between the two categorical variables. On the basis of pre-specified marginal probabilities, in this paper, we develop a bivariate normal-type linear conditional multinomial probability model to understand the correlations between two categorical variables. The parameters involved in the model are consistently estimated using the optimal likelihood and generalized quasi-likelihood approaches. The proposed model and the inferences are illustrated through an intensive simulation study as well as an analysis of the well-known Wisconsin Diabetic Retinopathy status data. Copyright © 2014 John Wiley & Sons, Ltd.

  3. Joint probabilities and quantum cognition

    NASA Astrophysics Data System (ADS)

    de Barros, J. Acacio

    2012-12-01

    In this paper we discuss the existence of joint probability distributions for quantumlike response computations in the brain. We do so by focusing on a contextual neural-oscillator model shown to reproduce the main features of behavioral stimulus-response theory. We then exhibit a simple example of contextual random variables not having a joint probability distribution, and describe how such variables can be obtained from neural oscillators, but not from a quantum observable algebra.
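
    The abstract's example concerns contextual random variables that lack a joint distribution. Whether given pairwise statistics of three ±1-valued variables admit any joint distribution can be posed as a small linear-programming feasibility check over the eight outcome probabilities, as in the hedged sketch below; the correlation values are illustrative, not taken from the paper.

    ```python
    import itertools
    import numpy as np
    from scipy.optimize import linprog

    def has_joint_distribution(e12, e13, e23):
        """Check whether pairwise moments E[Xi*Xj] of three +/-1-valued random
        variables are realizable by a single joint probability distribution."""
        outcomes = list(itertools.product([-1, 1], repeat=3))
        # Equality constraints: total probability and the three pairwise moments.
        A_eq = [[1.0] * 8,
                [x1 * x2 for x1, x2, x3 in outcomes],
                [x1 * x3 for x1, x2, x3 in outcomes],
                [x2 * x3 for x1, x2, x3 in outcomes]]
        b_eq = [1.0, e12, e13, e23]
        res = linprog(c=np.zeros(8), A_eq=A_eq, b_eq=b_eq,
                      bounds=[(0, 1)] * 8, method="highs")
        return res.success

    print(has_joint_distribution(0.5, 0.5, 0.5))     # True: a joint distribution exists
    print(has_joint_distribution(-0.9, -0.9, -0.9))  # False: contextual, none exists
    ```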

  4. Joint modeling of longitudinal data and discrete-time survival outcome.

    PubMed

    Qiu, Feiyou; Stein, Catherine M; Elston, Robert C

    2016-08-01

    A predictive joint shared parameter model is proposed for discrete time-to-event and longitudinal data. A discrete survival model with frailty and a generalized linear mixed model for the longitudinal data are joined to predict the probability of events. This joint model focuses on predicting discrete time-to-event outcome, taking advantage of repeated measurements. We show that the probability of an event in a time window can be more precisely predicted by incorporating the longitudinal measurements. The model was investigated by comparison with a two-step model and a discrete-time survival model. Results from both a study on the occurrence of tuberculosis and simulated data show that the joint model is superior to the other models in discrimination ability, especially as the latent variables related to both survival times and the longitudinal measurements depart from 0. © The Author(s) 2013.

  5. Accounting for tagging-to-harvest mortality in a Brownie tag-recovery model by incorporating radio-telemetry data.

    PubMed

    Buderman, Frances E; Diefenbach, Duane R; Casalena, Mary Jo; Rosenberry, Christopher S; Wallingford, Bret D

    2014-04-01

    The Brownie tag-recovery model is useful for estimating harvest rates but assumes all tagged individuals survive to the first hunting season; otherwise, mortality between the time of tagging and the hunting season will cause the Brownie estimator to be negatively biased. Alternatively, fitting animals with radio transmitters can be used to accurately estimate harvest rate but may be more costly. We developed a joint model to estimate harvest and annual survival rates that combines known-fate data from animals fitted with transmitters, used to estimate the probability of surviving the period from capture to the first hunting season, with data from reward-tagged animals in a Brownie tag-recovery model. We evaluated the bias and precision of the joint estimator, and how to optimally allocate effort between animals fitted with radio transmitters and inexpensive ear tags or leg bands. Tagging-to-harvest survival rates from >20 individuals with radio transmitters combined with 50-100 reward tags resulted in an unbiased and precise estimator of harvest rates. In addition, the joint model can test whether transmitters affect an individual's probability of being harvested. We illustrate application of the model using data from wild turkey, Meleagris gallopavo, to estimate harvest rates, and data from white-tailed deer, Odocoileus virginianus, to evaluate whether the presence of a visible radio transmitter is related to the probability of a deer being harvested. The joint known-fate tag-recovery model eliminates the requirement to capture and mark animals immediately prior to the hunting season to obtain accurate and precise estimates of harvest rate. In addition, the joint model can assess whether marking animals with radio transmitters affects an individual's probability of being harvested, whether through hunter selectivity or changes in a marked animal's behavior.
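
    A minimal one-period sketch of the joint-likelihood idea, with invented counts: known-fate telemetry data inform the tagging-to-season survival, while reward-tag recoveries inform the product of that survival and the harvest rate, which is what removes the Brownie estimator's bias. The full model also estimates annual survival.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import binom

    # Hypothetical data: telemetry (known fate) and reward-tag recoveries.
    n_radio, x_alive = 25, 22    # transmittered animals; alive at season opening
    n_tags, r_harvest = 100, 18  # reward-tagged animals; reported harvested

    def neg_log_lik(params):
        phi, h = params  # tagging-to-season survival, harvest rate
        # Telemetry informs phi; tag recoveries inform the product phi * h.
        ll = (binom.logpmf(x_alive, n_radio, phi)
              + binom.logpmf(r_harvest, n_tags, phi * h))
        return -ll

    fit = minimize(neg_log_lik, x0=[0.8, 0.2], bounds=[(0.01, 0.99)] * 2)
    phi_hat, h_hat = fit.x
    print(f"survival to season: {phi_hat:.2f}, harvest rate: {h_hat:.2f}")
    ```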

  6. Copula Models for Sociology: Measures of Dependence and Probabilities for Joint Distributions

    ERIC Educational Resources Information Center

    Vuolo, Mike

    2017-01-01

    Often in sociology, researchers are confronted with nonnormal variables whose joint distribution they wish to explore. Yet, assumptions of common measures of dependence can fail or estimating such dependence is computationally intensive. This article presents the copula method for modeling the joint distribution of two random variables, including…

  7. Scalable Joint Models for Reliable Uncertainty-Aware Event Prediction.

    PubMed

    Soleimani, Hossein; Hensman, James; Saria, Suchi

    2017-08-21

    Missing data and noisy observations pose significant challenges for reliably predicting events from irregularly sampled multivariate time series (longitudinal) data. Imputation methods, which are typically used for completing the data prior to event prediction, lack a principled mechanism to account for the uncertainty due to missingness. Alternatively, state-of-the-art joint modeling techniques can be used for jointly modeling the longitudinal and event data and computing event probabilities conditioned on the longitudinal observations. These approaches, however, make strong parametric assumptions and do not easily scale to multivariate signals with many observations. Our proposed approach consists of several key innovations. First, we develop a flexible and scalable joint model based upon sparse multiple-output Gaussian processes. Unlike state-of-the-art joint models, the proposed model can explain highly challenging structure, including non-Gaussian noise, while scaling to large data. Second, we derive an optimal policy for predicting events using the distribution of the event occurrence estimated by the joint model. The derived policy trades off the cost of a delayed detection versus incorrect assessments and abstains from making decisions when the estimated event probability does not satisfy the derived confidence criteria. Experiments on a large dataset show that the proposed framework significantly outperforms state-of-the-art techniques in event prediction.

  8. Bivariate Rainfall and Runoff Analysis Using Shannon Entropy Theory

    NASA Astrophysics Data System (ADS)

    Rahimi, A.; Zhang, L.

    2012-12-01

    Rainfall-runoff analysis is a key component of many hydrological and hydraulic designs in which the dependence of rainfall and runoff needs to be studied. Conventional bivariate distributions are often unable to model rainfall-runoff variables because they either constrain the range of the dependence or fix the form of the marginal distributions. Thus, this paper presents an approach to derive the entropy-based joint rainfall-runoff distribution using Shannon entropy theory. The derived distribution can model the full range of dependence and allows different specified marginals. The modeling and estimation proceed as follows: (i) univariate analysis of the marginal distributions in two steps, (a) using nonparametric statistics to detect modes and the underlying probability density, and (b) fitting appropriate parametric probability density functions; (ii) definition of the constraints based on the univariate analysis and the dependence structure; and (iii) derivation and validation of the entropy-based joint distribution. To validate the method, rainfall-runoff data are collected from the small agricultural experimental watersheds located in a semi-arid region near Riesel (Waco), Texas, maintained by the USDA. The results of the univariate analysis show that the rainfall variables follow the gamma distribution, whereas the runoff variables have a mixed structure and follow the mixed-gamma distribution. With this information, the entropy-based joint distribution is derived using the first moments, the first moments of log-transformed rainfall and runoff, and the covariance between rainfall and runoff. The results indicate that: (1) the derived joint distribution successfully preserves the dependence between rainfall and runoff, and (2) K-S goodness-of-fit tests confirm that the re-derived marginal distributions recover the underlying univariate probability densities, which further assures that the entropy-based joint rainfall-runoff distribution is satisfactorily derived. Overall, the study shows that Shannon entropy theory can be satisfactorily applied to model the dependence between rainfall and runoff, and that the entropy-based joint distribution captures dependence structures that conventional bivariate joint distributions cannot. [Figures in the original: the joint rainfall-runoff entropy-based PDF with corresponding marginal PDFs and histograms for the W12 watershed; a table of K-S test results and RMSE for the univariate distributions derived from the maximum-entropy joint probability distribution.]
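
    A grid-based sketch of steps (ii) and (iii), assuming the constraint set the abstract names (first moments, log-moments, and the rainfall-runoff cross moment); the domain and target moment values are invented. The maximum-entropy density follows from minimizing the convex dual of the constrained entropy problem.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Discretize a positive rainfall-runoff domain (ranges are hypothetical).
    x = np.linspace(0.1, 50.0, 120)  # rainfall (mm)
    y = np.linspace(0.1, 20.0, 120)  # runoff (mm)
    X, Y = np.meshgrid(x, y)

    # Sufficient statistics named in the abstract: first moments, log-moments,
    # and the rainfall-runoff cross moment. Target values are invented.
    T = np.stack([X, Y, np.log(X), np.log(Y), X * Y])
    mu = np.array([12.0, 3.0, 2.2, 0.7, 55.0])

    def dual(lam):
        # Convex dual of maximum entropy subject to E[T] = mu; the maxent
        # density is proportional to exp(-lam . T).
        logq = -np.tensordot(lam, T, axes=1)
        logZ = logq.max() + np.log(np.exp(logq - logq.max()).sum())
        return logZ + lam @ mu

    lam_hat = minimize(dual, x0=np.zeros(5), method="Nelder-Mead",
                       options={"maxiter": 20000}).x
    pdf = np.exp(-np.tensordot(lam_hat, T, axes=1))
    pdf /= pdf.sum()  # normalized entropy-based joint pmf on the grid
    print("fitted Lagrange multipliers:", np.round(lam_hat, 3))
    ```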

  9. Joint Probability Analysis of Extreme Precipitation and Storm Tide in a Coastal City under Changing Environment

    PubMed Central

    Xu, Kui; Ma, Chao; Lian, Jijian; Bin, Lingling

    2014-01-01

    Catastrophic flooding resulting from extreme meteorological events has occurred more frequently and drawn great attention in recent years in China. In coastal areas, extreme precipitation and storm tide are both inducing factors of flooding, and therefore their joint probability is critical to determining the flooding risk. The impact of storm tide or a changing environment on flooding is ignored or underestimated in today's design of drainage systems in coastal areas in China. This paper investigates the joint probability of extreme precipitation and storm tide and its change using copula-based models in Fuzhou City. A change point in 1984, detected by the Mann-Kendall and Pettitt's tests, divides the extreme precipitation series into two subsequences. For each subsequence the probability of the joint behavior of extreme precipitation and storm tide is estimated by the optimal copula. Results show that the joint probability has increased by more than 300% on average after 1984 (α = 0.05). The design joint return period (RP) of extreme precipitation and storm tide is estimated to propose a design standard for future flooding preparedness. For a given combination of extreme precipitation and storm tide, the design joint RP has become smaller than before, implying that flooding would happen more often after 1984, which corresponds with observation. The study facilitates understanding of the change in flood risk and the proposal of adaptation measures for coastal areas under a changing environment. PMID:25310006
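
    The joint return period computations behind such an analysis can be sketched as follows; the Gumbel-Hougaard copula and the parameter values standing in for the pre- and post-1984 fits are illustrative assumptions, not the paper's fitted optimal copulas.

    ```python
    import numpy as np

    def gumbel_copula(u, v, theta):
        # One candidate family; the paper selects the optimal copula per subsequence.
        return np.exp(-((-np.log(u))**theta + (-np.log(v))**theta)**(1.0 / theta))

    def joint_return_periods(u, v, theta, mu=1.0):
        """mu = mean interarrival time of events in years (1.0 for annual maxima)."""
        C = gumbel_copula(u, v, theta)
        T_or = mu / (1 - C)              # either variable exceeds its quantile
        T_and = mu / (1 - u - v + C)     # both variables exceed their quantiles
        return T_or, T_and

    # Illustrative: 20-year marginal levels before vs. after the 1984 change point,
    # with a (hypothetical) stronger dependence afterwards.
    u = v = 1 - 1 / 20
    for label, theta in [("pre-1984", 1.5), ("post-1984", 2.5)]:
        T_or, T_and = joint_return_periods(u, v, theta)
        print(f"{label}: OR RP = {T_or:.1f} yr, AND RP = {T_and:.1f} yr")
    # Stronger dependence makes the AND return period smaller, i.e. joint
    # extremes recur more often, matching the abstract's qualitative finding.
    ```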

  10. Quasi-Bell inequalities from symmetrized products of noncommuting qubit observables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gamel, Omar E.; Fleming, Graham R.

    Noncommuting observables cannot be simultaneously measured; however, under local hidden variable models, they must simultaneously hold premeasurement values, implying the existence of a joint probability distribution. We study the joint distributions of noncommuting observables on qubits, with possible criteria of positivity and the Fréchet bounds limiting the joint probabilities, concluding that the latter may be negative. We use symmetrization, justified heuristically and then more carefully via the Moyal characteristic function, to find the quantum operator corresponding to the product of noncommuting observables. This is then used to construct Quasi-Bell inequalities, Bell inequalities containing products of noncommuting observables, on two qubits. These inequalities place limits on the local hidden variable models that define joint probabilities for noncommuting observables. We also found that the Quasi-Bell inequalities have a quantum-to-classical violation as high as 3/2 on two qubits, higher than conventional Bell inequalities. Our result demonstrates the theoretical importance of noncommutativity in the nonlocality of quantum mechanics and provides an insightful generalization of Bell inequalities.

  11. Application of multivariate Gaussian detection theory to known non-Gaussian probability density functions

    NASA Astrophysics Data System (ADS)

    Schwartz, Craig R.; Thelen, Brian J.; Kenton, Arthur C.

    1995-06-01

    A statistical parametric multispectral sensor performance model was developed by ERIM to support mine field detection studies, multispectral sensor design/performance trade-off studies, and target detection algorithm development. The model assumes target detection algorithms and their performance models which are based on data assumed to obey multivariate Gaussian probability distribution functions (PDFs). The applicability of these algorithms and performance models can be generalized to data having non-Gaussian PDFs through the use of transforms which convert non-Gaussian data to Gaussian (or near-Gaussian) data. An example of one such transform is the Box-Cox power law transform. In practice, such a transform can be applied to non-Gaussian data prior to the introduction of a detection algorithm that is formally based on the assumption of multivariate Gaussian data. This paper presents an extension of these techniques to the case where the joint multivariate probability density function of the non-Gaussian input data is known, and where the joint estimate of the multivariate Gaussian statistics, under the Box-Cox transform, is desired. The jointly estimated multivariate Gaussian statistics can then be used to predict the performance of a target detection algorithm which has an associated Gaussian performance model.
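
    A minimal demonstration of the transform step described here, assuming scipy: Box-Cox-transform gamma-distributed (non-Gaussian) data and estimate Gaussian statistics in the transformed space, which can then feed a detection algorithm that formally assumes Gaussian data.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # Non-Gaussian (gamma-distributed) data standing in for sensor measurements.
    data = rng.gamma(shape=2.0, scale=3.0, size=5000)

    # Box-Cox power-law transform toward Gaussianity; lambda fitted by MLE.
    transformed, lam = stats.boxcox(data)
    print(f"fitted Box-Cox lambda: {lam:.3f}")

    # Gaussian statistics estimated in the transformed space.
    mu, sigma = transformed.mean(), transformed.std(ddof=1)
    print(f"transformed mean {mu:.2f}, std {sigma:.2f}, "
          f"skewness {stats.skew(transformed):.3f}")  # near 0 if the transform worked
    ```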

  12. Excluding joint probabilities from quantum theory

    NASA Astrophysics Data System (ADS)

    Allahverdyan, Armen E.; Danageozian, Arshag

    2018-03-01

    Quantum theory does not provide a unique definition for the joint probability of two noncommuting observables, which is the next important question after Born's probability for a single observable. Instead, various definitions have been suggested, e.g., via quasiprobabilities or via hidden-variable theories. After reviewing open issues of the joint probability, we relate it to quantum imprecise probabilities, which are noncontextual and are consistent with all constraints expected from a quantum probability. We study two noncommuting observables in a two-dimensional Hilbert space and show that there is no precise joint probability that applies for any quantum state and is consistent with imprecise probabilities. This contrasts with theorems by Bell and Kochen-Specker that exclude joint probabilities for more than two noncommuting observables in Hilbert spaces of dimension larger than two. If measurement contexts are included in the definition, joint probabilities are no longer excluded, but they are still constrained by imprecise probabilities.
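
    For concreteness, the sketch below evaluates one standard symmetrized candidate, the Margenau-Hill quasiprobability, for two noncommuting qubit observables and a state for which it goes negative. This illustrates the general obstruction the paper studies, not its specific construction.

    ```python
    import numpy as np

    sx = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli x
    sz = np.array([[1, 0], [0, -1]], dtype=complex)  # Pauli z

    def projectors(obs):
        vals, vecs = np.linalg.eigh(obs)
        return vals, [np.outer(vecs[:, k], vecs[:, k].conj()) for k in range(2)]

    # Margenau-Hill quasi-joint probability: p(a, b) = Re tr(rho * Pa * Pb).
    psi = np.array([np.cos(np.pi / 8), np.sin(np.pi / 8)], dtype=complex)
    rho = np.outer(psi, psi.conj())
    a_vals, a_proj = projectors(sx)
    b_vals, b_proj = projectors(sz)
    for i, Pa in enumerate(a_proj):
        for j, Pb in enumerate(b_proj):
            p = np.real(np.trace(rho @ Pa @ Pb))
            print(f"p(sx={a_vals[i]:+.0f}, sz={b_vals[j]:+.0f}) = {p:+.3f}")
    # One entry is negative for this state, so these values do not form a
    # bona fide joint probability distribution.
    ```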

  13. Lamb wave-based damage quantification and probability of detection modeling for fatigue life assessment of riveted lap joint

    NASA Astrophysics Data System (ADS)

    He, Jingjing; Wang, Dengjiang; Zhang, Weifang

    2015-03-01

    This study presents an experimental and modeling study for damage detection and quantification in riveted lap joints. Embedded lead zirconate titanate piezoelectric (PZT) ceramic wafer-type sensors are employed to perform in-situ non-destructive testing during fatigue cyclic loading. A multi-feature integration method is developed to quantify the crack size using the signal features of correlation coefficient, amplitude change, and phase change. In addition, a probability of detection (POD) model is constructed to quantify the reliability of the developed sizing method. Using the developed crack size quantification method and the resulting POD curve, probabilistic fatigue life prediction can be performed to provide comprehensive information for decision-making. The effectiveness of the overall methodology is demonstrated and validated using several aircraft lap joint specimens from different manufacturers and under different loading conditions.
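
    A hedged sketch of a conventional "â versus a" POD model of the kind the abstract invokes: regress the log signal feature on log crack size and convert a detection threshold into a POD curve. The data, threshold, and coefficients below are invented.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    # Hypothetical "a-hat vs a" data: signal feature vs. true crack size (mm).
    a = np.linspace(0.5, 8.0, 60)
    a_hat = np.exp(0.8 * np.log(a) + 0.3 + rng.normal(0, 0.25, a.size))

    # Classical a-hat-vs-a POD model: ln(a_hat) = b0 + b1*ln(a) + eps.
    b1, b0, *_ = stats.linregress(np.log(a), np.log(a_hat))
    resid = np.log(a_hat) - (b0 + b1 * np.log(a))
    tau = resid.std(ddof=2)

    a_th = np.log(1.5)  # assumed detection threshold on ln(a_hat)
    pod = lambda size: stats.norm.cdf((b0 + b1 * np.log(size) - a_th) / tau)
    for s in (1.0, 2.0, 4.0):
        print(f"POD at crack size {s} mm: {pod(s):.2f}")
    ```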

  14. Joint coverage probability in a simulation study on Continuous-Time Markov Chain parameter estimation.

    PubMed

    Benoit, Julia S; Chan, Wenyaw; Doody, Rachelle S

    2015-01-01

    Parameter dependency within data sets in simulation studies is common, especially in models such as Continuous-Time Markov Chains (CTMC). Additionally, the literature lacks a comprehensive examination of estimation performance for the likelihood-based general multi-state CTMC. Among studies attempting to assess the estimation, none have accounted for dependency among parameter estimates. The purpose of this research is twofold: 1) to develop a multivariate approach for assessing accuracy and precision in simulation studies, and 2) to add to the literature a comprehensive examination of the estimation of a general 3-state CTMC model. Simulation studies are conducted to analyze longitudinal data with a trinomial outcome using a CTMC with and without covariates. Measures of performance including bias, component-wise coverage probabilities, and joint coverage probabilities are calculated. An application is presented using Alzheimer's disease caregiver stress levels. Comparisons of joint and component-wise parameter estimates yield conflicting inferential results in simulations from models with and without covariates. In conclusion, caution should be taken when conducting simulation studies aiming to assess performance, and the choice of inference should properly reflect the purpose of the simulation.
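
    The distinction the paper draws can be illustrated with a small simulation comparing joint (Wald ellipse) coverage against component-wise interval coverage for correlated estimates; the data-generating model below is an invented stand-in for the CTMC fits.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    true = np.array([0.0, 0.0])  # true parameter vector
    n_sim, n_obs, covered_joint, covered_both = 2000, 50, 0, 0

    for _ in range(n_sim):
        # Correlated parameter estimates, as arise in likelihood fits.
        sample = rng.multivariate_normal(true, [[1.0, 0.7], [0.7, 1.0]], size=n_obs)
        est, cov = sample.mean(axis=0), np.cov(sample.T) / n_obs
        d = est - true
        # Joint coverage: 95% Wald confidence ellipse.
        if d @ np.linalg.inv(cov) @ d <= stats.chi2.ppf(0.95, df=2):
            covered_joint += 1
        # Component-wise coverage: both marginal 95% intervals contain the truth.
        se = np.sqrt(np.diag(cov))
        if np.all(np.abs(d) <= 1.96 * se):
            covered_both += 1

    print(f"joint coverage: {covered_joint / n_sim:.3f}, "
          f"both component-wise: {covered_both / n_sim:.3f}")
    ```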

  15. A Bayesian joint probability modeling approach for seasonal forecasting of streamflows at multiple sites

    NASA Astrophysics Data System (ADS)

    Wang, Q. J.; Robertson, D. E.; Chiew, F. H. S.

    2009-05-01

    Seasonal forecasting of streamflows can be highly valuable for water resources management. In this paper, a Bayesian joint probability (BJP) modeling approach for seasonal forecasting of streamflows at multiple sites is presented. A Box-Cox transformed multivariate normal distribution is proposed to model the joint distribution of future streamflows and their predictors, such as antecedent streamflows, El Niño-Southern Oscillation indices, and other climate indicators. Bayesian inference of model parameters and uncertainties is implemented using Markov chain Monte Carlo sampling, leading to joint probabilistic forecasts of streamflows at multiple sites. The model provides a parametric structure for quantifying relationships between variables, including intersite correlations. The Box-Cox transformed multivariate normal distribution has considerable flexibility for modeling a wide range of predictors and predictands. The Bayesian inference formulated allows the use of data that contain nonconcurrent and missing records. The model flexibility and data-handling ability mean that the BJP modeling approach is potentially of wide practical application. The paper also presents a number of statistical measures and graphical methods for verification of probabilistic forecasts of continuous variables. Results for streamflows at three river gauges in the Murrumbidgee River catchment in southeast Australia show that the BJP modeling approach has good forecast quality and that the fitted model is consistent with observed data.
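
    A sketch of the BJP backbone under simplifying assumptions: Box-Cox transform each margin, fit a multivariate normal by plug-in estimates (the paper instead infers parameters and uncertainty by MCMC), and condition on a predictor to obtain probabilistic forecasts at two sites. Data and dimensions are invented.

    ```python
    import numpy as np
    from scipy import stats
    from scipy.special import inv_boxcox

    rng = np.random.default_rng(3)
    # Hypothetical concurrent records: one predictor (antecedent flow) and two site flows.
    data = np.exp(rng.multivariate_normal(
        [2.0, 1.5, 1.4],
        [[0.30, 0.20, 0.15], [0.20, 0.30, 0.25], [0.15, 0.25, 0.30]], size=40))

    # Box-Cox each margin toward normality, then fit a joint MVN.
    z = np.empty_like(data)
    lams = []
    for j in range(data.shape[1]):
        z[:, j], lam = stats.boxcox(data[:, j])
        lams.append(lam)
    mu, S = z.mean(axis=0), np.cov(z.T)

    # Probabilistic forecast of the two site flows conditional on the predictor.
    z0 = z[-1, 0]  # latest predictor value in transformed space
    cond_mu = mu[1:] + S[1:, 0] * (z0 - mu[0]) / S[0, 0]
    cond_S = S[1:, 1:] - np.outer(S[1:, 0], S[0, 1:]) / S[0, 0]
    draws = rng.multivariate_normal(cond_mu, cond_S, size=2000)
    sites = [inv_boxcox(draws[:, k], lams[k + 1]) for k in range(2)]
    print(["median %.1f" % np.median(s) for s in sites])
    ```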

  16. The effects of the one-step replica symmetry breaking on the Sherrington-Kirkpatrick spin glass model in the presence of random field with a joint Gaussian probability density function for the exchange interactions and random fields

    NASA Astrophysics Data System (ADS)

    Hadjiagapiou, Ioannis A.; Velonakis, Ioannis N.

    2018-07-01

    The Sherrington-Kirkpatrick Ising spin glass model, in the presence of a random magnetic field, is investigated within the framework of the one-step replica symmetry breaking. The two random variables (exchange integral interaction Jij and random magnetic field hi) are drawn from a joint Gaussian probability density function characterized by a correlation coefficient ρ, assuming positive and negative values. The thermodynamic properties, the three different phase diagrams and system's parameters are computed with respect to the natural parameters of the joint Gaussian probability density function at non-zero and zero temperatures. The low temperature negative entropy controversy, a result of the replica symmetry approach, has been partly remedied in the current study, leading to a less negative result. In addition, the present system possesses two successive spin glass phase transitions with characteristic temperatures.

  17. Context-invariant quasi hidden variable (qHV) modelling of all joint von Neumann measurements for an arbitrary Hilbert space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loubenets, Elena R.

    We prove the existence for each Hilbert space of two new quasi hidden variable (qHV) models, statistically noncontextual and context-invariant, reproducing all the von Neumann joint probabilities via non-negative values of real-valued measures and all the quantum product expectations via the qHV (classical-like) average of the product of the corresponding random variables. In a context-invariant model, a quantum observable X can be represented by a variety of random variables satisfying the functional condition required in quantum foundations, but each of these random variables equivalently models X under all joint von Neumann measurements, regardless of their contexts. The proved existence of this model negates the general opinion that, in terms of random variables, the Hilbert space description of all the joint von Neumann measurements for dim H ≥ 3 can be reproduced only contextually. The existence of a statistically noncontextual qHV model, in particular, implies that every N-partite quantum state admits a local quasi hidden variable model introduced in Loubenets [J. Math. Phys. 53, 022201 (2012)]. The new results of the present paper also point to the generality of the quasi-classical probability model proposed in Loubenets [J. Phys. A: Math. Theor. 45, 185306 (2012)].

  18. Analyzing multicomponent receptive fields from neural responses to natural stimuli

    PubMed Central

    Rowekamp, Ryan; Sharpee, Tatyana O

    2011-01-01

    The challenge of building increasingly better models of neural responses to natural stimuli is to accurately estimate the multiple stimulus features that may jointly affect the neural spike probability. The selectivity for combinations of features is thought to be crucial for achieving classical properties of neural responses such as contrast invariance. The joint search for these multiple stimulus features is difficult because estimating spike probability as a multidimensional function of stimulus projections onto candidate relevant dimensions is subject to the curse of dimensionality. An attractive alternative is to search for relevant dimensions sequentially, as in projection pursuit regression. Here we demonstrate using analytic arguments and simulations of model cells that different types of sequential search strategies exhibit systematic biases when used with natural stimuli. Simulations show that joint optimization is feasible for up to three dimensions with current algorithms. When applied to the responses of V1 neurons to natural scenes, models based on three jointly optimized dimensions had better predictive power in a majority of cases compared to dimensions optimized sequentially, with different sequential methods yielding comparable results. Thus, although the curse of dimensionality remains, at least several relevant dimensions can be estimated by joint information maximization. PMID:21780916

  19. Multi-hazard Assessment and Scenario Toolbox (MhAST): A Framework for Analyzing Compounding Effects of Multiple Hazards

    NASA Astrophysics Data System (ADS)

    Sadegh, M.; Moftakhari, H.; AghaKouchak, A.

    2017-12-01

    Many natural hazards are driven by multiple forcing variables, and concurrent or consecutive extreme events significantly increase the risk of infrastructure/system failure. It is common practice to use univariate analysis based upon a perceived ruling driver to estimate design quantiles and/or return periods of extreme events. A multivariate analysis, however, permits modeling the simultaneous occurrence of multiple forcing variables. In this presentation, we introduce the Multi-hazard Assessment and Scenario Toolbox (MhAST), which comprehensively analyzes marginal and joint probability distributions of natural hazards. MhAST also offers a wide range of return period and design level scenarios and their likelihoods. The contribution of this study is four-fold: 1. comprehensive analysis of the marginal and joint probability of multiple drivers through 17 continuous distributions and 26 copulas, 2. multiple scenario analysis of concurrent extremes based upon the most likely joint occurrence, one ruling variable, and weighted random sampling of joint occurrences with similar exceedance probabilities, 3. weighted-average scenario analysis based on an expected event, and 4. uncertainty analysis of the most likely joint occurrence scenario using a Bayesian framework.

  20. PARTS: Probabilistic Alignment for RNA joinT Secondary structure prediction

    PubMed Central

    Harmanci, Arif Ozgun; Sharma, Gaurav; Mathews, David H.

    2008-01-01

    A novel method is presented for joint prediction of alignment and common secondary structures of two RNA sequences. The joint consideration of common secondary structures and alignment is accomplished by structural alignment over a search space defined by the newly introduced motif called matched helical regions. The matched helical region formulation generalizes previously employed constraints for structural alignment and thereby better accommodates the structural variability within RNA families. A probabilistic model based on pseudo free energies obtained from precomputed base pairing and alignment probabilities is utilized for scoring structural alignments. Maximum a posteriori (MAP) common secondary structures, sequence alignment and joint posterior probabilities of base pairing are obtained from the model via a dynamic programming algorithm called PARTS. The advantage of the more general structural alignment of PARTS is seen in secondary structure predictions for the RNase P family. For this family, the PARTS MAP predictions of secondary structures and alignment perform significantly better than prior methods that utilize a more restrictive structural alignment model. For the tRNA and 5S rRNA families, the richer structural alignment model of PARTS does not offer a benefit and the method therefore performs comparably with existing alternatives. For all RNA families studied, the posterior probability estimates obtained from PARTS offer an improvement over posterior probability estimates from a single sequence prediction. When considering the base pairings predicted over a threshold value of confidence, the combination of sensitivity and positive predictive value is superior for PARTS than for the single sequence prediction. PARTS source code is available for download under the GNU public license at http://rna.urmc.rochester.edu. PMID:18304945

  21. Climate Change Impact Assessment in Pacific North West Using Copula based Coupling of Temperature and Precipitation variables

    NASA Astrophysics Data System (ADS)

    Qin, Y.; Rana, A.; Moradkhani, H.

    2014-12-01

    Multi-model downscaled-scenario products allow us to better assess the uncertainty of the changes/variations of precipitation and temperature in the current and future periods. Joint probability distribution functions (PDFs) of the two climatic variables can help us better understand their interdependence and thus, in turn, help in assessing the future with confidence. Using the joint distribution of temperature and precipitation is also of significant importance in hydrological applications and climate change studies. In the present study, we have used a multi-model statistically downscaled-scenario ensemble of precipitation and temperature variables built from two different statistically downscaled climate datasets: 10 Global Climate Model (GCM) products downscaled from the CMIP5 daily dataset, namely those from the Bias Correction and Spatial Downscaling (BCSD) technique generated at Portland State University and those from the Multivariate Adaptive Constructed Analogs (MACA) technique generated at the University of Idaho, leading to two ensemble time series from 20 GCM products. The ensemble PDFs of both precipitation and temperature are then evaluated for summer, winter, and yearly periods for all 10 sub-basins across the Columbia River Basin (CRB). Finally, a copula is applied to establish the joint distribution of the two variables, enabling users to model their joint behavior with any level of correlation and dependency; moreover, the probabilistic distribution removes the limitations on the marginal distributions of the variables in question. The joint distribution is then used to estimate the change trends of joint precipitation and temperature in the current and future periods, along with estimates of the probabilities of the given changes. Results indicate varied change trends of the joint distribution at summer, winter, and yearly time scales in all 10 sub-basins. The probabilities of changes, as estimated from the joint precipitation and temperature distribution, provide useful information/insights for hydrological and climate change predictions.

  22. A comprehensive model to determine the effects of temperature and species fluctuations on reactions in turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Antaki, P. J.

    1981-01-01

    The joint probability distribution function (pdf), which is a modification of the bivariate Gaussian pdf, is discussed and results are presented for a global reaction model using the joint pdf. An alternative joint pdf is discussed. A criterion which permits the selection of temperature pdf's in different regions of turbulent, reacting flow fields is developed. Two principal approaches to the determination of reaction rates in computer programs containing detailed chemical kinetics are outlined. These models represent a practical solution to the modeling of species reaction rates in turbulent, reacting flows.

  23. Healthy Eating and Leisure-Time Activity: Cross-Sectional Analysis of the Role of Work Environments in the U.S.

    PubMed

    Williams, Jessica A R; Arcaya, Mariana; Subramanian, S V

    2017-11-01

    The aim of this study was to evaluate relationships between work context and two health behaviors, healthy eating and leisure-time physical activity (LTPA), in U.S. adults. Using data from the 2010 National Health Interview Survey (NHIS) and Occupational Information Network (N = 14,863), we estimated a regression model to predict the marginal and joint probabilities of healthy eating and adhering to recommended exercise guidelines. Decision-making freedom was positively related to healthy eating and both behaviors jointly. Higher physical load was associated with a lower marginal probability of LTPA, healthy eating, and both behaviors jointly. Smoke and vapor exposures were negatively related to healthy eating and both behaviors. Chemical exposure was positively related to LTPA and both behaviors. Characteristics associated with marginal probabilities were not always predictive of joint outcomes. On the basis of nationwide occupation-specific evidence, workplace characteristics are important for healthy eating and LTPA.

  24. Joint modelling of longitudinal CEA tumour marker progression and survival data on breast cancer

    NASA Astrophysics Data System (ADS)

    Borges, Ana; Sousa, Inês; Castro, Luis

    2017-06-01

    This work proposes the use of biostatistical methods to study breast cancer in patients of Braga Hospital's Senology Unit, located in Portugal. The primary motivation is to contribute to the understanding of the progression of breast cancer within the Portuguese population, using statistical model assumptions more complex than those of traditional analyses, taking into account the possible existence of a serial correlation structure within the observations of a same subject. We aim to infer which risk factors affect the survival of Braga Hospital's patients diagnosed with breast tumours, whilst analysing risk factors that affect a tumour marker used in the surveillance of disease progression, the carcinoembryonic antigen (CEA). As the survival and longitudinal processes may be associated, it is important to model these two processes together. Hence, a joint modelling of these two processes was conducted to infer on their association. A data set of 540 patients, along with 50 variables, was collected from medical records of the Hospital, and a joint model approach was used to analyse these data. Two different joint models, with different parameterizations giving different interpretations to the model parameters, were applied to the same data set; these were used for convenience as the ones implemented in the R software, and their results were compared. Results from the joint models showed that the longitudinal CEA values were significantly associated with the survival probability of these patients. A comparison between the parameter estimates obtained in this analysis and previous independent survival [4] and longitudinal analyses [5][6] leads us to conclude that independent analyses yield biased parameter estimates. Hence, an assumption of association between the two processes in a joint model of breast cancer data is necessary.

  25. The impact of joint responses of devices in an airport security system.

    PubMed

    Nie, Xiaofeng; Batta, Rajan; Drury, Colin G; Lin, Li

    2009-02-01

    In this article, we consider a model for an airport security system in which the declaration of a threat is based on the joint responses of inspection devices. This is in contrast to the typical system in which each check station independently declares a passenger as having a threat or not having a threat. In our framework the declaration of threat/no-threat is based upon the passenger scores at the check stations he/she goes through. To do this we use concepts from classification theory in the field of multivariate statistics analysis and focus on the main objective of minimizing the expected cost of misclassification. The corresponding correct classification and misclassification probabilities can be obtained by using a simulation-based method. After computing the overall false alarm and false clear probabilities, we compare our joint response system with two other independently operated systems. A model that groups passengers in a manner that minimizes the false alarm probability while maintaining the false clear probability within specifications set by a security authority is considered. We also analyze the staffing needs at each check station for such an inspection scheme. An illustrative example is provided along with sensitivity analysis on key model parameters. A discussion is provided on some implementation issues, on the various assumptions made in the analysis, and on potential drawbacks of the approach.

  26. On the joint spectral density of bivariate random sequences. Thesis Technical Report No. 21

    NASA Technical Reports Server (NTRS)

    Aalfs, David D.

    1995-01-01

    For univariate random sequences, the power spectral density acts like a probability density function of the frequencies present in the sequence. This dissertation extends that concept to bivariate random sequences. For this purpose, a function called the joint spectral density is defined that represents a joint probability weighing of the frequency content of pairs of random sequences. Given a pair of random sequences, the joint spectral density is not uniquely determined in the absence of any constraints. Two approaches to constraining the sequences are suggested: (1) assume the sequences are the margins of some stationary random field, (2) assume the sequences conform to a particular model that is linked to the joint spectral density. For both approaches, the properties of the resulting sequences are investigated in some detail, and simulation is used to corroborate theoretical results. It is concluded that under either of these two constraints, the joint spectral density can be computed from the non-stationary cross-correlation.
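
    A minimal illustration of a joint frequency-domain description for a pair of sequences, using the cross-spectral density and coherence from scipy.signal; the dissertation's constraint-based joint spectral density is a related but distinct construction.

    ```python
    import numpy as np
    from scipy import signal

    rng = np.random.default_rng(4)
    n = 4096
    # Two sequences sharing a band-limited component, plus independent noise.
    b, a = signal.butter(4, [0.1, 0.2], btype="band")
    shared = signal.lfilter(b, a, rng.standard_normal(n))
    x = shared + 0.5 * rng.standard_normal(n)
    y = shared + 0.5 * rng.standard_normal(n)

    # Cross-spectral density: a joint frequency-domain weighting of the pair.
    f, Pxy = signal.csd(x, y, fs=1.0, nperseg=512)
    _, Cxy = signal.coherence(x, y, fs=1.0, nperseg=512)
    k = np.argmax(np.abs(Pxy))
    print(f"peak joint power near {f[k]:.3f} cycles/sample, coherence {Cxy[k]:.2f}")
    ```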

  27. Understanding the joint behavior of temperature and precipitation for climate change impact studies

    NASA Astrophysics Data System (ADS)

    Rana, Arun; Moradkhani, Hamid; Qin, Yueyue

    2017-07-01

    Multiple downscaled scenario products allow us to assess the uncertainty of the variations of precipitation and temperature in the current and future periods. Probabilistic assessments of both climatic variables help better understand their interdependence and thus, in turn, help in assessing the future with confidence. In the present study, we use an ensemble of statistically downscaled precipitation and temperature from various models. The dataset used is a multi-model ensemble of 10 global climate models (GCMs) downscaled from the CMIP5 daily dataset using the Bias Correction and Spatial Downscaling (BCSD) technique, generated at Portland State University. The multi-model ensemble of both precipitation and temperature is evaluated for dry and wet periods for 10 sub-basins across the Columbia River Basin (CRB). Thereafter, a copula is applied to establish the joint distribution of the two variables on the multi-model ensemble data. The joint distribution is then used to estimate the change in trends of said variables in the future, along with estimates of the probabilities of the given change. The joint distribution trends vary, but are consistently positive, for dry and wet periods in the sub-basins of the CRB. The dry season generally indicates a higher positive change in precipitation than in temperature (compared to the historical period) across sub-basins, whereas the wet season indicates the opposite. Probabilities of changes in the future, as estimated from the joint distribution, indicate varied degrees and forms during the dry season, whereas the wet season is rather constant across all the sub-basins.

  28. Joint estimation over multiple individuals improves behavioural state inference from animal movement data.

    PubMed

    Jonsen, Ian

    2016-02-08

    State-space models provide a powerful way to scale up inference of movement behaviours from individuals to populations when the inference is made across multiple individuals. Here, I show how a joint estimation approach that assumes individuals share identical movement parameters can lead to improved inference of behavioural states associated with different movement processes. I use simulated movement paths with known behavioural states to compare estimation error between nonhierarchical and joint estimation formulations of an otherwise identical state-space model. Behavioural state estimation error was strongly affected by the degree of similarity between movement patterns characterising the behavioural states, with less error when movements were strongly dissimilar between states. The joint estimation model improved behavioural state estimation relative to the nonhierarchical model for simulated data with heavy-tailed Argos location errors. When applied to Argos telemetry datasets from 10 Weddell seals, the nonhierarchical model estimated highly uncertain behavioural state switching probabilities for most individuals whereas the joint estimation model yielded substantially less uncertainty. The joint estimation model better resolved the behavioural state sequences across all seals. Hierarchical or joint estimation models should be the preferred choice for estimating behavioural states from animal movement data, especially when location data are error-prone.

  29. Numerical simulation of artificial hip joint motion based on human age factor

    NASA Astrophysics Data System (ADS)

    Ramdhani, Safarudin; Saputra, Eko; Jamari, J.

    2018-05-01

    An artificial hip joint is a prosthesis (synthetic body part) that usually consists of two or more components. Replacement of the hip joint is usually due to the occurrence of arthritis, ordinarily in aged or older patients. Numerical simulation models are used to observe the range of motion in the artificial hip joint, with the joint's range of motion based on human age. Finite-element analysis (FEA) is used to calculate the von Mises stress during motion and to observe the probability of prosthetic impingement. The FEA uses a three-dimensional nonlinear model and considers variation in the position of the acetabular liner cup. The results of the numerical simulation show that the FEA method can be used to analyze the performance of the artificial hip joint more accurately than the conventional method.

  30. optBINS: Optimal Binning for histograms

    NASA Astrophysics Data System (ADS)

    Knuth, Kevin H.

    2018-03-01

    optBINS (optimal binning) determines the optimal number of bins in a uniform bin-width histogram by deriving the posterior probability for the number of bins in a piecewise-constant density model after assigning a multinomial likelihood and a non-informative prior. The maximum of the posterior probability occurs at a point where the prior probability and the joint likelihood are balanced. The interplay between these opposing factors effectively implements Occam's razor by selecting the simplest model that best describes the data.
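
    A compact re-implementation of the relative log posterior this abstract describes (Knuth's published optBINS formula), assuming equal-width bins:

    ```python
    import numpy as np
    from scipy.special import gammaln

    def log_posterior_bins(data, m):
        """Relative log posterior for m equal-width bins (Knuth's optBINS)."""
        n = data.size
        counts, _ = np.histogram(data, bins=m)
        return (n * np.log(m)
                + gammaln(m / 2.0) - m * gammaln(0.5) - gammaln(n + m / 2.0)
                + np.sum(gammaln(counts + 0.5)))

    rng = np.random.default_rng(5)
    data = rng.normal(size=1000)
    m_grid = np.arange(2, 101)
    logp = np.array([log_posterior_bins(data, m) for m in m_grid])
    print("optimal number of bins:", m_grid[np.argmax(logp)])
    ```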

  31. LES/PDF studies of joint statistics of mixture fraction and progress variable in piloted methane jet flames with inhomogeneous inlet flows

    NASA Astrophysics Data System (ADS)

    Zhang, Pei; Barlow, Robert; Masri, Assaad; Wang, Haifeng

    2016-11-01

    The mixture fraction and progress variable are often used as independent variables for describing turbulent premixed and non-premixed flames. There is a growing interest in using these two variables for describing partially premixed flames. The joint statistical distribution of the mixture fraction and progress variable is of great interest in developing models for partially premixed flames. In this work, we conduct predictive studies of the joint statistics of mixture fraction and progress variable in a series of piloted methane jet flames with inhomogeneous inlet flows. The employed models combine large eddy simulations with the Monte Carlo probability density function (PDF) method. The joint PDFs and marginal PDFs are examined in detail by comparing the model predictions and the measurements. Different presumed shapes of the joint PDFs are also evaluated.

  32. The return period analysis of natural disasters with statistical modeling of bivariate joint probability distribution.

    PubMed

    Li, Ning; Liu, Xueqin; Xie, Wei; Wu, Jidong; Zhang, Peng

    2013-01-01

    New features of natural disasters have been observed over the last several years. The factors that influence the disasters' formation mechanisms, regularity of occurrence, and main characteristics have been revealed to be more complicated and diverse in nature than previously thought. As the uncertainty involved increases, the variables need to be examined further. This article discusses the importance and the shortage of multivariate analysis of natural disasters and presents a method to estimate the joint probability of the return periods and perform a risk analysis. Severe dust storms from 1990 to 2008 in Inner Mongolia were used as a case study to test this new methodology, as they are normal and recurring climatic phenomena on Earth. Based on the 79 investigated events and according to a bivariate definition of severe dust storms, the joint probability distribution of severe dust storms was established using the observed data on maximum wind speed and duration. The joint return periods of severe dust storms were calculated, and the relevant risk was analyzed according to the joint probability. The copula function is able to simulate severe dust storm disasters accurately. The joint return periods generated are closer to those observed in reality than the univariate return periods and thus have more value in severe dust storm disaster mitigation, strategy making, program design, and improvement of risk management. This research may prove useful in risk-based decision making. The exploration of multivariate analysis methods can also lay the foundation for further applications in natural disaster risk analysis. © 2012 Society for Risk Analysis.

  33. Height probabilities in the Abelian sandpile model on the generalized finite Bethe lattice

    NASA Astrophysics Data System (ADS)

    Chen, Haiyan; Zhang, Fuji

    2013-08-01

    In this paper, we study the sandpile model on the generalized finite Bethe lattice with a particular boundary condition. Using a combinatorial method, we give the exact expressions for all single-site probabilities and some two-site joint probabilities. As a by-product, we prove that the height probabilities of bulk vertices are all the same for the Bethe lattice with the given boundary condition, which was found from numerical evidence by Grassberger and Manna ["Some more sandpiles," J. Phys. (France) 51, 1077-1098 (1990); doi:10.1051/jphys:0199000510110107700] but without a proof.

  34. A linear programming model for protein inference problem in shotgun proteomics.

    PubMed

    Huang, Ting; He, Zengyou

    2012-11-15

    Assembling peptides identified from tandem mass spectra into a list of proteins, referred to as protein inference, is an important issue in shotgun proteomics. The objective of protein inference is to find a subset of proteins that are truly present in the sample. Although many methods have been proposed for protein inference, several issues such as peptide degeneracy still remain unsolved. In this article, we present a linear programming model for protein inference. In this model, we use a transformation of the joint probability that each peptide/protein pair is present in the sample as the variable. Then, both the peptide probability and the protein probability can be expressed as linear combinations of these variables. Based on this simple fact, the protein inference problem is formulated as an optimization problem: minimize the number of proteins with non-zero probabilities under the constraint that the difference between the calculated peptide probability and the peptide probability generated from peptide identification algorithms should be less than some threshold. This model addresses the peptide degeneracy issue by forcing some joint probability variables involving degenerate peptides to be zero in a rigorous manner. The corresponding inference algorithm is named ProteinLP. We test the performance of ProteinLP on six datasets. Experimental results show that our method is competitive with the state-of-the-art protein inference algorithms. The source code of our algorithm is available at https://sourceforge.net/projects/prolp/. Contact: zyhe@dlut.edu.cn. Supplementary data are available at Bioinformatics online.
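
    A toy LP in the spirit of the described model, under a simplifying assumption: each peptide's probability equals the sum of its peptide/protein joint-presence variables, matched to the observed value within a threshold, and an LP relaxation stands in for counting proteins with non-zero probability. The actual ProteinLP formulation and transformation differ in detail.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Toy bipartite peptide-protein map: peptide i matches proteins in cover[i].
    cover = [(0,), (0, 1), (1, 2), (2,)]       # peptide 2 is degenerate
    pep_prob = np.array([0.9, 0.8, 0.7, 0.1])  # from a peptide identification tool
    delta = 0.15                               # allowed deviation threshold

    pairs = [(i, j) for i, cs in enumerate(cover) for j in cs]
    n_pairs, n_prot = len(pairs), 3
    # Variables: joint-presence x[i,j] per pair, plus a presence score y[j] per
    # protein with x[i,j] <= y[j]; minimize total protein presence sum(y).
    c = np.concatenate([np.zeros(n_pairs), np.ones(n_prot)])
    A_ub, b_ub = [], []
    for i in range(len(cover)):
        row = np.zeros(n_pairs + n_prot)
        for k, (pi, _) in enumerate(pairs):
            if pi == i:
                row[k] = 1.0
        A_ub.append(row); b_ub.append(pep_prob[i] + delta)      # sum_j x_ij <= p_i + d
        A_ub.append(-row); b_ub.append(-(pep_prob[i] - delta))  # sum_j x_ij >= p_i - d
    for k, (_, pj) in enumerate(pairs):                         # x_ij <= y_j
        row = np.zeros(n_pairs + n_prot)
        row[k], row[n_pairs + pj] = 1.0, -1.0
        A_ub.append(row); b_ub.append(0.0)

    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, 1)] * (n_pairs + n_prot), method="highs")
    print("protein presence scores:", np.round(res.x[n_pairs:], 2))
    ```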

  35. Joint model-based clustering of nonlinear longitudinal trajectories and associated time-to-event data analysis, linked by latent class membership: with application to AIDS clinical studies.

    PubMed

    Huang, Yangxin; Lu, Xiaosun; Chen, Jiaqing; Liang, Juan; Zangmeister, Miriam

    2017-10-27

    Longitudinal and time-to-event data are often observed together. Finite mixture models are currently used to analyze nonlinear heterogeneous longitudinal data, which, by releasing the homogeneity restriction of nonlinear mixed-effects (NLME) models, can cluster individuals into one of the pre-specified classes with class membership probabilities. This clustering may have clinical significance, and be associated with clinically important time-to-event data. This article develops a joint modeling approach to a finite mixture of NLME models for longitudinal data and proportional hazard Cox model for time-to-event data, linked by individual latent class indicators, under a Bayesian framework. The proposed joint models and method are applied to a real AIDS clinical trial data set, followed by simulation studies to assess the performance of the proposed joint model and a naive two-step model, in which finite mixture model and Cox model are fitted separately.

  19. Investigation of the relation between the return periods of major drought characteristics using copula functions

    NASA Astrophysics Data System (ADS)

    Hüsami Afşar, Mehdi; Unal Şorman, Ali; Tugrul Yilmaz, Mustafa

    2016-04-01

    Different drought characteristics (e.g., duration, average severity, and average areal extent) often have a monotonic relation: an increase in the magnitude of one is often followed by a similar increase in the magnitude of another. Hence it is viable to establish a relationship between different drought characteristics with the goal of predicting one from the others. Copula functions, which relate different variables through their joint and conditional cumulative probability distributions, are often used to model drought characteristics statistically. In this study, bivariate and trivariate joint probabilities of these characteristics are obtained over Ankara (Turkey) between 1960 and 2013. Copula-based return period estimation shows that joint probabilities of drought duration, average severity, and average areal extent can be estimated satisfactorily. Among the copula families investigated in this study, the elliptical family (i.e., the normal and Student's t copulas) resulted in the lowest root mean square error. This study was supported by TUBITAK fund #114Y676.
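    For readers unfamiliar with copula-based return periods, the following sketch shows the computation for a Gumbel-Hougaard copula (one of the Archimedean families commonly fitted in such studies), using the standard "OR" and "AND" joint return period formulas; the dependence parameter, mean interarrival time, and marginal probabilities are invented for illustration.

```python
# Hedged sketch: joint return periods of drought duration and severity
# via a Gumbel-Hougaard copula. All numerical values are illustrative.
import numpy as np

def gumbel_copula(u, v, theta):
    """Gumbel-Hougaard copula CDF C(u, v), theta >= 1."""
    return np.exp(-(((-np.log(u)) ** theta + (-np.log(v)) ** theta) ** (1.0 / theta)))

mu = 1.0          # mean interarrival time of drought events (years), assumed
theta = 2.5       # copula dependence parameter, assumed
u, v = 0.9, 0.9   # marginal non-exceedance probabilities F_D(d), F_S(s)

C = gumbel_copula(u, v, theta)
T_or = mu / (1.0 - C)            # return period of "D > d or S > s"
T_and = mu / (1.0 - u - v + C)   # return period of "D > d and S > s"
print(T_or, T_and)
```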

  20. Supernova Cosmology Inference with Probabilistic Photometric Redshifts (SCIPPR)

    NASA Astrophysics Data System (ADS)

    Peters, Christina; Malz, Alex; Hlozek, Renée

    2018-01-01

    The Bayesian Estimation Applied to Multiple Species (BEAMS) framework employs probabilistic supernova type classifications to do photometric SN cosmology. This work extends BEAMS to replace high-confidence spectroscopic redshifts with photometric redshift probability density functions, a capability that will be essential in the era of the Large Synoptic Survey Telescope and other next-generation photometric surveys, where it will not be possible to perform spectroscopic follow-up on every SN. We present the Supernova Cosmology Inference with Probabilistic Photometric Redshifts (SCIPPR) Bayesian hierarchical model for constraining the cosmological parameters from photometric lightcurves and host galaxy photometry, which includes selection effects and is extensible to uncertainty in the redshift-dependent supernova type proportions. We create a pair of realistic mock catalogs of joint posteriors over supernova type, redshift, and distance modulus informed by photometric supernova lightcurves, and over redshift from simulated host galaxy photometry. We perform inference under our model to obtain a joint posterior probability distribution over the cosmological parameters and compare our results with three alternatives: using a spectroscopic subset, using a subset of supernovae photometrically classified with high probability, and reducing each photometric redshift probability to a single measurement and error bar.

  1. Idealized models of the joint probability distribution of wind speeds

    NASA Astrophysics Data System (ADS)

    Monahan, Adam H.

    2018-05-01

    The joint probability distribution of wind speeds at two separate locations in space or points in time completely characterizes the statistical dependence of these two quantities, providing more information than linear measures such as correlation. In this study, we consider two models of the joint distribution of wind speeds obtained from idealized models of the dependence structure of the horizontal wind velocity components. The bivariate Rice distribution follows from assuming that the wind components have Gaussian and isotropic fluctuations. The bivariate Weibull distribution arises from power law transformations of wind speeds corresponding to vector components with Gaussian, isotropic, mean-zero variability. Maximum likelihood estimates of these distributions are compared using wind speed data from the mid-troposphere, from different altitudes at the Cabauw tower in the Netherlands, and from scatterometer observations over the sea surface. While the bivariate Rice distribution is more flexible and can represent a broader class of dependence structures, the bivariate Weibull distribution is mathematically simpler and may be more convenient in many applications. The complexity of the mathematical expressions obtained for the joint distributions suggests that the development of explicit functional forms for multivariate speed distributions from distributions of the components will not be practical for more complicated dependence structure or more than two speed variables.
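    The construction behind the bivariate Weibull model described above can be sketched directly: sample correlated, mean-zero, isotropic Gaussian velocity components at two locations, form the speeds, and apply a power-law transform. All parameter values below are illustrative, not taken from the paper.

```python
# Sketch of the bivariate Weibull construction: speeds from mean-zero,
# isotropic Gaussian wind components, then a power-law transform.
# Correlation r and exponent b are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, r, b = 100_000, 0.6, 1.2   # sample size, component correlation, exponent

# Corresponding components at the two locations are correlated.
cov = np.array([[1, r], [r, 1]])
L = np.linalg.cholesky(cov)
u = L @ rng.standard_normal((2, n))   # u-components at the two locations
v = L @ rng.standard_normal((2, n))   # v-components at the two locations

speed = np.sqrt(u**2 + v**2)          # Rayleigh speeds at each location
w = speed ** b                        # power-law transform -> Weibull margins
print(np.corrcoef(w))                 # dependence of the transformed speeds
```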

  2. New approach in bivariate drought duration and severity analysis

    NASA Astrophysics Data System (ADS)

    Montaseri, Majid; Amirataee, Babak; Rezaie, Hossein

    2018-04-01

    Copula functions have been widely applied as an advanced technique for constructing the joint probability distribution of drought duration and severity. The approach to data collection, as well as the amount and dispersion of the data series, can have a significant impact on the joint probability distribution created with copulas. Traditional analyses have usually adopted an Unconnected Drought Runs (UDR) view of droughts; in other words, droughts with different durations are treated as independent of each other. This data collection method omits the actual potential of short-term extreme droughts located within a long-term UDR, and the traditional method often faces significant gaps in the drought data series. A long-term UDR can, however, be treated as a combination of short-term Connected Drought Runs (CDR). This study therefore aims to systematically evaluate the UDR and CDR procedures for the joint probability analysis of drought duration and severity. For this purpose, rainfall data (1971-2013) from 24 rain gauges in the Lake Urmia basin, Iran, were used. Seven common univariate marginal distributions and seven types of bivariate copulas were examined. Compared with the traditional approach, the results demonstrated a significant comparative advantage of the new approach: selection of the correct copula function, more accurate estimation of the copula parameter, more realistic estimation of the joint/conditional probabilities of drought duration and severity, and a significant reduction in modeling uncertainty.

  3. Improving the Rank Precision of Population Health Measures for Small Areas with Longitudinal and Joint Outcome Models

    PubMed Central

    Athens, Jessica K.; Remington, Patrick L.; Gangnon, Ronald E.

    2015-01-01

    Objectives: The University of Wisconsin Population Health Institute has published the County Health Rankings since 2010. These rankings use population-based data to highlight health outcomes and the multiple determinants of these outcomes and to encourage in-depth health assessment for all United States counties. A significant methodological limitation, however, is the uncertainty of rank estimates, particularly for small counties. To address this challenge, we explore the use of longitudinal and pooled outcome data in hierarchical Bayesian models to generate county ranks with greater precision. Methods: In our models we used pooled outcome data for three measure groups: (1) poor physical and poor mental health days; (2) percent of births with low birth weight and fair or poor health prevalence; and (3) age-specific mortality rates for nine age groups. We used the fixed and random effects components of these models to generate posterior samples of rates for each measure. We also used time-series data in longitudinal random effects models for age-specific mortality. Based on the posterior samples from these models, we estimate ranks and rank quartiles for each measure, as well as the probability of a county ranking in its assigned quartile. Rank quartile probabilities for univariate, joint outcome, and/or longitudinal models were compared to assess improvements in rank precision. Results: The joint outcome model for poor physical and poor mental health days resulted in improved rank precision, as did the longitudinal model for age-specific mortality rates. Rank precision for low birth weight births and fair/poor health prevalence based on the univariate and joint outcome models were equivalent. Conclusion: Incorporating longitudinal or pooled outcome data may improve rank certainty, depending on characteristics of the measures selected. For measures with different determinants, joint modeling neither improved nor degraded rank precision. This approach suggests a simple way to use existing information to improve the precision of small-area measures of population health. PMID:26098858
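    The rank and rank-quartile computation described in the Methods can be sketched as follows, with stand-in posterior draws in place of the fitted hierarchical Bayesian models; the county count and sampling scheme are invented.

```python
# Sketch: ranks and rank-quartile probabilities from posterior samples of
# county-level rates (hypothetical draws, not fitted model output).
import numpy as np

rng = np.random.default_rng(1)
n_counties, n_draws = 72, 4000
# stand-in posterior samples of a health measure, one column per county
samples = rng.normal(loc=rng.normal(0, 1, n_counties), scale=0.5,
                     size=(n_draws, n_counties))

ranks = samples.argsort(axis=1).argsort(axis=1) + 1   # rank within each draw
point_rank = np.median(ranks, axis=0)                 # point estimate of rank
quartile = np.ceil(4 * point_rank / n_counties)       # assigned rank quartile

# probability that each county's rank falls in its assigned quartile
draw_quartile = np.ceil(4 * ranks / n_counties)
prob_in_quartile = (draw_quartile == quartile).mean(axis=0)
print(prob_in_quartile.round(2))
```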

  4. Joint search and sensor management for geosynchronous satellites

    NASA Astrophysics Data System (ADS)

    Zatezalo, A.; El-Fallah, A.; Mahler, R.; Mehra, R. K.; Pham, K.

    2008-04-01

    Joint search and sensor management for space situational awareness presents daunting scientific and practical challenges, as it requires simultaneously searching for new space objects and updating the catalog of current ones. We demonstrate a new approach to joint search and sensor management by utilizing the Posterior Expected Number of Targets (PENT) as the objective function, an observation model for a space-based EO/IR sensor, and a Probability Hypothesis Density Particle Filter (PHD-PF) tracker. Simulations and results using actual geosynchronous satellites are presented.

  5. Joint Segmentation and Deformable Registration of Brain Scans Guided by a Tumor Growth Model

    PubMed Central

    Gooya, Ali; Pohl, Kilian M.; Bilello, Michel; Biros, George; Davatzikos, Christos

    2011-01-01

    This paper presents an approach for joint segmentation and deformable registration of brain scans of glioma patients to a normal atlas. The proposed method is based on the Expectation Maximization (EM) algorithm that incorporates a glioma growth model for atlas seeding, a process which modifies the normal atlas into one with a tumor and edema. The modified atlas is registered into the patient space and utilized for the posterior probability estimation of various tissue labels. EM iteratively refines the estimates of the registration parameters, the posterior probabilities of tissue labels and the tumor growth model parameters. We have applied this approach to 10 glioma scans acquired with four Magnetic Resonance (MR) modalities (T1, T1-CE, T2 and FLAIR) and validated the result by comparing them to manual segmentations by clinical experts. The resulting segmentations look promising and quantitatively match well with the expert provided ground truth. PMID:21995070

  6. Joint segmentation and deformable registration of brain scans guided by a tumor growth model.

    PubMed

    Gooya, Ali; Pohl, Kilian M; Bilello, Michel; Biros, George; Davatzikos, Christos

    2011-01-01

    This paper presents an approach for joint segmentation and deformable registration of brain scans of glioma patients to a normal atlas. The proposed method is based on the Expectation Maximization (EM) algorithm that incorporates a glioma growth model for atlas seeding, a process which modifies the normal atlas into one with a tumor and edema. The modified atlas is registered into the patient space and utilized for the posterior probability estimation of various tissue labels. EM iteratively refines the estimates of the registration parameters, the posterior probabilities of tissue labels and the tumor growth model parameters. We have applied this approach to 10 glioma scans acquired with four Magnetic Resonance (MR) modalities (T1, T1-CE, T2 and FLAIR) and validated the result by comparing them to manual segmentations by clinical experts. The resulting segmentations look promising and quantitatively match well with the expert provided ground truth.

  7. Joint Analysis of Binomial and Continuous Traits with a Recursive Model: A Case Study Using Mortality and Litter Size of Pigs

    PubMed Central

    Varona, Luis; Sorensen, Daniel

    2014-01-01

    This work presents a model for the joint analysis of a binomial and a Gaussian trait using a recursive parametrization that leads to a computationally efficient implementation. The model is illustrated in an analysis of mortality and litter size in two breeds of Danish pigs, Landrace and Yorkshire. Available evidence suggests that mortality of piglets increased partly as a result of successful selection for total number of piglets born. In recent years there has been a need to decrease the incidence of mortality in pig-breeding programs. We report estimates of genetic variation at the level of the logit of the probability of mortality and quantify how it is affected by the size of the litter. Several models for mortality are considered and the best fits are obtained by postulating linear and cubic relationships between the logit of the probability of mortality and litter size, for Landrace and Yorkshire, respectively. An interpretation of how the presence of genetic variation affects the probability of mortality in the population is provided and we discuss and quantify the prospects of selecting for reduced mortality, without affecting litter size. PMID:24414548

  8. Discrete-time moment closure models for epidemic spreading in populations of interacting individuals.

    PubMed

    Frasca, Mattia; Sharkey, Kieran J

    2016-06-21

    Understanding the dynamics of spread of infectious diseases between individuals is essential for forecasting the evolution of an epidemic outbreak or for defining intervention policies. The problem is addressed by many approaches including stochastic and deterministic models formulated at diverse scales (individuals, populations) and different levels of detail. Here we consider discrete-time SIR (susceptible-infectious-removed) dynamics propagated on contact networks. We derive a novel set of 'discrete-time moment equations' for the probability of the system states at the level of individual nodes and pairs of nodes. These equations form a set which we close by introducing appropriate approximations of the joint probabilities appearing in them. For the example case of SIR processes, we formulate two types of model, one assuming statistical independence at the level of individuals and one at the level of pairs. From the pair-based model we then derive a model at the level of the population which captures the behavior of epidemics on homogeneous random networks. With respect to their continuous-time counterparts, the models include a larger number of possible transitions from one state to another and joint probabilities with a larger number of individuals. The approach is validated through numerical simulation over different network topologies. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
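    A minimal sketch of the individual-level closure for the discrete-time SIR case follows, assuming statistical independence between nodes so that joint probabilities factorize; the network and the transmission and recovery parameters are illustrative, and the paper's pair-level closure is not reproduced here.

```python
# Sketch: discrete-time, individual-based moment closure for SIR on a
# contact network, closing joint probabilities via P(S_i, I_j) ~ P(S_i)P(I_j).
# The network and parameters are illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(2)
n, tau, gamma, steps = 50, 0.2, 0.1, 60
A = (rng.random((n, n)) < 0.1).astype(float)
A = np.triu(A, 1); A = A + A.T             # symmetric random contact network

S = np.ones(n); I = np.zeros(n)
I[0], S[0] = 1.0, 0.0                      # node 0 initially infectious

for _ in range(steps):
    # independence closure: node i escapes infection from each neighbour j
    # independently, so the escape probability factorizes over j
    esc = np.prod(1.0 - tau * A * I[None, :], axis=1)
    S_new = S * esc
    I_new = I * (1.0 - gamma) + S * (1.0 - esc)
    S, I = S_new, I_new

print(1.0 - S.sum() / n)   # fraction of the population ever infected so far
```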

  9. Bayesian Estimation of the DINA Model with Gibbs Sampling

    ERIC Educational Resources Information Center

    Culpepper, Steven Andrew

    2015-01-01

    A Bayesian model formulation of the deterministic inputs, noisy "and" gate (DINA) model is presented. Gibbs sampling is employed to simulate from the joint posterior distribution of item guessing and slipping parameters, subject attribute parameters, and latent class probabilities. The procedure extends concepts in Béguin and Glas,…

  10. Correlations between polarisation states of W particles in the reaction e⁻e⁺ → W⁻W⁺ at LEP2 energies 189-209 GeV

    NASA Astrophysics Data System (ADS)

    Abdallah, J.; Abreu, P.; Adam, W.; Adzic, P.; Albrecht, T.; Alemany-Fernandez, R.; Allmendinger, T.; Allport, P. P.; Amaldi, U.; Amapane, N.; Amato, S.; Anashkin, E.; Andreazza, A.; Andringa, S.; Anjos, N.; Antilogus, P.; Apel, W.-D.; Arnoud, Y.; Ask, S.; Asman, B.; Augustin, J. E.; Augustinus, A.; Baillon, P.; Ballestrero, A.; Bambade, P.; Barbier, R.; Bardin, D.; Barker, G. J.; Baroncelli, A.; Battaglia, M.; Baubillier, M.; Becks, K.-H.; Begalli, M.; Behrmann, A.; Ben-Haim, E.; Benekos, N.; Benvenuti, A.; Berat, C.; Berggren, M.; Bertrand, D.; Besancon, M.; Besson, N.; Bloch, D.; Blom, M.; Bluj, M.; Bonesini, M.; Boonekamp, M.; Booth, P. S. L.; Borisov, G.; Botner, O.; Bouquet, B.; Bowcock, T. J. V.; Boyko, I.; Bracko, M.; Brenner, R.; Brodet, E.; Bruckman, P.; Brunet, J. M.; Buschbeck, B.; Buschmann, P.; Calvi, M.; Camporesi, T.; Canale, V.; Carena, F.; Castro, N.; Cavallo, F.; Chapkin, M.; Charpentier, Ph.; Checchia, P.; Chierici, R.; Chliapnikov, P.; Chudoba, J.; Chung, S. U.; Cieslik, K.; Collins, P.; Contri, R.; Cosme, G.; Cossutti, F.; Costa, M. J.; Crennell, D.; Cuevas, J.; D'Hondt, J.; da Silva, T.; da Silva, W.; Della Ricca, G.; de Angelis, A.; de Boer, W.; de Clercq, C.; de Lotto, B.; de Maria, N.; de Min, A.; de Paula, L.; di Ciaccio, L.; di Simone, A.; Doroba, K.; Drees, J.; Eigen, G.; Ekelof, T.; Ellert, M.; Elsing, M.; Espirito Santo, M. C.; Fanourakis, G.; Fassouliotis, D.; Feindt, M.; Fernandez, J.; Ferrer, A.; Ferro, F.; Flagmeyer, U.; Foeth, H.; Fokitis, E.; Fulda-Quenzer, F.; Fuster, J.; Gandelman, M.; Garcia, C.; Gavillet, Ph.; Gazis, E.; Gokieli, R.; Golob, B.; Gomez-Ceballos, G.; Goncalves, P.; Graziani, E.; Grosdidier, G.; Grzelak, K.; Guy, J.; Haag, C.; Hallgren, A.; Hamacher, K.; Hamilton, K.; Haug, S.; Hauler, F.; Hedberg, V.; Hennecke, M.; Hoffman, J.; Holmgren, S.-O.; Holt, P. J.; Houlden, M. A.; Jackson, J. N.; Jarlskog, G.; Jarry, P.; Jeans, D.; Johansson, E. K.; Jonsson, P.; Joram, C.; Jungermann, L.; Kapusta, F.; Katsanevas, S.; Katsoufis, E.; Kernel, G.; Kersevan, B. P.; Kerzel, U.; King, B. T.; Kjaer, N. J.; Kluit, P.; Kokkinias, P.; Kourkoumelis, C.; Kouznetsov, O.; Krumstein, Z.; Kucharczyk, M.; Lamsa, J.; Leder, G.; Ledroit, F.; Leinonen, L.; Leitner, R.; Lemonne, J.; Lepeltier, V.; Lesiak, T.; Liebig, W.; Liko, D.; Lipniacka, A.; Lopes, J. H.; Lopez, J. M.; Loukas, D.; Lutz, P.; Lyons, L.; MacNaughton, J.; Malek, A.; Maltezos, S.; Mandl, F.; Marco, J.; Marco, R.; Marechal, B.; Margoni, M.; Marin, J.-C.; Mariotti, C.; Markou, A.; Martinez-Rivero, C.; Masik, J.; Mastroyiannopoulos, N.; Matorras, F.; Matteuzzi, C.; Mazzucato, F.; Mazzucato, M.; McNulty, R.; Meroni, C.; Migliore, E.; Mitaroff, W.; Mjoernmark, U.; Moa, T.; Moch, M.; Moenig, K.; Monge, R.; Montenegro, J.; Moraes, D.; Moreno, S.; Morettini, P.; Mueller, U.; Muenich, K.; Mulders, M.; Mundim, L.; Murray, W.; Muryn, B.; Myatt, G.; Myklebust, T.; Nassiakou, M.; Navarria, F.; Nawrocki, K.; Nemecek, S.; Nicolaidou, R.; Nikolenko, M.; Oblakowska-Mucha, A.; Obraztsov, V.; Olshevski, A.; Onofre, A.; Orava, R.; Osterberg, K.; Ouraou, A.; Oyanguren, A.; Paganoni, M.; Paiano, S.; Palacios, J. P.; Palka, H.; Papadopoulou, Th. D.; Pape, L.; Parkes, C.; Parodi, F.; Parzefall, U.; Passeri, A.; Passon, O.; Peralta, L.; Perepelitsa, V.; Perrotta, A.; Petrolini, A.; Piedra, J.; Pieri, L.; Pierre, F.; Pimenta, M.; Piotto, E.; Podobnik, T.; Poireau, V.; Pol, M. 
E.; Polok, G.; Pozdniakov, V.; Pukhaeva, N.; Pullia, A.; Radojicic, D.; Rebecchi, P.; Rehn, J.; Reid, D.; Reinhardt, R.; Renton, P.; Richard, F.; Ridky, J.; Rivero, M.; Rodriguez, D.; Romero, A.; Ronchese, P.; Roudeau, P.; Rovelli, T.; Ruhlmann-Kleider, V.; Ryabtchikov, D.; Sadovsky, A.; Salmi, L.; Salt, J.; Sander, C.; Savoy-Navarro, A.; Schwickerath, U.; Sekulin, R.; Siebel, M.; Sisakian, A.; Smadja, G.; Smirnova, O.; Sokolov, A.; Sopczak, A.; Sosnowski, R.; Spassov, T.; Stanitzki, M.; Stocchi, A.; Strauss, J.; Stugu, B.; Szczekowski, M.; Szeptycka, M.; Szumlak, T.; Tabarelli, T.; Tegenfeldt, F.; Timmermans, J.; Tkatchev, L.; Tobin, M.; Todorovova, S.; Tome, B.; Tonazzo, A.; Tortosa, P.; Travnicek, P.; Treille, D.; Tristram, G.; Trochimczuk, M.; Troncon, C.; Turluer, M.-L.; Tyapkin, I. A.; Tyapkin, P.; Tzamarias, S.; Uvarov, V.; Valenti, G.; van Dam, P.; van Eldik, J.; van Remortel, N.; van Vulpen, I.; Vegni, G.; Veloso, F.; Venus, W.; Verdier, P.; Verzi, V.; Vilanova, D.; Vitale, L.; Vrba, V.; Wahlen, H.; Washbrook, A. J.; Weiser, C.; Wicke, D.; Wickens, J.; Wilkinson, G.; Winter, M.; Witek, M.; Yushchenko, O.; Zalewska, A.; Zalewski, P.; Zavrtanik, D.; Zhuravlov, V.; Zimin, N. I.; Zintchenko, A.; Zupan, M.

    2009-10-01

    In a study of the reaction e⁻e⁺ → W⁻W⁺ with the DELPHI detector, the probabilities of the two W particles occurring in the joint polarisation states transverse-transverse (TT), longitudinal-transverse plus transverse-longitudinal (LT) and longitudinal-longitudinal (LL) have been determined using the final states WW → ℓν q q̄ (ℓ = e, μ). The two-particle joint polarisation probabilities, i.e. the spin density matrix elements ρ_TT, ρ_LT, ρ_LL, are measured as functions of the W⁻ production angle, θ_W⁻, at an average reaction energy of 198.2 GeV. Averaged over all cos θ_W⁻, the following joint probabilities are obtained: ρ̄_TT = (67±8)%, ρ̄_LT = (30±8)%, ρ̄_LL = (3±7)%. These results are in agreement with the Standard Model predictions of 63.0%, 28.9% and 8.1%, respectively. The related polarisation cross-sections σ_TT, σ_LT and σ_LL are also presented.

  11. Impact of communities, health, and emotional-related factors on smoking use: comparison of joint modeling of mean and dispersion and Bayes' hierarchical models on add health survey.

    PubMed

    Pu, Jie; Fang, Di; Wilson, Jeffrey R

    2017-02-03

    The analysis of correlated binary data is commonly addressed through the use of conditional models with random effects included in the systematic component, as opposed to generalized estimating equations (GEE) models that address the random component. Since the joint distribution of the observations is usually unknown, the conditional distribution is a natural approach. Our objective was to compare the fit of different binary models for correlated data on tobacco use. We advocate that the joint modeling of the mean and dispersion may at times be just as adequate. We assessed the ability of these models to account for the intraclass correlation. In so doing, we concentrated on fitting logistic regression models to address smoking behaviors. Frequentist and Bayes' hierarchical models were used to predict conditional probabilities, and the joint modeling (GLM and GAM) models were used to predict marginal probabilities. These models were fitted to National Longitudinal Study of Adolescent to Adult Health (Add Health) data on tobacco use. We found that people were less likely to smoke if they had higher income, had a high school or higher education, and were religious. Individuals were more likely to smoke if they had abused drugs or alcohol, spent more time on TV and video games, or had been arrested. Moreover, individuals who drank alcohol early in life were more likely to be regular smokers, and children who experienced mistreatment from their parents were more likely to use tobacco regularly. The joint modeling of the mean and dispersion offered a flexible and meaningful method of addressing the intraclass correlation: it requires neither identifying random effects nor distinguishing one level of the hierarchy from another, and once the significant random effects are identified, it can yield results similar to the random coefficient models. We found that the set of marginal models accounting for extravariation through the additional dispersion submodel produced similar results with regard to inferences and predictions. Moreover, both marginal and conditional models demonstrated similar predictive power.

  12. Joint genome-wide prediction in several populations accounting for randomness of genotypes: A hierarchical Bayes approach. I: Multivariate Gaussian priors for marker effects and derivation of the joint probability mass function of genotypes.

    PubMed

    Martínez, Carlos Alberto; Khare, Kshitij; Banerjee, Arunava; Elzo, Mauricio A

    2017-03-21

    It is important to consider heterogeneity of marker effects and allelic frequencies in across-population genome-wide prediction studies. Moreover, all regression models used in genome-wide prediction overlook randomness of genotypes. In this study, a family of hierarchical Bayesian models was developed to perform across-population genome-wide prediction, modeling genotypes as random variables and allowing population-specific effects for each marker. Models shared a common structure and differed in the priors used and the assumption about residual variances (homogeneous or heterogeneous). Randomness of genotypes was accounted for by deriving the joint probability mass function of marker genotypes conditional on allelic frequencies and pedigree information. As a consequence, these models incorporated kinship and genotypic information that not only accounted for heterogeneity of allelic frequencies, but also permitted the inclusion of individuals with missing genotypes at some or all loci without the need for prior imputation. This was possible because the non-observed fraction of the design matrix was treated as an unknown model parameter. For each model, a simpler version ignoring population structure, but still accounting for randomness of genotypes, was proposed. Implementation of these models and computation of some criteria for model comparison were illustrated using two simulated datasets. Theoretical and computational issues along with possible applications, extensions and refinements were discussed. Some features of the models developed in this study make them promising for genome-wide prediction; the use of information contained in the probability distribution of genotypes is perhaps the most appealing. Further studies are needed to assess the performance of the models proposed here and to compare them with conventional models used in genome-wide prediction. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. An Integrated Model of Application, Admission, Enrollment, and Financial Aid

    ERIC Educational Resources Information Center

    DesJardins, Stephen L.; Ahlburg, Dennis A.; McCall, Brian Patrick

    2006-01-01

    We jointly model the application, admission, financial aid determination, and enrollment decision process. We find that expectations of admission affect application probabilities, financial aid expectations affect enrollment and application behavior, and deviations from aid expectations are strongly related to enrollment. We also conduct…

  14. The Efficacy of Using Diagrams When Solving Probability Word Problems in College

    ERIC Educational Resources Information Center

    Beitzel, Brian D.; Staley, Richard K.

    2015-01-01

    Previous experiments have shown a deleterious effect of visual representations on college students' ability to solve total- and joint-probability word problems. The present experiments used conditional-probability problems, known to be more difficult than total- and joint-probability problems. The diagram group was instructed in how to use tree…

  15. On the quantification and efficient propagation of imprecise probabilities resulting from small datasets

    NASA Astrophysics Data System (ADS)

    Zhang, Jiaxin; Shields, Michael D.

    2018-01-01

    This paper addresses the problem of uncertainty quantification and propagation when data for characterizing probability distributions are scarce. We propose a methodology wherein the full uncertainty associated with probability model form and parameter estimation is retained and efficiently propagated. This is achieved by applying the information-theoretic multimodel inference method to identify plausible candidate probability densities and the associated probabilities that each is the best model in the Kullback-Leibler sense. The joint parameter densities for each plausible model are then estimated using Bayes' rule. We then propagate this full set of probability models by estimating an optimal importance sampling density that is representative of all plausible models, propagating this density, and reweighting the samples according to each of the candidate probability models. This is in contrast with conventional methods that try to identify a single probability model that encapsulates the full uncertainty caused by lack of data and consequently underestimate uncertainty. The result is a complete probabilistic description of both aleatory and epistemic uncertainty achieved with several orders of magnitude reduction in computational cost. It is shown how the model can be updated to adaptively accommodate added data and added candidate probability models. The method is applied to uncertainty analysis of plate buckling strength, where it is demonstrated how dataset size affects the confidence (or lack thereof) we can place in statistical estimates of response when data are lacking.
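    The multimodel step can be sketched as follows, using AIC weights as a stand-in for the information-theoretic model probabilities and a weight-proportional mixture as the importance density; the dataset, candidate families, and weighting details are assumptions, not the paper's exact procedure.

```python
# Sketch: fit candidate distributions to scarce data, weight them by AIC
# (a Kullback-Leibler-based criterion), and sample from the weighted mixture
# as an importance density. Data and candidate set are synthetic assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
data = rng.weibull(2.0, size=25) * 10.0          # small, synthetic dataset

candidates = {}
for name, dist in [("gamma", stats.gamma), ("lognorm", stats.lognorm),
                   ("weibull", stats.weibull_min)]:
    params = dist.fit(data, floc=0)
    ll = dist.logpdf(data, *params).sum()
    k = len(params)
    candidates[name] = (dist, params, 2 * k - 2 * ll)   # AIC

aics = np.array([v[2] for v in candidates.values()])
w = np.exp(-0.5 * (aics - aics.min()))
w /= w.sum()                                     # model probabilities (AIC weights)

# mixture importance density: draw from each model in proportion to its weight
counts = rng.multinomial(10_000, w)
samples = np.concatenate([dist.rvs(*params, size=c, random_state=rng)
                          for (dist, params, _), c in zip(candidates.values(), counts)])
print(dict(zip(candidates, w.round(3))), samples.mean())
```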

  16. Modeling Finite-Time Failure Probabilities in Risk Analysis Applications.

    PubMed

    Dimitrova, Dimitrina S; Kaishev, Vladimir K; Zhao, Shouqi

    2015-10-01

    In this article, we introduce a framework for analyzing the risk of systems failure based on estimating the failure probability. The latter is defined as the probability that a certain risk process, characterizing the operations of a system, reaches a possibly time-dependent critical risk level within a finite-time interval. Under general assumptions, we define two dually connected models for the risk process and derive explicit expressions for the failure probability and also the joint probability of the time of the occurrence of failure and the excess of the risk process over the risk level. We illustrate how these probabilistic models and results can be successfully applied in several important areas of risk analysis, among which are systems reliability, inventory management, flood control via dam management, infectious disease spread, and financial insolvency. Numerical illustrations are also presented. © 2015 Society for Risk Analysis.
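    A crude Monte Carlo stand-in for the failure probability defined above (the paper derives explicit expressions instead) might look like this, with an invented drifted-random-walk risk process and a linear time-dependent critical level:

```python
# Monte Carlo sketch of a finite-time failure probability: the chance that a
# risk process crosses a time-dependent critical level within the interval.
# The process and the level are illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(4)
n_paths, n_steps, dt = 20_000, 250, 0.02
t = np.arange(1, n_steps + 1) * dt
level = 2.0 + 0.5 * t                        # time-dependent critical risk level

increments = 0.3 * dt + 0.8 * np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
paths = increments.cumsum(axis=1)            # risk process sample paths

failed = (paths >= level).any(axis=1)        # crossed the level within [0, T]?
print(failed.mean())                         # estimate of the failure probability
```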

  17. Bayesian Finite Mixtures for Nonlinear Modeling of Educational Data.

    ERIC Educational Resources Information Center

    Tirri, Henry; And Others

    A Bayesian approach for finding latent classes in data is discussed. The approach uses finite mixture models to describe the underlying structure in the data and demonstrate that the possibility of using full joint probability models raises interesting new prospects for exploratory data analysis. The concepts and methods discussed are illustrated…

  18. Comment on "constructing quantum games from nonfactorizable joint probabilities".

    PubMed

    Frąckiewicz, Piotr

    2013-09-01

    In the paper [Phys. Rev. E 76, 061122 (2007)], the authors presented a way of playing 2 × 2 games so that players were able to exploit nonfactorizable joint probabilities respecting the nonsignaling principle (i.e., relativistic causality). We are going to prove, however, that the scheme does not generalize the games studied in the commented paper. Moreover, it allows the players to obtain nonclassical results even if the factorizable joint probabilities are used.

  19. Clustered multistate models with observation level random effects, mover-stayer effects and dynamic covariates: modelling transition intensities and sojourn times in a study of psoriatic arthritis.

    PubMed

    Yiu, Sean; Farewell, Vernon T; Tom, Brian D M

    2018-02-01

    In psoriatic arthritis, it is important to understand the joint activity (represented by swelling and pain) and damage processes because both are related to severe physical disability. The paper aims to provide a comprehensive investigation into both processes occurring over time, in particular their relationship, by specifying a joint multistate model at the individual hand joint level, which also accounts for many of their important features. As there are multiple hand joints, such an analysis will be based on the use of clustered multistate models. Here we consider an observation level random-effects structure with dynamic covariates and allow for the possibility that a subpopulation of patients is at minimal risk of damage. Such an analysis is found to provide further understanding of the activity-damage relationship beyond that provided by previous analyses. Consideration is also given to the modelling of mean sojourn times and jump probabilities. In particular, a novel model parameterization which allows easily interpretable covariate effects to act on these quantities is proposed.

  20. Spacing distribution functions for the one-dimensional point-island model with irreversible attachment

    NASA Astrophysics Data System (ADS)

    González, Diego Luis; Pimpinelli, Alberto; Einstein, T. L.

    2011-07-01

    We study the configurational structure of the point-island model for epitaxial growth in one dimension. In particular, we calculate the island gap and capture zone distributions. Our model is based on an approximate description of nucleation inside the gaps. Nucleation is described by the joint probability density p_n^{XY}(x, y), which represents the probability density for nucleation at position x within a gap of size y. Our proposed functional form for p_n^{XY}(x, y) provides an excellent description of the statistical behavior of the system. We compare our analytical model with extensive numerical simulations. Our model retains the most relevant physical properties of the system.

  1. Technology-enhanced Interactive Teaching of Marginal, Joint and Conditional Probabilities: The Special Case of Bivariate Normal Distribution

    PubMed Central

    Dinov, Ivo D.; Kamino, Scott; Bhakhrani, Bilal; Christou, Nicolas

    2014-01-01

    Summary: Data analysis requires subtle probability reasoning to answer questions like What is the chance of event A occurring, given that event B was observed? This generic question arises in discussions of many intriguing scientific questions such as What is the probability that an adolescent weighs between 120 and 140 pounds given that they are of average height? and What is the probability of (monetary) inflation exceeding 4% and housing price index below 110? To address such problems, learning some applied, theoretical or cross-disciplinary probability concepts is necessary. Teaching such courses can be improved by utilizing modern information technology resources. Students’ understanding of multivariate distributions, conditional probabilities, correlation and causation can be significantly strengthened by employing interactive web-based science educational resources. Independent of the type of a probability course (e.g. majors, minors or service probability course, rigorous measure-theoretic, applied or statistics course) student motivation, learning experiences and knowledge retention may be enhanced by blending modern technological tools within the classical conceptual pedagogical models. We have designed, implemented and disseminated a portable open-source web-application for teaching multivariate distributions, marginal, joint and conditional probabilities using the special case of bivariate Normal distribution. A real adolescent height and weight dataset is used to demonstrate the classroom utilization of the new web-application to address problems of parameter estimation, univariate and multivariate inference. PMID:25419016

  2. Technology-enhanced Interactive Teaching of Marginal, Joint and Conditional Probabilities: The Special Case of Bivariate Normal Distribution.

    PubMed

    Dinov, Ivo D; Kamino, Scott; Bhakhrani, Bilal; Christou, Nicolas

    2013-01-01

    Data analysis requires subtle probability reasoning to answer questions like What is the chance of event A occurring, given that event B was observed? This generic question arises in discussions of many intriguing scientific questions such as What is the probability that an adolescent weighs between 120 and 140 pounds given that they are of average height? and What is the probability of (monetary) inflation exceeding 4% and housing price index below 110? To address such problems, learning some applied, theoretical or cross-disciplinary probability concepts is necessary. Teaching such courses can be improved by utilizing modern information technology resources. Students' understanding of multivariate distributions, conditional probabilities, correlation and causation can be significantly strengthened by employing interactive web-based science educational resources. Independent of the type of a probability course (e.g. majors, minors or service probability course, rigorous measure-theoretic, applied or statistics course) student motivation, learning experiences and knowledge retention may be enhanced by blending modern technological tools within the classical conceptual pedagogical models. We have designed, implemented and disseminated a portable open-source web-application for teaching multivariate distributions, marginal, joint and conditional probabilities using the special case of bivariate Normal distribution. A real adolescent height and weight dataset is used to demonstrate the classroom utilization of the new web-application to address problems of parameter estimation, univariate and multivariate inference.
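    The weight-given-height question quoted in the abstract can be worked through with the standard conditional formulas for a bivariate normal; the parameter values below are invented, not those of the adolescent dataset used by the web-application.

```python
# Worked version of the abstract's example question:
#   P(120 < weight < 140 | height = its mean) under a bivariate normal model.
# All parameter values are invented for illustration.
from scipy.stats import norm

mu_h, sd_h = 64.0, 3.5      # height (inches), assumed
mu_w, sd_w = 130.0, 20.0    # weight (pounds), assumed
rho = 0.5                   # height-weight correlation, assumed

h = mu_h                    # condition on average height
# conditional law of W given H = h for a bivariate normal:
cond_mean = mu_w + rho * sd_w * (h - mu_h) / sd_h
cond_sd = sd_w * (1.0 - rho**2) ** 0.5

p = norm.cdf(140, cond_mean, cond_sd) - norm.cdf(120, cond_mean, cond_sd)
print(p)                    # P(120 < W < 140 | H = 64)
```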

  3. Probabilistic inversion of expert assessments to inform projections about Antarctic ice sheet responses.

    PubMed

    Fuller, Robert William; Wong, Tony E; Keller, Klaus

    2017-01-01

    The response of the Antarctic ice sheet (AIS) to changing global temperatures is a key component of sea-level projections. Current projections of the AIS contribution to sea-level changes are deeply uncertain. This deep uncertainty stems, in part, from (i) the inability of current models to fully resolve key processes and scales, (ii) the relatively sparse available data, and (iii) divergent expert assessments. One promising approach to characterizing the deep uncertainty stemming from divergent expert assessments is to combine expert assessments, observations, and simple models by coupling probabilistic inversion and Bayesian inversion. Here, we present a proof-of-concept study that uses probabilistic inversion to fuse a simple AIS model and diverse expert assessments. We demonstrate the ability of probabilistic inversion to infer joint prior probability distributions of model parameters that are consistent with expert assessments. We then confront these inferred expert priors with instrumental and paleoclimatic observational data in a Bayesian inversion. These additional constraints yield tighter hindcasts and projections. We use this approach to quantify how the deep uncertainty surrounding expert assessments affects the joint probability distributions of model parameters and future projections.

  4. Application of Archimedean copulas to the analysis of drought decadal variation in China

    NASA Astrophysics Data System (ADS)

    Zuo, Dongdong; Feng, Guolin; Zhang, Zengping; Hou, Wei

    2017-12-01

    Based on daily precipitation data collected from 1171 stations in China during 1961-2015, the monthly standardized precipitation index was derived and used to extract two major drought characteristics, drought duration and severity. Next, a bivariate joint model was established based on the marginal distributions of the two variables and Archimedean copula functions. The joint probability and return period were calculated to analyze the drought characteristics and their decadal variation. According to the fit analysis, the Gumbel-Hougaard copula provided the best fit to the observed data. Based on four drought duration classifications and four severity classifications, the drought events were divided into 16 drought types according to the different combinations of duration and severity classifications, and the probability and return period were analyzed for each type. The results showed that the occurrence probability of six common drought types (0 < D ≤ 1 and 0.5 < S ≤ 1, 1 < D ≤ 3 and 0.5 < S ≤ 1, 1 < D ≤ 3 and 1 < S ≤ 1.5, 1 < D ≤ 3 and 1.5 < S ≤ 2, 1 < D ≤ 3 and 2 < S, and 3 < D ≤ 6 and 2 < S) accounted for 76% of the total probability of all types. Moreover, due to their greater variation, the drought types with D ≥ 6 and S ≥ 2 were particularly notable. Analysis of the joint probability in different decades indicated that the location of the drought center showed a distinctive staged pattern, cycling from north to northeast to southwest during 1961-2015, with southwest, north, and northeast China carrying a higher drought risk. In addition, the drought situation in southwest China deserves attention, because the joint probability values, the return period, and the analysis of trends in drought duration and severity all indicate a considerable risk in recent years.

  5. Multivariate hydrological frequency analysis for extreme events using Archimedean copula. Case study: Lower Tunjuelo River basin (Colombia)

    NASA Astrophysics Data System (ADS)

    Gómez, Wilmar

    2017-04-01

    By analyzing the spatial and temporal variability of extreme precipitation events, we can prevent or reduce the associated threat and risk. Many water resources projects require joint probability distributions of random variables such as precipitation intensity and duration, which are generally not independent of each other. The problem of defining a probability model for observations of several dependent variables is greatly simplified by expressing the joint distribution in terms of the marginals via copulas. This document presents a general framework for bivariate and multivariate frequency analysis of extreme hydroclimatological events, such as severe storms, using Archimedean copulas. The analysis was conducted for precipitation events in the lower Tunjuelo River basin in Colombia. The results show that, for a joint study of intensity-duration-frequency, IDF curves can be obtained through copulas, establishing more accurate and reliable information on design storms and associated risks. The use of copulas greatly simplifies the study of multivariate distributions and introduces the concept of the joint return period, which properly represents the needs of hydrological design in frequency analysis.

  6. Evaluation of joint probability density function models for turbulent nonpremixed combustion with complex chemistry

    NASA Technical Reports Server (NTRS)

    Smith, N. S. A.; Frolov, S. M.; Bowman, C. T.

    1996-01-01

    Two types of mixing sub-models are evaluated in connection with a joint-scalar probability density function method for turbulent nonpremixed combustion. Model calculations are made and compared to simulation results for homogeneously distributed methane-air reaction zones mixing and reacting in decaying turbulence within a two-dimensional enclosed domain. The comparison is arranged to ensure that both the simulation and model calculations a) make use of exactly the same chemical mechanism, b) do not involve non-unity Lewis number transport of species, and c) are free from radiation loss. The modified Curl mixing sub-model was found to provide superior predictive accuracy over the simple relaxation-to-mean submodel in the case studied. Accuracy to within 10-20% was found for global means of major species and temperature; however, nitric oxide prediction accuracy was lower and highly dependent on the choice of mixing sub-model. Both mixing submodels were found to produce non-physical mixing behavior for mixture fractions removed from the immediate reaction zone. A suggestion for a further modified Curl mixing sub-model is made in connection with earlier work done in the field.

  7. Role of beach morphology in wave overtopping hazard assessment

    NASA Astrophysics Data System (ADS)

    Phillips, Benjamin; Brown, Jennifer; Bidlot, Jean-Raymond; Plater, Andrew

    2017-04-01

    Understanding the role of beach morphology in controlling wave overtopping volume will further minimise uncertainties in flood risk assessments at coastal locations defended by engineered structures worldwide. XBeach is used to model wave overtopping volume for a 1:200 yr joint probability distribution of waves and water levels with measured, pre- and post-storm beach profiles. The simulation with measured bathymetry is repeated with and without morphological evolution enabled during the modelled storm event. This research assesses the role of morphology in controlling wave overtopping volumes for hazardous events that meet the typical design level of coastal defence structures. Results show that disabling storm-driven morphology under-represents modelled wave overtopping volumes by up to 39% under high Hs conditions, and has a greater impact on the wave overtopping rate than the variability applied within the boundary conditions, due to the range of wave-water level combinations that meet the 1:200 yr joint probability criterion. Accounting for morphology in flood modelling is therefore critical for accurately predicting wave overtopping volumes and the resulting flood hazard, and for assessing economic losses.

  8. Slant path rain attenuation and path diversity statistics obtained through radar modeling of rain structure

    NASA Technical Reports Server (NTRS)

    Goldhirsh, J.

    1984-01-01

    Single- and joint-terminal slant path attenuation statistics at frequencies of 28.56 and 19.04 GHz have been derived, employing a radar data base obtained over a three-year period at Wallops Island, VA. Statistics were independently obtained for path elevation angles of 20, 45, and 90 deg for purposes of examining how the elevation angle influences both single-terminal and joint probability distributions. Both diversity gains and the dependence of the autocorrelation function on site spacing and elevation angle were determined employing the radar modeling results. Comparisons with other investigators are presented. An independent path elevation angle prediction technique was developed and demonstrated to fit well with the radar-derived single- and joint-terminal cumulative fade distributions at various elevation angles.

  9. Encounter risk analysis of rainfall and reference crop evapotranspiration in the irrigation district

    NASA Astrophysics Data System (ADS)

    Zhang, Jinping; Lin, Xiaomin; Zhao, Yong; Hong, Yang

    2017-09-01

    Rainfall and reference crop evapotranspiration are random but mutually dependent variables in an irrigation district, and their encounter situation determines water shortage risks in the context of natural water supply and demand. In reality, however, rainfall and reference crop evapotranspiration may have different marginal distributions, and their relationship is nonlinear. In this study, based on the annual rainfall and reference crop evapotranspiration data series from 1970 to 2013 in the Luhun irrigation district of China, the joint probability distribution of rainfall and reference crop evapotranspiration is developed with the Frank copula function. Using the joint probability distribution, the synchronous-asynchronous encounter risk, conditional joint probability, and conditional return period of different combinations of rainfall and reference crop evapotranspiration are analyzed. The results show that the copula-based joint probability distributions of rainfall and reference crop evapotranspiration are reasonable. The asynchronous encounter probability of rainfall and reference crop evapotranspiration is greater than their synchronous encounter probability, so the water shortage risk associated with meteorological drought (i.e., rainfall variability) is the more likely to occur. Compared with other states, the conditional joint probability is higher and the conditional return period lower under either low rainfall or high reference crop evapotranspiration. For a specified high reference crop evapotranspiration with a certain frequency, the encounter risk of low rainfall and high reference crop evapotranspiration increases as the frequency decreases; for a specified low rainfall with a certain frequency, that encounter risk decreases as the frequency decreases. When either the high reference crop evapotranspiration exceeds a certain frequency or the low rainfall does not exceed a certain frequency, the higher conditional joint probability and lower conditional return period of the various combinations are likely to cause a water shortage, although not a severe one.
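    A sketch of the synchronous-asynchronous encounter computation with a Frank copula follows; the dependence parameter and class boundaries are invented, and the published study's fitted values will differ.

```python
# Sketch: probability that rainfall and reference crop evapotranspiration
# fall in the same (low/normal/high) class under a Frank copula, via
# copula rectangle probabilities. Theta and cut points are assumptions.
import numpy as np

def frank(u, v, theta):
    """Frank copula CDF C(u, v) for theta != 0."""
    return -np.log1p(np.expm1(-theta * u) * np.expm1(-theta * v)
                     / np.expm1(-theta)) / theta

theta = -2.0                      # negative dependence, assumed
cuts = [0.0, 0.375, 0.625, 1.0]   # low / normal / high class boundaries

# P(rainfall in class i, ET0 in class j) as a copula rectangle probability
P = np.empty((3, 3))
for i in range(3):
    for j in range(3):
        u1, u2, v1, v2 = cuts[i], cuts[i + 1], cuts[j], cuts[j + 1]
        P[i, j] = (frank(u2, v2, theta) - frank(u1, v2, theta)
                   - frank(u2, v1, theta) + frank(u1, v1, theta))

sync = np.trace(P)                # synchronous encounter probability
print(sync, 1 - sync)             # vs. asynchronous encounter probability
```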

  10. Probabilistic inversion of expert assessments to inform projections about Antarctic ice sheet responses

    PubMed Central

    Wong, Tony E.; Keller, Klaus

    2017-01-01

    The response of the Antarctic ice sheet (AIS) to changing global temperatures is a key component of sea-level projections. Current projections of the AIS contribution to sea-level changes are deeply uncertain. This deep uncertainty stems, in part, from (i) the inability of current models to fully resolve key processes and scales, (ii) the relatively sparse available data, and (iii) divergent expert assessments. One promising approach to characterizing the deep uncertainty stemming from divergent expert assessments is to combine expert assessments, observations, and simple models by coupling probabilistic inversion and Bayesian inversion. Here, we present a proof-of-concept study that uses probabilistic inversion to fuse a simple AIS model and diverse expert assessments. We demonstrate the ability of probabilistic inversion to infer joint prior probability distributions of model parameters that are consistent with expert assessments. We then confront these inferred expert priors with instrumental and paleoclimatic observational data in a Bayesian inversion. These additional constraints yield tighter hindcasts and projections. We use this approach to quantify how the deep uncertainty surrounding expert assessments affects the joint probability distributions of model parameters and future projections. PMID:29287095

  11. Optimized Vertex Method and Hybrid Reliability

    NASA Technical Reports Server (NTRS)

    Smith, Steven A.; Krishnamurthy, T.; Mason, B. H.

    2002-01-01

    A method of calculating the fuzzy response of a system is presented. This method, called the Optimized Vertex Method (OVM), is based upon the vertex method but requires considerably fewer function evaluations. The method is demonstrated by calculating the response membership function of strain-energy release rate for a bonded joint with a crack. The possibility of failure of the bonded joint was determined over a range of loads. After completing the possibilistic analysis, the possibilistic (fuzzy) membership functions were transformed to probability density functions and the probability of failure of the bonded joint was calculated. This approach is called a possibility-based hybrid reliability assessment. The possibility and probability of failure are presented and compared to a Monte Carlo Simulation (MCS) of the bonded joint.

  12. Modeling of turbulent supersonic H2-air combustion with an improved joint beta PDF

    NASA Technical Reports Server (NTRS)

    Baurle, R. A.; Hassan, H. A.

    1991-01-01

    Attempts at modeling recent experiments of Cheng et al. indicated that discrepancies between theory and experiment can be a result of the form of the assumed probability density function (PDF) and/or the turbulence model employed. Improvements in both the form of the assumed PDF and the turbulence model are presented. The results are again compared with measurements. Initial comparisons are encouraging.

  13. Simultaneous dense coding affected by fluctuating massless scalar field

    NASA Astrophysics Data System (ADS)

    Huang, Zhiming; Ye, Yiyong; Luo, Darong

    2018-04-01

    In this paper, we investigate the simultaneous dense coding (SDC) protocol affected by a fluctuating massless scalar field. The noisy model of the SDC protocol is constructed, and the master equation that governs the SDC evolution is deduced. The success probabilities of the SDC protocol are discussed for different locking operators under the influence of vacuum fluctuations. We find that the joint success probability is independent of the locking operator, but the other success probabilities are not. For the quantum Fourier transform and double controlled-NOT operators, the success probabilities drop with increasing two-atom distance, but this is not the case for the SWAP operator. Unlike with the SWAP operator, the success probabilities of Bob and Charlie differ. For different noisy interval values, different locking operators have different robustness to noise.

  14. Analytical approach to an integrate-and-fire model with spike-triggered adaptation

    NASA Astrophysics Data System (ADS)

    Schwalger, Tilo; Lindner, Benjamin

    2015-12-01

    The calculation of the steady-state probability density for multidimensional stochastic systems that do not obey detailed balance is a difficult problem. Here we present the analytical derivation of the stationary joint and various marginal probability densities for a stochastic neuron model with adaptation current. Our approach assumes weak noise but is valid for arbitrary adaptation strength and time scale. The theory predicts several effects of adaptation on the statistics of the membrane potential of a tonically firing neuron: (i) a membrane potential distribution with a convex shape, (ii) a strongly increased probability of hyperpolarized membrane potentials induced by strong and fast adaptation, and (iii) a maximized variability associated with the adaptation current at a finite adaptation time scale.
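    The stationary joint density that the theory predicts can be checked numerically with a sketch like the following, which simulates an adaptive leaky integrate-and-fire neuron under weak noise and histograms the pair (v, a); all parameter values are illustrative.

```python
# Simulation sketch matching the setup above: a leaky integrate-and-fire
# neuron with a spike-triggered adaptation current and weak noise; the
# long-run histogram of (v, a) approximates the stationary joint density.
# All parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(5)
dt, steps = 1e-3, 200_000
mu, D = 1.5, 0.01               # suprathreshold drive, weak noise intensity
tau_a, delta_a = 0.5, 0.3       # adaptation time scale and spike-triggered kick

v = np.zeros(steps)
a = np.zeros(steps)
for i in range(steps - 1):
    v[i + 1] = (v[i] + dt * (mu - v[i] - a[i])
                + np.sqrt(2 * D * dt) * rng.standard_normal())
    a[i + 1] = a[i] - dt * a[i] / tau_a
    if v[i + 1] >= 1.0:          # threshold crossing: reset and adapt
        v[i + 1] = 0.0
        a[i + 1] += delta_a

# 2D histogram as an estimate of the stationary joint density of (v, a)
joint, v_edges, a_edges = np.histogram2d(v, a, bins=60, density=True)
print(joint.max())
```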

  15. RADC Multi-Dimensional Signal-Processing Research Program.

    DTIC Science & Technology

    1980-09-30

    Report excerpt (only fragments of the text are recoverable): the image is modeled as a spatial linear filter driven by white noise; if the probability density function of the white noise is known, such noise-driven linear filters permit development of the joint probability density function, or likelihood function, for the image. The report also covers iterative signal reconstruction, methods of accelerating convergence, and an application to image deblurring.

  16. TIMS/ORSA (Operations Research Society of America) Special Interest Conference on Applied Probability; Statistical and Computational Problems in Probability Modeling (3rd) Held at Williamsburg, Virginia on 7-9 January 1985.

    DTIC Science & Technology

    1985-01-01

    Conference program excerpt (only fragments are recoverable): contributions include a cancer model presented by Marchetti (Dipartimento di Matematica, Università "La Sapienza", Rome; joint with Davis Covell), Marcel F. Neuts's "Profile Curves of Queues", and I. V. Basawa's "Statistical Forecasting for Stochastic Processes".

  17. Real-time individual predictions of prostate cancer recurrence using joint models

    PubMed Central

    Taylor, Jeremy M. G.; Park, Yongseok; Ankerst, Donna P.; Proust-Lima, Cecile; Williams, Scott; Kestin, Larry; Bae, Kyoungwha; Pickles, Tom; Sandler, Howard

    2012-01-01

    Summary: Patients who were previously treated for prostate cancer with radiation therapy are monitored at regular intervals using a laboratory test called Prostate Specific Antigen (PSA). If the value of the PSA test starts to rise, this is an indication that the prostate cancer is more likely to recur, and the patient may wish to initiate new treatments. Such patients could be helped in making medical decisions by an accurate estimate of the probability of recurrence of the cancer in the next few years. In this paper, we describe the methodology for giving the probability of recurrence for a new patient, as implemented on a web-based calculator. The methods use a joint longitudinal survival model. The model is developed on a training dataset of 2,386 patients and tested on a dataset of 846 patients. Bayesian estimation methods are used with one Markov chain Monte Carlo (MCMC) algorithm developed for estimation of the parameters from the training dataset and a second quick MCMC developed for prediction of the risk of recurrence that uses the longitudinal PSA measures from a new patient. PMID:23379600

  18. A Novel Wireless Power Transfer-Based Weighed Clustering Cooperative Spectrum Sensing Method for Cognitive Sensor Networks.

    PubMed

    Liu, Xin

    2015-10-30

    In a cognitive sensor network (CSN), the waste of sensing time and energy is a challenge for cooperative spectrum sensing when the number of cooperative cognitive nodes (CNs) becomes very large. In this paper, a novel wireless power transfer (WPT)-based weighed clustering cooperative spectrum sensing model is proposed, which divides all the CNs into several clusters, selects the most favorable CNs as the cluster heads, and allows the common CNs to transfer the received radio frequency (RF) energy of the primary node (PN) to the cluster heads, in order to supply the electrical energy needed for sensing and cooperation. A joint resource optimization is formulated to maximize the spectrum access probability of the CSN by jointly allocating the sensing time and the clustering number. According to the resource optimization results, a clustering algorithm is proposed. The simulation results show that, compared with the traditional model, the cluster heads of the proposed model can achieve more transmission power, and that there exist an optimal sensing time and clustering number that maximize the spectrum access probability.

  19. Bayesian informative dropout model for longitudinal binary data with random effects using conditional and joint modeling approaches.

    PubMed

    Chan, Jennifer S K

    2016-05-01

    Dropouts are common in longitudinal study. If the dropout probability depends on the missing observations at or after dropout, this type of dropout is called informative (or nonignorable) dropout (ID). Failure to accommodate such dropout mechanism into the model will bias the parameter estimates. We propose a conditional autoregressive model for longitudinal binary data with an ID model such that the probabilities of positive outcomes as well as the drop-out indicator in each occasion are logit linear in some covariates and outcomes. This model adopting a marginal model for outcomes and a conditional model for dropouts is called a selection model. To allow for the heterogeneity and clustering effects, the outcome model is extended to incorporate mixture and random effects. Lastly, the model is further extended to a novel model that models the outcome and dropout jointly such that their dependency is formulated through an odds ratio function. Parameters are estimated by a Bayesian approach implemented using the user-friendly Bayesian software WinBUGS. A methadone clinic dataset is analyzed to illustrate the proposed models. Result shows that the treatment time effect is still significant but weaker after allowing for an ID process in the data. Finally the effect of drop-out on parameter estimates is evaluated through simulation studies. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Lithostratigraphic interpretation from joint analysis of seismic tomography and magnetotelluric resistivity models using self-organizing map techniques

    NASA Astrophysics Data System (ADS)

    Bauer, K.; Muñoz, G.; Moeck, I.

    2012-12-01

    The combined interpretation of different models as derived from seismic tomography and magnetotelluric (MT) inversion represents a more efficient approach to determine the lithology of the subsurface compared with the separate treatment of each discipline. Such models can be developed independently or by application of joint inversion strategies. After the step of model generation using different geophysical methodologies, a joint interpretation work flow includes the following steps: (1) adjustment of a joint earth model based on the adapted, identical model geometry for the different methods, (2) classification of the model components (e.g. model blocks described by a set of geophysical parameters), and (3) re-mapping of the classified rock types to visualise their distribution within the earth model, and petrophysical characterization and interpretation. One possible approach for the classification of multi-parameter models is based on statistical pattern recognition, where different models are combined and translated into probability density functions. Classes of rock types are identified in these methods as isolated clusters with high probability density function values. Such techniques are well-established for the analysis of two-parameter models. Alternatively we apply self-organizing map (SOM) techniques, which have no limitations in the number of parameters to be analysed in the joint interpretation. Our SOM work flow includes (1) generation of a joint earth model described by so-called data vectors, (2) unsupervised learning or training, (3) analysis of the feature map by adopting image processing techniques, and (4) application of the knowledge to derive a lithological model which is based on the different geophysical parameters. We show the usage of the SOM work flow for a synthetic and a real data case study. Both tests rely on three geophysical properties: P velocity and vertical velocity gradient from seismic tomography, and electrical resistivity from MT inversion. The synthetic data are used as a benchmark test to demonstrate the performance of the SOM method. The real data were collected along a 40 km profile across parts of the NE German basin. The lithostratigraphic model from the joint SOM interpretation consists of eight litho-types and covers Cenozoic, Mesozoic and Paleozoic sediments down to 5 km depth. There is a remarkable agreement between the SOM based model and regional marker horizons interpolated from surrounding 2D industrial seismic data. The most interesting results include (1) distinct properties of the Jurassic (low P velocity gradients, low resistivities) interpreted as the signature of shaly clastics, and (2) a pattern within the Upper Permian Zechstein with decreased resistivities and increased P velocities within the salt depressions on the one hand, and increased resistivities and decreased P velocities in the salt pillows on the other hand. In our interpretation this pattern is related with flow of less dense salt matrix components into the pillows and remaining brittle evaporites within the depressions.
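
    The SOM step of such a workflow can be sketched in a few lines. The following is a generic, hand-rolled SOM trainer, not the authors' implementation; the grid size, decay schedules, and three-feature input layout are all assumptions:

```python
import numpy as np

def train_som(data, grid=(8, 8), iters=2000, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal self-organizing map. data is (n_samples, n_features),
    e.g. columns = (P velocity, velocity gradient, log10 resistivity),
    each standardized beforehand."""
    rng = np.random.default_rng(seed)
    gx, gy = grid
    weights = rng.normal(size=(gx, gy, data.shape[1]))
    # Grid coordinates, used by the Gaussian neighborhood function.
    coords = np.stack(np.meshgrid(np.arange(gx), np.arange(gy),
                                  indexing="ij"), axis=-1)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        # Best-matching unit = node whose weight vector is closest to x.
        bmu = np.unravel_index(np.argmin(((weights - x) ** 2).sum(-1)),
                               (gx, gy))
        lr = lr0 * np.exp(-t / iters)          # decaying learning rate
        sigma = sigma0 * np.exp(-t / iters)    # shrinking neighborhood
        d2 = ((coords - np.array(bmu)) ** 2).sum(-1)
        h = np.exp(-d2 / (2 * sigma ** 2))[..., None]
        weights += lr * h * (x - weights)      # pull neighborhood toward x
    return weights

# Usage: classify each model cell by its best-matching unit, then group
# the SOM nodes into litho-types (the paper does this grouping with
# image-processing techniques on the feature map).
```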

  1. A Bayesian Joint Model of Menstrual Cycle Length and Fecundity

    PubMed Central

    Lum, Kirsten J.; Sundaram, Rajeshwari; Louis, Germaine M. Buck; Louis, Thomas A.

    2015-01-01

    Summary Menstrual cycle length (MCL) has been shown to play an important role in couple fecundity, which is the biologic capacity for reproduction irrespective of pregnancy intentions. However, a comprehensive assessment of its role requires a fecundity model that accounts for male and female attributes and the couple's intercourse pattern relative to the ovulation day. To this end, we employ a Bayesian joint model for MCL and pregnancy. MCLs follow a scale multiplied (accelerated) mixture model with Gaussian and Gumbel components; the pregnancy model includes MCL as a covariate and computes the cycle-specific probability of pregnancy in a menstrual cycle conditional on the pattern of intercourse and no previous fertilization. Day-specific fertilization probability is modeled using natural cubic splines. We analyze data from the Longitudinal Investigation of Fertility and the Environment Study (the LIFE Study), a couple-based prospective pregnancy study, and find a statistically significant quadratic relation between fecundity and menstrual cycle length, after adjustment for intercourse pattern and other attributes, including male semen quality, both partners' age, and active smoking status (determined by a baseline cotinine level exceeding 100 ng/mL). We compare results to those produced by a more basic model and show the advantages of a more comprehensive approach. PMID:26295923
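
    The cycle-specific pregnancy probability described here has, at its core, the classic day-specific form P(pregnancy) = 1 - prod_j (1 - p_j)^(X_j). A minimal sketch of that building block (the day-specific values below are hypothetical, and the paper's spline model, random effects, and MCL covariate are omitted):

```python
import numpy as np

def cycle_pregnancy_prob(day_probs, intercourse):
    """Barrett-Marshall/Schwartz-type model: probability of pregnancy in
    a cycle given day-specific fertilization probabilities p_j (indexed
    relative to ovulation day) and intercourse indicators X_j:
    P(pregnancy) = 1 - prod_j (1 - p_j)**X_j."""
    day_probs = np.asarray(day_probs, dtype=float)
    intercourse = np.asarray(intercourse, dtype=int)
    return 1.0 - np.prod((1.0 - day_probs) ** intercourse)

# Hypothetical day-specific probabilities for days -5..0 around ovulation
p = [0.05, 0.10, 0.15, 0.25, 0.30, 0.10]
x = [0,    1,    0,    1,    1,    0]   # reported intercourse pattern
print(f"cycle-specific pregnancy probability: {cycle_pregnancy_prob(p, x):.3f}")
```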

  2. Two tandem queues with general renewal input. 2: Asymptotic expansions for the diffusion model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knessl, C.; Tier, C.

    1999-10-01

    In Part 1 the authors formulated and solved a diffusion model for two tandem queues with exponential servers and general renewal arrivals. They thus obtained the heavy traffic diffusion approximation to the steady state joint queue length distribution for this network. Here they study asymptotic and numerical properties of the diffusion approximation. In particular, analytical expressions are obtained for the tail probabilities. Both the joint distribution of the two queues and the marginal distribution of the second queue are considered. They also give numerical illustrations of how this marginal is affected by changes in the arrival and service processes.

  3. Analysis of capture-recapture models with individual covariates using data augmentation

    USGS Publications Warehouse

    Royle, J. Andrew

    2009-01-01

    I consider the analysis of capture-recapture models with individual covariates that influence detection probability. Bayesian analysis of the joint likelihood is carried out using a flexible data augmentation scheme that facilitates analysis by Markov chain Monte Carlo methods, and a simple and straightforward implementation in freely available software. This approach is applied to a study of meadow voles (Microtus pennsylvanicus) in which auxiliary data on a continuous covariate (body mass) are recorded, and it is thought that detection probability is related to body mass. In a second example, the model is applied to an aerial waterfowl survey in which a double-observer protocol is used. The fundamental unit of observation is the cluster of individual birds, and the size of the cluster (a discrete covariate) is used as a covariate on detection probability.

  4. Estimating demographic parameters using a combination of known-fate and open N-mixture models

    USGS Publications Warehouse

    Schmidt, Joshua H.; Johnson, Devin S.; Lindberg, Mark S.; Adams, Layne G.

    2015-01-01

    Accurate estimates of demographic parameters are required to infer appropriate ecological relationships and inform management actions. Known-fate data from marked individuals are commonly used to estimate survival rates, whereas N-mixture models use count data from unmarked individuals to estimate multiple demographic parameters. However, a joint approach combining the strengths of both analytical tools has not been developed. Here we develop an integrated model combining known-fate and open N-mixture models, allowing the estimation of detection probability, recruitment, and the joint estimation of survival. We demonstrate our approach through both simulations and an applied example using four years of known-fate and pack count data for wolves (Canis lupus). Simulation results indicated that the integrated model reliably recovered parameters with no evidence of bias, and survival estimates were more precise under the joint model. Results from the applied example indicated that the marked sample of wolves was biased toward individuals with higher apparent survival rates than the unmarked pack mates, suggesting that joint estimates may be more representative of the overall population. Our integrated model is a practical approach for reducing bias while increasing precision and the amount of information gained from mark–resight data sets. We provide implementations in both the BUGS language and an R package.

  5. Estimating demographic parameters using a combination of known-fate and open N-mixture models.

    PubMed

    Schmidt, Joshua H; Johnson, Devin S; Lindberg, Mark S; Adams, Layne G

    2015-10-01

    Accurate estimates of demographic parameters are required to infer appropriate ecological relationships and inform management actions. Known-fate data from marked individuals are commonly used to estimate survival rates, whereas N-mixture models use count data from unmarked individuals to estimate multiple demographic parameters. However, a joint approach combining the strengths of both analytical tools has not been developed. Here we develop an integrated model combining known-fate and open N-mixture models, allowing the estimation of detection probability, recruitment, and the joint estimation of survival. We demonstrate our approach through both simulations and an applied example using four years of known-fate and pack count data for wolves (Canis lupus). Simulation results indicated that the integrated model reliably recovered parameters with no evidence of bias, and survival estimates were more precise under the joint model. Results from the applied example indicated that the marked sample of wolves was biased toward individuals with higher apparent survival rates than the unmarked pack mates, suggesting that joint estimates may be more representative of the overall population. Our integrated model is a practical approach for reducing bias while increasing precision and the amount of information gained from mark-resight data sets. We provide implementations in both the BUGS language and an R package.

  6. A probabilistic multi-criteria decision making technique for conceptual and preliminary aerospace systems design

    NASA Astrophysics Data System (ADS)

    Bandte, Oliver

    It has always been the intention of systems engineering to invent or produce the best product possible. Many design techniques have been introduced over the course of decades that try to fulfill this intention. Unfortunately, no technique has succeeded in combining multi-criteria decision making with probabilistic design. The design technique developed in this thesis, the Joint Probabilistic Decision Making (JPDM) technique, successfully overcomes this deficiency by generating a multivariate probability distribution that serves in conjunction with a criterion value range of interest as a universally applicable objective function for multi-criteria optimization and product selection. This new objective function constitutes a meaningful metric, called Probability of Success (POS), that allows the customer or designer to make a decision based on the chance of satisfying the customer's goals. In order to incorporate a joint probabilistic formulation into the systems design process, two algorithms are created that allow for an easy implementation into a numerical design framework: the (multivariate) Empirical Distribution Function and the Joint Probability Model. The Empirical Distribution Function estimates the probability that an event occurred by counting how many times it occurred in a given sample. The Joint Probability Model on the other hand is an analytical parametric model for the multivariate joint probability. It is comprised of the product of the univariate criterion distributions, generated by the traditional probabilistic design process, multiplied with a correlation function that is based on available correlation information between pairs of random variables. JPDM is an excellent tool for multi-objective optimization and product selection, because of its ability to transform disparate objectives into a single figure of merit, the likelihood of successfully meeting all goals or POS. The advantage of JPDM over other multi-criteria decision making techniques is that POS constitutes a single optimizable function or metric that enables a comparison of all alternative solutions on an equal basis. Hence, POS allows for the use of any standard single-objective optimization technique available and simplifies a complex multi-criteria selection problem into a simple ordering problem, where the solution with the highest POS is best. By distinguishing between controllable and uncontrollable variables in the design process, JPDM can account for the uncertain values of the uncontrollable variables that are inherent to the design problem, while facilitating an easy adjustment of the controllable ones to achieve the highest possible POS. Finally, JPDM's superiority over current multi-criteria decision making techniques is demonstrated with an optimization of a supersonic transport concept and ten contrived equations as well as a product selection example, determining an airline's best choice among Boeing's B-747, B-777, Airbus' A340, and a Supersonic Transport. The optimization examples demonstrate JPDM's ability to produce a better solution with a higher POS than an Overall Evaluation Criterion or Goal Programming approach. Similarly, the product selection example demonstrates JPDM's ability to produce a better solution with a higher POS and different ranking than the Overall Evaluation Criterion or Technique for Order Preferences by Similarity to the Ideal Solution (TOPSIS) approach.
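
    The empirical-distribution-function route to POS amounts to Monte Carlo counting: sample the criteria jointly, then take the fraction of samples that fall inside every criterion range of interest. A toy sketch with two assumed, correlated criteria:

```python
import numpy as np

rng = np.random.default_rng(1)

# Monte Carlo sample of two correlated design criteria (e.g., range in
# nmi and cost in $M) from a probabilistic design analysis; all values
# below are invented for illustration.
mean = [5500.0, 120.0]
cov = [[250.0 ** 2, -0.6 * 250.0 * 8.0],
       [-0.6 * 250.0 * 8.0, 8.0 ** 2]]
samples = rng.multivariate_normal(mean, cov, size=100_000)

# Probability of Success = empirical joint probability that every
# criterion lands inside its range of interest (the multivariate
# empirical distribution function idea: count and divide).
ok = (samples[:, 0] >= 5400.0) & (samples[:, 1] <= 125.0)
print(f"POS = {ok.mean():.3f}")
```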

  7. Joint Inversion of 1-D Magnetotelluric and Surface-Wave Dispersion Data with an Improved Multi-Objective Genetic Algorithm and Application to the Data of the Longmenshan Fault Zone

    NASA Astrophysics Data System (ADS)

    Wu, Pingping; Tan, Handong; Peng, Miao; Ma, Huan; Wang, Mao

    2018-05-01

    Magnetotellurics and seismic surface waves are two prominent geophysical methods for deep underground exploration. Joint inversion of these two datasets can help enhance the accuracy of inversion. In this paper, we describe a method for developing an improved multi-objective genetic algorithm (NSGA-SBX) and applying it to two numerical tests to verify the advantages of the algorithm. Our findings show that joint inversion with the NSGA-SBX method can improve the inversion results by strengthening structural coupling when the discontinuities of the electrical and velocity models are consistent, and in case of inconsistent discontinuities between these models, joint inversion can retain the advantages of individual inversions. By applying the algorithm to four detection points along the Longmenshan fault zone, we observe several features. The Sichuan Basin demonstrates low S-wave velocity and high conductivity in the shallow crust probably due to thick sedimentary layers. The eastern margin of the Tibetan Plateau shows high velocity and high resistivity in the shallow crust, while two low velocity layers and a high conductivity layer are observed in the middle lower crust, probably indicating the mid-crustal channel flow. Along the Longmenshan fault zone, a high conductivity layer from 8 to 20 km is observed beneath the northern segment and decreases with depth beneath the middle segment, which might be caused by the elevated fluid content of the fault zone.

  8. The Failure Models of Lead Free Sn-3.0Ag-0.5Cu Solder Joint Reliability Under Low-G and High-G Drop Impact

    NASA Astrophysics Data System (ADS)

    Gu, Jian; Lei, YongPing; Lin, Jian; Fu, HanGuang; Wu, Zhongwei

    2017-02-01

    The reliability of Sn-3.0Ag-0.5Cu (SAC 305) solder joints under a broad range of drop impacts was studied. The failure performance of the solder joint, the failure probability, and the failure position were analyzed under two shock test conditions, i.e., 1000 g for 1 ms and 300 g for 2 ms. The stress distribution in the solder joint was calculated with ABAQUS. The results revealed that the dominant failure driver was the tension caused by the difference in stiffness between the printed circuit board and the ball grid array; the maximum tension, 121.1 MPa under the 1000 g drop impact and 31.1 MPa under the 300 g drop impact, was concentrated at the corner of the solder joint located in the outermost corner of the solder ball row. The failure modes were summarized into the following four modes: initiation and propagation through the (1) intermetallic compound layer, (2) Ni layer, (3) Cu pad, or (4) Sn-matrix. The outermost corner of the solder ball row had a high failure probability under both the 1000 g and 300 g drop impacts. The number of solder ball failures under the 300 g drop impact was higher than that under the 1000 g drop impact; following the statistics, the characteristic numbers of drops to failure were 41 and 15,199 under the 1000 g and 300 g conditions, respectively.

  9. Optimal Information Processing in Biochemical Networks

    NASA Astrophysics Data System (ADS)

    Wiggins, Chris

    2012-02-01

    A variety of experimental results over the past decades provide examples of near-optimal information processing in biological networks, including in biochemical and transcriptional regulatory networks. Computing information-theoretic quantities requires first choosing or computing the joint probability distribution describing multiple nodes in such a network --- for example, representing the probability distribution of finding an integer copy number of each of two interacting reactants or gene products while respecting the `intrinsic' small copy number noise constraining information transmission at the scale of the cell. I'll give an overview of some recent analytic and numerical work facilitating calculation of such joint distributions and the associated information, which in turn makes possible numerical optimization of information flow in models of noisy regulatory and biochemical networks. Illustrating cases include quantification of form-function relations, ideal design of regulatory cascades, and response to oscillatory driving.
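
    Once a joint distribution over copy numbers is in hand, the information transmitted between two nodes is the usual mutual information of the joint pmf. A small sketch (the joint table below is invented; in practice it would come from a master-equation or simulation model):

```python
import numpy as np

def mutual_information(p_xy):
    """Mutual information I(X;Y) in bits from a joint pmf table p_xy:
    I = sum_xy p(x,y) * log2[ p(x,y) / (p(x) p(y)) ]."""
    p_xy = np.asarray(p_xy, dtype=float)
    px = p_xy.sum(axis=1, keepdims=True)   # marginal of X (column vector)
    py = p_xy.sum(axis=0, keepdims=True)   # marginal of Y (row vector)
    nz = p_xy > 0                          # skip zero cells (0 log 0 = 0)
    return float((p_xy[nz] * np.log2(p_xy[nz] / (px @ py)[nz])).sum())

# Toy joint distribution over small copy numbers of two gene products.
p = np.array([[0.20, 0.05, 0.00],
              [0.05, 0.30, 0.05],
              [0.00, 0.05, 0.30]])
print(f"I(X;Y) = {mutual_information(p):.3f} bits")
```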

  10. A Probabilistic Approach to Predict Thermal Fatigue Life for Ball Grid Array Solder Joints

    NASA Astrophysics Data System (ADS)

    Wei, Helin; Wang, Kuisheng

    2011-11-01

    Numerous studies of the reliability of solder joints have been performed. Most life prediction models are limited to a deterministic approach. However, manufacturing induces uncertainty in the geometry parameters of solder joints, and the environmental temperature varies widely due to end-user diversity, creating uncertainties in the reliability of solder joints. In this study, a methodology for accounting for variation in the lifetime prediction for lead-free solder joints of ball grid array packages (PBGA) is demonstrated. The key aspects of the solder joint parameters and the cyclic temperature range related to reliability are involved. Probabilistic solutions of the inelastic strain range and thermal fatigue life based on the Engelmaier model are developed to determine the probability of solder joint failure. The results indicate that the standard deviation increases significantly when more random variations are involved. Using the probabilistic method, the influence of each variable on the thermal fatigue life is quantified. This information can be used to optimize product design and process validation acceptance criteria. The probabilistic approach creates the opportunity to identify the root causes of failed samples from product fatigue tests and field returns. The method can be applied to better understand how variation affects parameters of interest in an electronic package design with area array interconnections.
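
    A sketch of the idea: draw the uncertain geometry and temperature range from assumed distributions, push each draw through an Engelmaier-type strain/life relation, and read off a failure probability. The constants follow commonly published Engelmaier values, but every distribution and threshold here is illustrative rather than taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Monte Carlo over uncertain geometry and thermal cycling, pushed through
# an Engelmaier-type low-cycle fatigue relation. All distributions and
# constants are illustrative, not the paper's calibrated values.
eps_f = 0.325                          # fatigue ductility coefficient
d_alpha = 14e-6                        # CTE mismatch [1/K]
L_D = rng.normal(10.0, 0.3, n)         # distance from neutral point [mm]
h = rng.normal(0.5, 0.03, n)           # solder joint height [mm]
dT = rng.normal(100.0, 10.0, n)        # cyclic temperature range [K]

dgamma = (L_D / h) * d_alpha * dT      # cyclic shear strain range
Ts, f = 25.0, 2.0                      # mean cyclic temp [C], cycles/day
c = -0.442 - 6e-4 * Ts + 1.74e-2 * np.log(1 + f)   # fatigue exponent

Nf = 0.5 * (dgamma / (2 * eps_f)) ** (1 / c)       # mean cycles to failure
print(f"median Nf = {np.median(Nf):.0f} cycles")
print(f"P(Nf < 500 cycles) = {(Nf < 500).mean():.3f}")
```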

  11. Bayesian seismic inversion based on rock-physics prior modeling for the joint estimation of acoustic impedance, porosity and lithofacies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Passos de Figueiredo, Leandro, E-mail: leandrop.fgr@gmail.com; Grana, Dario; Santos, Marcio

    We propose a Bayesian approach for seismic inversion to estimate acoustic impedance, porosity and lithofacies within the reservoir conditioned to post-stack seismic and well data. The link between elastic and petrophysical properties is given by a joint prior distribution for the logarithm of impedance and porosity, based on a rock-physics model. The well conditioning is performed through a background model obtained by well log interpolation. Two different approaches are presented: in the first approach, the prior is defined by a single Gaussian distribution, whereas in the second approach it is defined by a Gaussian mixture to represent the well data multimodal distribution and link the Gaussian components to different geological lithofacies. The forward model is based on a linearized convolutional model. For the single Gaussian case, we obtain an analytical expression for the posterior distribution, resulting in a fast algorithm to compute the solution of the inverse problem, i.e. the posterior distribution of acoustic impedance and porosity as well as the facies probability given the observed data. For the Gaussian mixture prior, it is not possible to obtain the distributions analytically, hence we propose a Gibbs algorithm to perform the posterior sampling and obtain several reservoir model realizations, allowing an uncertainty analysis of the estimated properties and lithofacies. Both methodologies are applied to a real seismic dataset with three wells to obtain 3D models of acoustic impedance, porosity and lithofacies. The methodologies are validated through a blind well test and compared to a standard Bayesian inversion approach. Using the probability of the reservoir lithofacies, we also compute a 3D isosurface probability model of the main oil reservoir in the studied field.
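
    For the single-Gaussian prior with a linearized forward operator, the posterior is available in closed form. A generic linear-Gaussian sketch of that analytic update (random stand-in operator and assumed covariances, not a convolutional wavelet model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear-Gaussian Bayesian inversion: d = G m + e, with Gaussian prior
# m ~ N(mu, Sigma) and noise e ~ N(0, Sigma_e). With a linearized forward
# model the posterior is Gaussian and analytic.
nm, nd = 60, 50
G = rng.normal(size=(nd, nm)) / np.sqrt(nm)     # stand-in forward operator
mu = np.zeros(nm)                               # prior mean (background model)
i, j = np.meshgrid(np.arange(nm), np.arange(nm), indexing="ij")
Sigma = 0.5 ** 2 * np.exp(-np.abs(i - j) / 10.0)  # smooth spatial prior
Sigma_e = 0.05 ** 2 * np.eye(nd)                  # observation noise

m_true = rng.multivariate_normal(mu, Sigma)
d = G @ m_true + rng.multivariate_normal(np.zeros(nd), Sigma_e)

# Kalman-style analytic posterior mean and covariance.
K = Sigma @ G.T @ np.linalg.inv(G @ Sigma @ G.T + Sigma_e)
mu_post = mu + K @ (d - G @ mu)
Sigma_post = Sigma - K @ G @ Sigma
print("posterior RMSE:", np.sqrt(np.mean((mu_post - m_true) ** 2)))
```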

  12. Bayesian data analysis tools for atomic physics

    NASA Astrophysics Data System (ADS)

    Trassinelli, Martino

    2017-10-01

    We present an introduction to some concepts of Bayesian data analysis in the context of atomic physics. Starting from basic rules of probability, we present Bayes' theorem and its applications. In particular, we discuss how to calculate simple and joint probability distributions and the Bayesian evidence, a model-dependent quantity that allows one to assign probabilities to different hypotheses from the analysis of the same data set. To give some practical examples, these methods are applied to two concrete cases. In the first example, the presence or absence of a satellite line in an atomic spectrum is investigated. In the second example, we determine the most probable model among a set of possible profiles from the analysis of a statistically poor spectrum. We also show how to calculate the probability distribution of the main spectral component without having to determine the spectrum model uniquely. For these two studies, we use the program Nested_fit to calculate the different probability distributions and other related quantities. Nested_fit is a Fortran90/Python code developed during recent years for the analysis of atomic spectra. As indicated by the name, it is based on the nested sampling algorithm, which is presented in detail together with the program itself.

  13. Damage prognosis of adhesively-bonded joints in laminated composite structural components of unmanned aerial vehicles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farrar, Charles R; Gobbato, Maurizio; Conte, Joel

    2009-01-01

    The extensive use of lightweight advanced composite materials in unmanned aerial vehicles (UAVs) drastically increases the sensitivity to both fatigue- and impact-induced damage of their critical structural components (e.g., wings and tail stabilizers) during service life. The spar-to-skin adhesive joints are considered one of the most fatigue sensitive subcomponents of a lightweight UAV composite wing, with damage progressively evolving from the wing root. This paper presents a comprehensive probabilistic methodology for predicting the remaining service life of adhesively-bonded joints in laminated composite structural components of UAVs. Non-destructive evaluation techniques and Bayesian inference are used to (i) assess the current state of damage of the system and (ii) update the probability distribution of the damage extent at various locations. A probabilistic model for future loads and a mechanics-based damage model are then used to stochastically propagate damage through the joint. Combined local (e.g., exceedance of a critical damage size) and global (e.g., flutter instability) failure criteria are finally used to compute the probability of component failure at future times. The applicability and the partial validation of the proposed methodology are then briefly discussed by analyzing the debonding propagation, along a pre-defined adhesive interface, in a simply supported laminated composite beam with solid rectangular cross section, subjected to a concentrated load applied at mid-span. A specially developed Euler-Bernoulli beam finite element with interlaminar slip along the damageable interface is used in combination with a cohesive zone model to study the fatigue-induced degradation in the adhesive material. The preliminary numerical results presented are promising for the future validation of the methodology.

  14. Modeling molecular mixing in a spatially inhomogeneous turbulent flow

    NASA Astrophysics Data System (ADS)

    Meyer, Daniel W.; Deb, Rajdeep

    2012-02-01

    Simulations of spatially inhomogeneous turbulent mixing in decaying grid turbulence with a joint velocity-concentration probability density function (PDF) method were conducted. The inert mixing scenario involves three streams with different compositions. The mixing model of Meyer ["A new particle interaction mixing model for turbulent dispersion and turbulent reactive flows," Phys. Fluids 22(3), 035103 (2010)], the interaction by exchange with the mean (IEM) model and its velocity-conditional variant, i.e., the IECM model, were applied. For reference, the direct numerical simulation data provided by Sawford and de Bruyn Kops ["Direct numerical simulation and lagrangian modeling of joint scalar statistics in ternary mixing," Phys. Fluids 20(9), 095106 (2008)] was used. It was found that velocity conditioning is essential to obtain accurate concentration PDF predictions. Moreover, the model of Meyer provides significantly better results compared to the IECM model at comparable computational expense.
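
    The IEM model referenced here relaxes each particle's composition toward the mean at a rate set by the turbulent frequency; the IECM variant simply replaces the unconditional mean with a velocity-conditioned one. A minimal IEM sketch with assumed parameters:

```python
import numpy as np

def iem_step(phi, phi_mean, omega, dt, c_phi=2.0):
    """Interaction-by-exchange-with-the-mean (IEM) micromixing step:
    each particle's scalar relaxes toward the ensemble mean,
    d(phi)/dt = -0.5 * C_phi * omega * (phi - <phi>)."""
    return phi - 0.5 * c_phi * omega * (phi - phi_mean) * dt

# Toy ensemble: a three-stream composition (0, 0.5, 1) mixing under IEM.
rng = np.random.default_rng(0)
phi = rng.choice([0.0, 0.5, 1.0], size=10_000)
omega, dt = 10.0, 1e-3      # turbulent frequency [1/s], time step [s]
for _ in range(500):
    phi = iem_step(phi, phi.mean(), omega, dt)
print(f"mean = {phi.mean():.3f}, variance = {phi.var():.5f}")
```

    In the IECM variant, `phi.mean()` would be replaced by the mean over particles in the same velocity bin, which is what makes the velocity conditioning matter for the concentration PDF.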

  15. Fire and Heat Spreading Model Based on Cellular Automata Theory

    NASA Astrophysics Data System (ADS)

    Samartsev, A. A.; Rezchikov, A. F.; Kushnikov, V. A.; Ivashchenko, V. A.; Bogomolov, A. S.; Filimonyuk, L. Yu; Dolinina, O. N.; Kushnikov, O. V.; Shulga, T. E.; Tverdokhlebov, V. A.; Fominykh, D. S.

    2018-05-01

    The distinctive feature of the proposed fire and heat spreading model in premises is the reduction of the computational complexity due to the use of the theory of cellular automata with probability rules of behavior. The possibilities and prospects of using this model in practice are noted. The proposed model has a simple mechanism of integration with agent-based evacuation models. The joint use of these models could improve floor plans and reduce the time of evacuation from premises during fires.
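
    A probabilistic-rule cellular automaton of this kind can be stated very compactly. The sketch below is a generic fire-spread CA; the ignition probability, 4-cell neighborhood, and wrap-around boundaries are simplifying assumptions, not the authors' calibrated rules:

```python
import numpy as np

rng = np.random.default_rng(0)

# Probabilistic cellular automaton: 0 = fuel, 1 = burning, 2 = burnt.
# A fuel cell ignites with probability p_spread per burning 4-neighbor
# per step; burning cells burn out after one step. Toroidal boundary
# (np.roll wraps around) is used purely for brevity.
n, p_spread, steps = 50, 0.45, 60
grid = np.zeros((n, n), dtype=int)
grid[n // 2, n // 2] = 1                 # single ignition point

for _ in range(steps):
    burning = grid == 1
    # Count burning neighbors of every cell (4-neighborhood).
    nb = (np.roll(burning, 1, 0) + np.roll(burning, -1, 0) +
          np.roll(burning, 1, 1) + np.roll(burning, -1, 1))
    ignite = (grid == 0) & (rng.random((n, n)) < 1 - (1 - p_spread) ** nb)
    grid[burning] = 2                    # burn out
    grid[ignite] = 1                     # new fires
print(f"burnt fraction: {(grid == 2).mean():.2f}")
```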

  16. How weak values emerge in joint measurements on cloned quantum systems.

    PubMed

    Hofmann, Holger F

    2012-07-13

    A statistical analysis of optimal universal cloning shows that it is possible to identify an ideal (but nonpositive) copying process that faithfully maps all properties of the original Hilbert space onto two separate quantum systems, resulting in perfect correlations for all observables. The joint probabilities for noncommuting measurements on separate clones then correspond to the real parts of the complex joint probabilities observed in weak measurements on a single system, where the measurements on the two clones replace the corresponding sequence of weak measurement and postselection. The imaginary parts of weak measurement statistics can be obtained by replacing the cloning process with a partial swap operation. A controlled-swap operation combines both processes, making the complete weak measurement statistics accessible as a well-defined contribution to the joint probabilities of fully resolved projective measurements on the two output systems.

  17. Statistics of cosmic density profiles from perturbation theory

    NASA Astrophysics Data System (ADS)

    Bernardeau, Francis; Pichon, Christophe; Codis, Sandrine

    2014-11-01

    The joint probability distribution function (PDF) of the density within multiple concentric spherical cells is considered. It is shown how its cumulant generating function can be obtained at tree order in perturbation theory as the Legendre transform of a function directly built in terms of the initial moments. In the context of the upcoming generation of large-scale structure surveys, it is conjectured that this result correctly models such a function for finite values of the variance. Detailed consequences of this assumption are explored. In particular the corresponding one-cell density probability distribution at finite variance is computed for realistic power spectra, taking into account its scale variation. It is found to be in agreement with Λ -cold dark matter simulations at the few percent level for a wide range of density values and parameters. Related explicit analytic expansions at the low and high density tails are given. The conditional (at fixed density) and marginal probability of the slope—the density difference between adjacent cells—and its fluctuations is also computed from the two-cell joint PDF; it also compares very well to simulations. It is emphasized that this could prove useful when studying the statistical properties of voids as it can serve as a statistical indicator to test gravity models and/or probe key cosmological parameters.

  18. Stochastic methods for analysis of power flow in electric networks

    NASA Astrophysics Data System (ADS)

    1982-09-01

    The modeling and effects of probabilistic behavior on steady state power system operation were analyzed. A solution to the steady state network flow equations that adheres both to Kirchhoff's laws and to probabilistic laws was obtained, using either combinatorial or functional approximation techniques. The development of sound techniques for producing meaningful input data is examined. Electric demand modeling, equipment failure analysis, and algorithm development are investigated. Two major development areas are described: a decomposition of stochastic processes which gives stationarity, ergodicity, and even normality; and a powerful surrogate probability approach using proportions of time which allows the calculation of joint events from one-dimensional probability spaces.

  19. By-product mutualism and the ambiguous effects of harsher environments - A game-theoretic model.

    PubMed

    De Jaegher, Kris; Hoyer, Britta

    2016-03-21

    We construct two-player two-strategy game-theoretic models of by-product mutualism, where our focus lies on the way in which the probability of cooperation among players is affected by the degree of adversity facing the players. In our first model, cooperation consists of the production of a public good, and adversity is linked to the degree of complementarity of the players' efforts in producing the public good. In our second model, cooperation consists of the defense of a public and/or a private good with by-product benefits, and adversity is measured by the number of random attacks (e.g., by a predator) facing the players. In both of these models, our analysis confirms the existence of the so-called boomerang effect, which states that in a harsh environment, the individual player has few incentives to unilaterally defect in a situation of joint cooperation. Focusing on such an effect in isolation leads to the "common-enemy" hypothesis that a larger degree of adversity increases the probability of cooperation. Yet, we also find that a sucker effect may simultaneously exist, which says that in a harsh environment, the individual player has few incentives to unilaterally cooperate in a situation of joint defection. Looked at in isolation, the sucker effect leads to the competing hypothesis that a larger degree of adversity decreases the probability of cooperation. Our analysis predicts circumstances in which the "common enemy" hypothesis prevails, and circumstances in which the competing hypothesis prevails. Copyright © 2016 Elsevier Ltd. All rights reserved.
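
    Both effects can be read directly off a symmetric 2x2 payoff matrix: the boomerang effect concerns the gain from unilaterally defecting against a cooperator, the sucker effect the gain from unilaterally cooperating against a defector. A toy check with invented payoffs:

```python
def unilateral_incentives(payoff):
    """For a symmetric 2x2 game with strategies (C, D), where
    payoff[row][col] is the row player's payoff, return the gain from
    unilaterally defecting against C (small or negative when the
    boomerang effect is strong) and the gain from unilaterally
    cooperating against D (small or negative when the sucker effect
    is strong)."""
    defect_gain = payoff[1][0] - payoff[0][0]   # D vs C  minus  C vs C
    coop_gain = payoff[0][1] - payoff[1][1]     # C vs D  minus  D vs D
    return defect_gain, coop_gain

# Hypothetical by-product mutualism payoffs under mild vs harsh adversity.
mild = [[3.0, 1.0], [3.5, 2.0]]    # defecting against a cooperator pays
harsh = [[3.0, 0.0], [2.0, 0.5]]   # both effects present at once
for name, g in [("mild", mild), ("harsh", harsh)]:
    d, c = unilateral_incentives(g)
    print(f"{name}: defect gain vs C = {d:+.1f}, coop gain vs D = {c:+.1f}")
```

    In the "harsh" example both gains are negative, so joint cooperation and joint defection are simultaneously stable, which is exactly the ambiguity the paper analyzes.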

  20. Security Inference from Noisy Data

    DTIC Science & Technology

    2008-04-08

    ...and RFID chips introduce new ways of communicating and sharing data. For example, the Nike+iPod Sport Kit is a new wireless accessory for the iPod... Agrawal show: a wide variety of keyboards (e.g., different keyboards of the same model, different models, different brands) have keys with distinct... grammar level and spelling level in this case) are built into a single model. Algorithms to maximize global joint probability may improve the...

  1. A concise evidence-based physical examination for diagnosis of acromioclavicular joint pathology: a systematic review.

    PubMed

    Krill, Michael K; Rosas, Samuel; Kwon, KiHyun; Dakkak, Andrew; Nwachukwu, Benedict U; McCormick, Frank

    2018-02-01

    The clinical examination of the shoulder joint is an undervalued diagnostic tool for evaluating acromioclavicular (AC) joint pathology. Applying evidence-based clinical tests enables providers to make an accurate diagnosis and minimize costly imaging procedures and potential delays in care. The purpose of this study was to create a decision tree analysis enabling simple and accurate diagnosis of AC joint pathology. A systematic review of the Medline, Ovid and Cochrane Review databases was performed to identify level one and two diagnostic studies evaluating clinical tests for AC joint pathology. Individual test characteristics were combined in series and in parallel to improve sensitivities and specificities. A secondary analysis utilized subjective pre-test probabilities to create a clinical decision tree algorithm with post-test probabilities. The optimal special test combination to screen and confirm AC joint pathology combined Paxinos sign and O'Brien's Test, with a specificity of 95.8% when performed in series; whereas, Paxinos sign and Hawkins-Kennedy Test demonstrated a sensitivity of 93.7% when performed in parallel. Paxinos sign and O'Brien's Test demonstrated the greatest positive likelihood ratio (2.71); whereas, Paxinos sign and Hawkins-Kennedy Test reported the lowest negative likelihood ratio (0.35). No combination of special tests performed in series or in parallel creates more than a small impact on post-test probabilities to screen or confirm AC joint pathology. Paxinos sign and O'Brien's Test is the only special test combination that has a small and sometimes important impact when used both in series and in parallel. Physical examination testing is not beneficial for diagnosis of AC joint pathology when pretest probability is unequivocal. In these instances, it is of benefit to proceed with procedural tests to evaluate AC joint pathology. Ultrasound-guided corticosteroid injections are diagnostic and therapeutic. An ultrasound-guided AC joint corticosteroid injection may be an appropriate new standard for treatment and surgical decision-making. II - Systematic Review.
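
    The series/parallel arithmetic behind these combined test characteristics, and the likelihood-ratio update to a post-test probability, are standard. A sketch (the single-test sensitivities and specificities below are placeholders, not the study's pooled estimates):

```python
def combine_series(se1, sp1, se2, sp2):
    """Series ('AND') rule: call positive only if both tests are positive.
    Raises specificity at the cost of sensitivity."""
    return se1 * se2, 1 - (1 - sp1) * (1 - sp2)

def combine_parallel(se1, sp1, se2, sp2):
    """Parallel ('OR') rule: call positive if either test is positive.
    Raises sensitivity at the cost of specificity."""
    return 1 - (1 - se1) * (1 - se2), sp1 * sp2

def post_test_prob(pretest, se, sp, positive=True):
    """Update a pre-test probability with a result via likelihood ratios."""
    odds = pretest / (1 - pretest)
    lr = se / (1 - sp) if positive else (1 - se) / sp
    post_odds = odds * lr
    return post_odds / (1 + post_odds)

# Placeholder single-test characteristics for Paxinos sign and O'Brien's Test.
se_pax, sp_pax = 0.79, 0.50
se_obr, sp_obr = 0.63, 0.92
se_s, sp_s = combine_series(se_pax, sp_pax, se_obr, sp_obr)
print(f"series: Se={se_s:.2f}, Sp={sp_s:.2f}, "
      f"post-test(+ | pretest 0.30) = {post_test_prob(0.30, se_s, sp_s):.2f}")
```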

  2. Strength Modeling Report

    NASA Technical Reports Server (NTRS)

    Badler, N. I.; Lee, P.; Wong, S.

    1985-01-01

    Strength modeling is a complex and multi-dimensional issue. There are numerous parameters to the problem of characterizing human strength, most notably: (1) position and orientation of body joints; (2) isometric versus dynamic strength; (3) effector force versus joint torque; (4) instantaneous versus steady force; (5) active force versus reactive force; (6) presence or absence of gravity; (7) body somatotype and composition; (8) body (segment) masses; (9) muscle group involvement; (10) muscle size; (11) fatigue; and (12) practice (training) or familiarity. In surveying the available literature on strength measurement and modeling, an attempt was made to examine as many of these parameters as possible. The conclusions reached point toward the feasibility of implementing computationally reasonable human strength models. The assessment of the accuracy of any model against a specific individual, however, will probably not be possible on any realistic scale. Taken statistically, strength modeling may be an effective tool for general questions of task feasibility and strength requirements.

  3. Probability density function modeling of scalar mixing from concentrated sources in turbulent channel flow

    NASA Astrophysics Data System (ADS)

    Bakosi, J.; Franzese, P.; Boybeyi, Z.

    2007-11-01

    Dispersion of a passive scalar from concentrated sources in fully developed turbulent channel flow is studied with the probability density function (PDF) method. The joint PDF of velocity, turbulent frequency and scalar concentration is represented by a large number of Lagrangian particles. A stochastic near-wall PDF model combines the generalized Langevin model of Haworth and Pope [Phys. Fluids 29, 387 (1986)] with Durbin's [J. Fluid Mech. 249, 465 (1993)] method of elliptic relaxation to provide a mathematically exact treatment of convective and viscous transport with a nonlocal representation of the near-wall Reynolds stress anisotropy. The presence of walls is incorporated through the imposition of no-slip and impermeability conditions on particles without the use of damping or wall-functions. Information on the turbulent time scale is supplied by the gamma-distribution model of van Slooten et al. [Phys. Fluids 10, 246 (1998)]. Two different micromixing models are compared that incorporate the effect of small scale mixing on the transported scalar: the widely used interaction by exchange with the mean and the interaction by exchange with the conditional mean model. Single-point velocity and concentration statistics are compared to direct numerical simulation and experimental data at Reτ=1080 based on the friction velocity and the channel half width. The joint model accurately reproduces a wide variety of conditional and unconditional statistics in both physical and composition space.

  4. Inferring species interactions through joint mark–recapture analysis

    USGS Publications Warehouse

    Yackulic, Charles B.; Korman, Josh; Yard, Michael D.; Dzul, Maria C.

    2018-01-01

    Introduced species are frequently implicated in declines of native species. In many cases, however, evidence linking introduced species to native declines is weak. Failure to make strong inferences regarding the role of introduced species can hamper attempts to predict population viability and delay effective management responses. For many species, mark–recapture analysis is the most rigorous form of demographic analysis. However, to our knowledge, there are no mark–recapture models that allow for joint modeling of interacting species. Here, we introduce a two-species mark–recapture population model in which the vital rates (and capture probabilities) of one species are allowed to vary in response to the abundance of the other species. We use a simulation study to explore bias and choose an approach to model selection. We then use the model to investigate species interactions between endangered humpback chub (Gila cypha) and introduced rainbow trout (Oncorhynchus mykiss) in the Colorado River between 2009 and 2016. In particular, we test hypotheses about how two environmental factors (turbidity and temperature), intraspecific density dependence, and rainbow trout abundance are related to survival, growth, and capture of juvenile humpback chub. We also project the long-term effects of different rainbow trout abundances on adult humpback chub abundances. Our simulation study suggests this approach has minimal bias under potentially challenging circumstances (i.e., low capture probabilities) that characterized our application and that model selection using indicator variables could reliably identify the true generating model even when process error was high. When the model was applied to rainbow trout and humpback chub, we identified negative relationships between rainbow trout abundance and the survival, growth, and capture probability of juvenile humpback chub. Effects of interspecific interactions on survival and capture probability were strongly supported, whereas support for the growth effect was weaker. Environmental factors were also identified to be important and in many cases stronger than interspecific interactions, and there was still substantial unexplained variation in growth and survival rates. The general approach presented here for combining mark–recapture data for two species is applicable in many other systems and could be modified to model abundance of the invader via other modeling approaches.

  5. Modeling the Dependency Structure of Integrated Intensity Processes

    PubMed Central

    Ma, Yong-Ki

    2015-01-01

    This paper studies the important issue of dependence structure. To model this structure, the intensities within the Cox processes are driven by dependent shot noise processes, where jumps occur simultaneously and their sizes are correlated. The joint survival probability of the integrated intensities is explicitly obtained from the copula with exponential marginal distributions. Subsequently, this result can provide a very useful guide for credit risk management. PMID:26270638
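
    The construction can be illustrated with exponential marginal survival functions coupled by a Clayton copula (the paper's specific copula and shot-noise intensities are not reproduced; the parameters below are assumed):

```python
import numpy as np

def clayton_joint_survival(t1, t2, lam1, lam2, theta):
    """Joint survival P(T1 > t1, T2 > t2) obtained by coupling the
    exponential marginal survival functions S_i(t) = exp(-lam_i * t)
    with a Clayton copula C(u, v) = (u**-theta + v**-theta - 1)**(-1/theta),
    theta > 0 (larger theta = stronger positive dependence)."""
    u = np.exp(-lam1 * t1)      # marginal survival probabilities
    v = np.exp(-lam2 * t2)
    return (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta)

# Illustrative intensities and dependence parameter (assumed values).
lam1, lam2, theta = 0.8, 1.2, 2.0
print(f"independent joint survival: {np.exp(-lam1) * np.exp(-lam2):.4f}")
print(f"Clayton joint survival:     {clayton_joint_survival(1.0, 1.0, lam1, lam2, theta):.4f}")
```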

  6. North Atlantic Coast Comprehensive Study Phase I: Statistical Analysis of Historical Extreme Water Levels with Sea Level Change

    DTIC Science & Technology

    2014-09-01

    The U.S. North Atlantic coast is subject to coastal flooding as a result of both severe extratropical storms (e.g., Nor'easters)... Products and Services, excluding any kind of high-resolution hydrodynamic modeling. Tropical and extratropical storms were treated as a single... joint probability analysis and high-fidelity modeling of tropical and extratropical storms...

  7. Combined risk assessment of nonstationary monthly water quality based on Markov chain and time-varying copula.

    PubMed

    Shi, Wei; Xia, Jun

    2017-02-01

    Water quality risk management is a research focus worldwide, closely linked with sustainable water resource development. Ammonium nitrogen (NH3-N) and the permanganate index (CODMn), the focal indicators in the Huai River Basin, are selected to reveal their joint transition laws based on Markov theory. The time-varying moments model, with either time or a land cover index as the explanatory variable, is applied to build the time-varying marginal distributions of the water quality time series. A time-varying copula model, which takes into consideration the non-stationarity in the marginal distributions and/or the time variation in the dependence structure between the water quality series, is constructed to describe a bivariate frequency analysis for the NH3-N and CODMn series at the same monitoring gauge. The larger first-order Markov joint transition probability indicates that the water quality states Class Vw, Class IV, and Class III will occur easily in the water body at Bengbu Sluice. Both the marginal distribution and copula models are nonstationary, and the explanatory variable time yields better performance than the land cover index in describing the non-stationarities in the marginal distributions. In modelling the dependence structure changes, the time-varying copula has a better fitting performance than copulas with a constant or a time-trend dependence parameter. The largest synchronous encounter risk probability of NH3-N and CODMn simultaneously reaching Class V is 50.61%, while the asynchronous encounter risk probability is largest when NH3-N and CODMn are inferior to the Class V and Class IV water quality standards, respectively.
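
    The first-order Markov transition probabilities underlying such joint transition laws are estimated by simple transition counting; a joint analysis of two indicators would apply the same estimator on the product state space. A minimal sketch on an invented class series:

```python
import numpy as np

def transition_matrix(states, n_classes):
    """Maximum-likelihood first-order Markov transition probabilities,
    estimated by counting transitions in a categorical time series."""
    counts = np.zeros((n_classes, n_classes))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    # Normalize each row; rows never visited stay all-zero.
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

# Hypothetical monthly water-quality classes at one gauge
# (0 = Class III, 1 = Class IV, 2 = Class V, 3 = worse than Class V).
series = [0, 1, 1, 2, 3, 3, 2, 1, 1, 0, 1, 2, 2, 3, 2, 1]
P = transition_matrix(series, 4)
print(np.round(P, 2))
```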

  8. Spacing distribution functions for 1D point island model with irreversible attachment

    NASA Astrophysics Data System (ADS)

    Gonzalez, Diego; Einstein, Theodore; Pimpinelli, Alberto

    2011-03-01

    We study the configurational structure of the point island model for epitaxial growth in one dimension. In particular, we calculate the island gap and capture zone distributions. Our model is based on an approximate description of nucleation inside the gaps. Nucleation is described by the joint probability density p_xy^n(x,y), which represents the probability density for nucleation at position x within a gap of size y. Our proposed functional form for p_xy^n(x,y) describes excellently the statistical behavior of the system. We compare our analytical model with extensive numerical simulations. Our model retains the most relevant physical properties of the system. This work was supported by the NSF-MRSEC at the University of Maryland, Grant No. DMR 05-20471, with ancillary support from the Center for Nanophysics and Advanced Materials (CNAM).

  9. Multivariate Bayesian modeling of known and unknown causes of events--an application to biosurveillance.

    PubMed

    Shen, Yanna; Cooper, Gregory F

    2012-09-01

    This paper investigates Bayesian modeling of known and unknown causes of events in the context of disease-outbreak detection. We introduce a multivariate Bayesian approach that models multiple evidential features of every person in the population. This approach models and detects (1) known diseases (e.g., influenza and anthrax) by using informative prior probabilities and (2) unknown diseases (e.g., a new, highly contagious respiratory virus that has never been seen before) by using relatively non-informative prior probabilities. We report the results of simulation experiments which support that this modeling method can improve the detection of new disease outbreaks in a population. A contribution of this paper is that it introduces a multivariate Bayesian approach for jointly modeling both known and unknown causes of events. Such modeling has general applicability in domains where the space of known causes is incomplete. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  10. Uniform California earthquake rupture forecast, version 2 (UCERF 2)

    USGS Publications Warehouse

    Field, E.H.; Dawson, T.E.; Felzer, K.R.; Frankel, A.D.; Gupta, V.; Jordan, T.H.; Parsons, T.; Petersen, M.D.; Stein, R.S.; Weldon, R.J.; Wills, C.J.

    2009-01-01

    The 2007 Working Group on California Earthquake Probabilities (WGCEP, 2007) presents the Uniform California Earthquake Rupture Forecast, Version 2 (UCERF 2). This model comprises a time-independent (Poisson-process) earthquake rate model, developed jointly with the National Seismic Hazard Mapping Program and a time-dependent earthquake-probability model, based on recent earthquake rates and stress-renewal statistics conditioned on the date of last event. The models were developed from updated statewide earthquake catalogs and fault deformation databases using a uniform methodology across all regions and implemented in the modular, extensible Open Seismic Hazard Analysis framework. The rate model satisfies integrating measures of deformation across the plate-boundary zone and is consistent with historical seismicity data. An overprediction of earthquake rates found at intermediate magnitudes (6.5 ≤ M ≤ 7.0) in previous models has been reduced to within the 95% confidence bounds of the historical earthquake catalog. A logic tree with 480 branches represents the epistemic uncertainties of the full time-dependent model. The mean UCERF 2 time-dependent probability of one or more M ≥ 6.7 earthquakes in the California region during the next 30 yr is 99.7%; this probability decreases to 46% for M ≥ 7.5 and to 4.5% for M ≥ 8.0. These probabilities do not include the Cascadia subduction zone, largely north of California, for which the estimated 30 yr, M ≥ 8.0 time-dependent probability is 10%. The M ≥ 6.7 probabilities on major strike-slip faults are consistent with the WGCEP (2003) study in the San Francisco Bay Area and the WGCEP (1995) study in southern California, except for significantly lower estimates along the San Jacinto and Elsinore faults, owing to provisions for larger multisegment ruptures. Important model limitations are discussed.

  11. Applying the Hájek Approach in Formula-Based Variance Estimation. Research Report. ETS RR-17-24

    ERIC Educational Resources Information Center

    Qian, Jiahe

    2017-01-01

    The variance formula derived for a two-stage sampling design without replacement employs the joint inclusion probabilities in the first-stage selection of clusters. One of the difficulties encountered in data analysis is the lack of information about such joint inclusion probabilities. One way to solve this issue is by applying Hájek's…

  12. Spatial capture-recapture models for jointly estimating population density and landscape connectivity

    USGS Publications Warehouse

    Royle, J. Andrew; Chandler, Richard B.; Gazenski, Kimberly D.; Graves, Tabitha A.

    2013-01-01

    Population size and landscape connectivity are key determinants of population viability, yet no methods exist for simultaneously estimating density and connectivity parameters. Recently developed spatial capture–recapture (SCR) models provide a framework for estimating density of animal populations but thus far have not been used to study connectivity. Rather, all applications of SCR models have used encounter probability models based on the Euclidean distance between traps and animal activity centers, which implies that home ranges are stationary, symmetric, and unaffected by landscape structure. In this paper we devise encounter probability models based on “ecological distance,” i.e., the least-cost path between traps and activity centers, which is a function of both Euclidean distance and animal movement behavior in resistant landscapes. We integrate least-cost path models into a likelihood-based estimation scheme for spatial capture–recapture models in order to estimate population density and parameters of the least-cost encounter probability model. Therefore, it is possible to make explicit inferences about animal density, distribution, and landscape connectivity as it relates to animal movement from standard capture–recapture data. Furthermore, a simulation study demonstrated that ignoring landscape connectivity can result in negatively biased density estimators under the naive SCR model.
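
    A sketch of the "ecological distance" ingredient: compute least-cost path distances from a trap through a resistance surface, then plug them into a half-normal encounter model in place of Euclidean distance. The resistance surface and SCR parameters below are invented, and the paper's likelihood over activity centers is omitted:

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import dijkstra

rng = np.random.default_rng(0)

# Least-cost ("ecological") distances through a resistance surface, then
# a half-normal encounter model p = p0 * exp(-d^2 / (2 * sigma^2)).
n = 20
cost = np.exp(rng.normal(0.0, 0.8, size=(n, n)))   # per-cell resistance
idx = np.arange(n * n).reshape(n, n)

# 4-neighbor graph; edge weight = mean resistance of the two cells.
rows, cols, vals = [], [], []
for di, dj in [(0, 1), (1, 0)]:
    a = idx[: n - di, : n - dj].ravel()
    b = idx[di:, dj:].ravel()
    w = 0.5 * (cost.ravel()[a] + cost.ravel()[b])
    rows += [a, b]; cols += [b, a]; vals += [w, w]
g = coo_matrix((np.concatenate(vals),
                (np.concatenate(rows), np.concatenate(cols))),
               shape=(n * n, n * n)).tocsr()

trap = idx[10, 10]
d_lcp = dijkstra(g, indices=trap)     # least-cost distance to every cell

p0, sigma = 0.3, 4.0                  # baseline encounter prob., scale
p_enc = p0 * np.exp(-d_lcp ** 2 / (2 * sigma ** 2))
print(f"encounter probability at cell (5, 5): {p_enc[idx[5, 5]]:.4f}")
```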

  13. Spatial capture--recapture models for jointly estimating population density and landscape connectivity.

    PubMed

    Royle, J Andrew; Chandler, Richard B; Gazenski, Kimberly D; Graves, Tabitha A

    2013-02-01

    Population size and landscape connectivity are key determinants of population viability, yet no methods exist for simultaneously estimating density and connectivity parameters. Recently developed spatial capture--recapture (SCR) models provide a framework for estimating density of animal populations but thus far have not been used to study connectivity. Rather, all applications of SCR models have used encounter probability models based on the Euclidean distance between traps and animal activity centers, which implies that home ranges are stationary, symmetric, and unaffected by landscape structure. In this paper we devise encounter probability models based on "ecological distance," i.e., the least-cost path between traps and activity centers, which is a function of both Euclidean distance and animal movement behavior in resistant landscapes. We integrate least-cost path models into a likelihood-based estimation scheme for spatial capture-recapture models in order to estimate population density and parameters of the least-cost encounter probability model. Therefore, it is possible to make explicit inferences about animal density, distribution, and landscape connectivity as it relates to animal movement from standard capture-recapture data. Furthermore, a simulation study demonstrated that ignoring landscape connectivity can result in negatively biased density estimators under the naive SCR model.

  14. Joint Probability Models of Radiology Images and Clinical Annotations

    ERIC Educational Resources Information Center

    Arnold, Corey Wells

    2009-01-01

    Radiology data, in the form of images and reports, is growing at a high rate due to the introduction of new imaging modalities, new uses of existing modalities, and the growing importance of objective image information in the diagnosis and treatment of patients. This increase has resulted in an enormous set of image data that is richly annotated…

  15. An MDI (Minimum Discrimination Information) Model and an Algorithm for Composite Hypotheses Testing and Estimation in Marketing. Revision 2.

    DTIC Science & Technology

    1982-09-01

    ...considered to be Markovian, and the fact that Ehrenberg has been openly critical of the use of first-order Markov processes in describing consumer behavior disinclines us to treat these data in this manner. We shall therefore interpret the p(i,i) as joint rather than conditional probabilities...

  16. Farm Management, Environment, and Weather Factors Jointly Affect the Probability of Spinach Contamination by Generic Escherichia coli at the Preharvest Stage

    PubMed Central

    Navratil, Sarah; Gregory, Ashley; Bauer, Arin; Srinath, Indumathi; Szonyi, Barbara; Nightingale, Kendra; Anciso, Juan; Jun, Mikyoung; Han, Daikwon; Lawhon, Sara; Ivanek, Renata

    2014-01-01

    The National Resources Information (NRI) databases provide underutilized information on the local farm conditions that may predict microbial contamination of leafy greens at preharvest. Our objective was to identify NRI weather and landscape factors affecting spinach contamination with generic Escherichia coli individually and jointly with farm management and environmental factors. For each of the 955 georeferenced spinach samples (including 63 positive samples) collected between 2010 and 2012 on 12 farms in Colorado and Texas, we extracted variables describing the local weather (ambient temperature, precipitation, and wind speed) and landscape (soil characteristics and proximity to roads and water bodies) from NRI databases. Variables describing farm management and environment were obtained from a survey of the enrolled farms. The variables were evaluated using a mixed-effect logistic regression model with random effects for farm and date. The model identified precipitation as a single NRI predictor of spinach contamination with generic E. coli, indicating that the contamination probability increases with an increasing mean amount of rain (mm) in the past 29 days (odds ratio [OR] = 3.5). The model also identified the farm's hygiene practices as a protective factor (OR = 0.06) and manure application (OR = 52.2) and state (OR = 108.1) as risk factors. In cross-validation, the model showed a solid predictive performance, with an area under the receiver operating characteristic (ROC) curve of 81%. Overall, the findings highlighted the utility of NRI precipitation data in predicting contamination and demonstrated that farm management, environment, and weather factors should be considered jointly in development of good agricultural practices and measures to reduce produce contamination. PMID:24509926

  17. Simultaneous modeling of habitat suitability, occupancy, and relative abundance: African elephants in Zimbabwe

    USGS Publications Warehouse

    Martin, Julien; Chamaille-Jammes, Simon; Nichols, James D.; Fritz, Herve; Hines, James E.; Fonnesbeck, Christopher J.; MacKenzie, Darryl I.; Bailey, Larissa L.

    2010-01-01

    The recent development of statistical models such as dynamic site occupancy models provides the opportunity to address fairly complex management and conservation problems with relatively simple models. However, surprisingly few empirical studies have simultaneously modeled habitat suitability and occupancy status of organisms over large landscapes for management purposes. Joint modeling of these components is particularly important in the context of management of wild populations, as it provides a more coherent framework to investigate the population dynamics of organisms in space and time for the application of management decision tools. We applied such an approach to the study of water hole use by African elephants in Hwange National Park, Zimbabwe. Here we show how such methodology may be implemented and derive estimates of annual transition probabilities among three dry-season states for water holes: (1) unsuitable state (dry water holes with no elephants); (2) suitable state (water hole with water) with low abundance of elephants; and (3) suitable state with high abundance of elephants. We found that annual rainfall and the number of neighboring water holes influenced the transition probabilities among these three states. Because of an increase in elephant densities in the park during the study period, we also found that transition probabilities from low abundance to high abundance states increased over time. The application of the joint habitat–occupancy models provides a coherent framework to examine how habitat suitability and factors that affect habitat suitability influence the distribution and abundance of organisms. We discuss how these simple models can further be used to apply structured decision-making tools in order to derive decisions that are optimal relative to specified management objectives. The modeling framework presented in this paper should be applicable to a wide range of existing data sets and should help to address important ecological, conservation, and management problems that deal with occupancy, relative abundance, and habitat suitability.
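
    The three dry-season states define a Markov transition structure. A toy version, with made-up entries rather than the paper's estimates, shows how annual transition probabilities translate into long-run water hole occupancy:

    ```python
    # Hedged sketch: 3-state annual transitions for water holes (index 0 =
    # unsuitable/dry, 1 = suitable with low elephant abundance, 2 = suitable
    # with high abundance). All entries are invented for illustration.
    import numpy as np

    P = np.array([[0.50, 0.35, 0.15],   # from dry
                  [0.20, 0.50, 0.30],   # from low abundance
                  [0.10, 0.30, 0.60]])  # from high abundance

    # Long-run state occupancy: leading left eigenvector of P.
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmax(np.real(vals))])
    print((pi / pi.sum()).round(3))
    ```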

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Jiangjiang; Li, Weixuan; Lin, Guang

    In decision-making for groundwater management and contamination remediation, it is important to accurately evaluate the probability of the occurrence of a failure event. For small failure probability analysis, a large number of model evaluations are needed in the Monte Carlo (MC) simulation, which is impractical for CPU-demanding models. One approach to alleviate the computational cost caused by the model evaluations is to construct a computationally inexpensive surrogate model instead. However, using a surrogate approximation can cause an extra error in the failure probability analysis. Moreover, constructing accurate surrogates is challenging for high-dimensional models, i.e., models containing many uncertain input parameters. To address these issues, we propose an efficient two-stage MC approach for small failure probability analysis in high-dimensional groundwater contaminant transport modeling. In the first stage, a low-dimensional representation of the original high-dimensional model is sought with Karhunen–Loève expansion and sliced inverse regression jointly, which allows for the easy construction of a surrogate with polynomial chaos expansion. Then a surrogate-based MC simulation is implemented. In the second stage, the small number of samples that are close to the failure boundary are re-evaluated with the original model, which corrects the bias introduced by the surrogate approximation. The proposed approach is tested with a numerical case study and is shown to be 100 times faster than the traditional MC approach in achieving the same level of estimation accuracy.
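
    The two-stage logic can be illustrated with a toy one-dimensional problem. The sketch below is a schematic of the idea only, with an invented "expensive" model, surrogate, failure threshold, and boundary band width, rather than the paper's Karhunen-Loève/sliced-inverse-regression construction:

    ```python
    # Toy two-stage Monte Carlo: a cheap surrogate screens all samples; only
    # samples near the failure boundary are re-run with the expensive model.
    import numpy as np

    rng = np.random.default_rng(42)
    expensive = lambda x: np.sin(x) + 0.1 * x ** 2       # stand-in for a PDE model
    surrogate = lambda x: x - x ** 3 / 6 + 0.1 * x ** 2  # cheap Taylor surrogate
    threshold = 1.2                                      # failure: output > threshold

    x = rng.normal(0.0, 1.0, 200_000)
    g = surrogate(x)

    fail = g > threshold                  # stage 1: classify with the surrogate
    band = np.abs(g - threshold) < 0.2    # stage 2: re-check near the boundary
    fail[band] = expensive(x[band]) > threshold

    print("failure probability:", fail.mean(), "re-evaluations:", band.sum())
    ```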

  19. Inference of reaction rate parameters based on summary statistics from experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khalil, Mohammad; Chowdhary, Kamaljit Singh; Safta, Cosmin

    Here, we present the results of an application of Bayesian inference and maximum entropy methods for the estimation of the joint probability density for the Arrhenius rate parameters of the rate coefficient of the H2/O2-mechanism chain branching reaction H + O2 → OH + O. Available published data is in the form of summary statistics in terms of nominal values and error bars of the rate coefficient of this reaction at a number of temperature values obtained from shock-tube experiments. Our approach relies on generating data, in this case OH concentration profiles, consistent with the given summary statistics, using Approximate Bayesian Computation methods and a Markov Chain Monte Carlo procedure. The approach permits the forward propagation of parametric uncertainty through the computational model in a manner that is consistent with the published statistics. A consensus joint posterior on the parameters is obtained by pooling the posterior parameter densities given each consistent data set. To expedite this process, we construct efficient surrogates for the OH concentration using a combination of Padé and polynomial approximants. These surrogate models adequately represent forward model observables and their dependence on input parameters and are computationally efficient to allow their use in the Bayesian inference procedure. We also utilize Gauss-Hermite quadrature with Gaussian proposal probability density functions for moment computation resulting in orders of magnitude speedup in data likelihood evaluation. Despite the strong non-linearity in the model, the consistent data sets all result in nearly Gaussian conditional parameter probability density functions. The technique also accounts for nuisance parameters in the form of Arrhenius parameters of other rate coefficients with prescribed uncertainty. The resulting pooled parameter probability density function is propagated through stoichiometric hydrogen-air auto-ignition computations to illustrate the need to account for correlation among the Arrhenius rate parameters of one reaction and across rate parameters of different reactions.

  20. Inference of reaction rate parameters based on summary statistics from experiments

    DOE PAGES

    Khalil, Mohammad; Chowdhary, Kamaljit Singh; Safta, Cosmin; ...

    2016-10-15

    Here, we present the results of an application of Bayesian inference and maximum entropy methods for the estimation of the joint probability density for the Arrhenius rate parameters of the rate coefficient of the H2/O2-mechanism chain branching reaction H + O2 → OH + O. Available published data is in the form of summary statistics in terms of nominal values and error bars of the rate coefficient of this reaction at a number of temperature values obtained from shock-tube experiments. Our approach relies on generating data, in this case OH concentration profiles, consistent with the given summary statistics, using Approximate Bayesian Computation methods and a Markov Chain Monte Carlo procedure. The approach permits the forward propagation of parametric uncertainty through the computational model in a manner that is consistent with the published statistics. A consensus joint posterior on the parameters is obtained by pooling the posterior parameter densities given each consistent data set. To expedite this process, we construct efficient surrogates for the OH concentration using a combination of Padé and polynomial approximants. These surrogate models adequately represent forward model observables and their dependence on input parameters and are computationally efficient to allow their use in the Bayesian inference procedure. We also utilize Gauss-Hermite quadrature with Gaussian proposal probability density functions for moment computation resulting in orders of magnitude speedup in data likelihood evaluation. Despite the strong non-linearity in the model, the consistent data sets all result in nearly Gaussian conditional parameter probability density functions. The technique also accounts for nuisance parameters in the form of Arrhenius parameters of other rate coefficients with prescribed uncertainty. The resulting pooled parameter probability density function is propagated through stoichiometric hydrogen-air auto-ignition computations to illustrate the need to account for correlation among the Arrhenius rate parameters of one reaction and across rate parameters of different reactions.
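
    A toy rejection-ABC version of the core step conveys the flavour; the nominal rates, error bars, priors, and temperatures below are invented, and the real study matches OH concentration profiles rather than raw rate coefficients:

    ```python
    # Toy ABC: keep Arrhenius draws (ln A, E) whose k(T) = A * exp(-E/(R*T))
    # falls inside error bars at a few temperatures. All numbers are invented.
    import numpy as np

    R = 8.314                                    # J/(mol K)
    T = np.array([1000.0, 1500.0, 2000.0])       # assumed measurement temperatures
    k_nom = 1.0e11 * np.exp(-6.0e4 / (R * T))    # invented nominal rates
    k_err = 0.3 * k_nom                          # invented error bars

    rng = np.random.default_rng(7)
    logA = rng.uniform(24.0, 27.0, 100_000)      # prior on ln A
    E = rng.uniform(4.0e4, 8.0e4, 100_000)       # prior on activation energy
    k = np.exp(logA[:, None] - E[:, None] / (R * T))
    keep = (np.abs(k - k_nom) < k_err).all(axis=1)

    print("acceptance rate:", keep.mean())
    print("posterior means (ln A, E):", logA[keep].mean(), E[keep].mean())
    ```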

  1. A two-component rain model for the prediction of attenuation statistics

    NASA Technical Reports Server (NTRS)

    Crane, R. K.

    1982-01-01

    A two-component rain model has been developed for calculating attenuation statistics. In contrast to most other attenuation prediction models, the two-component model calculates the occurrence probability for volume cells or debris attenuation events. The model performed significantly better than the International Radio Consultative Committee model when used for predictions on earth-satellite paths. It is expected that the model will have applications in modeling the joint statistics required for space diversity system design, the statistics of interference due to rain scatter at attenuating frequencies, and the duration statistics for attenuation events.

  2. Applications of the Galton Watson process to human DNA evolution and demography

    NASA Astrophysics Data System (ADS)

    Neves, Armando G. M.; Moreira, Carlos H. C.

    2006-08-01

    We show that the problem of existence of a mitochondrial Eve can be understood as an application of the Galton-Watson process and presents interesting analogies with critical phenomena in Statistical Mechanics. In the approximation of small survival probability, and assuming limited progeny, we are able to find for a genealogic tree the maximum and minimum survival probabilities over all probability distributions for the number of children per woman constrained to a given mean. As a consequence, we can relate existence of a mitochondrial Eve to quantitative demographic data of early mankind. In particular, we show that a mitochondrial Eve may exist even in an exponentially growing population, provided that the mean number of children per woman N̄ is constrained to a small range depending on the probability p that a child is a female. Assuming that the value p≈0.488 valid nowadays has remained fixed for thousands of generations, the range where a mitochondrial Eve occurs with sizeable probability is 2.0492
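
    The survival calculation underlying this abstract is a textbook fixed-point computation: a matrilineal line dies out with probability q satisfying q = f(q), where f is the probability generating function of the number of daughters per woman. Below is a hedged sketch with an assumed Poisson offspring law (the paper itself works with general constrained distributions), using the p ≈ 0.488 from the abstract and an illustrative mean number of children:

    ```python
    # Galton-Watson extinction probability for a matrilineal lineage,
    # assuming Poisson offspring; mean_children is an illustrative value.
    import numpy as np

    p_female = 0.488                     # from the abstract
    mean_children = 2.2                  # assumed mean children per woman
    lam = p_female * mean_children       # mean daughters per woman

    # For Poisson(lam), f(s) = exp(lam * (s - 1)); iterating from 0 converges
    # to the smallest fixed point, which is the extinction probability.
    q = 0.0
    for _ in range(200):
        q = np.exp(lam * (q - 1.0))

    print("extinction:", round(q, 4), "survival:", round(1.0 - q, 4))
    ```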

  3. Independent Component Analysis of Textures

    NASA Technical Reports Server (NTRS)

    Manduchi, Roberto; Portilla, Javier

    2000-01-01

    A common method for texture representation is to use the marginal probability densities over the outputs of a set of multi-orientation, multi-scale filters as a description of the texture. We propose a technique, based on Independent Components Analysis, for choosing the set of filters that yield the most informative marginals, meaning that the product over the marginals most closely approximates the joint probability density function of the filter outputs. The algorithm is implemented using a steerable filter space. Experiments involving both texture classification and synthesis show that compared to Principal Components Analysis, ICA provides superior performance for modeling of natural and synthetic textures.
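
    A rough modern analogue of the procedure, with scikit-learn assumed available and random data standing in for filter responses over texture patches:

    ```python
    # Sketch: FastICA finds components whose responses are as independent as
    # possible, so the product of response marginals best matches their joint
    # density; PCA is the comparison baseline. The "patches" are placeholders.
    import numpy as np
    from sklearn.decomposition import FastICA, PCA

    rng = np.random.default_rng(5)
    patches = rng.laplace(size=(5000, 64))    # stand-in for 8x8 texture patches

    ica = FastICA(n_components=16, random_state=0, max_iter=1000).fit(patches)
    pca = PCA(n_components=16).fit(patches)
    print(ica.components_.shape, pca.components_.shape)   # 16 filters each
    ```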

  4. A framework for conducting mechanistic based reliability assessments of components operating in complex systems

    NASA Astrophysics Data System (ADS)

    Wallace, Jon Michael

    2003-10-01

    Reliability prediction of components operating in complex systems has historically been conducted in a statistically isolated manner. Current physics-based, i.e. mechanistic, component reliability approaches focus more on component-specific attributes and mathematical algorithms and not enough on the influence of the system. The result is that significant error can be introduced into the component reliability assessment process. The objective of this study is the development of a framework that infuses the needs and influence of the system into the process of conducting mechanistic-based component reliability assessments. The formulated framework consists of six primary steps. The first three steps, identification, decomposition, and synthesis, are primarily qualitative in nature and employ system reliability and safety engineering principles to construct an appropriate starting point for the component reliability assessment. The following two steps are the most unique. They involve a step to efficiently characterize and quantify the system-driven local parameter space and a subsequent step using this information to guide the reduction of the component parameter space. The local statistical space quantification step is accomplished using two proposed multivariate probability models: Multi-Response First Order Second Moment and Taylor-Based Inverse Transformation. Where existing joint probability models require preliminary distribution and correlation information of the responses, these models combine statistical information of the input parameters with an efficient sampling of the response analyses to produce the multi-response joint probability distribution. Parameter space reduction is accomplished using Approximate Canonical Correlation Analysis (ACCA) employed as a multi-response screening technique. The novelty of this approach is that each individual local parameter and even subsets of parameters representing entire contributing analyses can now be rank ordered with respect to their contribution to not just one response, but the entire vector of component responses simultaneously. The final step of the framework is the actual probabilistic assessment of the component. Although the same multivariate probability tools employed in the characterization step can be used for the component probability assessment, variations of this final step are given to allow for the utilization of existing probabilistic methods such as response surface Monte Carlo and Fast Probability Integration. The overall framework developed in this study is implemented to assess the finite-element based reliability prediction of a gas turbine airfoil involving several failure responses. Results of this implementation are compared to results generated using the conventional 'isolated' approach as well as a validation approach conducted through large sample Monte Carlo simulations. The framework resulted in a considerable improvement to the accuracy of the part reliability assessment and an improved understanding of the component failure behavior. Considerable statistical complexity in the form of joint non-normal behavior was found and accounted for using the framework. Future applications of the framework elements are discussed.

  5. Approximate Uncertainty Modeling in Risk Analysis with Vine Copulas

    PubMed Central

    Bedford, Tim; Daneshkhah, Alireza

    2015-01-01

    Many applications of risk analysis require us to jointly model multiple uncertain quantities. Bayesian networks and copulas are two common approaches to modeling joint uncertainties with probability distributions. This article focuses on new methodologies for copulas by developing work of Cooke, Bedford, Kurowicka, and others on vines as a way of constructing higher dimensional distributions that do not suffer from some of the restrictions of alternatives such as the multivariate Gaussian copula. The article provides a fundamental approximation result, demonstrating that we can approximate any density as closely as we like using vines. It further operationalizes this result by showing how minimum information copulas can be used to provide parametric classes of copulas that have such good levels of approximation. We extend previous approaches using vines by considering nonconstant conditional dependencies, which are particularly relevant in financial risk modeling. We discuss how such models may be quantified, in terms of expert judgment or by fitting data, and illustrate the approach by modeling two financial data sets. PMID:26332240
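
    As a concrete taste of copula-based joint modeling, here is a hedged sketch (not the article's vine construction): it samples a Clayton copula by the Marshall-Olkin frailty method and pushes the dependent uniforms through arbitrary marginals. The parameter theta and the marginals are illustrative choices.

    ```python
    # Clayton copula sampling via a Gamma frailty, then mapping the dependent
    # uniforms through chosen marginals. All parameters are illustrative.
    import numpy as np
    from scipy.stats import expon, lognorm

    rng = np.random.default_rng(3)
    theta, n = 2.0, 10_000
    V = rng.gamma(1.0 / theta, 1.0, n)               # Gamma frailty
    E = rng.exponential(1.0, (n, 2))
    U = (1.0 + E / V[:, None]) ** (-1.0 / theta)     # dependent Uniform(0,1) pair

    x = lognorm(0.5).ppf(U[:, 0])                    # e.g. a loss severity
    y = expon(scale=2.0).ppf(U[:, 1])                # e.g. a second risk factor
    print("correlation of uniforms:", np.corrcoef(U[:, 0], U[:, 1])[0, 1].round(2))
    ```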

  6. Time-varying SMART design and data analysis methods for evaluating adaptive intervention effects.

    PubMed

    Dai, Tianjiao; Shete, Sanjay

    2016-08-30

    In a standard two-stage SMART design, the intermediate response to the first-stage intervention is measured at a fixed time point for all participants. Subsequently, responders and non-responders are re-randomized and the final outcome of interest is measured at the end of the study. To reduce the side effects and costs associated with first-stage interventions in a SMART design, we proposed a novel time-varying SMART design in which individuals are re-randomized to the second-stage interventions as soon as a pre-fixed intermediate response is observed. With this strategy, the duration of the first-stage intervention will vary. We developed a time-varying mixed effects model and a joint model that allows for modeling the outcomes of interest (intermediate and final) and the random durations of the first-stage interventions simultaneously. The joint model borrows strength from the survival sub-model in which the duration of the first-stage intervention (i.e., time to response to the first-stage intervention) is modeled. We performed a simulation study to evaluate the statistical properties of these models. Our simulation results showed that the two modeling approaches were both able to provide good estimations of the means of the final outcomes of all the embedded interventions in a SMART. However, the joint modeling approach was more accurate for estimating the coefficients of first-stage interventions and time of the intervention. We conclude that the joint modeling approach provides more accurate parameter estimates and a higher estimated coverage probability than the single time-varying mixed effects model, and we recommend the joint model for analyzing data generated from time-varying SMART designs. In addition, we showed that the proposed time-varying SMART design is cost-efficient and equally effective in selecting the optimal embedded adaptive intervention as the standard SMART design.

  7. Birth/birth-death processes and their computable transition probabilities with biological applications.

    PubMed

    Ho, Lam Si Tung; Xu, Jason; Crawford, Forrest W; Minin, Vladimir N; Suchard, Marc A

    2018-03-01

    Birth-death processes track the size of a univariate population, but many biological systems involve interaction between populations, necessitating models for two or more populations simultaneously. A lack of efficient methods for evaluating finite-time transition probabilities of bivariate processes, however, has restricted statistical inference in these models. Researchers rely on computationally expensive methods such as matrix exponentiation or Monte Carlo approximation, restricting likelihood-based inference to small systems, or indirect methods such as approximate Bayesian computation. In this paper, we introduce the birth/birth-death process, a tractable bivariate extension of the birth-death process, where rates are allowed to be nonlinear. We develop an efficient algorithm to calculate its transition probabilities using a continued fraction representation of their Laplace transforms. Next, we identify several exemplary models arising in molecular epidemiology, macro-parasite evolution, and infectious disease modeling that fall within this class, and demonstrate advantages of our proposed method over existing approaches to inference in these models. Notably, the ubiquitous stochastic susceptible-infectious-removed (SIR) model falls within this class, and we emphasize that computable transition probabilities newly enable direct inference of parameters in the SIR model. We also propose a very fast method for approximating the transition probabilities under the SIR model via a novel branching process simplification, and compare it to the continued fraction representation method with application to the 17th century plague in Eyam. Although the two methods produce similar maximum a posteriori estimates, the branching process approximation fails to capture the correlation structure in the joint posterior distribution.
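
    For scale, the brute-force alternative the authors improve upon can be written in a few lines for a univariate linear birth-death process. This sketch (illustrative rates and truncation, not the paper's continued-fraction algorithm) computes finite-time transition probabilities by exponentiating a truncated generator:

    ```python
    # Truncated-generator baseline for a linear birth-death process:
    # P(t) = expm(Q t). Rates and truncation level are illustrative.
    import numpy as np
    from scipy.linalg import expm

    lam, mu, N = 0.8, 1.0, 200          # per-capita birth/death rates, cutoff
    Q = np.zeros((N + 1, N + 1))
    for i in range(N + 1):
        if i < N:
            Q[i, i + 1] = lam * i       # birth: i -> i + 1
        if i > 0:
            Q[i, i - 1] = mu * i        # death: i -> i - 1
        Q[i, i] = -Q[i].sum()

    P_t = expm(Q * 1.5)                 # transition matrix at time t = 1.5
    print("P(X(1.5) = 3 | X(0) = 5) =", P_t[5, 3])
    ```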

  8. Positive phase space distributions and uncertainty relations

    NASA Technical Reports Server (NTRS)

    Kruger, Jan

    1993-01-01

    In contrast to a widespread belief, Wigner's theorem allows the construction of true joint probabilities in phase space for distributions describing the object system as well as for distributions depending on the measurement apparatus. The fundamental role of Heisenberg's uncertainty relations in Schroedinger form (including correlations) is pointed out for these two possible interpretations of joint probability distributions. Hence, in order that a multivariate normal probability distribution in phase space may correspond to a Wigner distribution of a pure or a mixed state, it is necessary and sufficient that Heisenberg's uncertainty relation in Schroedinger form should be satisfied.
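
    For reference, the Schrödinger form of the uncertainty relation invoked here, with the covariance (correlation) term included, reads:

    ```latex
    \sigma_x^2 \, \sigma_p^2 \;\ge\; \left(\frac{\hbar}{2}\right)^{2} + \sigma_{xp}^{2},
    \qquad
    \sigma_{xp} \;\equiv\; \tfrac{1}{2}\,\langle \hat{x}\hat{p} + \hat{p}\hat{x} \rangle
    - \langle \hat{x} \rangle \langle \hat{p} \rangle .
    ```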

  9. Naive Probability: Model-Based Estimates of Unique Events.

    PubMed

    Khemlani, Sangeet S; Lotstein, Max; Johnson-Laird, Philip N

    2015-08-01

    We describe a dual-process theory of how individuals estimate the probabilities of unique events, such as Hillary Clinton becoming U.S. President. It postulates that uncertainty is a guide to improbability. In its computer implementation, an intuitive system 1 simulates evidence in mental models and forms analog non-numerical representations of the magnitude of degrees of belief. This system has minimal computational power and combines evidence using a small repertoire of primitive operations. It resolves the uncertainty of divergent evidence for single events, for conjunctions of events, and for inclusive disjunctions of events, by taking a primitive average of non-numerical probabilities. It computes conditional probabilities in a tractable way, treating the given event as evidence that may be relevant to the probability of the dependent event. A deliberative system 2 maps the resulting representations into numerical probabilities. With access to working memory, it carries out arithmetical operations in combining numerical estimates. Experiments corroborated the theory's predictions. Participants concurred in estimates of real possibilities. They violated the complete joint probability distribution in the predicted ways, when they made estimates about conjunctions: P(A), P(B), P(A and B), disjunctions: P(A), P(B), P(A or B or both), and conditional probabilities P(A), P(B), P(B|A). They were faster to estimate the probabilities of compound propositions when they had already estimated the probabilities of each of their components. We discuss the implications of these results for theories of probabilistic reasoning. © 2014 Cognitive Science Society, Inc.

  10. Optimal Power Allocation Strategy in a Joint Bistatic Radar and Communication System Based on Low Probability of Intercept

    PubMed Central

    Wang, Fei; Salous, Sana; Zhou, Jianjiang

    2017-01-01

    In this paper, we investigate a low probability of intercept (LPI)-based optimal power allocation strategy for a joint bistatic radar and communication system, which is composed of a dedicated transmitter, a radar receiver, and a communication receiver. The joint system is capable of fulfilling the requirements of both radar and communications simultaneously. First, assuming that the signal-to-noise ratio (SNR) corresponding to the target surveillance path is much weaker than that corresponding to the line of sight path at radar receiver, the analytically closed-form expression for the probability of false alarm is calculated, whereas the closed-form expression for the probability of detection is not analytically tractable and is approximated due to the fact that the received signals are not zero-mean Gaussian under target presence hypothesis. Then, an LPI-based optimal power allocation strategy is presented to minimize the total transmission power for information signal and radar waveform, which is constrained by a specified information rate for the communication receiver and the desired probabilities of detection and false alarm for the radar receiver. The well-known bisection search method is employed to solve the resulting constrained optimization problem. Finally, numerical simulations are provided to reveal the effects of several system parameters on the power allocation results. It is also demonstrated that the LPI performance of the joint bistatic radar and communication system can be markedly improved by utilizing the proposed scheme. PMID:29186850

  11. Optimal Power Allocation Strategy in a Joint Bistatic Radar and Communication System Based on Low Probability of Intercept.

    PubMed

    Shi, Chenguang; Wang, Fei; Salous, Sana; Zhou, Jianjiang

    2017-11-25

    In this paper, we investigate a low probability of intercept (LPI)-based optimal power allocation strategy for a joint bistatic radar and communication system, which is composed of a dedicated transmitter, a radar receiver, and a communication receiver. The joint system is capable of fulfilling the requirements of both radar and communications simultaneously. First, assuming that the signal-to-noise ratio (SNR) corresponding to the target surveillance path is much weaker than that corresponding to the line of sight path at radar receiver, the analytically closed-form expression for the probability of false alarm is calculated, whereas the closed-form expression for the probability of detection is not analytically tractable and is approximated due to the fact that the received signals are not zero-mean Gaussian under target presence hypothesis. Then, an LPI-based optimal power allocation strategy is presented to minimize the total transmission power for information signal and radar waveform, which is constrained by a specified information rate for the communication receiver and the desired probabilities of detection and false alarm for the radar receiver. The well-known bisection search method is employed to solve the resulting constrained optimization problem. Finally, numerical simulations are provided to reveal the effects of several system parameters on the power allocation results. It is also demonstrated that the LPI performance of the joint bistatic radar and communication system can be markedly improved by utilizing the proposed scheme.
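
    The numerical workhorse here is ordinary bisection on a monotone constraint. A schematic sketch follows, with a generic Shannon-rate constraint standing in for the paper's system model; the channel gain, noise power, and required rate are invented:

    ```python
    # Bisection for the minimal transmit power meeting a rate constraint;
    # rate(p) is increasing in p, so bisection converges to the boundary.
    import numpy as np

    h2, noise, rate_req = 0.6, 1.0, 2.0        # |h|^2, noise power, bits/s/Hz
    rate = lambda p: np.log2(1.0 + h2 * p / noise)

    lo, hi = 0.0, 1.0e4                        # bracket for the minimal power
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if rate(mid) < rate_req:
            lo = mid                           # constraint violated: need more power
        else:
            hi = mid                           # constraint met: try less power
    print("minimal power:", hi, "rate achieved:", rate(hi))
    ```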

  12. Integrated Data Analysis for Fusion: A Bayesian Tutorial for Fusion Diagnosticians

    NASA Astrophysics Data System (ADS)

    Dinklage, Andreas; Dreier, Heiko; Fischer, Rainer; Gori, Silvio; Preuss, Roland; Toussaint, Udo von

    2008-03-01

    Integrated Data Analysis (IDA) offers a unified way of combining information relevant to fusion experiments and thereby addresses typical issues arising in fusion data analysis. In IDA, all information is consistently formulated as probability density functions quantifying uncertainties in the analysis within Bayesian probability theory. For a single diagnostic, IDA allows the identification of faulty measurements and improvements in the setup. For a set of diagnostics, IDA gives joint error distributions, allowing the comparison and integration of different diagnostics' results. Validation of physics models can be performed by model comparison techniques. Typical data analysis applications benefit from IDA's capabilities of nonlinear error propagation, the inclusion of systematic effects, and the comparison of different physics models. Applications range from outlier detection and background discrimination to model assessment and the design of diagnostics. In order to cope with next-step fusion device requirements, appropriate techniques are explored for fast analysis applications.

  13. Probabilistic image modeling with an extended chain graph for human activity recognition and image segmentation.

    PubMed

    Zhang, Lei; Zeng, Zhi; Ji, Qiang

    2011-09-01

    Chain graph (CG) is a hybrid probabilistic graphical model (PGM) capable of modeling heterogeneous relationships among random variables. So far, however, its application in image and video analysis has been very limited due to the lack of principled learning and inference methods for a CG of general topology. To overcome this limitation, we introduce methods to extend the conventional chain-like CG model to a CG model with more general topology, together with the associated methods for learning and inference in such a general CG model. Specifically, we propose techniques to systematically construct a generally structured CG, to parameterize this model, to derive its joint probability distribution, to perform joint parameter learning, and to perform probabilistic inference in this model. To demonstrate the utility of such an extended CG, we apply it to two challenging image and video analysis problems: human activity recognition and image segmentation. The experimental results show improved performance of the extended CG model over the conventional directed or undirected PGMs. This study demonstrates the promise of the extended CG for effective modeling and inference of complex real-world problems.

  14. Mixture Modeling for Background and Sources Separation in x-ray Astronomical Images

    NASA Astrophysics Data System (ADS)

    Guglielmetti, Fabrizia; Fischer, Rainer; Dose, Volker

    2004-11-01

    A probabilistic technique for the joint estimation of background and sources in high-energy astrophysics is described. Bayesian probability theory is applied to gain insight into the coexistence of background and sources through a probabilistic two-component mixture model, which provides consistent uncertainties of background and sources. The present analysis is applied to ROSAT PSPC data (0.1-2.4 keV) in Survey Mode. A background map is modelled using a Thin-Plate spline. Source probability maps are obtained for each pixel (45 arcsec) independently and for larger correlation lengths, revealing faint and extended sources. We will demonstrate that the described probabilistic method allows for detection improvement of faint extended celestial sources compared to the Standard Analysis Software System (SASS) used for the production of the ROSAT All-Sky Survey (RASS) catalogues.
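
    Stripped to its core, the two-component mixture assigns each pixel a posterior source probability by weighing a background-only likelihood against a background-plus-source likelihood. A toy Poisson version with invented rates, prior, and counts:

    ```python
    # Per-pixel posterior source probability in a two-component Poisson
    # mixture; all numbers are illustrative, not ROSAT values.
    import numpy as np
    from scipy.stats import poisson

    b, s, prior_src = 3.0, 4.0, 0.1          # background rate, source excess, prior
    counts = np.array([2, 3, 9, 4, 12])      # observed photon counts per pixel

    like_bkg = poisson(b).pmf(counts)
    like_src = poisson(b + s).pmf(counts)
    post_src = (prior_src * like_src
                / (prior_src * like_src + (1 - prior_src) * like_bkg))
    print(post_src.round(3))                 # source probability map
    ```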

  15. A TWO-STATE MIXED HIDDEN MARKOV MODEL FOR RISKY TEENAGE DRIVING BEHAVIOR

    PubMed Central

    Jackson, John C.; Albert, Paul S.; Zhang, Zhiwei

    2016-01-01

    This paper proposes a joint model for longitudinal binary and count outcomes. We apply the model to a unique longitudinal study of teen driving where risky driving behavior and the occurrence of crashes or near crashes are measured prospectively over the first 18 months of licensure. Of scientific interest is relating the two processes and predicting crash and near crash outcomes. We propose a two-state mixed hidden Markov model whereby the hidden state characterizes the mean for the joint longitudinal crash/near crash outcomes and elevated g-force events which are a proxy for risky driving. Heterogeneity is introduced in both the conditional model for the count outcomes and the hidden process using a shared random effect. An estimation procedure is presented using the forward–backward algorithm along with adaptive Gaussian quadrature to perform numerical integration. The estimation procedure readily yields hidden state probabilities as well as providing for a broad class of predictors. PMID:27766124
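
    The hidden state probabilities mentioned at the end come from the forward recursion. A bare-bones two-state filter is sketched below, with invented transition and emission tables replacing the paper's mixed-model machinery:

    ```python
    # Forward (filtering) pass of a two-state HMM: state 0 = lower-risk,
    # state 1 = risky driving. Matrices and observations are invented.
    import numpy as np

    P = np.array([[0.9, 0.1],        # state transition probabilities
                  [0.3, 0.7]])
    emit = np.array([[0.8, 0.2],     # P(obs bin | state); toy 2-bin emission
                     [0.3, 0.7]])
    init = np.array([0.7, 0.3])
    obs = [0, 1, 1, 0, 1]            # binned elevated g-force event counts

    alpha = init * emit[:, obs[0]]
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ P) * emit[:, o]
        alpha /= alpha.sum()         # normalised filtered state probabilities
    print("P(risky state | data):", alpha[1].round(3))
    ```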

  16. A model to explain joint patterns found in ignimbrite deposits

    NASA Astrophysics Data System (ADS)

    Tibaldi, A.; Bonali, F. L.

    2018-03-01

    The study of fracture systems is of paramount importance for economic applications, such as CO2 storage in rock successions and geothermal and hydrocarbon exploration and exploitation, and also for a better understanding of seismogenic fault formation. Understanding the origin of joints can be useful for tectonic studies and for the geotechnical characterisation of rock masses. Here, we illustrate a joint pattern discovered in ignimbrite deposits of South America, which can be confused with conjugate tectonic joint sets but has another origin. The pattern is probably common, but recognisable only in plan view and before tectonic deformation obscures and overprints it. Key sites have been studied mostly by field surveys in Bolivia and Chile. The pattern consists of swarms of master joints, hundreds of metres to kilometres long, which show circular to semi-circular geometries and intersections with "X" and "Y" patterns. Inside each swarm, joints are systematic, rectilinear or curvilinear in plan view, and as much as 900 m long. In section view, they are sub-vertical to vertical and do not affect the underlying deposits. Joints with different orientations mostly interrupt each other, suggesting they have the same age. This joint architecture is interpreted here as resulting from differential contraction after emplacement of the ignimbrite deposit above a complex topography. The set of the joint pattern that is suitably oriented with respect to tectonic stresses may act to nucleate faults.

  17. Quantifying Hydroperiod, Fire and Nutrient Effects on the Composition of Plant Communities in Marl Prairie of the Everglades: a Joint Probability Method Based Model

    NASA Astrophysics Data System (ADS)

    Zhai, L.

    2017-12-01

    Plant communities can be affected simultaneously by human activities and climate change; quantifying and predicting this combined effect with an appropriate model framework validated by field data is complex but very useful for conservation management. Plant communities in the Everglades provide a unique set of conditions for developing and validating such a model framework, because they experience both intensive human activities (such as hydroperiod changes from drainage and restoration projects, nutrients from upstream agriculture, and prescribed fire) and climate changes (such as warming, changing precipitation patterns, and sea level rise). More importantly, previous research has focused on plant communities of the slough ecosystem (including ridges, sloughs, and their tree islands); very few studies consider the marl prairie ecosystem. In contrast to the slough ecosystem, which remains flooded almost year-round, the marl prairie has a relatively short hydroperiod, confined to the wet season of the year. Plant communities of the marl prairie may therefore be more strongly affected by hydroperiod change. In addition to hydroperiod, fire and nutrients also affect the plant communities of the marl prairie. To quantify the combined effects of water level, fire, and nutrients on the composition of these plant communities, we are developing a vegetation dynamics model based on a joint probability method. The model is being validated against field data on changes in vegetation assemblages along environmental gradients in the marl prairie. Our poster presents preliminary data from this ongoing project.

  18. Generation of multivariate near shore extreme wave conditions based on an extreme value copula for offshore boundary conditions.

    NASA Astrophysics Data System (ADS)

    Leyssen, Gert; Mercelis, Peter; De Schoesitter, Philippe; Blanckaert, Joris

    2013-04-01

    Near shore extreme wave conditions, used as input for numerical wave agitation simulations and for the dimensioning of coastal defense structures, need to be determined at a harbour entrance situated on the French North Sea coast. To obtain significant wave heights, the numerical wave model SWAN has been used. A multivariate approach was used to account for the joint probabilities. The variables considered are wind velocity and direction, water level, and significant offshore wave height and wave period. In a first step, a univariate extreme value distribution was determined for the main variables. By means of a technique based on the mean excess function, an appropriate member of the GPD family is selected. An optimal threshold for peak-over-threshold selection is determined by maximum likelihood optimization. Next, the joint dependency structure of the primary random variables is modeled by an extreme value copula. Eventually the multivariate domain of variables was stratified into different classes, each representing a combination of variable quantiles with a joint probability; these classes are used for model simulation. The main variable is the wind velocity, as extreme wave conditions in the area of concern are wind driven. The analysis is repeated for 9 different wind directions. The secondary variable is water level. In shallow water, extreme waves are directly affected by water depth, so the joint probability of occurrence of water level and wave height is of major importance for the design of coastal defense structures. Wind velocity and water level are dependent only for some wind directions (wind-induced setup). Dependent directions were detected using Kendall and Spearman tests and turned out to be those with the longest fetch. For these directions, the wind velocity and water level extreme value distributions are linked multivariately through a Gumbel copula; the resulting distributions are stratified into classes for which the frequency of occurrence can be calculated. For the remaining directions, the univariate extreme wind velocity distribution is stratified, and each class is combined with 5 high water levels. The wave height at the model boundaries was taken into account by a regression against the extreme wind velocity at the offshore location; the regression line and the 95% confidence limits were combined with each class. Finally, the wave period is computed by a further regression against the significant wave height. In this way 1103 synthetic events were selected and simulated with the SWAN wave model, and a frequency of occurrence was calculated for each. Hence near shore significant wave heights are obtained with corresponding frequencies. The statistical distribution of the near shore wave heights is determined by sorting the model results in descending order and accumulating the corresponding frequencies. This approach allows determination of conditional return periods. For example, for imposed univariate design return periods of 100 years for significant wave height and 30 years for water level, the joint return period for a simultaneous exceedance of both conditions can be computed as 4000 years. Hence, this methodology allows for a probabilistic design of coastal defense structures.
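
    Two steps of this pipeline compress into a few lines of code. The sketch below is illustrative only: synthetic wind data replace the hindcast, the GPD threshold and Gumbel copula parameter are invented, and the joint exceedance probability is computed by inclusion-exclusion:

    ```python
    # (1) Peaks-over-threshold fit of a generalized Pareto tail;
    # (2) joint exceedance probability from a Gumbel copula.
    import numpy as np
    from scipy.stats import genpareto

    rng = np.random.default_rng(11)
    wind = rng.gumbel(20.0, 4.0, 20_000)       # synthetic wind speeds
    u = np.quantile(wind, 0.95)                # POT threshold
    c, loc, scale = genpareto.fit(wind[wind > u] - u, floc=0.0)

    # Gumbel copula C(u1, u2) = exp(-((-ln u1)^a + (-ln u2)^a)^(1/a)).
    a = 2.0
    C = lambda u1, u2: np.exp(-(((-np.log(u1)) ** a
                                 + (-np.log(u2)) ** a) ** (1.0 / a)))

    # P(U1 > p, U2 > p) = 1 - 2p + C(p, p): simultaneous exceedance of the
    # marginal level p; its reciprocal (in events) is a joint return period.
    p = 0.99
    print("GPD shape:", round(c, 3), "joint exceedance:", 1 - 2 * p + C(p, p))
    ```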

  19. Observation of non-classical correlations in sequential measurements of photon polarization

    NASA Astrophysics Data System (ADS)

    Suzuki, Yutaro; Iinuma, Masataka; Hofmann, Holger F.

    2016-10-01

    A sequential measurement of two non-commuting quantum observables results in a joint probability distribution for all output combinations that can be explained in terms of an initial joint quasi-probability of the non-commuting observables, modified by the resolution errors and back-action of the initial measurement. Here, we show that the error statistics of a sequential measurement of photon polarization performed at different measurement strengths can be described consistently by an imaginary correlation between the statistics of resolution and back-action. The experimental setup was designed to realize variable strength measurements with well-controlled imaginary correlation between the statistical errors caused by the initial measurement of diagonal polarizations, followed by a precise measurement of the horizontal/vertical polarization. We perform the experimental characterization of an elliptically polarized input state and show that the same complex joint probability distribution is obtained at any measurement strength.

  20. Two-locus maximum lod score analysis of a multifactorial trait: joint consideration of IDDM2 and IDDM4 with IDDM1 in type 1 diabetes.

    PubMed

    Cordell, H J; Todd, J A; Bennett, S T; Kawaguchi, Y; Farrall, M

    1995-10-01

    To investigate the genetic component of multifactorial diseases such as type 1 (insulin-dependent) diabetes mellitus (IDDM), models involving the joint action of several disease loci are important. These models can give increased power to detect an effect and a greater understanding of etiological mechanisms. Here, we present an extension of the maximum lod score method of N. Risch, which allows the simultaneous detection and modeling of two unlinked disease loci. Genetic constraints on the identical-by-descent sharing probabilities, analogous to the "triangle" restrictions in the single-locus method, are derived, and the size and power of the test statistics are investigated. The method is applied to affected-sib-pair data, and the joint effects of IDDM1 (HLA) and IDDM2 (the INS VNTR) and of IDDM1 and IDDM4 (FGF3-linked) are assessed with relation to the development of IDDM. In the presence of genetic heterogeneity, there is seen to be a significant advantage in analyzing more than one locus simultaneously. Analysis of these families indicates that the effects at IDDM1 and IDDM2 are well described by a multiplicative genetic model, while those at IDDM1 and IDDM4 follow a heterogeneity model.

  1. Two-locus maximum lod score analysis of a multifactorial trait: joint consideration of IDDM2 and IDDM4 with IDDM1 in type 1 diabetes.

    PubMed Central

    Cordell, H J; Todd, J A; Bennett, S T; Kawaguchi, Y; Farrall, M

    1995-01-01

    To investigate the genetic component of multifactorial diseases such as type 1 (insulin-dependent) diabetes mellitus (IDDM), models involving the joint action of several disease loci are important. These models can give increased power to detect an effect and a greater understanding of etiological mechanisms. Here, we present an extension of the maximum lod score method of N. Risch, which allows the simultaneous detection and modeling of two unlinked disease loci. Genetic constraints on the identical-by-descent sharing probabilities, analogous to the "triangle" restrictions in the single-locus method, are derived, and the size and power of the test statistics are investigated. The method is applied to affected-sib-pair data, and the joint effects of IDDM1 (HLA) and IDDM2 (the INS VNTR) and of IDDM1 and IDDM4 (FGF3-linked) are assessed with relation to the development of IDDM. In the presence of genetic heterogeneity, there is seen to be a significant advantage in analyzing more than one locus simultaneously. Analysis of these families indicates that the effects at IDDM1 and IDDM2 are well described by a multiplicative genetic model, while those at IDDM1 and IDDM4 follow a heterogeneity model. PMID:7573054

  2. Joint-layer encoder optimization for HEVC scalable extensions

    NASA Astrophysics Data System (ADS)

    Tsai, Chia-Ming; He, Yuwen; Dong, Jie; Ye, Yan; Xiu, Xiaoyu; He, Yong

    2014-09-01

    Scalable video coding provides an efficient solution to support video playback on heterogeneous devices with various channel conditions in heterogeneous networks. SHVC is the latest scalable video coding standard based on the HEVC standard. To improve enhancement layer coding efficiency, inter-layer prediction including texture and motion information generated from the base layer is used for enhancement layer coding. However, the overall performance of the SHVC reference encoder is not fully optimized because the rate-distortion optimization (RDO) processes in the base and enhancement layers are considered independently. It is difficult to directly extend the existing joint-layer optimization methods to SHVC due to the complicated coding tree block splitting decisions and in-loop filtering process (e.g., deblocking and sample adaptive offset (SAO) filtering) in HEVC. To solve those problems, a joint-layer optimization method is proposed by adjusting the quantization parameter (QP) to optimally allocate the bit resource between layers. Furthermore, to allocate resources more appropriately, the proposed method also considers the viewing probability of the base and enhancement layers according to the packet loss rate. Based on the viewing probability, a novel joint-layer RD cost function is proposed for joint-layer RDO encoding. The QP values of those coding tree units (CTUs) belonging to lower layers referenced by higher layers are decreased accordingly, and the QP values of those remaining CTUs are increased to keep total bits unchanged. Finally the QP values with minimal joint-layer RD cost are selected to match the viewing probability. The proposed method was applied to the third temporal level (TL-3) pictures in the Random Access configuration. Simulation results demonstrate that the proposed joint-layer optimization method can improve coding performance by 1.3% for these TL-3 pictures compared to the SHVC reference encoder without joint-layer optimization.
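
    The abstract does not spell the cost function out; a plausible reading of a viewing-probability-weighted joint-layer RD cost, offered here only as an assumption, is a weighted sum of per-layer distortions against total rate:

    ```latex
    J_{\text{joint}} \;=\; \sum_{l \in \{\text{BL},\,\text{EL}\}} w_l \, D_l
    \;+\; \lambda \sum_{l \in \{\text{BL},\,\text{EL}\}} R_l ,
    ```

    where the weights w_l would be the viewing probabilities of the base and enhancement layers implied by the packet loss rate, and the QP adjustment searches for the allocation minimising J_joint at a fixed total bit budget.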

  3. Manifold Matching: Joint Optimization of Fidelity and Commensurability

    DTIC Science & Technology

    2011-11-12

    identified separately in p◦m, will be geometrically incommensurate (see Figure 7). Thus the null distribution of the test statistic will be inflated...into the objective function obviates the geometric incommensurability phenomenon. Thus we can establish that, for a range of Dirichlet product model...from the geometric incommensurability phenomenon. Then q p implies that cca suffers from the spurious correlation phenomenon with high probability

  4. Bivariate normal, conditional and rectangular probabilities: A computer program with applications

    NASA Technical Reports Server (NTRS)

    Swaroop, R.; Brownlow, J. D.; Ashwworth, G. R.; Winter, W. R.

    1980-01-01

    Some results for bivariate normal distribution analysis are presented. Computer programs for conditional normal probabilities, marginal probabilities, as well as joint probabilities for rectangular regions are given; routines for computing fractile points and distribution functions are also presented. Some examples from a closed circuit television experiment are included.
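
    The same quantities are a few library calls today. A sketch with scipy assumed available and an illustrative correlation:

    ```python
    # Joint, rectangular, and conditional bivariate normal probabilities.
    import numpy as np
    from scipy.stats import multivariate_normal, norm

    rho = 0.6
    bvn = multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]])

    p_joint = bvn.cdf([1.0, 0.5])             # P(X <= 1, Y <= 0.5)
    a1, b1, a2, b2 = -1.0, 1.0, -0.5, 0.5     # rectangle, by inclusion-exclusion
    p_rect = (bvn.cdf([b1, b2]) - bvn.cdf([a1, b2])
              - bvn.cdf([b1, a2]) + bvn.cdf([a1, a2]))
    y = 0.5                                   # conditional: X | Y = y is
    p_cond = norm(rho * y, np.sqrt(1 - rho ** 2)).cdf(1.0)  # N(rho*y, 1-rho^2)
    print(p_joint, p_rect, p_cond)
    ```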

  5. Measurement of the top-quark mass with dilepton events selected using neuroevolution at CDF.

    PubMed

    Aaltonen, T; Adelman, J; Akimoto, T; Albrow, M G; Alvarez González, B; Amerio, S; Amidei, D; Anastassov, A; Annovi, A; Antos, J; Apollinari, G; Apresyan, A; Arisawa, T; Artikov, A; Ashmanskas, W; Attal, A; Aurisano, A; Azfar, F; Azzurri, P; Badgett, W; Barbaro-Galtieri, A; Barnes, V E; Barnett, B A; Bartsch, V; Bauer, G; Beauchemin, P-H; Bedeschi, F; Bednar, P; Beecher, D; Behari, S; Bellettini, G; Bellinger, J; Benjamin, D; Beretvas, A; Beringer, J; Bhatti, A; Binkley, M; Bisello, D; Bizjak, I; Blair, R E; Blocker, C; Blumenfeld, B; Bocci, A; Bodek, A; Boisvert, V; Bolla, G; Bortoletto, D; Boudreau, J; Boveia, A; Brau, B; Bridgeman, A; Brigliadori, L; Bromberg, C; Brubaker, E; Budagov, J; Budd, H S; Budd, S; Burkett, K; Busetto, G; Bussey, P; Buzatu, A; Byrum, K L; Cabrera, S; Calancha, C; Campanelli, M; Campbell, M; Canelli, F; Canepa, A; Carlsmith, D; Carosi, R; Carrillo, S; Carron, S; Casal, B; Casarsa, M; Castro, A; Catastini, P; Cauz, D; Cavaliere, V; Cavalli-Sforza, M; Cerri, A; Cerrito, L; Chang, S H; Chen, Y C; Chertok, M; Chiarelli, G; Chlachidze, G; Chlebana, F; Cho, K; Chokheli, D; Chou, J P; Choudalakis, G; Chuang, S H; Chung, K; Chung, W H; Chung, Y S; Ciobanu, C I; Ciocci, M A; Clark, A; Clark, D; Compostella, G; Convery, M E; Conway, J; Copic, K; Cordelli, M; Cortiana, G; Cox, D J; Crescioli, F; Cuenca Almenar, C; Cuevas, J; Culbertson, R; Cully, J C; Dagenhart, D; Datta, M; Davies, T; de Barbaro, P; De Cecco, S; Deisher, A; De Lorenzo, G; Dell'orso, M; Deluca, C; Demortier, L; Deng, J; Deninno, M; Derwent, P F; di Giovanni, G P; Dionisi, C; Di Ruzza, B; Dittmann, J R; D'Onofrio, M; Donati, S; Dong, P; Donini, J; Dorigo, T; Dube, S; Efron, J; Elagin, A; Erbacher, R; Errede, D; Errede, S; Eusebi, R; Fang, H C; Farrington, S; Fedorko, W T; Feild, R G; Feindt, M; Fernandez, J P; Ferrazza, C; Field, R; Flanagan, G; Forrest, R; Franklin, M; Freeman, J C; Furic, I; Gallinaro, M; Galyardt, J; Garberson, F; Garcia, J E; Garfinkel, A F; Genser, K; Gerberich, H; Gerdes, D; Gessler, A; Giagu, S; Giakoumopoulou, V; Giannetti, P; Gibson, K; Gimmell, J L; Ginsburg, C M; Giokaris, N; Giordani, M; Giromini, P; Giunta, M; Giurgiu, G; Glagolev, V; Glenzinski, D; Gold, M; Goldschmidt, N; Golossanov, A; Gomez, G; Gomez-Ceballos, G; Goncharov, M; González, O; Gorelov, I; Goshaw, A T; Goulianos, K; Gresele, A; Grinstein, S; Grosso-Pilcher, C; Grundler, U; Guimaraes da Costa, J; Gunay-Unalan, Z; Haber, C; Hahn, K; Hahn, S R; Halkiadakis, E; Han, B-Y; Han, J Y; Handler, R; Happacher, F; Hara, K; Hare, D; Hare, M; Harper, S; Harr, R F; Harris, R M; Hartz, M; Hatakeyama, K; Hauser, J; Hays, C; Heck, M; Heijboer, A; Heinemann, B; Heinrich, J; Henderson, C; Herndon, M; Heuser, J; Hewamanage, S; Hidas, D; Hill, C S; Hirschbuehl, D; Hocker, A; Hou, S; Houlden, M; Hsu, S-C; Huffman, B T; Hughes, R E; Husemann, U; Huston, J; Incandela, J; Introzzi, G; Iori, M; Ivanov, A; James, E; Jayatilaka, B; Jeon, E J; Jha, M K; Jindariani, S; Johnson, W; Jones, M; Joo, K K; Jun, S Y; Jung, J E; Junk, T R; Kamon, T; Kar, D; Karchin, P E; Kato, Y; Kephart, R; Keung, J; Khotilovich, V; Kilminster, B; Kim, D H; Kim, H S; Kim, J E; Kim, M J; Kim, S B; Kim, S H; Kim, Y K; Kimura, N; Kirsch, L; Klimenko, S; Knuteson, B; Ko, B R; Koay, S A; Kondo, K; Kong, D J; Konigsberg, J; Korytov, A; Kotwal, A V; Kreps, M; Kroll, J; Krop, D; Krumnack, N; Kruse, M; Krutelyov, V; Kubo, T; Kuhr, T; Kulkarni, N P; Kurata, M; Kusakabe, Y; Kwang, S; Laasanen, A T; Lami, S; Lammel, S; Lancaster, M; Lander, R L; Lannon, K; Lath, A; Latino, 
G; Lazzizzera, I; Lecompte, T; Lee, E; Lee, S W; Leone, S; Lewis, J D; Lin, C S; Linacre, J; Lindgren, M; Lipeles, E; Lister, A; Litvintsev, D O; Liu, C; Liu, T; Lockyer, N S; Loginov, A; Loreti, M; Lovas, L; Lu, R-S; Lucchesi, D; Lueck, J; Luci, C; Lujan, P; Lukens, P; Lungu, G; Lyons, L; Lys, J; Lysak, R; Lytken, E; Mack, P; Macqueen, D; Madrak, R; Maeshima, K; Makhoul, K; Maki, T; Maksimovic, P; Malde, S; Malik, S; Manca, G; Manousakis-Katsikakis, A; Margaroli, F; Marino, C; Marino, C P; Martin, A; Martin, V; Martínez, M; Martínez-Ballarín, R; Maruyama, T; Mastrandrea, P; Masubuchi, T; Mattson, M E; Mazzanti, P; McFarland, K S; McIntyre, P; McNulty, R; Mehta, A; Mehtala, P; Menzione, A; Merkel, P; Mesropian, C; Miao, T; Miladinovic, N; Miller, R; Mills, C; Milnik, M; Mitra, A; Mitselmakher, G; Miyake, H; Moggi, N; Moon, C S; Moore, R; Morello, M J; Morlok, J; Movilla Fernandez, P; Mülmenstädt, J; Mukherjee, A; Muller, Th; Mumford, R; Murat, P; Mussini, M; Nachtman, J; Nagai, Y; Nagano, A; Naganoma, J; Nakamura, K; Nakano, I; Napier, A; Necula, V; Neu, C; Neubauer, M S; Nielsen, J; Nodulman, L; Norman, M; Norniella, O; Nurse, E; Oakes, L; Oh, S H; Oh, Y D; Oksuzian, I; Okusawa, T; Orava, R; Osterberg, K; Pagan Griso, S; Pagliarone, C; Palencia, E; Papadimitriou, V; Papaikonomou, A; Paramonov, A A; Parks, B; Pashapour, S; Patrick, J; Pauletta, G; Paulini, M; Paus, C; Pellett, D E; Penzo, A; Phillips, T J; Piacentino, G; Pianori, E; Pinera, L; Pitts, K; Plager, C; Pondrom, L; Poukhov, O; Pounder, N; Prakoshyn, F; Pronko, A; Proudfoot, J; Ptohos, F; Pueschel, E; Punzi, G; Pursley, J; Rademacker, J; Rahaman, A; Ramakrishnan, V; Ranjan, N; Redondo, I; Reisert, B; Rekovic, V; Renton, P; Rescigno, M; Richter, S; Rimondi, F; Ristori, L; Robson, A; Rodrigo, T; Rodriguez, T; Rogers, E; Rolli, S; Roser, R; Rossi, M; Rossin, R; Roy, P; Ruiz, A; Russ, J; Rusu, V; Saarikko, H; Safonov, A; Sakumoto, W K; Saltó, O; Santi, L; Sarkar, S; Sartori, L; Sato, K; Savoy-Navarro, A; Scheidle, T; Schlabach, P; Schmidt, A; Schmidt, E E; Schmidt, M A; Schmidt, M P; Schmitt, M; Schwarz, T; Scodellaro, L; Scott, A L; Scribano, A; Scuri, F; Sedov, A; Seidel, S; Seiya, Y; Semenov, A; Sexton-Kennedy, L; Sfyrla, A; Shalhout, S Z; Shears, T; Shekhar, R; Shepard, P F; Sherman, D; Shimojima, M; Shiraishi, S; Shochet, M; Shon, Y; Shreyber, I; Sidoti, A; Sinervo, P; Sisakyan, A; Slaughter, A J; Slaunwhite, J; Sliwa, K; Smith, J R; Snider, F D; Snihur, R; Soha, A; Somalwar, S; Sorin, V; Spalding, J; Spreitzer, T; Squillacioti, P; Stanitzki, M; St Denis, R; Stelzer, B; Stelzer-Chilton, O; Stentz, D; Strologas, J; Stuart, D; Suh, J S; Sukhanov, A; Suslov, I; Suzuki, T; Taffard, A; Takashima, R; Takeuchi, Y; Tanaka, R; Tecchio, M; Teng, P K; Terashi, K; Thom, J; Thompson, A S; Thompson, G A; Thomson, E; Tipton, P; Tiwari, V; Tkaczyk, S; Toback, D; Tokar, S; Tollefson, K; Tomura, T; Tonelli, D; Torre, S; Torretta, D; Totaro, P; Tourneur, S; Tu, Y; Turini, N; Ukegawa, F; Vallecorsa, S; van Remortel, N; Varganov, A; Vataga, E; Vázquez, F; Velev, G; Vellidis, C; Veszpremi, V; Vidal, M; Vidal, R; Vila, I; Vilar, R; Vine, T; Vogel, M; Volobouev, I; Volpi, G; Würthwein, F; Wagner, P; Wagner, R G; Wagner, R L; Wagner-Kuhr, J; Wagner, W; Wakisaka, T; Wallny, R; Wang, S M; Warburton, A; Waters, D; Weinberger, M; Wester, W C; Whitehouse, B; Whiteson, D; Whiteson, S; Wicklund, A B; Wicklund, E; Williams, G; Williams, H H; Wilson, P; Winer, B L; Wittich, P; Wolbers, S; Wolfe, C; Wright, T; Wu, X; Wynne, S M; Xie, S; Yagil, A; Yamamoto, K; 
Yamaoka, J; Yang, U K; Yang, Y C; Yao, W M; Yeh, G P; Yoh, J; Yorita, K; Yoshida, T; Yu, G B; Yu, I; Yu, S S; Yun, J C; Zanello, L; Zanetti, A; Zaw, I; Zhang, X; Zheng, Y; Zucchelli, S

    2009-04-17

    We report a measurement of the top-quark mass M_t in the dilepton decay channel tt̄ → bℓ′⁺ν_ℓ′ b̄ℓ⁻ν̄_ℓ. Events are selected with a neural network which has been directly optimized for statistical precision in top-quark mass using neuroevolution, a technique modeled on biological evolution. The top-quark mass is extracted from per-event probability densities that are formed by the convolution of leading order matrix elements and detector resolution functions. The joint probability is the product of the probability densities from 344 candidate events in 2.0 fb⁻¹ of pp̄ collisions collected with the CDF II detector, yielding a measurement of M_t = 171.2 ± 2.7(stat) ± 2.9(syst) GeV/c².
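
    In other words, the estimate maximises a joint likelihood that is simply the product of the 344 per-event densities:

    ```latex
    \mathcal{L}(M_t) \;=\; \prod_{i=1}^{344} P_i\!\left(x_i \mid M_t\right),
    \qquad
    \hat{M}_t \;=\; \arg\max_{M_t} \, \sum_{i=1}^{344} \ln P_i\!\left(x_i \mid M_t\right).
    ```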

  6. Experimental investigation of the intensity fluctuation joint probability and conditional distributions of the twin-beam quantum state.

    PubMed

    Zhang, Yun; Kasai, Katsuyuki; Watanabe, Masayoshi

    2003-01-13

    We give the intensity fluctuation joint probability of the twin-beam quantum state, which was generated with an optical parametric oscillator operating above threshold. Then we present what is, to our knowledge, the first measurement of the intensity fluctuation conditional probability distributions of twin beams. The measured inference variance of the twin beams, 0.62 ± 0.02, being less than the standard quantum limit of unity, indicates inference with a precision better than that of separable states. The measured photocurrent variance exhibits a quantum correlation of as much as −4.9 ± 0.2 dB between the signal and the idler.

  7. Joint Denoising/Compression of Image Contours via Shape Prior and Context Tree

    NASA Astrophysics Data System (ADS)

    Zheng, Amin; Cheung, Gene; Florencio, Dinei

    2018-07-01

    With the advent of depth sensing technologies, the extraction of object contours in images---a common and important pre-processing step for later higher-level computer vision tasks like object detection and human action recognition---has become easier. However, acquisition noise in captured depth images means that detected contours suffer from unavoidable errors. In this paper, we propose to jointly denoise and compress detected contours in an image for bandwidth-constrained transmission to a client, who can then carry out aforementioned application-specific tasks using the decoded contours as input. We first prove theoretically that in general a joint denoising / compression approach can outperform a separate two-stage approach that first denoises then encodes contours lossily. Adopting a joint approach, we first propose a burst error model that models typical errors encountered in an observed string y of directional edges. We then formulate a rate-constrained maximum a posteriori (MAP) problem that trades off the posterior probability p(x'|y) of an estimated string x' given y with its code rate R(x'). We design a dynamic programming (DP) algorithm that solves the posed problem optimally, and propose a compact context representation called total suffix tree (TST) that can reduce complexity of the algorithm dramatically. Experimental results show that our joint denoising / compression scheme outperformed a competing separate scheme in rate-distortion performance noticeably.
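
    The rate-constrained MAP trade-off described above, written in Lagrangian form, is the objective the DP algorithm solves:

    ```latex
    \hat{x} \;=\; \arg\min_{x'} \; \bigl[\, -\ln p\!\left(x' \mid y\right) \;+\; \lambda \, R\!\left(x'\right) \bigr].
    ```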

  8. Ceramic joints

    DOEpatents

    Miller, Bradley J.; Patten, Jr., Donald O.

    1991-01-01

    Butt joints between materials having different coefficients of thermal expansion are prepared with a reduced probability of failure by stress fracture. This is accomplished by narrowing/tapering the material having the lower coefficient of thermal expansion in a direction away from the joint interface and not joining the narrowed/tapered surface to the material having the higher coefficient of thermal expansion.

  9. Immediate effects of modified landing pattern on a probabilistic tibial stress fracture model in runners.

    PubMed

    Chen, T L; An, W W; Chan, Z Y S; Au, I P H; Zhang, Z H; Cheung, R T H

    2016-03-01

    Tibial stress fracture is a common injury in runners. This condition has been associated with increased impact loading. Since vertical loading rates are related to the landing pattern, many heelstrike runners attempt to modify their footfalls for a lower risk of tibial stress fracture. The immediate effect of such landing pattern modification, however, remains unknown. This study examined the immediate effects of landing pattern modification on the probability of tibial stress fracture. Fourteen experienced heelstrike runners ran on an instrumented treadmill and were given augmented feedback for the landing pattern switch. We measured their running kinematics and kinetics during different landing patterns. Ankle joint contact force and peak tibial strains were estimated using computational models. We used an established mathematical model to determine the effect of landing pattern on stress fracture probability. Heelstrike runners experienced greater impact loading immediately after the landing pattern switch (P<0.004). There was an increase in the longitudinal ankle joint contact force when they landed with the forefoot (P=0.003). However, there was no significant difference in either peak tibial strains or the risk of tibial stress fracture in runners with different landing patterns (P>0.986). Immediate transitioning of the landing pattern in heelstrike runners may not offer timely protection against tibial stress fracture, despite a reduction of impact loading. The long-term effects of a landing pattern switch remain unknown. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. A Repeated Trajectory Class Model for Intensive Longitudinal Categorical Outcome

    PubMed Central

    Lin, Haiqun; Han, Ling; Peduzzi, Peter N.; Murphy, Terrence E.; Gill, Thomas M.; Allore, Heather G.

    2014-01-01

    This paper presents a novel repeated latent class model for a longitudinal response that is frequently measured as in our prospective study of older adults with monthly data on activities of daily living (ADL) for more than ten years. The proposed method is especially useful when the longitudinal response is measured much more frequently than other relevant covariates. The repeated trajectory classes represent distinct temporal patterns of the longitudinal response wherein an individual’s membership in the trajectory classes may renew or change over time. Within a trajectory class, the longitudinal response is modeled by a class-specific generalized linear mixed model. Effectively, an individual may remain in a trajectory class or switch to another as the class membership predictors are updated periodically over time. The identification of a common set of trajectory classes allows changes among the temporal patterns to be distinguished from local fluctuations in the response. An informative event such as death is jointly modeled by class-specific probability of the event through shared random effects. We do not impose the conditional independence assumption given the classes. The method is illustrated by analyzing the change over time in ADL trajectory class among 754 older adults with 70500 person-months of follow-up in the Precipitating Events Project. We also investigate the impact of jointly modeling the class-specific probability of the event on the parameter estimates in a simulation study. The primary contribution of our paper is the periodic updating of trajectory classes for a longitudinal categorical response without assuming conditional independence. PMID:24519416

  11. Integrating probabilistic models of perception and interactive neural networks: a historical and tutorial review

    PubMed Central

    McClelland, James L.

    2013-01-01

    This article seeks to establish a rapprochement between explicitly Bayesian models of contextual effects in perception and neural network models of such effects, particularly the connectionist interactive activation (IA) model of perception. The article is in part an historical review and in part a tutorial, reviewing the probabilistic Bayesian approach to understanding perception and how it may be shaped by context, and also reviewing ideas about how such probabilistic computations may be carried out in neural networks, focusing on the role of context in interactive neural networks, in which both bottom-up and top-down signals affect the interpretation of sensory inputs. It is pointed out that connectionist units that use the logistic or softmax activation functions can exactly compute Bayesian posterior probabilities when the bias terms and connection weights affecting such units are set to the logarithms of appropriate probabilistic quantities. Bayesian concepts such as the prior, likelihood, (joint and marginal) posterior, probability matching and maximizing, and calculating vs. sampling from the posterior are all reviewed and linked to neural network computations. Probabilistic and neural network models are explicitly linked to the concept of a probabilistic generative model that describes the relationship between the underlying target of perception (e.g., the word intended by a speaker or other source of sensory stimuli) and the sensory input that reaches the perceiver for use in inferring the underlying target. It is shown how a new version of the IA model called the multinomial interactive activation (MIA) model can sample correctly from the joint posterior of a proposed generative model for perception of letters in words, indicating that interactive processing is fully consistent with principled probabilistic computation. Ways in which these computations might be realized in real neural systems are also considered. PMID:23970868
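
    The claim that logistic/softmax units compute exact posteriors when weights and biases are logarithms of the appropriate probabilities is easy to verify numerically. A minimal sketch, with a made-up three-cause, two-feature generative model:

    ```python
    import numpy as np

    # Generative model: hidden cause c with prior pi_c emits binary features
    # f_j with likelihood p(f_j = 1 | c). A softmax unit whose bias is
    # log pi_c and whose weights are log-likelihood terms computes the exact
    # posterior p(c | f).
    prior = np.array([0.6, 0.3, 0.1])
    lik = np.array([[0.9, 0.2], [0.4, 0.7], [0.1, 0.1]])  # p(f_j = 1 | c)
    f = np.array([1.0, 0.0])                               # observed features

    # Exact Bayes rule
    joint = prior * np.prod(np.where(f == 1, lik, 1 - lik), axis=1)
    posterior = joint / joint.sum()

    # Softmax with log-probability weights: net input = log prior plus the
    # log-likelihood contributions of present and absent features.
    net = np.log(prior) + f @ np.log(lik.T) + (1 - f) @ np.log(1 - lik.T)
    softmax = np.exp(net) / np.exp(net).sum()
    print(np.allclose(posterior, softmax))  # True
    ```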

  12. Integrating probabilistic models of perception and interactive neural networks: a historical and tutorial review.

    PubMed

    McClelland, James L

    2013-01-01

    This article seeks to establish a rapprochement between explicitly Bayesian models of contextual effects in perception and neural network models of such effects, particularly the connectionist interactive activation (IA) model of perception. The article is in part an historical review and in part a tutorial, reviewing the probabilistic Bayesian approach to understanding perception and how it may be shaped by context, and also reviewing ideas about how such probabilistic computations may be carried out in neural networks, focusing on the role of context in interactive neural networks, in which both bottom-up and top-down signals affect the interpretation of sensory inputs. It is pointed out that connectionist units that use the logistic or softmax activation functions can exactly compute Bayesian posterior probabilities when the bias terms and connection weights affecting such units are set to the logarithms of appropriate probabilistic quantities. Bayesian concepts such as the prior, likelihood, (joint and marginal) posterior, probability matching and maximizing, and calculating vs. sampling from the posterior are all reviewed and linked to neural network computations. Probabilistic and neural network models are explicitly linked to the concept of a probabilistic generative model that describes the relationship between the underlying target of perception (e.g., the word intended by a speaker or other source of sensory stimuli) and the sensory input that reaches the perceiver for use in inferring the underlying target. It is shown how a new version of the IA model called the multinomial interactive activation (MIA) model can sample correctly from the joint posterior of a proposed generative model for perception of letters in words, indicating that interactive processing is fully consistent with principled probabilistic computation. Ways in which these computations might be realized in real neural systems are also considered.

  13. Classic maximum entropy recovery of the average joint distribution of apparent FRET efficiency and fluorescence photons for single-molecule burst measurements.

    PubMed

    DeVore, Matthew S; Gull, Stephen F; Johnson, Carey K

    2012-04-05

    We describe a method for analysis of single-molecule Förster resonance energy transfer (FRET) burst measurements using classic maximum entropy. Classic maximum entropy determines the Bayesian inference for the joint probability describing the total fluorescence photons and the apparent FRET efficiency. The method was tested with simulated data and then with DNA labeled with fluorescent dyes. The most probable joint distribution can be marginalized to obtain both the overall distribution of fluorescence photons and the apparent FRET efficiency distribution. This method proves to be ideal for determining the distance distribution of FRET-labeled biomolecules, and it successfully predicts the shape of the recovered distributions.
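
    Marginalizing the recovered joint distribution is a one-line operation once the joint is tabulated; a minimal sketch with an arbitrary normalized table standing in for the maximum-entropy result:

    ```python
    import numpy as np

    # Toy joint distribution P(N, E) over total fluorescence photons N and
    # apparent FRET efficiency bins E (values are illustrative only).
    rng = np.random.default_rng(2)
    joint = rng.random((50, 20))
    joint /= joint.sum()                 # normalize to a probability table

    p_photons = joint.sum(axis=1)        # marginal over efficiency bins
    p_efficiency = joint.sum(axis=0)     # marginal over photon counts
    print(p_photons.sum(), p_efficiency.sum())  # both ~1.0
    ```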

  14. Target Coverage in Wireless Sensor Networks with Probabilistic Sensors

    PubMed Central

    Shan, Anxing; Xu, Xianghua; Cheng, Zongmao

    2016-01-01

    Sensing coverage is a fundamental problem in wireless sensor networks (WSNs), which has attracted considerable attention. Conventional research on this topic focuses on the 0/1 coverage model, which is only a coarse approximation to the practical sensing model. In this paper, we study the target coverage problem, where the objective is to find the least number of sensor nodes in randomly-deployed WSNs based on the probabilistic sensing model. We analyze the joint detection probability of target with multiple sensors. Based on the theoretical analysis of the detection probability, we formulate the minimum ϵ-detection coverage problem. We prove that the minimum ϵ-detection coverage problem is NP-hard and present an approximation algorithm called the Probabilistic Sensor Coverage Algorithm (PSCA) with provable approximation ratios. To evaluate our design, we analyze the performance of PSCA theoretically and also perform extensive simulations to demonstrate the effectiveness of our proposed algorithm. PMID:27618902
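
    For independent probabilistic sensors, the joint detection probability analyzed here takes the familiar complement-of-misses form. A minimal sketch (the epsilon-threshold reading below is our assumption, not necessarily the paper's exact definition):

    ```python
    from math import prod

    # Joint detection probability of a target seen by k sensors with
    # independent per-sensor detection probabilities p_i:
    #   P(detect) = 1 - prod(1 - p_i).
    # We read "epsilon-detection" as requiring this to reach 1 - epsilon.
    def joint_detection(probs):
        return 1.0 - prod(1.0 - p for p in probs)

    sensors = [0.5, 0.6, 0.7]
    eps = 0.1
    print(joint_detection(sensors), joint_detection(sensors) >= 1 - eps)
    ```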

  15. Estimation of vegetation cover at subpixel resolution using LANDSAT data

    NASA Technical Reports Server (NTRS)

    Jasinski, Michael F.; Eagleson, Peter S.

    1986-01-01

    The present report summarizes the various approaches relevant to estimating canopy cover at subpixel resolution. The approaches are based on physical models of radiative transfer in non-homogeneous canopies and on empirical methods. The effects of vegetation shadows and topography are examined. Simple versions of the model are tested, using the Taos, New Mexico Study Area database. Emphasis has been placed on using relatively simple models requiring only one or two bands. Although most methods require some degree of ground truth, a two-band method is investigated whereby the percent cover can be estimated without ground truth by examining the limits of the data space. Future work is proposed which will incorporate additional surface parameters into the canopy cover algorithm, such as topography, leaf area, or shadows. The method involves deriving a probability density function for the percent canopy cover based on the joint probability density function of the observed radiances.

  16. Future WGCEP Models and the Need for Earthquake Simulators

    NASA Astrophysics Data System (ADS)

    Field, E. H.

    2008-12-01

    The 2008 Working Group on California Earthquake Probabilities (WGCEP) recently released the Uniform California Earthquake Rupture Forecast version 2 (UCERF 2), developed jointly by the USGS, CGS, and SCEC with significant support from the California Earthquake Authority. Although this model embodies several significant improvements over previous WGCEPs, the following are some of the significant shortcomings that we hope to resolve in a future UCERF3: 1) assumptions of fault segmentation and the lack of fault-to-fault ruptures; 2) the lack of an internally consistent methodology for computing time-dependent, elastic-rebound-motivated renewal probabilities; 3) the lack of earthquake clustering/triggering effects; and 4) unwarranted model complexity. It is believed by some that physics-based earthquake simulators will be key to resolving these issues, either as exploratory tools to help guide the present statistical approaches, or as a means to forecast earthquakes directly (although significant challenges remain with respect to the latter).

  17. Fully probabilistic control for stochastic nonlinear control systems with input dependent noise.

    PubMed

    Herzallah, Randa

    2015-03-01

    Robust controllers for nonlinear stochastic systems with functional uncertainties can be consistently designed using probabilistic control methods. In this paper a generalised probabilistic controller design for the minimisation of the Kullback-Leibler divergence between the actual joint probability density function (pdf) of the closed loop control system and an ideal joint pdf is presented, emphasising how the uncertainty can be systematically incorporated in the absence of reliable system models. To achieve this objective all probabilistic models of the system are estimated from process data using mixture density networks (MDNs), where all the parameters of the estimated pdfs are taken to be state and control input dependent. Based on this dependency of the density parameters on the input values, explicit formulations for the construction of optimal generalised probabilistic controllers are obtained through the techniques of dynamic programming and adaptive critic methods. Using the proposed generalised probabilistic controller, the conditional joint pdfs can be made to follow the ideal ones. A simulation example is used to demonstrate the implementation of the algorithm and encouraging results are obtained. Copyright © 2014 Elsevier Ltd. All rights reserved.
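
    The Kullback-Leibler objective between an actual and an ideal joint pdf has a closed form only in special cases; the sketch below uses the Gaussian special case purely as an illustration (the paper's pdfs are mixture-density-network estimates, for which the divergence must be evaluated numerically):

    ```python
    import numpy as np

    def kl_gaussians(mu0, cov0, mu1, cov1):
        """KL(p0 || p1) for multivariate Gaussians, in closed form."""
        k = len(mu0)
        inv1 = np.linalg.inv(cov1)
        diff = np.asarray(mu1) - np.asarray(mu0)
        return 0.5 * (np.trace(inv1 @ np.asarray(cov0))
                      + diff @ inv1 @ diff - k
                      + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

    # Actual closed-loop joint pdf vs. ideal joint pdf (toy 2-D Gaussians).
    actual = ([0.3, -0.1], [[1.0, 0.2], [0.2, 0.8]])
    ideal = ([0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]])
    print(kl_gaussians(*actual, *ideal))
    ```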

  18. Multi-state models for colon cancer recurrence and death with a cured fraction.

    PubMed

    Conlon, A S C; Taylor, J M G; Sargent, D J

    2014-05-10

    In cancer clinical trials, patients often experience a recurrence of disease prior to the outcome of interest, overall survival. Additionally, for many cancers, there is a cured fraction of the population who will never experience a recurrence. There is often interest in how different covariates affect the probability of being cured of disease and the time to recurrence, time to death, and time to death after recurrence. We propose a multi-state Markov model with an incorporated cured fraction to jointly model recurrence and death in colon cancer. A Bayesian estimation strategy is used to obtain parameter estimates. The model can be used to assess how individual covariates affect the probability of being cured and each of the transition rates. Checks for the adequacy of the model fit and for the functional forms of covariates are explored. The methods are applied to data from 12 randomized trials in colon cancer, where we show common effects of specific covariates across the trials. Copyright © 2013 John Wiley & Sons, Ltd.

  19. Joint analysis of epistemic and aleatory uncertainty in stability analysis for geo-hazard assessments

    NASA Astrophysics Data System (ADS)

    Rohmer, Jeremy; Verdel, Thierry

    2017-04-01

    Uncertainty analysis is an unavoidable task in the stability analysis of any geotechnical system. Such analysis usually relies on the safety factor SF: if SF is below some specified threshold, failure is possible. The objective of the stability analysis is then to estimate the failure probability P, i.e. the probability that SF falls below the specified threshold. When dealing with uncertainties, two facets should be considered, as outlined by several authors in the domain of geotechnics, namely "aleatoric uncertainty" (also named "randomness" or "intrinsic variability") and "epistemic uncertainty" (i.e. when facing "vague, incomplete or imprecise information" such as limited databases and observations or "imperfect" modelling). The benefits of separating both facets of uncertainty can be seen from a risk management perspective because: - Aleatoric uncertainty, being a property of the system under study, cannot be reduced. However, practical actions can be taken to circumvent the potentially dangerous effects of such variability; - Epistemic uncertainty, being due to the incomplete/imprecise nature of available information, can be reduced by e.g., increasing the number of tests (lab or in situ surveys), improving the measurement methods or evaluating the calculation procedure with model tests, confronting more information sources (expert opinions, data from literature, etc.). Uncertainty treatment in stability analysis is usually restricted to the probabilistic framework to represent both facets of uncertainty. Yet, in the domain of geo-hazard assessments (like landslides, mine pillar collapse, rockfalls, etc.), the validity of this approach can be debatable. In the present communication, we propose to review the major criticisms available in the literature against the systematic use of probability in situations with a high degree of uncertainty. On this basis, the feasibility of using a more flexible uncertainty representation tool is then investigated, namely possibility distributions (e.g., Baudrit et al., 2007) for geo-hazard assessments. A graphical tool is then developed to explore: 1. the contribution of both types of uncertainty, aleatoric and epistemic; 2. the regions of the imprecise or random parameters which contribute the most to the imprecision on the failure probability P. The method is applied to two case studies (a mine pillar and a steep slope stability analysis, Rohmer and Verdel, 2014) to investigate the necessity for extra data acquisition on parameters whose imprecision can hardly be modelled by probabilities due to the scarcity of the available information (respectively the extraction ratio and the cliff geometry). References Baudrit, C., Couso, I., & Dubois, D. (2007). Joint propagation of probability and possibility in risk analysis: Towards a formal framework. International Journal of Approximate Reasoning, 45(1), 82-105. Rohmer, J., & Verdel, T. (2014). Joint exploration of regional importance of possibilistic and probabilistic uncertainty in stability analysis. Computers and Geotechnics, 61, 308-315.

  20. High-resolution urban flood modelling - a joint probability approach

    NASA Astrophysics Data System (ADS)

    Hartnett, Michael; Olbert, Agnieszka; Nash, Stephen

    2017-04-01

    The hydrodynamic modelling of rapid flood events due to extreme climatic events in an urban environment is both a complex and challenging task. The horizontal resolution necessary to resolve the complexity of urban flood dynamics is a critical issue; the presence of obstacles of varying shapes and length scales, gaps between buildings and the complex geometry of the city such as slopes affect flow paths and flood level magnitudes. These small-scale processes require a high-resolution grid to be modelled accurately (2 m or less, Olbert et al., 2015; Hunter et al., 2008; Brown et al., 2007) and, therefore, altimetry data of at least the same resolution. Along with the availability of high-resolution LiDAR data and computational capabilities, as well as state-of-the-art nested modelling approaches, these problems can now be overcome. Flooding and drying, domain definition, frictional resistance and boundary descriptions are all important issues to be addressed when modelling urban flooding. In recent years, the number of urban flood models has increased dramatically, giving good insight into various modelling problems and solutions (Mark et al., 2004; Mason et al., 2007; Fewtrell et al., 2008; Shubert et al., 2008). Despite extensive modelling work conducted for fluvial (e.g. Mignot et al., 2006; Hunter et al., 2008; Yu and Lane, 2006) and coastal mechanisms of flooding (e.g. Gallien et al., 2011; Yang et al., 2012), the number of investigations into combined coastal-fluvial flooding is still very limited (e.g. Orton et al., 2012; Lian et al., 2013). This is surprising given the extent of flood consequences when both mechanisms occur simultaneously, which usually happens when they are driven by one process such as a storm. The reason for that could be the fact that the likelihood of a joint event is much smaller than that of either of the two contributors occurring individually, because for fast-moving storms the rainfall-driven fluvial flood usually arrives later than the storm surge (Divoky et al., 2005). Nevertheless, such events occur, and in Ireland alone there are several cases of serious damage due to flooding resulting from a combination of high sea water levels and river flows driven by the same meteorological conditions (e.g. Olbert et al., 2015). A November 2009 fluvial-coastal flooding of Cork City bringing €100m loss was one such incident. This event was used by Olbert et al. (2015) to determine the processes controlling urban flooding and is further explored in this study to elaborate on coastal and fluvial flood mechanisms and their roles in controlling water levels. The objective of this research is to develop a methodology to assess the combined effect of multiple-source flooding on flood probability and severity in urban areas, and to establish a set of conditions that dictate urban flooding due to extreme climatic events. These conditions broadly combine physical flood drivers (such as coastal and fluvial processes), their mechanisms and thresholds defining flood severity. The two main physical processes controlling urban flooding, high sea water levels (coastal flooding) and high river flows (fluvial flooding), and their threshold values for which flood is likely to occur, are considered in this study. The contribution of coastal and fluvial drivers to flooding and their impacts are assessed in a two-step process.
The first step involves frequency analysis and extreme value statistical modelling of storm surges, tides and river flows, and ultimately the application of the joint probability method to estimate joint exceedence return periods for combinations of surges, tides and river flows. In the second step, a numerical model of Cork Harbour, MSN_Flood, comprising a cascade of four nested high-resolution models, is used to simulate flood inundation under numerous hypothetical coastal and fluvial flood scenarios. The risk of flooding is quantified based on a range of physical aspects such as the extent and depth of inundation (Apel et al., 2008). The methodology includes estimates of flood probabilities due to coastal- and fluvial-driven processes occurring individually or jointly, mechanisms of flooding and their impacts on the urban environment. Various flood scenarios are examined in order to demonstrate that this methodology is necessary to quantify the important physical processes in coastal flood predictions. Cork City, located in the south of Ireland and subject to frequent coastal-fluvial flooding, is used as the study case.
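
    The joint-exceedence step can be sketched compactly: with marginal non-exceedance probabilities u and v for surge and river flow and a fitted copula C, inclusion-exclusion gives the joint exceedance probability and hence a return period. The Gumbel-Hougaard copula and parameter below are illustrative assumptions, not the study's fitted model:

    ```python
    import numpy as np

    def gumbel_copula(u, v, theta):
        # Gumbel-Hougaard copula C(u, v), valid for theta >= 1
        return np.exp(-(((-np.log(u)) ** theta
                         + (-np.log(v)) ** theta) ** (1.0 / theta)))

    def joint_exceedance_return_period(u, v, theta, events_per_year=1.0):
        # P(U > u, V > v) = 1 - u - v + C(u, v) by inclusion-exclusion
        p_exc = 1.0 - u - v + gumbel_copula(u, v, theta)
        return 1.0 / (events_per_year * p_exc)

    # Surge and river flow each at their marginal 100-year quantile
    u = v = 1.0 - 1.0 / 100.0
    print(joint_exceedance_return_period(u, v, theta=2.0))  # ~170 years
    ```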

  1. Time evolution of a pair of distinguishable interacting spins subjected to controllable and noisy magnetic fields

    NASA Astrophysics Data System (ADS)

    Grimaudo, R.; Belousov, Yu.; Nakazato, H.; Messina, A.

    2018-05-01

    The quantum dynamics of a Ĵ² = (ĵ₁ + ĵ₂)²-conserving Hamiltonian model describing two coupled spins ĵ₁ and ĵ₂ under controllable and fluctuating time-dependent magnetic fields is investigated. Each eigenspace of Ĵ² is dynamically invariant, and the Hamiltonian of the total system restricted to any one of the (j₁ + j₂) − |j₁ − j₂| + 1 such eigenspaces possesses the SU(2) structure of the Hamiltonian of a single fictitious spin acted upon by the total magnetic field. We show that such reducibility holds regardless of the time dependence of the externally applied field as well as of the statistical properties of the noise, here represented as a classical fluctuating magnetic field. The time evolution of the joint transition probabilities of the two spins ĵ₁ and ĵ₂ between two prefixed factorized states is examined, bringing to light peculiar dynamical properties of the system under scrutiny. When the noise-induced non-unitary dynamics of the two coupled spins is properly taken into account, analytical expressions for the joint Landau-Zener transition probabilities are reported. The possibility of extending the applicability of our results to other time-dependent spin models is pointed out.

  2. Determining the Best Treatment for Coronal Angular Deformity of the Knee Joint in Growing Children: A Decision Analysis

    PubMed Central

    Sung, Ki Hyuk; Chung, Chin Youb; Lee, Kyoung Min; Lee, Seung Yeol; Choi, In Ho; Cho, Tae-Joon; Yoo, Won Joon; Park, Moon Seok

    2014-01-01

    This study aimed to determine the best treatment modality for coronal angular deformity of the knee joint in growing children using decision analysis. A decision tree was created to evaluate 3 treatment modalities for coronal angular deformity in growing children: temporary hemiepiphysiodesis using staples, percutaneous screws, or a tension band plate. A decision analysis model was constructed containing the final outcome score, probability of metal failure, and incomplete correction of deformity. The final outcome was defined as health-related quality of life and was used as a utility in the decision tree. The probabilities associated with each case were obtained by literature review, and health-related quality of life was evaluated by a questionnaire completed by 25 pediatric orthopedic experts. Our decision analysis model favored temporary hemiepiphysiodesis using a tension band plate over temporary hemiepiphysiodesis using percutaneous screws or stapling, with utilities of 0.969, 0.957, and 0.962, respectively. One-way sensitivity analysis showed that hemiepiphysiodesis using a tension band plate was better than temporary hemiepiphysiodesis using percutaneous screws, when the overall complication rate of hemiepiphysiodesis using a tension band plate was lower than 15.7%. Two-way sensitivity analysis showed that hemiepiphysiodesis using a tension band plate was more beneficial than temporary hemiepiphysiodesis using percutaneous screws. PMID:25276801
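
    At its core the decision tree reduces to comparing probability-weighted utilities. A minimal sketch with illustrative stand-in numbers (not the study's probabilities or utilities):

    ```python
    # Expected utility of a treatment option = sum of outcome utilities
    # weighted by their probabilities (values below are illustrative only).
    def expected_utility(p_complication, u_success, u_complication):
        return (1 - p_complication) * u_success + p_complication * u_complication

    options = {
        "tension band plate": expected_utility(0.05, 0.98, 0.80),
        "percutaneous screws": expected_utility(0.10, 0.98, 0.80),
        "stapling": expected_utility(0.08, 0.98, 0.80),
    }
    best = max(options, key=options.get)
    print(best, options[best])
    ```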

  3. Dynamic Uncertain Causality Graph for Knowledge Representation and Probabilistic Reasoning: Directed Cyclic Graph and Joint Probability Distribution.

    PubMed

    Zhang, Qin

    2015-07-01

    Probabilistic graphical models (PGMs) such as Bayesian network (BN) have been widely applied in uncertain causality representation and probabilistic reasoning. Dynamic uncertain causality graph (DUCG) is a newly presented model of PGMs, which can be applied to fault diagnosis of large and complex industrial systems, disease diagnosis, and so on. The basic methodology of DUCG has been previously presented, in which only the directed acyclic graph (DAG) was addressed. However, the mathematical meaning of DUCG was not discussed. In this paper, the DUCG with directed cyclic graphs (DCGs) is addressed. In contrast, BN does not allow DCGs, as otherwise the conditional independence will not be satisfied. The inference algorithm for the DUCG with DCGs is presented, which not only extends the capabilities of DUCG from DAGs to DCGs but also enables users to decompose a large and complex DUCG into a set of small, simple sub-DUCGs, so that a large and complex knowledge base can be easily constructed, understood, and maintained. The basic mathematical definition of a complete DUCG with or without DCGs is proved to be a joint probability distribution (JPD) over a set of random variables. The incomplete DUCG as a part of a complete DUCG may represent a part of JPD. Examples are provided to illustrate the methodology.

  4. The perception of probability.

    PubMed

    Gallistel, C R; Krishan, Monika; Liu, Ye; Miller, Reilly; Latham, Peter E

    2014-01-01

    We present a computational model to explain the results from experiments in which subjects estimate the hidden probability parameter of a stepwise nonstationary Bernoulli process outcome by outcome. The model captures the following results qualitatively and quantitatively, with only 2 free parameters: (a) Subjects do not update their estimate after each outcome; they step from one estimate to another at irregular intervals. (b) The joint distribution of step widths and heights cannot be explained on the assumption that a threshold amount of change must be exceeded in order for them to indicate a change in their perception. (c) The mapping of observed probability to the median perceived probability is the identity function over the full range of probabilities. (d) Precision (how close estimates are to the best possible estimate) is good and constant over the full range. (e) Subjects quickly detect substantial changes in the hidden probability parameter. (f) The perceived probability sometimes changes dramatically from one observation to the next. (g) Subjects sometimes have second thoughts about a previous change perception, after observing further outcomes. (h) The frequency with which they perceive changes moves in the direction of the true frequency over sessions. (Explaining this finding requires 2 additional parametric assumptions.) The model treats the perception of the current probability as a by-product of the construction of a compact encoding of the experienced sequence in terms of its change points. It illustrates the why and the how of intermittent Bayesian belief updating and retrospective revision in simple perception. It suggests a reinterpretation of findings in the recent literature on the neurobiology of decision making. (PsycINFO Database Record (c) 2014 APA, all rights reserved).

  5. Comparison of dynamic treatment regimes via inverse probability weighting.

    PubMed

    Hernán, Miguel A; Lanoy, Emilie; Costagliola, Dominique; Robins, James M

    2006-03-01

    Appropriate analysis of observational data is our best chance to obtain answers to many questions that involve dynamic treatment regimes. This paper describes a simple method to compare dynamic treatment regimes by artificially censoring subjects and then using inverse probability weighting (IPW) to adjust for any selection bias introduced by the artificial censoring. The basic strategy can be summarized in four steps: 1) define two regimes of interest, 2) artificially censor individuals when they stop following one of the regimes of interest, 3) estimate inverse probability weights to adjust for the potential selection bias introduced by censoring in the previous step, 4) compare the survival of the uncensored individuals under each regime of interest by fitting an inverse probability weighted Cox proportional hazards model with the dichotomous regime indicator and the baseline confounders as covariates. In the absence of model misspecification, the method is valid provided data are available on all time-varying and baseline joint predictors of survival and regime discontinuation. We present an application of the method to compare the AIDS-free survival under two dynamic treatment regimes in a large prospective study of HIV-infected patients. The paper concludes by discussing the relative advantages and disadvantages of censoring/IPW versus g-estimation of nested structural models to compare dynamic regimes.
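
    Steps 2 and 3 of the method can be sketched as follows; the data frame, column names and single-confounder logistic model are hypothetical simplifications (a real analysis uses time-varying covariates and cumulative weights, and step 4 would fit a weighted Cox model):

    ```python
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    # Toy data: one baseline confounder and an indicator of whether the
    # subject is still following the regime (1 = uncensored).
    rng = np.random.default_rng(3)
    n = 2000
    df = pd.DataFrame({
        "baseline_cd4": rng.normal(350, 100, n),
        "follows_regime": rng.integers(0, 2, n),
    })

    # Model P(uncensored | confounders), then weight each uncensored
    # subject by the inverse of that probability (step 3).
    model = LogisticRegression().fit(df[["baseline_cd4"]], df["follows_regime"])
    p_uncensored = model.predict_proba(df[["baseline_cd4"]])[:, 1]
    df["ipw"] = np.where(df["follows_regime"] == 1, 1.0 / p_uncensored, 0.0)
    print(df.loc[df.follows_regime == 1, "ipw"].describe())
    ```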

  6. Synchrony in Joint Action Is Directed by Each Participant’s Motor Control System

    PubMed Central

    Noy, Lior; Weiser, Netta; Friedman, Jason

    2017-01-01

    In this work, we ask how the probability of achieving synchrony in joint action is affected by the choice of motion parameters of each individual. We use the mirror game paradigm to study how changes in leader’s motion parameters, specifically frequency and peak velocity, affect the probability of entering the state of co-confidence (CC) motion: a dyadic state of synchronized, smooth and co-predictive motions. In order to systematically study this question, we used a one-person version of the mirror game, where the participant mirrored piece-wise rhythmic movements produced by a computer on a graphics tablet. We systematically varied the frequency and peak velocity of the movements to determine how these parameters affect the likelihood of synchronized joint action. To assess synchrony in the mirror game we used the previously developed marker of co-confident (CC) motions: smooth, jitter-less and synchronized motions indicative of co-predicative control. We found that when mirroring movements with low frequencies (i.e., long duration movements), the participants never showed CC, and as the frequency of the stimuli increased, the probability of observing CC also increased. This finding is discussed in the framework of motor control studies showing an upper limit on the duration of smooth motion. We confirmed the relationship between motion parameters and the probability to perform CC with three sets of data of open-ended two-player mirror games. These findings demonstrate that when performing movements together, there are optimal movement frequencies to use in order to maximize the possibility of entering a state of synchronized joint action. It also shows that the ability to perform synchronized joint action is constrained by the properties of our motor control systems. PMID:28443047

  7. Regional analysis and derivation of copula-based drought Severity-Area-Frequency curve in Lake Urmia basin, Iran.

    PubMed

    Amirataee, Babak; Montaseri, Majid; Rezaie, Hossein

    2018-01-15

    Droughts are extreme events characterized by temporal duration and spatial large-scale effects. In general, regional droughts are affected by the general circulation of the atmosphere (at large scale) and regional natural factors, including the topography, natural lakes, and the position relative to the center and the path of the ocean currents (at small scale), and they do not produce exactly the same effects across a wide area. Therefore, investigation of the drought Severity-Area-Frequency (S-A-F) curve is an essential task in developing decision-making rules for regional drought management. This study developed the copula-based joint probability distribution of drought severity and percent of area under drought across the Lake Urmia basin, Iran. To this end, one-month Standardized Precipitation Index (SPI) values during 1971-2013 were applied across 24 rainfall stations in the study area. Then, seven copula functions of various families, including the Clayton, Gumbel, Frank, Joe, Galambos, Plackett and Normal copulas, were used to model the joint probability distribution of drought severity and drought area. Using AIC, BIC and RMSE criteria, the Frank copula was selected as the most appropriate copula for developing the joint probability distribution of severity and percent of area under drought across the study area. Based on the Frank copula, the drought S-A-F curve for the study area was derived. The results indicated that severe/extreme drought and non-drought (wet) behaviors have affected the majority of the study area (Lake Urmia basin). However, the area covered by specific semi-drought effects is limited and has been subject to significant variations. Copyright © 2017 Elsevier Ltd. All rights reserved.
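
    The Frank copula used here has a simple closed form, so a joint non-exceedance probability of severity and areal extent is directly computable once the marginals are fitted. A minimal sketch (theta and the inputs are illustrative, not the fitted values):

    ```python
    import numpy as np

    def frank_copula(u, v, theta):
        """Frank copula C(u, v) for theta != 0."""
        num = np.expm1(-theta * u) * np.expm1(-theta * v)
        return -np.log1p(num / np.expm1(-theta)) / theta

    # Joint non-exceedance probability of drought severity (u) and percent
    # of area under drought (v), both mapped to [0, 1] via their marginals.
    print(frank_copula(0.9, 0.8, theta=5.0))
    ```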

  8. Economic Statistical Design of Integrated X-bar-S Control Chart with Preventive Maintenance and General Failure Distribution

    PubMed Central

    Caballero Morales, Santiago Omar

    2013-01-01

    The application of Preventive Maintenance (PM) and Statistical Process Control (SPC) are important practices to achieve high product quality, a small frequency of failures, and cost reduction in a production process. However, some points about their joint application have not been explored in depth. First, most SPC is performed with the X-bar control chart, which does not fully consider the variability of the production process. Second, many studies on the design of control charts consider just the economic aspect, while statistical restrictions must be considered to achieve charts with low probabilities of false detection of failures. Third, the effect of PM on processes with different failure probability distributions has not been studied. Hence, this paper covers these points, presenting the Economic Statistical Design (ESD) of joint X-bar-S control charts with a cost model that integrates PM with a general failure distribution. Experiments showed statistically significant reductions in costs when PM is performed on processes with high failure rates, and reductions in the sampling frequency of units for testing under SPC. PMID:23527082

  9. On the apparent insignificance of the randomness of flexible joints on large space truss dynamics

    NASA Technical Reports Server (NTRS)

    Koch, R. M.; Klosner, J. M.

    1993-01-01

    Deployable periodic large space structures have been shown to exhibit high dynamic sensitivity to period-breaking imperfections and uncertainties. These can be brought on by manufacturing or assembly errors, structural imperfections, as well as nonlinear and/or nonconservative joint behavior. In addition, the necessity of precise pointing and position capability can require the consideration of these usually negligible and unknown parametric uncertainties and their effect on the overall dynamic response of large space structures. This work describes the use of a new design approach for the global dynamic solution of beam-like periodic space structures possessing parametric uncertainties. Specifically, the effect of random flexible joints on the free vibrations of simply-supported periodic large space trusses is considered. The formulation is a hybrid approach in terms of an extended Timoshenko beam continuum model, Monte Carlo simulation scheme, and first-order perturbation methods. The mean and mean-square response statistics for a variety of free random vibration problems are derived for various input random joint stiffness probability distributions. The results of this effort show that, although joint flexibility has a substantial effect on the modal dynamic response of periodic large space trusses, the effect of any reasonable uncertainty or randomness associated with these joint flexibilities is insignificant.

  10. A k-omega multivariate beta PDF for supersonic turbulent combustion

    NASA Technical Reports Server (NTRS)

    Alexopoulos, G. A.; Baurle, R. A.; Hassan, H. A.

    1993-01-01

    In a recent attempt by the authors at predicting measurements in coaxial supersonic turbulent reacting mixing layers involving H2 and air, a number of discrepancies involving the concentrations and their variances were noted. The turbulence model employed was a one-equation model based on the turbulent kinetic energy. This required the specification of a length scale. In an attempt at detecting the cause of the discrepancy, a coupled k-omega joint probability density function (PDF) is employed in conjunction with a Navier-Stokes solver. The results show that improvements resulting from a k-omega model are quite modest.

  11. Multivariate meta-analysis of individual participant data helped externally validate the performance and implementation of a prediction model.

    PubMed

    Snell, Kym I E; Hua, Harry; Debray, Thomas P A; Ensor, Joie; Look, Maxime P; Moons, Karel G M; Riley, Richard D

    2016-01-01

    Our aim was to improve meta-analysis methods for summarizing a prediction model's performance when individual participant data are available from multiple studies for external validation. We suggest multivariate meta-analysis for jointly synthesizing calibration and discrimination performance, while accounting for their correlation. The approach estimates a prediction model's average performance, the heterogeneity in performance across populations, and the probability of "good" performance in new populations. This allows different implementation strategies (e.g., recalibration) to be compared. Application is made to a diagnostic model for deep vein thrombosis (DVT) and a prognostic model for breast cancer mortality. In both examples, multivariate meta-analysis reveals that calibration performance is excellent on average but highly heterogeneous across populations unless the model's intercept (baseline hazard) is recalibrated. For the cancer model, the probability of "good" performance (defined by C statistic ≥0.7 and calibration slope between 0.9 and 1.1) in a new population was 0.67 with recalibration but 0.22 without recalibration. For the DVT model, even with recalibration, there was only a 0.03 probability of "good" performance. Multivariate meta-analysis can be used to externally validate a prediction model's calibration and discrimination performance across multiple populations and to evaluate different implementation strategies. Crown Copyright © 2016. Published by Elsevier Inc. All rights reserved.
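
    The probability of "good" performance in a new population is a rectangle probability under the (approximately normal) predictive distribution of the two performance measures. A sketch with illustrative means, SDs and correlation (not the paper's estimates):

    ```python
    import numpy as np
    from scipy.stats import multivariate_normal

    # Assumed bivariate normal predictive distribution of
    # (C statistic, calibration slope) in a new population.
    mean = np.array([0.72, 1.00])
    cov = np.array([[0.03 ** 2, 0.5 * 0.03 * 0.10],
                    [0.5 * 0.03 * 0.10, 0.10 ** 2]])
    mvn = multivariate_normal(mean, cov)

    def rect_prob(lo, hi):
        # P(lo < X < hi) for a rectangle, by inclusion-exclusion on the CDF
        return (mvn.cdf(hi) - mvn.cdf([lo[0], hi[1]])
                - mvn.cdf([hi[0], lo[1]]) + mvn.cdf(lo))

    # "Good" performance: C statistic >= 0.7 and slope in [0.9, 1.1]
    # (10.0 serves as an effectively infinite upper bound for C).
    print(rect_prob([0.7, 0.9], [10.0, 1.1]))
    ```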

  12. Independent events in elementary probability theory

    NASA Astrophysics Data System (ADS)

    Csenki, Attila

    2011-07-01

    In Probability and Statistics taught to mathematicians as a first introduction or to a non-mathematical audience, joint independence of events is introduced by requiring that the multiplication rule is satisfied. The following statement is usually tacitly assumed to hold (and, at best, intuitively motivated): If the n events E₁, E₂, …, Eₙ are jointly independent, then any two events A and B built in finitely many steps from two disjoint subsets of E₁, E₂, …, Eₙ are also independent. The operations 'union', 'intersection' and 'complementation' are permitted only when forming the events A and B. Here we examine this statement from the point of view of elementary probability theory. The approach described here is accessible also to users of probability theory and is believed to be novel.
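
    The statement is easy to check numerically on a small product space; the sketch below builds A = E₁ ∪ E₂ᶜ and B = E₃ and verifies the product rule:

    ```python
    from itertools import product

    # Three jointly independent events on Omega = {0,1}^3 under the product
    # measure with P(coordinate i = 1) = p[i].
    p = [0.3, 0.6, 0.8]
    omega = list(product([0, 1], repeat=3))
    prob = {w: (p[0] if w[0] else 1 - p[0])
               * (p[1] if w[1] else 1 - p[1])
               * (p[2] if w[2] else 1 - p[2]) for w in omega}

    E = [{w for w in omega if w[i] == 1} for i in range(3)]
    A = E[0] | (set(omega) - E[1])   # A = E1 union complement(E2)
    B = E[2]                         # B built from the disjoint subset {E3}

    P = lambda S: sum(prob[w] for w in S)
    print(abs(P(A & B) - P(A) * P(B)) < 1e-12)  # True
    ```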

  13. Are Subject-Specific Musculoskeletal Models Robust to the Uncertainties in Parameter Identification?

    PubMed Central

    Valente, Giordano; Pitto, Lorenzo; Testi, Debora; Seth, Ajay; Delp, Scott L.; Stagni, Rita; Viceconti, Marco; Taddei, Fulvia

    2014-01-01

    Subject-specific musculoskeletal modeling can be applied to study musculoskeletal disorders, allowing inclusion of personalized anatomy and properties. Independent of the tools used for model creation, there are unavoidable uncertainties associated with parameter identification, whose effect on model predictions is still not fully understood. The aim of the present study was to analyze the sensitivity of subject-specific model predictions (i.e., joint angles, joint moments, muscle and joint contact forces) during walking to the uncertainties in the identification of body landmark positions, maximum muscle tension and musculotendon geometry. To this aim, we created an MRI-based musculoskeletal model of the lower limbs, defined as a 7-segment, 10-degree-of-freedom articulated linkage, actuated by 84 musculotendon units. We then performed a Monte-Carlo probabilistic analysis perturbing model parameters according to their uncertainty, and solving a typical inverse dynamics and static optimization problem using 500 models that included the different sets of perturbed variable values. Model creation and gait simulations were performed by using freely available software that we developed to standardize the process of model creation, integrate with OpenSim and create probabilistic simulations of movement. The uncertainties in input variables had a moderate effect on model predictions, as muscle and joint contact forces showed maximum standard deviation of 0.3 times body-weight and maximum range of 2.1 times body-weight. In addition, the output variables significantly correlated with few input variables (up to 7 out of 312) across the gait cycle, including the geometry definition of larger muscles and the maximum muscle tension in limited gait portions. Although we found subject-specific models not markedly sensitive to parameter identification, researchers should be aware of the model precision in relation to the intended application. In fact, force predictions could be affected by an uncertainty in the same order of magnitude of its value, although this condition has low probability to occur. PMID:25390896

  14. Performance of two predictive uncertainty estimation approaches for conceptual Rainfall-Runoff Model: Bayesian Joint Inference and Hydrologic Uncertainty Post-processing

    NASA Astrophysics Data System (ADS)

    Hernández-López, Mario R.; Romero-Cuéllar, Jonathan; Camilo Múnera-Estrada, Juan; Coccia, Gabriele; Francés, Félix

    2017-04-01

    It is particularly important to emphasize the role of uncertainty when model forecasts are used to support decision-making and water management. This research compares two approaches for the evaluation of predictive uncertainty in hydrological modeling. The first approach is the Bayesian Joint Inference of hydrological and error models. The second approach is carried out through the Model Conditional Processor using the Truncated Normal Distribution in the transformed space. The comparison is focused on the reliability of the predictive distribution. The case study is applied to two basins included in the Model Parameter Estimation Experiment (MOPEX). These two basins, which have different hydrological complexity, are the French Broad River (North Carolina) and the Guadalupe River (Texas). The results indicate that, generally, both approaches are able to provide similar predictive performances. However, differences between them can arise in basins with complex hydrology (e.g. ephemeral basins). This is because the results obtained with Bayesian Joint Inference are strongly dependent on the suitability of the hypothesized error model. Similarly, the results in the case of the Model Conditional Processor are mainly influenced by the selected model of the tails, or even by the selected full probability distribution model of the data in the real space, and by the definition of the Truncated Normal Distribution in the transformed space. In summary, the different hypotheses that the modeler chooses in each of the two approaches are the main cause of the different results. This research also explores a proper combination of both methodologies which could be useful to achieve less biased hydrological parameter estimation. In this approach, the predictive distribution is first obtained through the Model Conditional Processor. Secondly, this predictive distribution is used to derive the corresponding additive error model, which is employed for hydrological parameter estimation with the Bayesian Joint Inference methodology.

  15. An estimation method of the direct benefit of a waterlogging control project applicable to the changing environment

    NASA Astrophysics Data System (ADS)

    Zengmei, L.; Guanghua, Q.; Zishen, C.

    2015-05-01

    The direct benefit of a waterlogging control project is reflected in the reduction or avoidance of waterlogging loss. Before and after the construction of a waterlogging control project, the disaster-inducing environment in the waterlogging-prone zone is generally different. In addition, the category, quantity and spatial distribution of the disaster-bearing bodies also change to some extent. Therefore, under the changing environment, the direct benefit of a waterlogging control project should be the reduction in waterlogging losses compared to conditions without the project. Moreover, the waterlogging losses with or without the project should be the mathematical expectations of the waterlogging losses when rainstorms of all frequencies meet various water levels in the drainage-accepting zone. An estimation model of the direct benefit of waterlogging control is therefore proposed. Firstly, on the basis of a copula function, the joint distribution of rainstorms and water levels is established, so as to obtain their joint probability density function. Secondly, according to the two-dimensional joint probability density distribution, the two-dimensional domain of integration is determined, which is then divided into small domains so as to calculate, for each small domain, the probability and the difference between the average waterlogging loss with and without a waterlogging control project (called the regional benefit of the waterlogging control project), under the condition that rainstorms in the waterlogging-prone zone meet the water level in the drainage-accepting zone. Finally, the weighted mean of the project benefit over all small domains, with probability as the weight, gives the benefit of the waterlogging control project. Taking the benefit estimation of a waterlogging control project in Yangshan County, Guangdong Province, as an example, the paper briefly explains the procedures of waterlogging control project benefit estimation. The results show that the constructed benefit estimation model is applicable to the changing conditions that occur in both the disaster-inducing environment of the waterlogging-prone zone and the disaster-bearing bodies, considering all conditions in which rainstorms of all frequencies meet different water levels in the drainage-accepting zone. Thus, the estimation method can reflect the actual situation more objectively and offer a scientific basis for rational decision-making on waterlogging control projects.
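
    The final step is a probability-weighted sum over the discretized two-dimensional domain. A toy sketch (the density and loss curves below are stand-ins for the copula-based joint density and the real loss functions):

    ```python
    import numpy as np

    # Discretize the joint density f(rain, level) of rainstorm magnitude and
    # water level in the drainage-accepting zone, then integrate the loss
    # reduction against it (all functions and numbers are illustrative).
    rain = np.linspace(50, 300, 200)       # rainstorm depth (mm)
    level = np.linspace(0.0, 3.0, 150)     # water level (m)
    R, L = np.meshgrid(rain, level, indexing="ij")

    f_joint = np.exp(-((R - 120) / 60) ** 2 - ((L - 1.2) / 0.8) ** 2)
    f_joint /= f_joint.sum()               # normalize over the grid cells

    loss_without = 1e6 * np.maximum(R - 80, 0) * (1 + L)   # no project
    loss_with = 0.4 * loss_without                         # with project

    benefit = np.sum(f_joint * (loss_without - loss_with))
    print(f"expected annual benefit: {benefit:,.0f}")
    ```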

  16. Joint nociceptor nerve activity and pain in an animal model of acute gout and its modulation by intra-articular hyaluronan

    PubMed Central

    Marcotti, Aida; Miralles, Ana; Dominguez, Eduardo; Pascual, Eliseo; Gomis, Ana; Belmonte, Carlos; de la Peña, Elvira

    2018-01-01

    The mechanisms whereby deposition of monosodium urate (MSU) crystals in gout activates nociceptors to induce joint pain are incompletely understood. We tried to reproduce the signs of painful gouty arthritis by injecting into the knee joint of rats suspensions containing amorphous or triclinic (needle-shaped) MSU crystals. The magnitude of MSU-induced inflammation and pain behavior signs were correlated with the changes in firing frequency of spontaneous and movement-evoked nerve impulse activity recorded in single knee joint nociceptor saphenous nerve fibers. Joint swelling, mechanical and cold allodynia, and hyperalgesia appeared 3 hours after joint injection of MSU crystals. In parallel, spontaneous and movement-evoked joint nociceptor impulse activity rose significantly. Solutions containing amorphous or needle-shaped MSU crystals had similar inflammatory and electrophysiological effects. Intra-articular injection of hyaluronan (HA, Synvisc), a high-molecular-weight glycosaminoglycan present in the synovial fluid with analgesic effects in osteoarthritis, significantly reduced MSU-induced behavioral signs of pain and decreased the enhanced joint nociceptor activity. Our results support the interpretation that pain and nociceptor activation are not triggered by direct mechanical stimulation of nociceptors by MSU crystals, but are primarily caused by the release of excitatory mediators by inflammatory cells activated by MSU crystals. Intra-articular HA decreased behavioral and electrophysiological signs of pain, possibly through its viscoelastic filtering effect on the mechanical forces acting over sensitized joint sensory endings and probably also by a direct interaction of HA molecules with the transducing channels expressed in joint nociceptor terminals. PMID:29319609

  17. Topological characterization of antireflective and hydrophobic rough surfaces: are random process theory and fractal modeling applicable?

    NASA Astrophysics Data System (ADS)

    Borri, Claudia; Paggi, Marco

    2015-02-01

    The random process theory (RPT) has been widely applied to predict the joint probability distribution functions (PDFs) of asperity heights and curvatures of rough surfaces. A check of the predictions of RPT against the actual statistics of numerically generated random fractal surfaces and of real rough surfaces has been only partially undertaken. The present experimental and numerical study provides a deep critical comparison on this matter, providing some insight into the capabilities and limitations in applying RPT and fractal modeling to antireflective and hydrophobic rough surfaces, two important types of textured surfaces. A multi-resolution experimental campaign using a confocal profilometer with different lenses is carried out and a comprehensive software for the statistical description of rough surfaces is developed. It is found that the topology of the analyzed textured surfaces cannot be fully described according to RPT and fractal modeling. The following complexities emerge: (i) the presence of cut-offs or bi-fractality in the power-law power-spectral density (PSD) functions; (ii) a more pronounced shift of the PSD by changing resolution as compared to what was expected from fractal modeling; (iii) inaccuracy of the RPT in describing the joint PDFs of asperity heights and curvatures of textured surfaces; (iv) lack of resolution-invariance of joint PDFs of textured surfaces in case of special surface treatments, not accounted for by fractal modeling.

  18. Recognizing human actions by learning and matching shape-motion prototype trees.

    PubMed

    Jiang, Zhuolin; Lin, Zhe; Davis, Larry S

    2012-03-01

    A shape-motion prototype-based approach is introduced for action recognition. The approach represents an action as a sequence of prototypes for efficient and flexible action matching in long video sequences. During training, an action prototype tree is learned in a joint shape and motion space via hierarchical K-means clustering and each training sequence is represented as a labeled prototype sequence; then a look-up table of prototype-to-prototype distances is generated. During testing, based on a joint probability model of the actor location and action prototype, the actor is tracked while a frame-to-prototype correspondence is established by maximizing the joint probability, which is efficiently performed by searching the learned prototype tree; then actions are recognized using dynamic prototype sequence matching. Distance measures used for sequence matching are rapidly obtained by look-up table indexing, which is an order of magnitude faster than brute-force computation of frame-to-frame distances. Our approach enables robust action matching in challenging situations (such as moving cameras, dynamic backgrounds) and allows automatic alignment of action sequences. Experimental results demonstrate that our approach achieves recognition rates of 92.86 percent on a large gesture data set (with dynamic backgrounds), 100 percent on the Weizmann action data set, 95.77 percent on the KTH action data set, 88 percent on the UCF sports data set, and 87.27 percent on the CMU action data set.

  19. Coupled Monte Carlo Probability Density Function/ SPRAY/CFD Code Developed for Modeling Gas-Turbine Combustor Flows

    NASA Technical Reports Server (NTRS)

    1995-01-01

    The success of any solution methodology for studying gas-turbine combustor flows depends a great deal on how well it can model various complex, rate-controlling processes associated with turbulent transport, mixing, chemical kinetics, evaporation and spreading rates of the spray, convective and radiative heat transfer, and other phenomena. These phenomena often strongly interact with each other at disparate time and length scales. In particular, turbulence plays an important role in determining the rates of mass and heat transfer, chemical reactions, and evaporation in many practical combustion devices. Turbulence manifests its influence in a diffusion flame in several forms depending on how turbulence interacts with various flame scales. These forms range from the so-called wrinkled, or stretched, flamelets regime, to the distributed combustion regime. Conventional turbulence closure models have difficulty in treating highly nonlinear reaction rates. A solution procedure based on the joint composition probability density function (PDF) approach holds the promise of modeling various important combustion phenomena relevant to practical combustion devices such as extinction, blowoff limits, and emissions predictions because it can handle the nonlinear chemical reaction rates without any approximation. In this approach, mean and turbulence gas-phase velocity fields are determined from a standard turbulence model; the joint composition field of species and enthalpy are determined from the solution of a modeled PDF transport equation; and a Lagrangian-based dilute spray model is used for the liquid-phase representation with appropriate consideration of the exchanges of mass, momentum, and energy between the two phases. The PDF transport equation is solved by a Monte Carlo method, and existing state-of-the-art numerical representations are used to solve the mean gasphase velocity and turbulence fields together with the liquid-phase equations. The joint composition PDF approach was extended in our previous work to the study of compressible reacting flows. The application of this method to several supersonic diffusion flames associated with scramjet combustor flow fields provided favorable comparisons with the available experimental data. A further extension of this approach to spray flames, three-dimensional computations, and parallel computing was reported in a recent paper. The recently developed PDF/SPRAY/computational fluid dynamics (CFD) module combines the novelty of the joint composition PDF approach with the ability to run on parallel architectures. This algorithm was implemented on the NASA Lewis Research Center's Cray T3D, a massively parallel computer with an aggregate of 64 processor elements. The calculation procedure was applied to predict the flow properties of both open and confined swirl-stabilized spray flames.

  20. NASA Instrument Cost/Schedule Model

    NASA Technical Reports Server (NTRS)

    Habib-Agahi, Hamid; Mrozinski, Joe; Fox, George

    2011-01-01

    NASA's Office of Independent Program and Cost Evaluation (IPCE) has established a number of initiatives to improve its cost and schedule estimating capabilities. One of these initiatives has resulted in the JPL-developed NASA Instrument Cost Model (NICM). NICM is a cost and schedule estimator that contains: a system-level cost estimation tool; a subsystem-level cost estimation tool; a database of cost and technical parameters of over 140 previously flown remote sensing and in-situ instruments; a schedule estimator; a set of rules to estimate cost and schedule by life cycle phases (B/C/D); and a novel tool for developing joint probability distributions for cost and schedule risk (Joint Confidence Level (JCL)). This paper describes the development and use of NICM, including the data normalization processes, data mining methods (cluster analysis, principal components analysis, regression analysis and bootstrap cross validation), the estimating equations themselves and a demonstration of the NICM tool suite.
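
    A joint confidence level of this kind can be sketched with a simple Monte Carlo over a correlated cost-schedule distribution: draw joint samples, then report the fraction that meets both the cost and schedule targets. The distributions, correlation, and targets below are illustrative assumptions, not NICM's calibrated values.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 200_000
    rho = 0.6                         # assumed cost-schedule correlation
    z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)

    # illustrative lognormal marginals: cost in $M, schedule in months
    cost = np.exp(np.log(100.0) + 0.25 * z[:, 0])
    sched = np.exp(np.log(36.0) + 0.15 * z[:, 1])

    def jcl(c, s):
        """Joint confidence level: P(cost <= c and schedule <= s)."""
        return np.mean((cost <= c) & (sched <= s))

    print("JCL at ($110M, 38 mo):", jcl(110.0, 38.0))
    print("marginal P(cost <= $110M):", np.mean(cost <= 110.0))  # always >= JCL
    ```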

  1. Classic Maximum Entropy Recovery of the Average Joint Distribution of Apparent FRET Efficiency and Fluorescence Photons for Single-molecule Burst Measurements

    PubMed Central

    DeVore, Matthew S.; Gull, Stephen F.; Johnson, Carey K.

    2012-01-01

    We describe a method for analysis of single-molecule Förster resonance energy transfer (FRET) burst measurements using classic maximum entropy. Classic maximum entropy determines the Bayesian inference for the joint probability describing the total fluorescence photons and the apparent FRET efficiency. The method was tested with simulated data and then with DNA labeled with fluorescent dyes. The most probable joint distribution can be marginalized to obtain both the overall distribution of fluorescence photons and the apparent FRET efficiency distribution. This method proves to be ideal for determining the distance distribution of FRET-labeled biomolecules, and it successfully predicts the shape of the recovered distributions. PMID:22338694
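
    Once the most probable joint distribution is in hand, the two reported marginals are just sums over the opposite axis. A minimal sketch, with a synthetic joint standing in for the classic-MaxEnt reconstruction and all bin counts and parameters assumed for illustration:

    ```python
    import numpy as np

    # toy "most probable joint distribution" P(N, E) over total photon counts N
    # and binned apparent FRET efficiency E
    n_bins, e_bins = 50, 40
    N = np.arange(1, n_bins + 1)
    E = (np.arange(e_bins) + 0.5) / e_bins
    P = np.exp(-0.5 * ((N[:, None] - 20) / 8) ** 2
               - 0.5 * ((E[None, :] - 0.6) / 0.1) ** 2)
    P /= P.sum()

    # marginalize the joint to recover the two 1-D distributions
    p_photons = P.sum(axis=1)   # overall distribution of fluorescence photons
    p_fret = P.sum(axis=0)      # apparent FRET efficiency distribution
    print("modal efficiency:", E[p_fret.argmax()])
    ```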

  2. The non-Gaussian joint probability density function of slope and elevation for a nonlinear gravity wave field. [in ocean surface]

    NASA Technical Reports Server (NTRS)

    Huang, N. E.; Long, S. R.; Bliven, L. F.; Tung, C.-C.

    1984-01-01

    On the basis of the mapping method developed by Huang et al. (1983), an analytic expression for the non-Gaussian joint probability density function of slope and elevation for nonlinear gravity waves is derived. Various conditional and marginal density functions are also obtained through the joint density function. The analytic results are compared with a series of carefully controlled laboratory observations, and good agreement is noted. Furthermore, the laboratory wind wave field observations indicate that the capillary or capillary-gravity waves may not be the dominant components in determining the total roughness of the wave field. Thus, the analytic results, though derived specifically for the gravity waves, may have more general applications.

  3. The Gaussian copula model for the joint deficit index for droughts

    NASA Astrophysics Data System (ADS)

    Van de Vyver, H.; Van den Bergh, J.

    2018-06-01

    The characterization of droughts and their impacts is very dependent on the time scale that is involved. In order to obtain an overall drought assessment, the cumulative effects of water deficits over different times need to be examined together. For example, the recently developed joint deficit index (JDI) is based on multivariate probabilities of precipitation over various time scales from 1 to 12 months, and was constructed from empirical copulas. In this paper, we examine the Gaussian copula model for the JDI. We model the covariance across the temporal scales with a two-parameter function that is commonly used in the specific context of spatial statistics or geostatistics. The validity of the covariance models is demonstrated with long-term precipitation series. Bootstrap experiments indicate that the Gaussian copula model has advantages over the empirical copula method in the context of drought severity assessment: (i) it is able to quantify droughts outside the range of the empirical copula, (ii) it provides adequate drought quantification, and (iii) it provides a better understanding of the uncertainty in the estimation.
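
    The construction can be sketched compactly: transform the marginal non-exceedance probabilities to Gaussian scores, then evaluate a multivariate normal CDF under a covariance that decays with the separation between time scales. The powered-exponential correlation family and all parameter values below are assumptions for illustration, not the fitted model from the paper.

    ```python
    import numpy as np
    from scipy.stats import norm, multivariate_normal

    # marginal non-exceedance probabilities u_k of precipitation at the k-month
    # scale, k = 1..12 (illustrative values; in practice from fitted marginals)
    u = np.linspace(0.15, 0.35, 12)

    # assumed two-parameter powered-exponential correlation across time scales
    scales = np.arange(1, 13)
    h = np.abs(scales[:, None] - scales[None, :])
    r, nu = 6.0, 1.0
    Sigma = np.exp(-(h / r) ** nu)

    # Gaussian copula: joint probability that all 12 scales are at least this dry
    z = norm.ppf(u)
    joint = multivariate_normal(mean=np.zeros(12), cov=Sigma,
                                allow_singular=True).cdf(z)
    print("joint deficit probability:", joint)
    ```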

  4. Estimating the personal cure rate of cancer patients using population-based grouped cancer survival data.

    PubMed

    Binbing Yu; Tiwari, Ram C; Feuer, Eric J

    2011-06-01

    Cancer patients are subject to multiple competing risks of death and may die from causes other than the cancer diagnosed. The probability of not dying from the cancer diagnosed, which is one of the patients' main concerns, is sometimes called the 'personal cure' rate. Two approaches, namely the cause-specific hazards approach and the mixture model approach, have been used to model competing-risk survival data. In this article, we first show the connection and differences between crude cause-specific survival in the presence of other causes and net survival in the absence of other causes. The mixture survival model is extended to population-based grouped survival data to estimate the personal cure rate. Using the colorectal cancer survival data from the Surveillance, Epidemiology and End Results Programme, we estimate the probabilities of dying from colorectal cancer, heart disease, and other causes by age at diagnosis, race and American Joint Committee on Cancer stage.

  5. Probable flood predictions in ungauged coastal basins of El Salvador

    USGS Publications Warehouse

    Friedel, M.J.; Smith, M.E.; Chica, A.M.E.; Litke, D.

    2008-01-01

    A regionalization procedure is presented and used to predict probable flooding in four ungauged coastal river basins of El Salvador: Paz, Jiboa, Grande de San Miguel, and Goascoran. The flood-prediction problem is sequentially solved for two regions: upstream mountains and downstream alluvial plains. In the upstream mountains, a set of rainfall-runoff parameter values and recurrent peak-flow discharge hydrographs are simultaneously estimated for 20 tributary-basin models. Application of dissimilarity equations among tributary basins (soft prior information) permitted development of a parsimonious parameter structure subject to information content in the recurrent peak-flow discharge values derived using regression equations based on measurements recorded outside the ungauged study basins. The estimated joint set of parameter values formed the basis from which probable minimum and maximum peak-flow discharge limits were then estimated revealing that prediction uncertainty increases with basin size. In the downstream alluvial plain, model application of the estimated minimum and maximum peak-flow hydrographs facilitated simulation of probable 100-year flood-flow depths in confined canyons and across unconfined coastal alluvial plains. The regionalization procedure provides a tool for hydrologic risk assessment and flood protection planning that is not restricted to the case presented herein. © 2008 ASCE.

  6. A Hierarchy of Compatibility and Comeasurability Levels in Quantum Logics with Unique Conditional Probabilities

    NASA Astrophysics Data System (ADS)

    Niestegge, Gerd

    2010-12-01

    In the quantum mechanical Hilbert space formalism, the probabilistic interpretation is a later ad-hoc add-on, more or less enforced by the experimental evidence, but not motivated by the mathematical model itself. A model involving a clear probabilistic interpretation from the very beginning is provided by the quantum logics with unique conditional probabilities. It includes the projection lattices in von Neumann algebras and here probability conditionalization becomes identical with the state transition of the Lüders-von Neumann measurement process. This motivates the definition of a hierarchy of five compatibility and comeasurability levels in the abstract setting of the quantum logics with unique conditional probabilities. Their meanings are: the absence of quantum interference or influence, the existence of a joint distribution, simultaneous measurability, and the independence of the final state after two successive measurements from the sequential order of these two measurements. A further level means that two elements of the quantum logic (events) belong to the same Boolean subalgebra. In the general case, the five compatibility and comeasurability levels appear to differ, but they all coincide in the common Hilbert space formalism of quantum mechanics, in von Neumann algebras, and in some other cases.

  7. The Contextuality Loophole is Fatal for the Derivation of Bell Inequalities: Reply to a Comment by I. Schmelzer

    NASA Astrophysics Data System (ADS)

    Nieuwenhuizen, Theodorus M.; Kupczynski, Marian

    2017-02-01

    Ilya Schmelzer wrote recently: Nieuwenhuizen argued that there exists some "contextuality loophole" in Bell's theorem. This claim is unjustified. It is made clear that this arose from attaching a meaning to the title and the content of the paper different from the one intended by Nieuwenhuizen. "Contextuality loophole" means only that if the supplementary parameters describing measuring instruments are correctly introduced, Bell and Bell-type inequalities may not be proven. It is also stressed that a hidden variable model suffers from a "contextuality loophole" if it tries to describe different sets of incompatible experiments using a unique probability space and a unique joint probability distribution.

  8. Uncertainty Analysis and Parameter Estimation For Nearshore Hydrodynamic Models

    NASA Astrophysics Data System (ADS)

    Ardani, S.; Kaihatu, J. M.

    2012-12-01

    Numerical models are deterministic approaches used to represent the relevant physical processes in the nearshore. The complexity of the model physics and the uncertainty in the model inputs compel us to apply a stochastic approach to analyze the robustness of the model. The Bayesian inverse problem is one powerful way to estimate the important input model parameters (determined by a priori sensitivity analysis) and can be used for uncertainty analysis of the outputs. Bayesian techniques can be used to find the range of most probable parameters based on the probability of the observed data and the residual errors. In this study, the effect of input data involving lateral (Neumann) boundary conditions, bathymetry, and offshore wave conditions on nearshore numerical models is considered. Monte Carlo simulation is applied to a deterministic numerical model (the Delft3D modeling suite for coupled waves and flow) for the resulting uncertainty analysis of the outputs (wave height, flow velocity, mean sea level, etc.). Uncertainty analysis of the outputs is performed by random sampling from the input probability distribution functions and running the model repeatedly until the results converge. The case study used in this analysis is the Duck94 experiment, which was conducted at the U.S. Army Field Research Facility at Duck, North Carolina, USA in the fall of 1994. The joint probability of model parameters relevant for the Duck94 experiments will be found using the Bayesian approach. We will further show that, by using Bayesian techniques to estimate the optimized model parameters as inputs and applying them for uncertainty analysis, we can obtain more consistent results than by using the prior information for the input data: the variation of the uncertain parameters decreases and the probability of the observed data improves as well. Keywords: Monte Carlo Simulation, Delft3D, uncertainty analysis, Bayesian techniques, MCMC
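
    The Monte Carlo propagation step is conceptually simple: sample the uncertain inputs from their distributions, run the model for each draw, and summarize the output ensemble. The sketch below uses a cheap algebraic surrogate where a real study would invoke Delft3D; the input distributions and the surrogate itself are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # stand-in surrogate for a Delft3D run: maps uncertain inputs to a nearshore
    # wave height; a real study would call the full model here
    def model(offshore_hs, bathy_error, boundary_flux):
        return 0.6 * offshore_hs * (1.0 - 0.1 * bathy_error) + 0.05 * boundary_flux

    n = 5000
    offshore_hs = rng.lognormal(mean=np.log(1.5), sigma=0.2, size=n)   # m
    bathy_error = rng.normal(0.0, 0.3, size=n)                         # m
    boundary_flux = rng.normal(1.0, 0.25, size=n)                      # m^2/s

    hs_out = model(offshore_hs, bathy_error, boundary_flux)
    print("mean:", hs_out.mean(), "95% interval:",
          np.percentile(hs_out, [2.5, 97.5]))
    ```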

  9. Quasi-probabilities in conditioned quantum measurement and a geometric/statistical interpretation of Aharonov's weak value

    NASA Astrophysics Data System (ADS)

    Lee, Jaeha; Tsutsui, Izumi

    2017-05-01

    We show that the joint behavior of an arbitrary pair of (generally noncommuting) quantum observables can be described by quasi-probabilities, which are an extended version of the standard probabilities used for describing the outcome of measurement for a single observable. The physical situations that require these quasi-probabilities arise when one considers quantum measurement of an observable conditioned by some other variable, with the notable example being the weak measurement employed to obtain Aharonov's weak value. Specifically, we present a general prescription for the construction of quasi-joint probability (QJP) distributions associated with a given combination of observables. These QJP distributions are introduced in two complementary approaches: one from a bottom-up, strictly operational construction realized by examining the mathematical framework of the conditioned measurement scheme, and the other from a top-down viewpoint realized by applying the results of the spectral theorem for normal operators and their Fourier transforms. It is then revealed that, for a pair of simultaneously measurable observables, the QJP distribution reduces to the unique standard joint probability distribution of the pair, whereas for a noncommuting pair there exists an inherent indefiniteness in the choice of such QJP distributions, admitting a multitude of candidates that may equally be used for describing the joint behavior of the pair. In the course of our argument, we find that the QJP distributions furnish the space of operators in the underlying Hilbert space with their characteristic geometric structures such that the orthogonal projections and inner products of observables can be given statistical interpretations as, respectively, “conditionings” and “correlations”. The weak value A_w for an observable A is then given a geometric/statistical interpretation as either the orthogonal projection of A onto the subspace generated by another observable B, or equivalently, as the conditioning of A given B with respect to the QJP distribution under consideration.

  10. Self-control and its relation to joint developmental trajectories of cannabis use and depressive mood symptoms.

    PubMed

    Otten, Roy; Barker, Edward D; Maughan, Barbara; Arseneault, Louise; Engels, Rutger C M E

    2010-12-01

    Cannabis use and depressive mood symptoms in adolescence have been found to co-occur. In exploring the nature of this relationship and in the search for mechanisms that explain this link, scholars have postulated the idea for a 'common liability model'. According to this model, the link between cannabis use and depressive symptoms can be explained by an underlying risk factor. One important candidate for this underlying risk factor may be self-control, as a reflection of immature self-regulatory systems in adolescence. In the present study, we will test the extent to which joint development of cannabis use and depressive symptoms can be explained as an expression of self-control. A total of 428 adolescents participated in a five-wave longitudinal design. Main study outcomes were self-reports of self-control (age 12) and cannabis use and depressive symptoms (ages 12-16). We established six trajectories of joint development of cannabis use and depressive symptoms. Conditional probabilities indicated that cannabis use and depressive symptoms were symmetrically related. Levels of self-control were lowest for adolescents following the joint developmental pathway of cannabis use and high depressive symptoms. Low levels of self-control are predictive of joint development of cannabis use and depressive symptoms. Future studies should concentrate on the role of self-control in co-occurrence of other health risk behaviors and on psychological and physiological mechanisms underlying self-control and its relation to co-occurrence. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  11. Dynamic Latent Trait Models with Mixed Hidden Markov Structure for Mixed Longitudinal Outcomes.

    PubMed

    Zhang, Yue; Berhane, Kiros

    2016-01-01

    We propose a general Bayesian joint modeling approach to model mixed longitudinal outcomes from the exponential family for taking into account any differential misclassification that may exist among categorical outcomes. Under this framework, outcomes observed without measurement error are related to latent trait variables through generalized linear mixed effect models. The misclassified outcomes are related to the latent class variables, which represent unobserved real states, using mixed hidden Markov models (MHMM). In addition to enabling the estimation of parameters in prevalence, transition and misclassification probabilities, MHMMs capture cluster level heterogeneity. A transition modeling structure allows the latent trait and latent class variables to depend on observed predictors at the same time period and also on latent trait and latent class variables at previous time periods for each individual. Simulation studies are conducted to make comparisons with traditional models in order to illustrate the gains from the proposed approach. The new approach is applied to data from the Southern California Children Health Study (CHS) to jointly model questionnaire based asthma state and multiple lung function measurements in order to gain better insight about the underlying biological mechanism that governs the inter-relationship between asthma state and lung function development.

  12. Does joint line elevation after revision knee arthroplasty affect tibio-femoral kinematics, contact pressure or collateral ligament lengths? An in vitro analysis

    PubMed Central

    Kowalczewski, Jacek B.; Chevalier, Yan; Okon, Tomasz; Innocenti, Bernardo; Bellemans, Johan

    2015-01-01

    Introduction Correct restoration of the joint line is generally considered crucial when performing total knee arthroplasty (TKA). During revision knee arthroplasty, however, elevation of the joint line occurs frequently. The general belief is that this negatively affects the clinical outcome, but the reasons are still not well understood. Material and methods In this cadaveric in vitro study the biomechanical consequences of joint line elevation were investigated using a previously validated cadaver model simulating active deep knee squats and passive flexion-extension cycles. Knee specimens were sequentially tested after total knee arthroplasty with joint line restoration and after 4 mm joint line elevation. Results The tibia rotated internally with increasing knee flexion during both passive and squatting motion (range: 17° and 7°, respectively). Joint line elevation of 4 mm did not make a statistically significant difference. During passive motion, the tibia tended to become slightly more adducted with increasing knee flexion (range: 2°), while it went into slightly less adduction during squatting (range: –2°). Neither trend was influenced by joint line elevation. Anteroposterior translation of the femoral condyle centres was also not affected by joint line elevation, although there was a tendency for a small posterior shift (of about 3 mm) during squatting after joint line elevation. In terms of kinetics, ligament lengths and length changes, tibiofemoral contact pressures and quadriceps forces all showed the same patterns before and after joint line elevation. No statistically significant changes could be detected. Conclusions Our study suggests that joint line elevation by 4 mm in revision total knee arthroplasty does not cause significant kinematic and kinetic differences during passive flexion/extension movement and squatting in the tibio-femoral joint, nor does it affect the elongation patterns of the collateral ligaments. Therefore, clinical problems after joint line elevation are probably situated in the patello-femoral joint or caused by joint line elevation of more than 4 mm. PMID:25995746

  13. On the motion of classical three-body system with consideration of quantum fluctuations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gevorkyan, A. S., E-mail: g-ashot@sci.am

    2017-03-15

    We obtained the system of stochastic differential equations which describes the classical motion of the three-body system under the influence of quantum fluctuations. Using these SDEs, a second-order partial differential equation is derived for the joint probability distribution of the total momentum of the body system. It is shown that the equation for the probability distribution is solved jointly with the classical equations, which in turn are responsible for the topological peculiarities of the tubes of quantum currents, the transitions between asymptotic channels, and, correspondingly, for the emergence of quantum chaos.

  14. First-passage problems: A probabilistic dynamic analysis for degraded structures

    NASA Technical Reports Server (NTRS)

    Shiao, Michael C.; Chamis, Christos C.

    1990-01-01

    Structures subjected to random excitations with uncertain system parameters degraded by surrounding environments (a random time history) are studied. Methods are developed to determine the statistics of dynamic responses, such as the time-varying mean, the standard deviation, the autocorrelation functions, and the joint probability density function of any response and its derivative. Moreover, the first-passage problems with deterministic and stationary/evolutionary random barriers are evaluated. The time-varying (joint) mean crossing rate and the probability density function of the first-passage time for various random barriers are derived.
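
    The link between the joint probability density of a response and its derivative and the crossing statistics is the standard Rice formula; under the usual Poisson-crossing assumption it also yields an approximate first-passage probability:

    ```latex
    % Rice's formula: mean rate of upcrossings of a barrier b at time t,
    % in terms of the joint density of the response and its derivative
    \nu^{+}(b,t) = \int_{0}^{\infty} \dot{x}\, p_{X\dot{X}}(b,\dot{x};t)\, \mathrm{d}\dot{x}

    % Poisson-crossing approximation to the first-passage probability on [0,T]
    P_f(T) \approx 1 - \exp\left(-\int_{0}^{T} \nu^{+}(b,t)\, \mathrm{d}t\right)
    ```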

  15. Serial Spike Time Correlations Affect Probability Distribution of Joint Spike Events.

    PubMed

    Shahi, Mina; van Vreeswijk, Carl; Pipa, Gordon

    2016-01-01

    Detecting the existence of temporally coordinated spiking activity, and its role in information processing in the cortex, has remained a major challenge for neuroscience research. Different methods and approaches have been suggested to test whether the observed synchronized events are significantly different from those expected by chance. To analyze the simultaneous spike trains for precise spike correlation, these methods typically model the spike trains as a Poisson process implying that the generation of each spike is independent of all the other spikes. However, studies have shown that neural spike trains exhibit dependence among spike sequences, such as the absolute and relative refractory periods which govern the spike probability of the oncoming action potential based on the time of the last spike, or the bursting behavior, which is characterized by short epochs of rapid action potentials, followed by longer episodes of silence. Here we investigate non-renewal processes with the inter-spike interval distribution model that incorporates spike-history dependence of individual neurons. For that, we use the Monte Carlo method to estimate the full shape of the coincidence count distribution and to generate false positives for coincidence detection. The results show that compared to the distributions based on homogeneous Poisson processes, and also non-Poisson processes, the width of the distribution of joint spike events changes. Non-renewal processes can lead to either heavy-tailed or narrow coincidence distributions. We conclude that small differences in the exact autostructure of the point process can cause large differences in the width of a coincidence distribution. Therefore, manipulations of the autostructure for the estimation of significance of joint spike events seem to be inadequate.
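
    The effect is easy to reproduce numerically: replace the Poisson assumption with a gamma renewal process (shape 1 recovers Poisson; larger shapes mimic refractoriness) and compare Monte Carlo coincidence-count distributions. Rates, windows, and shapes below are illustrative assumptions, not the paper's exact generative model.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def renewal_train(rate, shape, t_max):
        # gamma renewal process: shape=1 is Poisson, shape>1 mimics refractoriness
        isi = rng.gamma(shape, 1.0 / (rate * shape), int(2 * rate * t_max) + 50)
        t = np.cumsum(isi)
        return t[t < t_max]

    def coincidences(s1, s2, win):
        # spikes in s1 with an s2 spike within +/- win (nearest-neighbor test)
        j = np.searchsorted(s2, s1)
        left = np.abs(s1 - s2[np.clip(j - 1, 0, len(s2) - 1)])
        right = np.abs(s2[np.clip(j, 0, len(s2) - 1)] - s1)
        return np.sum(np.minimum(left, right) <= win)

    def count_distribution(shape, n_trials=2000):
        return np.array([coincidences(renewal_train(20.0, shape, 10.0),
                                      renewal_train(20.0, shape, 10.0), 0.005)
                         for _ in range(n_trials)])

    # the width of the coincidence-count distribution tracks the autostructure
    print("Poisson-like std:   ", count_distribution(shape=1.0).std())
    print("refractory-like std:", count_distribution(shape=4.0).std())
    ```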

  16. Serial Spike Time Correlations Affect Probability Distribution of Joint Spike Events

    PubMed Central

    Shahi, Mina; van Vreeswijk, Carl; Pipa, Gordon

    2016-01-01

    Detecting the existence of temporally coordinated spiking activity, and its role in information processing in the cortex, has remained a major challenge for neuroscience research. Different methods and approaches have been suggested to test whether the observed synchronized events are significantly different from those expected by chance. To analyze the simultaneous spike trains for precise spike correlation, these methods typically model the spike trains as a Poisson process implying that the generation of each spike is independent of all the other spikes. However, studies have shown that neural spike trains exhibit dependence among spike sequences, such as the absolute and relative refractory periods which govern the spike probability of the oncoming action potential based on the time of the last spike, or the bursting behavior, which is characterized by short epochs of rapid action potentials, followed by longer episodes of silence. Here we investigate non-renewal processes with the inter-spike interval distribution model that incorporates spike-history dependence of individual neurons. For that, we use the Monte Carlo method to estimate the full shape of the coincidence count distribution and to generate false positives for coincidence detection. The results show that compared to the distributions based on homogeneous Poisson processes, and also non-Poisson processes, the width of the distribution of joint spike events changes. Non-renewal processes can lead to either heavy-tailed or narrow coincidence distributions. We conclude that small differences in the exact autostructure of the point process can cause large differences in the width of a coincidence distribution. Therefore, manipulations of the autostructure for the estimation of significance of joint spike events seem to be inadequate. PMID:28066225

  17. Computer simulation of random variables and vectors with arbitrary probability distribution laws

    NASA Technical Reports Server (NTRS)

    Bogdan, V. M.

    1981-01-01

    Assume that there is given an arbitrary n-dimensional probability distribution F. A recursive construction is found for a sequence of functions x_1 = f_1(U_1, ..., U_n), ..., x_n = f_n(U_1, ..., U_n) such that if U_1, ..., U_n are independent random variables having uniform distribution over the open interval (0,1), then the joint distribution of the variables x_1, ..., x_n coincides with the distribution F. Since uniform independent random variables can be well simulated by means of a computer, this result allows one to simulate arbitrary n-random variables if their joint probability distribution is known.
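
    The recursive construction amounts to a chain of conditional quantile transforms: the first uniform is pushed through the inverse marginal CDF, the next through the inverse conditional CDF given the values already drawn, and so on. A two-dimensional sketch with assumed simple marginals:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n = 100_000
    u1, u2 = rng.random(n), rng.random(n)

    # f_1: invert the marginal CDF of X1 ~ Exp(1)
    x1 = -np.log1p(-u1)

    # f_2: invert the conditional CDF of X2 | X1 = x1, here X2 ~ Uniform(0, X1)
    x2 = x1 * u2

    # (x1, x2) now has joint density f(x1, x2) = exp(-x1) / x1 on 0 < x2 < x1
    print(np.mean(x2 < x1))   # 1.0 by construction
    print(x1.mean())          # close to 1, the Exp(1) mean
    ```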

  18. Probability judgments under ambiguity and conflict

    PubMed Central

    Smithson, Michael

    2015-01-01

    Whether conflict and ambiguity are distinct kinds of uncertainty remains an open question, as does their joint impact on judgments of overall uncertainty. This paper reviews recent advances in our understanding of human judgment and decision making when both ambiguity and conflict are present, and presents two types of testable models of judgments under conflict and ambiguity. The first type concerns estimate-pooling to arrive at “best” probability estimates. The second type is models of subjective assessments of conflict and ambiguity. These models are developed for dealing with both described and experienced information. A framework for testing these models in the described-information setting is presented, including a reanalysis of a multi-nation data-set to test best-estimate models, and a study of participants' assessments of conflict, ambiguity, and overall uncertainty reported by Smithson (2013). A framework for research in the experienced-information setting is then developed, that differs substantially from extant paradigms in the literature. This framework yields new models of “best” estimates and perceived conflict. The paper concludes with specific suggestions for future research on judgment and decision making under conflict and ambiguity. PMID:26042081

  19. Probability judgments under ambiguity and conflict.

    PubMed

    Smithson, Michael

    2015-01-01

    Whether conflict and ambiguity are distinct kinds of uncertainty remains an open question, as does their joint impact on judgments of overall uncertainty. This paper reviews recent advances in our understanding of human judgment and decision making when both ambiguity and conflict are present, and presents two types of testable models of judgments under conflict and ambiguity. The first type concerns estimate-pooling to arrive at "best" probability estimates. The second type is models of subjective assessments of conflict and ambiguity. These models are developed for dealing with both described and experienced information. A framework for testing these models in the described-information setting is presented, including a reanalysis of a multi-nation data-set to test best-estimate models, and a study of participants' assessments of conflict, ambiguity, and overall uncertainty reported by Smithson (2013). A framework for research in the experienced-information setting is then developed, that differs substantially from extant paradigms in the literature. This framework yields new models of "best" estimates and perceived conflict. The paper concludes with specific suggestions for future research on judgment and decision making under conflict and ambiguity.

  20. No-signaling quantum key distribution: solution by linear programming

    NASA Astrophysics Data System (ADS)

    Hwang, Won-Young; Bae, Joonwoo; Killoran, Nathan

    2015-02-01

    We outline a straightforward approach for obtaining a secret key rate using only no-signaling constraints and linear programming. Assuming an individual attack, we consider all possible joint probabilities. Initially, we study only the case where Eve has binary outcomes, and we impose constraints due to the no-signaling principle and given measurement outcomes. Within the remaining space of joint probabilities, by using linear programming, we get a bound on the probability of Eve correctly guessing Bob's bit. We then make use of an inequality that relates this guessing probability to the mutual information between Bob and a more general Eve, who is not binary-restricted. Putting our computed bound together with the Csiszár-Körner formula, we obtain a positive key generation rate. The optimal value of this rate agrees with known results, but was calculated in a more straightforward way, offering the potential of generalization to different scenarios.
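
    A compact version of this optimization can be written directly with scipy's linprog: the variables are the 32 joint probabilities p(a,b,e|x,y) for a binary-outcome Eve, the equality constraints force consistency with the observed box and require each of Eve's sub-boxes to be no-signaling, and the objective is Eve's probability of guessing Bob's bit at one setting pair. The observed correlations below are taken to be the CHSH-optimal quantum ones as an assumed input; this is a sketch of the approach, not the paper's exact constraint set.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    K = 1 / np.sqrt(2)   # CHSH-optimal quantum correlator (assumed observed box)

    def p_obs(a, b, x, y):
        # isotropic box: p(a,b|x,y) = (1 + (-1)^(a+b+xy) K) / 4
        return (1 + (-1) ** (a + b + x * y) * K) / 4

    def I(a, b, e, x, y):
        # flat index for the 32 variables p(a,b,e|x,y)
        return (((a * 2 + b) * 2 + e) * 2 + x) * 2 + y

    n = 32
    A_eq, b_eq = [], []

    # (1) summing out Eve must reproduce the observed Alice-Bob box
    for a in range(2):
        for b in range(2):
            for x in range(2):
                for y in range(2):
                    row = np.zeros(n)
                    for e in range(2):
                        row[I(a, b, e, x, y)] = 1.0
                    A_eq.append(row); b_eq.append(p_obs(a, b, x, y))

    # (2) each of Eve's sub-boxes must itself be no-signaling
    for e in range(2):
        for a in range(2):
            for x in range(2):      # Alice's marginal independent of y
                row = np.zeros(n)
                for b in range(2):
                    row[I(a, b, e, x, 0)] += 1.0
                    row[I(a, b, e, x, 1)] -= 1.0
                A_eq.append(row); b_eq.append(0.0)
        for b in range(2):
            for y in range(2):      # Bob's marginal independent of x
                row = np.zeros(n)
                for a in range(2):
                    row[I(a, b, e, 0, y)] += 1.0
                    row[I(a, b, e, 1, y)] -= 1.0
                A_eq.append(row); b_eq.append(0.0)

    # maximize Eve's probability of guessing Bob's bit at settings (x,y) = (0,0)
    c = np.zeros(n)
    for a in range(2):
        for b in range(2):
            c[I(a, b, b, 0, 0)] = -1.0    # linprog minimizes, so negate

    res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=(0, 1))
    print("bound on Eve's guessing probability:", -res.fun)
    ```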

  1. Cellular Automata Generalized To An Inferential System

    NASA Astrophysics Data System (ADS)

    Blower, David J.

    2007-11-01

    Stephen Wolfram popularized elementary one-dimensional cellular automata in his book, A New Kind of Science. Among many remarkable things, he proved that one of these cellular automata was a Universal Turing Machine. Such cellular automata can be interpreted in a different way by viewing them within the context of the formal manipulation rules from probability theory. Bayes's Theorem is the most famous of such formal rules. As a prelude, we recapitulate Jaynes's presentation of how probability theory generalizes classical logic using modus ponens as the canonical example. We emphasize the important conceptual standing of Boolean Algebra for the formal rules of probability manipulation and give an alternative demonstration augmenting and complementing Jaynes's derivation. We show the complementary roles played in arguments of this kind by Bayes's Theorem and joint probability tables. A good explanation for all of this is afforded by the expansion of any particular logic function via the disjunctive normal form (DNF). The DNF expansion is a useful heuristic emphasized in this exposition because such expansions point out where relevant 0s should be placed in the joint probability tables for logic functions involving any number of variables. It then becomes a straightforward exercise to rely on Boolean Algebra, Bayes's Theorem, and joint probability tables in extrapolating to Wolfram's cellular automata. Cellular automata are seen as purely deductive systems, just like classical logic, which probability theory is then able to generalize. Thus, any uncertainties which we might like to introduce into the discussion about cellular automata are handled with ease via the familiar inferential path. Most importantly, the difficult problem of predicting what cellular automata will do in the far future is treated like any inferential prediction problem.
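
    The joint-probability-table view is easy to make concrete for an elementary cellular automaton: a deterministic update rule is just a conditional probability table whose entries are 0 or 1, with the 0s sitting exactly where the DNF expansion places them. A sketch for Rule 110 under an assumed uniform prior over neighborhoods:

    ```python
    import numpy as np

    RULE = 110
    def rule_output(l, c, r):
        # read the output bit for neighborhood (l, c, r) from the rule number
        return (RULE >> (l * 4 + c * 2 + r)) & 1

    # joint probability table P(l, c, r, next): deterministic rule gives 0/1
    # entries, multiplied by a (here uniform) prior over the 8 neighborhoods
    P = np.zeros((2, 2, 2, 2))
    for l in range(2):
        for c in range(2):
            for r in range(2):
                P[l, c, r, rule_output(l, c, r)] = 1.0 / 8.0

    # inference by marginalization: P(next = 1 | c = 1)
    p_cond = P[:, 1, :, 1].sum() / P[:, 1, :, :].sum()
    print(p_cond)   # 0.75 for Rule 110
    ```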

  2. Crystal plasticity finite element analysis of deformation behaviour in SAC305 solder joint

    NASA Astrophysics Data System (ADS)

    Darbandi, Payam

    Due to the awareness of the potential health hazards associated with the toxicity of lead (Pb), actions have been taken to eliminate or reduce the use of Pb in consumer products. Among those, tin (Sn) solders have been used for the assembly of electronic systems. Anisotropy is of significant importance in all structural metals, but this characteristic is unusually strong in Sn, making Sn-based solder joints one of the best examples of the influence of anisotropy. The effect of anisotropy, arising from the crystal structure of tin and the large-grain microstructure, on the evolution of the constitutive response of microscale SAC305 solder joints is investigated. Insights into the effects of key microstructural features and dominant plastic deformation mechanisms influencing the measured relative activity of slip systems in SAC305 are obtained from a combination of optical microscopy, orientation imaging microscopy (OIM), slip plane trace analysis and crystal plasticity finite element (CPFE) modeling. Package-level SAC305 specimens were subjected to shear deformation in sequential steps and characterized using optical microscopy and OIM to identify the activity of slip systems. X-ray micro Laue diffraction and a high-energy monochromatic X-ray beam were employed to characterize the joint-scale tensile samples and provide the information needed to compare against and validate the CPFE model. A CPFE model was developed that can account for the relative ease of activating slip systems in SAC305 solder, based upon a statistical estimate of the correlation between the critical resolved shear stress and the probability of activating various slip systems. Simulation results show that the CPFE model developed using this statistical analysis of slip-system activity not only satisfies the kinematic requirements of plastic deformation in the crystal coordinate system (activity of slip systems) and the global coordinate system (shape changes), but is also able to predict the evolution of stress in joint-level SAC305 samples.

  3. Throughput assurance of wireless body area networks coexistence based on stochastic geometry

    PubMed Central

    Wang, Yinglong; Shu, Minglei; Wu, Shangbin

    2017-01-01

    Wireless body area networks (WBANs) are expected to influence the traditional medical model by assisting caretakers with health telemonitoring. Within WBANs, the transmit power of the nodes should be as small as possible owing to their limited energy capacity but should be sufficiently large to guarantee the quality of the signal at the receiving nodes. When multiple WBANs coexist in a small area, the communication reliability and overall throughput can be seriously affected due to resource competition and interference. We show that the total network throughput largely depends on the WBAN distribution density (λp), the transmit power of their nodes (Pt), and their carrier-sensing threshold (γ). Using stochastic geometry, a joint carrier-sensing threshold and power control strategy is proposed to meet the demand of coexisting WBANs based on the IEEE 802.15.4 standard. Given different network distributions and carrier-sensing thresholds, the proposed strategy derives a minimum transmit power according to the varying surrounding environment. We obtain expressions for the transmission success probability and throughput when adopting this strategy. Using numerical examples, we show that the joint carrier-sensing threshold and transmit power strategy can effectively improve the overall system throughput and reduce interference. Additionally, this paper studies the effects of a guard zone on the throughput using a Matérn hard-core point process (HCPP) type II model. Theoretical analysis and simulation results show that the HCPP model can increase the success probability and throughput of networks. PMID:28141841
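
    The interplay of density, transmit power, and carrier sensing can be sketched with a small Monte Carlo: interfering hubs are dropped as a Poisson point process, a guard zone crudely mimics carrier sensing (in the spirit of Matérn type II thinning), and success is an SIR threshold test. All numbers below are illustrative assumptions rather than the paper's IEEE 802.15.4 parameterization.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def success_prob(lam, p_t, gamma_th, r_guard=0.0, alpha=3.5,
                     r_link=1.0, region=15.0, n_trials=20_000):
        # interferers as a Poisson point process in a disc of radius `region`;
        # success when the signal-to-interference ratio exceeds gamma_th
        area = np.pi * region ** 2
        ok = 0
        for _ in range(n_trials):
            k = rng.poisson(lam * area)
            r = region * np.sqrt(rng.random(k))   # radii, uniform in the disc
            r = r[r > r_guard]                    # carrier-sensing guard zone
            interference = np.sum(p_t * r ** (-alpha))
            signal = p_t * r_link ** (-alpha)
            ok += 1 if interference == 0 or signal / interference > gamma_th else 0
        return ok / n_trials

    # a larger guard zone raises the success probability at fixed density
    for guard in (0.0, 1.5, 3.0):
        print(guard, success_prob(lam=0.02, p_t=1.0, gamma_th=1.0, r_guard=guard))
    ```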

  4. Modeling workplace contact networks: The effects of organizational structure, architecture, and reporting errors on epidemic predictions.

    PubMed

    Potter, Gail E; Smieszek, Timo; Sailer, Kerstin

    2015-09-01

    Face-to-face social contacts are potentially important transmission routes for acute respiratory infections, and understanding the contact network can improve our ability to predict, contain, and control epidemics. Although workplaces are important settings for infectious disease transmission, few studies have collected workplace contact data and estimated workplace contact networks. We use contact diaries, architectural distance measures, and institutional structures to estimate social contact networks within a Swiss research institute. Some contact reports were inconsistent, indicating reporting errors. We adjust for this with a latent variable model, jointly estimating the true (unobserved) network of contacts and duration-specific reporting probabilities. We find that contact probability decreases with distance, and that research group membership, role, and shared projects are strongly predictive of contact patterns. Estimated reporting probabilities were low only for 0-5 min contacts. Adjusting for reporting error changed the estimate of the duration distribution, but did not change the estimates of covariate effects and had little effect on epidemic predictions. Our epidemic simulation study indicates that inclusion of network structure based on architectural and organizational structure data can improve the accuracy of epidemic forecasting models.

  5. Modeling workplace contact networks: The effects of organizational structure, architecture, and reporting errors on epidemic predictions

    PubMed Central

    Potter, Gail E.; Smieszek, Timo; Sailer, Kerstin

    2015-01-01

    Face-to-face social contacts are potentially important transmission routes for acute respiratory infections, and understanding the contact network can improve our ability to predict, contain, and control epidemics. Although workplaces are important settings for infectious disease transmission, few studies have collected workplace contact data and estimated workplace contact networks. We use contact diaries, architectural distance measures, and institutional structures to estimate social contact networks within a Swiss research institute. Some contact reports were inconsistent, indicating reporting errors. We adjust for this with a latent variable model, jointly estimating the true (unobserved) network of contacts and duration-specific reporting probabilities. We find that contact probability decreases with distance, and that research group membership, role, and shared projects are strongly predictive of contact patterns. Estimated reporting probabilities were low only for 0–5 min contacts. Adjusting for reporting error changed the estimate of the duration distribution, but did not change the estimates of covariate effects and had little effect on epidemic predictions. Our epidemic simulation study indicates that inclusion of network structure based on architectural and organizational structure data can improve the accuracy of epidemic forecasting models. PMID:26634122

  6. Improving Constraints on Climate System Properties with Additional Data and New Statistical and Sampling Methods

    NASA Astrophysics Data System (ADS)

    Forest, C. E.; Libardoni, A. G.; Sokolov, A. P.; Monier, E.

    2017-12-01

    We use the updated MIT Earth System Model (MESM) to derive the joint probability distribution function for equilibrium climate sensitivity (S), an effective heat diffusivity (Kv), and the net aerosol forcing (Faer). Using a new 1800-member ensemble of MESM runs, we derive PDFs by comparing model outputs against historical observations of surface temperature and global mean ocean heat content. We focus on how changes in (i) the MESM model, (ii) recent surface temperature and ocean heat content observations, and (iii) estimates of internal climate variability all contribute to uncertainties. We show that estimates of S increase and Faer is less negative. These shifts result partly from new model forcing inputs but also from including recent temperature records that lead to higher values of S and Kv. We show that the parameter distributions are sensitive to the internal variability in the climate system. When considering these factors, we derive our best estimate for the joint probability distribution for the climate system properties. We estimate the 90-percent confidence intervals for climate sensitivity as 2.7-5.4 °C with a mode of 3.5 °C, for Kv as 1.9-23.0 cm² s⁻¹ with a mode of 4.41 cm² s⁻¹, and for Faer as -0.4 to -0.04 W m⁻² with a mode of -0.25 W m⁻². Lastly, we estimate TCR to be between 1.4 and 2.1 °C with a mode of 1.8 °C.

  7. Multi-objective optimization for evaluation of simulation fidelity for precipitation, cloudiness and insolation in regional climate models

    NASA Astrophysics Data System (ADS)

    Lee, H.

    2016-12-01

    Precipitation is one of the most important climate variables that are taken into account in studying regional climate. Nevertheless, how precipitation will respond to a changing climate and even its mean state in the current climate are not well represented in regional climate models (RCMs). Hence, comprehensive and mathematically rigorous methodologies to evaluate precipitation and related variables in multiple RCMs are required. The main objective of the current study is to evaluate the joint variability of climate variables related to model performance in simulating precipitation and condense multiple evaluation metrics into a single summary score. We use multi-objective optimization, a mathematical process that provides a set of optimal tradeoff solutions based on a range of evaluation metrics, to characterize the joint representation of precipitation, cloudiness and insolation in RCMs participating in the North American Regional Climate Change Assessment Program (NARCCAP) and Coordinated Regional Climate Downscaling Experiment-North America (CORDEX-NA). We also leverage ground observations, NASA satellite data and the Regional Climate Model Evaluation System (RCMES). Overall, the quantitative comparison of joint probability density functions between the three variables indicates that performance of each model differs markedly between sub-regions and also shows strong seasonal dependence. Because of the large variability across the models, it is important to evaluate models systematically and make future projections using only models showing relatively good performance. Our results indicate that the optimized multi-model ensemble always shows better performance than the arithmetic ensemble mean and may guide reliable future projections.

  8. Effect of stress concentration on the fatigue strength of A7N01S-T5 welded joints

    NASA Astrophysics Data System (ADS)

    Zhang, Mingyue; Gou, Guoqing; Hang, Zongqiu; Chen, Hui

    2017-07-01

    Stress concentration is a key factor that affects the fatigue strength of welded joints. In this study, the fatigue strengths of butt joints with and without the weld reinforcement were tested to quantify the effect of stress concentration. The fatigue strength of the welded joints was measured with a high-frequency fatigue machine. The P-S-N curves were drawn under different confidence levels and failure probabilities. The results show that butt joints with the weld reinforcement have much lower fatigue strength than joints without the weld reinforcement. Therefore, stress concentration introduced by the weld reinforcement should be controlled.

  9. A Proposed Approach for Joint Modeling of the Longitudinal and Time-To-Event Data in Heterogeneous Populations: An Application to HIV/AIDS's Disease.

    PubMed

    Roustaei, Narges; Ayatollahi, Seyyed Mohammad Taghi; Zare, Najaf

    2018-01-01

    In recent years, joint models have been widely used for modeling longitudinal and time-to-event data simultaneously. In this study, we propose an approach (PA) to study longitudinal and survival outcomes simultaneously in heterogeneous populations. PA relaxes the assumption of conditional independence (CI). We also compared PA with the joint latent class model (JLCM) and a separate approach (SA) for various sample sizes (150, 300, and 600) and different association parameters (0, 0.2, and 0.5). The average bias of parameter estimation (AB-PE), average SE of parameter estimation (ASE-PE), and coverage probability of the 95% confidence interval (CP) were compared among the three approaches. In most cases, as the sample size increased, AB-PE and ASE-PE decreased for the three approaches, and CP got closer to the nominal level of 0.95. When there was a considerable association, PA performed better than SA and JLCM in the sense that PA had the smallest AB-PE and ASE-PE for the longitudinal submodel among the three approaches for small and moderate sample sizes. Moreover, JLCM was preferable for the no-association case and large sample sizes. Finally, the evaluated approaches were applied to a real HIV/AIDS dataset for validation, and the results were compared.

  10. Estimating Model Probabilities using Thermodynamic Markov Chain Monte Carlo Methods

    NASA Astrophysics Data System (ADS)

    Ye, M.; Liu, P.; Beerli, P.; Lu, D.; Hill, M. C.

    2014-12-01

    Markov chain Monte Carlo (MCMC) methods are widely used to evaluate model probability for quantifying model uncertainty. In a general procedure, MCMC simulations are first conducted for each individual model, and the MCMC parameter samples are then used to approximate the marginal likelihood of the model by calculating the geometric mean of the joint likelihood of the model and its parameters. It has been found that this geometric-mean method suffers from the numerical problem of a low convergence rate. A simple test case shows that even millions of MCMC samples are insufficient to yield an accurate estimate of the marginal likelihood. To resolve this problem, a thermodynamic method is used in which multiple MCMC runs are conducted with different values of a heating coefficient between zero and one. When the heating coefficient is zero, the MCMC run is equivalent to a random walk MC in the prior parameter space; when the heating coefficient is one, the MCMC run is the conventional one. For a simple case with an analytical form of the marginal likelihood, the thermodynamic method yields a more accurate estimate than the geometric-mean method. This is also demonstrated for a case of groundwater modeling in which four alternative models are postulated based on different conceptualizations of a confining layer. This groundwater example shows that model probabilities estimated using the thermodynamic method are more reasonable than those obtained using the geometric method. The thermodynamic method is general, and can be used for a wide range of environmental problems for model uncertainty quantification.
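
    The idea behind the thermodynamic (path sampling) estimator can be shown on a conjugate toy problem where the tempered posterior is available in closed form: the log marginal likelihood equals the integral, over the heating coefficient, of the expected log-likelihood under each tempered posterior. The Gaussian model and all constants below are illustrative assumptions; the closed-form answer serves only as a check.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    x = rng.normal(0.7, 1.0, size=50)       # synthetic data, known unit variance
    n, sx, sxx = len(x), x.sum(), (x ** 2).sum()

    def log_lik(theta):                     # sum_i log N(x_i | theta, 1)
        return (-0.5 * n * np.log(2 * np.pi)
                - 0.5 * (sxx - 2 * theta * sx + n * theta ** 2))

    # power posterior p_beta ∝ prior * likelihood^beta; with a N(0,1) prior it
    # is Gaussian here, so we sample it exactly instead of running full MCMC
    betas = np.linspace(0.0, 1.0, 21)
    e_ll = []
    for b in betas:
        prec = 1.0 + b * n
        theta = rng.normal(b * sx / prec, prec ** -0.5, size=5000)
        e_ll.append(log_lik(theta).mean())
    e_ll = np.array(e_ll)

    # thermodynamic identity: log m = integral over beta of E_beta[log L]
    log_ml_ti = np.sum(np.diff(betas) * (e_ll[1:] + e_ll[:-1]) / 2)

    # closed-form marginal likelihood for this conjugate toy, as a check
    log_ml_exact = (-0.5 * n * np.log(2 * np.pi) - 0.5 * np.log(n + 1.0)
                    - 0.5 * (sxx - sx ** 2 / (n + 1.0)))
    print(log_ml_ti, log_ml_exact)          # should agree closely
    ```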

  11. Meta-analysis of studies with bivariate binary outcomes: a marginal beta-binomial model approach

    PubMed Central

    Chen, Yong; Hong, Chuan; Ning, Yang; Su, Xiao

    2018-01-01

    When conducting a meta-analysis of studies with bivariate binary outcomes, challenges arise when the within-study correlation and between-study heterogeneity should be taken into account. In this paper, we propose a marginal beta-binomial model for the meta-analysis of studies with binary outcomes. This model is based on the composite likelihood approach, and has several attractive features compared to existing models such as the bivariate generalized linear mixed model (Chu and Cole, 2006) and the Sarmanov beta-binomial model (Chen et al., 2012). The advantages of the proposed marginal model include modeling the probabilities in the original scale, not requiring any transformation of probabilities or any link function, having a closed-form expression of the likelihood function, and no constraints on the correlation parameter. More importantly, since the marginal beta-binomial model is only based on the marginal distributions, it does not suffer from potential misspecification of the joint distribution of bivariate study-specific probabilities. Such misspecification is difficult to detect and can lead to biased inference using current methods. We compare the performance of the marginal beta-binomial model with the bivariate generalized linear mixed model and the Sarmanov beta-binomial model by simulation studies. Interestingly, the results show that the marginal beta-binomial model performs better than the Sarmanov beta-binomial model, whether or not the true model is Sarmanov beta-binomial, and the marginal beta-binomial model is more robust than the bivariate generalized linear mixed model under model misspecifications. Two meta-analyses of diagnostic accuracy studies and a meta-analysis of case-control studies are conducted for illustration. PMID:26303591

  12. Probability of spacesuit-induced fingernail trauma is associated with hand circumference.

    PubMed

    Opperman, Roedolph A; Waldie, James M A; Natapoff, Alan; Newman, Dava J; Jones, Jeffrey A

    2010-10-01

    A significant number of astronauts sustain hand injuries during extravehicular activity training and operations. These hand injuries have been known to cause fingernail delamination (onycholysis) that requires medical intervention. This study investigated correlations between the anthropometrics of the hand and susceptibility to injury. The analysis explored the hypothesis that crewmembers with a high finger-to-hand size ratio are more likely to experience injuries. A database of 232 crewmembers' injury records and anthropometrics was sourced from NASA Johnson Space Center. No significant effect of finger-to-hand size was found on the probability of injury, but circumference and width of the metacarpophalangeal (MCP) joint were found to be significantly associated with injuries by the Kruskal-Wallis test. A multivariate logistic regression showed that hand circumference is the dominant effect on the likelihood of onycholysis. Male crewmembers with a hand circumference > 22.86 cm (9") have a 19.6% probability of finger injury, but those with hand circumferences ≤ 22.86 cm (9") only have a 5.6% chance of injury. Findings were similar for female crewmembers. This increased probability may be due to constriction at large MCP joints by the current NASA Phase VI glove. Constriction may lead to occlusion of vascular flow to the fingers that may increase the chances of onycholysis. Injury rates are lower on gloves such as the superseded series 4000 and the Russian Orlan that provide more volume for the MCP joint. This suggests that we can reduce onycholysis by modifying the design of the current gloves at the MCP joint.

  13. A robust design mark-resight abundance estimator allowing heterogeneity in resighting probabilities

    USGS Publications Warehouse

    McClintock, B.T.; White, Gary C.; Burnham, K.P.

    2006-01-01

    This article introduces the beta-binomial estimator (BBE), a closed-population abundance mark-resight model combining the favorable qualities of maximum likelihood theory and the allowance of individual heterogeneity in sighting probability (p). The model may be parameterized for a robust sampling design consisting of multiple primary sampling occasions where closure need not be met between primary occasions. We applied the model to brown bear data from three study areas in Alaska and compared its performance to the joint hypergeometric estimator (JHE) and Bowden's estimator (BOWE). BBE estimates suggest heterogeneity levels were non-negligible and discourage the use of JHE for these data. Compared to JHE and BOWE, confidence intervals were considerably shorter for the AICc model-averaged BBE. To evaluate the properties of BBE relative to JHE and BOWE when sample sizes are small, simulations were performed with data from three primary occasions generated under both individual heterogeneity and temporal variation in p. All models remained consistent regardless of levels of variation in p. In terms of precision, the AICc model-averaged BBE showed advantages over JHE and BOWE when heterogeneity was present and mean sighting probabilities were similar between primary occasions. Based on the conditions examined, BBE is a reliable alternative to JHE or BOWE and provides a framework for further advances in mark-resight abundance estimation. © 2006 American Statistical Association and the International Biometric Society.

  14. Under-reported data analysis with INAR-hidden Markov chains.

    PubMed

    Fernández-Fontelo, Amanda; Cabaña, Alejandra; Puig, Pedro; Moriña, David

    2016-11-20

    In this work, we deal with correlated under-reported data through INAR(1)-hidden Markov chain models. These models are very flexible and can be identified through their autocorrelation function, which has a very simple form. A naïve method of parameter estimation is proposed, jointly with the maximum likelihood method based on a revised version of the forward algorithm. The most probable unobserved time series is reconstructed by means of the Viterbi algorithm. Several examples of application in the field of public health are discussed, illustrating the utility of the models. Copyright © 2016 John Wiley & Sons, Ltd.

  15. A Process-Based Transport-Distance Model of Aeolian Transport

    NASA Astrophysics Data System (ADS)

    Naylor, A. K.; Okin, G.; Wainwright, J.; Parsons, A. J.

    2017-12-01

    We present a new approach to modeling aeolian transport based on transport distance. Particle fluxes are based on statistical probabilities of particle detachment and distributions of transport lengths, which are functions of particle size classes. A computational saltation model is used to simulate transport distances over a variety of sizes. These are fit to an exponential distribution, which has the advantages of computational economy, concordance with current field measurements, and a meaningful relationship to theoretical assumptions about mean and median particle transport distance. This novel approach includes particle-particle interactions, which are important for sustaining aeolian transport and dust emission. Results from this model are compared with results from both bulk- and particle-sized-specific transport equations as well as empirical wind tunnel studies. The transport-distance approach has been successfully used for hydraulic processes, and extending this methodology from hydraulic to aeolian transport opens up the possibility of modeling joint transport by wind and water using consistent physics. Particularly in nutrient-limited environments, modeling the joint action of aeolian and hydraulic transport is essential for understanding the spatial distribution of biomass across landscapes and how it responds to climatic variability and change.
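
    The transport-distance mechanics reduce to two draws per parcel and step: a Bernoulli detachment trial and, if detached, an exponential hop length with a size-class-dependent mean. A minimal sketch with assumed parameter values:

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # one transport step for a single particle-size class on a 1-D transect:
    # each parcel detaches with probability p_detach and, if detached, moves a
    # distance drawn from an exponential distribution with the class's mean hop
    def transport_step(x, p_detach, mean_length):
        moving = rng.random(x.size) < p_detach
        x = x.copy()
        x[moving] += rng.exponential(mean_length, moving.sum())
        return x

    x = np.zeros(10_000)                  # parcel positions (m)
    for _ in range(100):                  # 100 wind events
        x = transport_step(x, p_detach=0.1, mean_length=0.5)
    print("median:", np.median(x), "95th percentile:", np.percentile(x, 95))
    ```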

  16. Modeling Multiple Risks: Hidden Domain of Attraction

    DTIC Science & Technology

    2012-01-01

    ... improve joint tail probability approximation, but the deficiency can be remedied by a more general approach which we call hidden domain of attraction (HDA) ... HRV is a special case of HDA. If the distribution of X does not have MRV but (1.2) still holds, we may retrieve the MRV setup by transforming the ... potential advantage in some circumstances of the notion of HDA is that it does not require that we transform components. Performing such transformations on ...

  17. Experimental non-classicality of an indivisible quantum system.

    PubMed

    Lapkiewicz, Radek; Li, Peizhe; Schaeff, Christoph; Langford, Nathan K; Ramelow, Sven; Wieśniak, Marcin; Zeilinger, Anton

    2011-06-22

    In contrast to classical physics, quantum theory demands that not all properties can be simultaneously well defined; the Heisenberg uncertainty principle is a manifestation of this fact. Alternatives have been explored--notably theories relying on joint probability distributions or non-contextual hidden-variable models, in which the properties of a system are defined independently of their own measurement and any other measurements that are made. Various deep theoretical results imply that such theories are in conflict with quantum mechanics. Simpler cases demonstrating this conflict have been found and tested experimentally with pairs of quantum bits (qubits). Recently, an inequality satisfied by non-contextual hidden-variable models and violated by quantum mechanics for all states of two qubits was introduced and tested experimentally. A single three-state system (a qutrit) is the simplest system in which such a contradiction is possible; moreover, the contradiction cannot result from entanglement between subsystems, because such a three-state system is indivisible. Here we report an experiment with single photonic qutrits which provides evidence that no joint probability distribution describing the outcomes of all possible measurements--and, therefore, no non-contextual theory--can exist. Specifically, we observe a violation of the Bell-type inequality found by Klyachko, Can, Binicioğlu and Shumovsky. Our results illustrate a deep incompatibility between quantum mechanics and classical physics that cannot in any way result from entanglement.

  18. Probability distribution of haplotype frequencies under the two-locus Wright-Fisher model by diffusion approximation.

    PubMed

    Boitard, Simon; Loisel, Patrice

    2007-05-01

    The probability distribution of haplotype frequencies in a population, and the way it is influenced by genetic forces such as recombination, selection, and random drift, is a question of fundamental interest in population genetics. For large populations, the distribution of haplotype frequencies for two linked loci under the classical Wright-Fisher model is almost impossible to compute for numerical reasons. However, the Wright-Fisher process can in such cases be approximated by a diffusion process, and the transition density can then be deduced from the Kolmogorov equations. As no exact solution has been found for these equations, we developed a numerical method based on finite differences to solve them. It applies to transient states and to models including selection or mutation. We show by several tests that this method is accurate for computing the conditional joint density of haplotype frequencies given that no haplotype has been lost. We also show that it is far less time-consuming than other methods such as Monte Carlo simulations.

  19. Periprosthetic Joint Infections: Clinical and Bench Research

    PubMed Central

    Legout, Laurence; Senneville, Eric

    2013-01-01

    Prosthetic joint infection is a devastating complication with high morbidity and substantial cost. The incidence is low but probably underestimated. Despite significant basic and clinical research in this field, many questions concerning the definition of prosthetic infection as well as the diagnosis and management of these infections remain unanswered. We review the current literature on new diagnostic methods and on the management and prevention of prosthetic joint infections. PMID:24288493

  20. Modeling Array Stations in SIG-VISA

    NASA Astrophysics Data System (ADS)

    Ding, N.; Moore, D.; Russell, S.

    2013-12-01

    We add support for array stations to SIG-VISA, a system for nuclear monitoring using probabilistic inference on seismic signals. Array stations comprise a large portion of the IMS network; they can provide increased sensitivity and more accurate directional information compared to single-component stations. Our existing model assumed that signals were independent at each station, an assumption that fails when stations are closely spaced, as in an array. The new model removes that assumption by jointly modeling signals across array elements. This is done by extending our existing Gaussian process (GP) regression models, also known as kriging, from a 3-dimensional single-component space of events to a 6-dimensional space of station-event pairs. For each array and each event attribute (including coda decay, coda height, amplitude transfer, and travel time), we model the joint distribution across array elements using a Gaussian process that learns the correlation lengthscale across the array, thereby incorporating information from array stations into the probabilistic inference framework. To evaluate the effectiveness of our model, we perform 'probabilistic beamforming' on new events using our GP model, i.e., we compute the event azimuth having highest posterior probability under the model, conditioned on the signals at array elements. We compare the results from our probabilistic inference model to the beamforming currently performed by IMS station processing.

  1. Minimum resolvable power contrast model

    NASA Astrophysics Data System (ADS)

    Qian, Shuai; Wang, Xia; Zhou, Jingjing

    2018-01-01

    Signal-to-noise ratio and MTF are important indices for evaluating the performance of optical systems. However, whether used alone or assessed jointly, they cannot intuitively describe the overall performance of a system. Therefore, we propose an index that reflects comprehensive system performance: the Minimum Resolvable Radiation Performance Contrast (MRP) model. MRP is an evaluation model that does not involve the human eye. It starts from the radiance of the target and the background, transforms the target and background into equivalent strips, and accounts for attenuation by the atmosphere, the optical imaging system, and the detector. Combining the signal-to-noise ratio and the MTF yields the Minimum Resolvable Radiation Performance Contrast. Finally, the detection probability model of MRP is given.

  2. Multivariate Markov chain modeling for stock markets

    NASA Astrophysics Data System (ADS)

    Maskawa, Jun-ichi

    2003-06-01

    We study a multivariate Markov chain model as a stochastic model of the price changes of portfolios in the framework of the mean field approximation. The time series of price changes are coded into sequences of up and down spins according to their signs. We start with the discussion for small portfolios consisting of two stock issues. The generalization of our model to a portfolio of arbitrary size is constructed by a recurrence relation. The resultant form of the joint probability of the stationary state coincides with the Gibbs measure assigned to each configuration of a spin glass model. Through the analysis of actual portfolios, we show that the synchronization of the directions of price changes is well described by the model.
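
    As an illustration of the coding step described above, the following Python sketch (all data synthetic, function name hypothetical) maps the signs of price changes for a two-stock portfolio onto up/down spins and estimates the empirical joint spin distribution.

      import numpy as np

      def estimate_joint_spin_distribution(returns):
          """Estimate the stationary joint distribution of up/down spins
          for a small portfolio, as a sketch of the paper's coding step.

          returns : array of shape (T, n_stocks) of price changes.
          """
          spins = np.where(returns >= 0, 1, -1)    # code signs as +/-1 spins
          configs, counts = np.unique(spins, axis=0, return_counts=True)
          probs = counts / counts.sum()            # empirical joint probability
          return configs, probs

      # Toy example: two correlated stock-return series.
      rng = np.random.default_rng(0)
      common = rng.normal(size=1000)
      returns = np.column_stack([common + rng.normal(size=1000),
                                 common + rng.normal(size=1000)])
      for config, p in zip(*estimate_joint_spin_distribution(returns)):
          print(config, round(p, 3))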

  3. Exact joint density-current probability function for the asymmetric exclusion process.

    PubMed

    Depken, Martin; Stinchcombe, Robin

    2004-07-23

    We study the asymmetric simple exclusion process with open boundaries and derive the exact form of the joint probability function for the occupation number and the current through the system. We further consider the thermodynamic limit, showing that the resulting distribution is non-Gaussian and that the density fluctuations have a discontinuity at the continuous phase transition, while the current fluctuations are continuous. The derivations are performed by using the standard operator algebraic approach and by the introduction of new operators satisfying a modified version of the original algebra. Copyright 2004 The American Physical Society

  4. Model-assisted probability of detection of flaws in aluminum blocks using polynomial chaos expansions

    NASA Astrophysics Data System (ADS)

    Du, Xiaosong; Leifsson, Leifur; Grandin, Robert; Meeker, William; Roberts, Ronald; Song, Jiming

    2018-04-01

    Probability of detection (POD) is widely used for measuring the reliability of nondestructive testing (NDT) systems. Typically, POD is determined experimentally, but it can be enhanced by utilizing physics-based computational models in combination with model-assisted POD (MAPOD) methods. With the development of advanced physics-based methods, such as ultrasonic NDT simulations, the amount of empirical information needed for POD estimation can be reduced. However, performing accurate numerical simulations can be prohibitively time-consuming, especially as part of a stochastic analysis. In this work, stochastic surrogate models for computational physics-based measurement simulations are developed to reduce the cost of MAPOD methods while ensuring sufficient accuracy. The stochastic surrogate is used to propagate the random input variables through the physics-based simulation model to obtain the joint probability distribution of the output. The POD curves are then generated from those results. Here, the stochastic surrogates are constructed using non-intrusive polynomial chaos (NIPC) expansions. In particular, the NIPC methods used are the quadrature, ordinary least-squares (OLS), and least-angle regression sparse (LARS) techniques. The proposed approach is demonstrated on the ultrasonic testing simulation of a flat-bottom hole flaw in an aluminum block. The results show that the stochastic surrogates converge on the statistics at least two orders of magnitude faster than direct Monte Carlo sampling (MCS). Moreover, evaluating the stochastic surrogate models is over three orders of magnitude faster than the underlying simulation model for this case, the UTSim2 model.
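
    The following Python sketch illustrates the OLS flavor of NIPC on a toy one-dimensional problem; the stand-in simulation_model and all parameter choices are assumptions for illustration, not the UTSim2 physics.

      from math import factorial
      import numpy as np
      from numpy.polynomial.hermite_e import hermevander

      # Hypothetical stand-in for an expensive physics-based measurement model.
      def simulation_model(x):
          return np.exp(0.3 * x) + 0.1 * x**2

      rng = np.random.default_rng(1)
      xi = rng.standard_normal(200)      # random inputs (standard normal germ)
      y = simulation_model(xi)           # "expensive" model evaluations

      degree = 5
      Psi = hermevander(xi, degree)      # probabilists' Hermite design matrix
      coeffs, *_ = np.linalg.lstsq(Psi, y, rcond=None)   # OLS NIPC coefficients

      # PCE statistics follow directly from the coefficients:
      # mean = c_0, variance = sum_{k>=1} c_k^2 * k!  (since E[He_k^2] = k!)
      mean = coeffs[0]
      variance = sum(coeffs[k]**2 * factorial(k) for k in range(1, degree + 1))
      print(mean, variance)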

  5. Technology-Enhanced Interactive Teaching of Marginal, Joint and Conditional Probabilities: The Special Case of Bivariate Normal Distribution

    ERIC Educational Resources Information Center

    Dinov, Ivo D.; Kamino, Scott; Bhakhrani, Bilal; Christou, Nicolas

    2013-01-01

    Data analysis requires subtle probability reasoning to answer questions like "What is the chance of event A occurring, given that event B was observed?" This generic question arises in discussions of many intriguing scientific questions such as "What is the probability that an adolescent weighs between 120 and 140 pounds given that…

  6. Color object detection using spatial-color joint probability functions.

    PubMed

    Luo, Jiebo; Crandall, David

    2006-06-01

    Object detection in unconstrained images is an important image understanding problem with many potential applications. There has been little success in creating a single algorithm that can detect arbitrary objects in unconstrained images; instead, algorithms typically must be customized for each specific object. Consequently, it typically requires a large number of exemplars (for rigid objects) or a large amount of human intuition (for nonrigid objects) to develop a robust algorithm. We present a robust algorithm designed to detect a class of compound color objects given a single model image. A compound color object is defined as having a set of multiple, particular colors arranged spatially in a particular way, including flags, logos, cartoon characters, people in uniforms, etc. Our approach is based on a particular type of spatial-color joint probability function called the color edge co-occurrence histogram. In addition, our algorithm employs perceptual color naming to handle color variation, and prescreening to limit the search scope (i.e., size and location) for the object. Experimental results demonstrated that the proposed algorithm is insensitive to object rotation, scaling, partial occlusion, and folding, outperforming a closely related algorithm based on color co-occurrence histograms by a decisive margin.

  7. Individual wealth-based selection supports cooperation in spatial public goods games

    NASA Astrophysics Data System (ADS)

    Chen, Xiaojie; Szolnoki, Attila

    2016-09-01

    In a social dilemma game, group members decide whether or not to contribute to the joint venture. As a consequence, defectors, who do not invest but only enjoy the mutual benefit, prevail, and the system evolves toward the tragedy-of-the-commons state. This unfortunate scenario can be avoided if participation is not obligatory but only happens with a given probability. But what if a player's individual wealth is also considered when deciding about participation? To address this issue we propose a model in which probabilistic participation in the public goods game is combined with a conditional investment mode based on individual wealth: if a player's wealth exceeds a threshold value, it is qualified and can participate in the joint venture. Otherwise, participation in the investment interactions is forbidden. We show that if only probabilistic participation is considered, spatially structured populations cannot support cooperation better than well-mixed populations, where the full-defection state can likewise be avoided for small participation probabilities. By adding the wealth-based criterion of participation, however, structured populations are capable of augmenting network reciprocity substantially, allowing the cooperator strategy to dominate over a broader parameter interval.
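
    A minimal, well-mixed (non-spatial) Python sketch of one round of the wealth-gated probabilistic game might look as follows; parameter names and values are illustrative assumptions, and the paper's actual model is played on a spatial lattice.

      import numpy as np

      def public_goods_round(wealth, is_cooperator, p_participate, threshold,
                             cost=1.0, r=3.0, rng=None):
          """One round of a wealth-gated probabilistic public goods game
          (a sketch of the model described above, not the paper's code)."""
          rng = rng if rng is not None else np.random.default_rng()
          qualified = wealth > threshold                       # wealth criterion
          participates = qualified & (rng.random(wealth.size) < p_participate)
          contrib = np.where(participates & is_cooperator, cost, 0.0)
          pool = r * contrib.sum()                             # multiplied joint venture
          share = pool / max(participates.sum(), 1)            # split among participants
          payoff = np.where(participates, share - contrib, 0.0)
          return wealth + payoff

      rng = np.random.default_rng(2)
      wealth = rng.uniform(0.0, 2.0, size=100)
      coop = rng.random(100) < 0.5
      for _ in range(50):
          wealth = public_goods_round(wealth, coop, p_participate=0.8,
                                      threshold=0.5, rng=rng)
      print(wealth.mean())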

  8. Combining Gravitational Wave Events with their Electromagnetic Counterparts: A Realistic Joint False-Alarm Rate

    NASA Astrophysics Data System (ADS)

    Ackley, Kendall; Eikenberry, Stephen; Klimenko, Sergey; LIGO Team

    2017-01-01

    We present a false-alarm rate for a joint detection of gravitational wave (GW) events and associated electromagnetic (EM) counterparts for Advanced LIGO and Virgo (LV) observations during the first years of operation. Using simulated GW events and their reconstructed probability skymaps, we tile over the error regions using sets of archival wide-field telescope survey images and recover the number of astrophysical transients to be expected during LV-EM follow-up. Using the known GW event injection coordinates, we inject artificial EM sources at those sites, based on theoretical and observational models, on a one-to-one basis. We calculate the EM false-alarm probability using an unsupervised machine learning algorithm based on shapelet analysis, which has been shown to be a strong discriminator between astrophysical transients and image artifacts while reducing the set of transients to be manually vetted by five orders of magnitude. We also place our method's performance in context with other machine-learned transient classification and reduction algorithms, showing comparable results without the need for a large set of training data, which opens the possibility for next-generation telescopes to take advantage of this pipeline for LV-EM follow-up missions.

  9. On probability-possibility transformations

    NASA Technical Reports Server (NTRS)

    Klir, George J.; Parviz, Behzad

    1992-01-01

    Several probability-possibility transformations are compared in terms of how closely they preserve second-order properties. The comparison is based on experimental results obtained by computer simulation. Two second-order properties are involved in this study: noninteraction of two distributions and projections of a joint distribution.

  10. Colonization and extinction in dynamic habitats: an occupancy approach for a Great Plains stream fish assemblage.

    PubMed

    Falke, Jeffrey A; Bailey, Larissa L; Fausch, Kurt D; Bestgen, Kevin R

    2012-04-01

    Despite the importance of habitat in determining species distribution and persistence, habitat dynamics are rarely modeled in studies of metapopulations. We used an integrated habitat-occupancy model to simultaneously quantify habitat change, site fidelity, and local colonization and extinction rates for larvae of a suite of Great Plains stream fishes in the Arikaree River, eastern Colorado, USA, across three years. Sites were located along a gradient of flow intermittency and groundwater connectivity. Hydrology varied across years: the first and third being relatively wet and the second dry. Despite hydrologic variation, our results indicated that site suitability was random from one year to the next. Occupancy probabilities were also independent of previous habitat and occupancy state for most species, indicating little site fidelity. Climate and groundwater connectivity were important drivers of local extinction and colonization, but the importance of groundwater differed between periods. Across species, site extinction probabilities were highest during the transition from wet to dry conditions (range: 0.52-0.98), and the effect of groundwater was apparent with higher extinction probabilities for sites not fed by groundwater. Colonization probabilities during this period were relatively low for both previously dry sites (range: 0.02-0.38) and previously wet sites (range: 0.02-0.43). In contrast, no sites dried or remained dry during the transition from dry to wet conditions, yielding lower but still substantial extinction probabilities (range: 0.16-0.63) and higher colonization probabilities (range: 0.06-0.86), with little difference among sites with and without groundwater. This approach of jointly modeling both habitat change and species occupancy will likely be useful to incorporate effects of dynamic habitat on metapopulation processes and to better inform appropriate conservation actions.

  11. Stochastic optimal operation of reservoirs based on copula functions

    NASA Astrophysics Data System (ADS)

    Lei, Xiao-hui; Tan, Qiao-feng; Wang, Xu; Wang, Hao; Wen, Xin; Wang, Chao; Zhang, Jing-wen

    2018-02-01

    Stochastic dynamic programming (SDP) has been widely used to derive operating policies for reservoirs under streamflow uncertainty. In SDP, the transition probability matrix must be calculated accurately and efficiently in order to improve the economic benefit of reservoir operation. In this study, we proposed a stochastic optimization model for hydropower generation reservoirs, in which 1) the transition probability matrix was calculated based on copula functions; and 2) the value function of the last period was calculated by stepwise iteration. First, the marginal distribution of stochastic inflow in each period was built and the joint distributions of adjacent periods were obtained using three members of the Archimedean copula family, from which the conditional probability formula was derived. Then, the value in the last period was calculated by a simple recursive equation with the proposed stepwise iteration method, and the value function was fitted with a linear regression model. These improvements were incorporated into classic SDP and applied to a case study of the Ertan reservoir, China. The results show that the transition probability matrix can be obtained more easily and accurately by the proposed copula-based method than by conventional methods based on observed or synthetic streamflow series, and that the reservoir operation benefit can also be increased.
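
    To make the copula-based transition probabilities concrete, here is a Python sketch using the Clayton copula (one Archimedean family member); the closed-form conditional follows from differentiating the copula, and all numbers are illustrative assumptions.

      import numpy as np

      def clayton_conditional(v, u, theta):
          """P(V <= v | U = u) for the Clayton copula,
          C(u, v) = (u^-theta + v^-theta - 1)^(-1/theta)."""
          return u ** (-theta - 1) * (u ** -theta + v ** -theta - 1) ** (-1 / theta - 1)

      # Transition probability between discretized inflow classes: with F the
      # marginal CDF of inflow, the probability of moving from state u = F(q_t)
      # into the next-period inflow class (v_lo, v_hi] is the difference of two
      # conditional copula values.
      theta = 2.0                        # illustrative dependence parameter
      u = 0.6                            # current inflow, on the uniform scale
      edges = np.linspace(0.0, 1.0, 6)   # five next-period inflow classes
      cdf = clayton_conditional(np.clip(edges, 1e-9, 1.0), u, theta)
      trans_probs = np.diff(cdf)
      print(trans_probs, trans_probs.sum())   # each row of the matrix sums to ~1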

  12. Bayesian network models for error detection in radiotherapy plans

    NASA Astrophysics Data System (ADS)

    Kalet, Alan M.; Gennari, John H.; Ford, Eric C.; Phillips, Mark H.

    2015-04-01

    The purpose of this study is to design and develop a probabilistic network for detecting errors in radiotherapy plans at the time of initial plan verification. Our group has initiated a multi-pronged approach to reduce these errors. We report on our development of Bayesian models of radiotherapy plans. Bayesian networks consist of joint probability distributions that define the probability of one event given some set of other known information. Using the networks, we find the probability of obtaining certain radiotherapy parameters given a set of initial clinical information. A low probability in a propagated network then corresponds to a potential error to be flagged for investigation. To build our networks we first interviewed medical physicists and other domain experts to identify the relevant radiotherapy concepts and their associated interdependencies and to construct a network topology. Next, to populate the network's conditional probability tables, we used the Hugin Expert software to learn parameter distributions from a subset of de-identified data derived from a radiation oncology clinical information database system. These data represent 4990 unique prescription cases over a 5 year period. Under test case scenarios with approximately 1.5% introduced error rates, network performance produced areas under the ROC curve of 0.88, 0.98, and 0.89 for the lung, brain, and female breast cancer error detection networks, respectively. Comparison of the brain network to human experts' performance (AUC of 0.90 ± 0.01) shows that the Bayes network model performs better than domain experts under the same test conditions. Our results demonstrate the feasibility and effectiveness of comprehensive probabilistic models as part of decision support systems for improved detection of errors in initial radiotherapy plan verification procedures.
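
    The flagging logic can be illustrated with a toy conditional probability table in Python; the CPT numbers, sites, and doses below are invented for illustration, whereas the study learns its tables from clinical data with the Hugin Expert software.

      # Score a new plan's parameter by its conditional probability given the
      # clinical context, and flag it if the probability is below a threshold.
      cpt_dose = {                       # P(prescribed dose | treatment site)
          ("lung",   "60Gy"): 0.55,
          ("lung",   "45Gy"): 0.40,
          ("lung",   "20Gy"): 0.05,
          ("breast", "50Gy"): 0.70,
          ("breast", "60Gy"): 0.25,
          ("breast", "20Gy"): 0.05,
      }

      def check_plan(site, dose, threshold=0.10):
          p = cpt_dose.get((site, dose), 0.0)   # unseen combinations get p = 0
          flagged = p < threshold
          return p, flagged

      print(check_plan("lung", "60Gy"))   # high probability -> not flagged
      print(check_plan("lung", "20Gy"))   # low probability  -> flagged for review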

  13. Bayesian Inference of High-Dimensional Dynamical Ocean Models

    NASA Astrophysics Data System (ADS)

    Lin, J.; Lermusiaux, P. F. J.; Lolla, S. V. T.; Gupta, A.; Haley, P. J., Jr.

    2015-12-01

    This presentation addresses a holistic set of challenges in high-dimensional ocean Bayesian nonlinear estimation: (i) predict the probability distribution functions (pdfs) of large nonlinear dynamical systems using stochastic partial differential equations (PDEs); (ii) assimilate data using Bayes' law with these pdfs; (iii) predict the future data that optimally reduce uncertainties; and (iv) rank the known and learn the new model formulations themselves. Overall, we allow the joint inference of the state, equations, geometry, boundary conditions, and initial conditions of dynamical models. Examples are provided for time-dependent fluid and ocean flows, including cavity, double-gyre, and Strait flows with jets and eddies. The Bayesian model inference, based on limited observations, is illustrated first by the estimation of obstacle shapes and positions in fluid flows. Next, the Bayesian inference of biogeochemical reaction equations and of their states and parameters is presented, illustrating how PDE-based machine learning can rigorously guide the selection and discovery of complex ecosystem models. Finally, the inference of multiscale bottom gravity current dynamics is illustrated, motivated in part by classic overflows and dense water formation sites and their relevance to climate monitoring and dynamics. This is joint work with our MSEAS group at MIT.

  14. Joint modeling and registration of cell populations in cohorts of high-dimensional flow cytometric data.

    PubMed

    Pyne, Saumyadipta; Lee, Sharon X; Wang, Kui; Irish, Jonathan; Tamayo, Pablo; Nazaire, Marc-Danie; Duong, Tarn; Ng, Shu-Kay; Hafler, David; Levy, Ronald; Nolan, Garry P; Mesirov, Jill; McLachlan, Geoffrey J

    2014-01-01

    In biomedical applications, an experimenter encounters different potential sources of variation in data, such as individual samples, multiple experimental conditions, and multivariate responses of a panel of markers such as from a signaling network. In multiparametric cytometry, which is often used for analyzing patient samples, such issues are critical. While computational methods can identify cell populations in individual samples, without the ability to automatically match them across samples it is difficult to compare and characterize the populations in typical experiments, such as those responding to various stimulations or distinctive of particular patients or time-points, especially when there are many samples. Joint Clustering and Matching (JCM) is a multi-level framework for simultaneous modeling and registration of populations across a cohort. JCM models every population with a robust multivariate probability distribution. Simultaneously, JCM fits a random-effects model to construct an overall batch template, which is used for registering populations across samples and for classifying new samples. By tackling systems-level variation, JCM supports practical biomedical applications involving large cohorts. Software for fitting JCM models has been implemented in an R package, EMMIX-JCM, available from http://www.maths.uq.edu.au/~gjm/mix_soft/EMMIX-JCM/.

  15. Historical and future drought in Bangladesh using copula-based bivariate regional frequency analysis

    NASA Astrophysics Data System (ADS)

    Mortuza, Md Rubayet; Moges, Edom; Demissie, Yonas; Li, Hong-Yi

    2018-02-01

    The study aims at a regional and probabilistic evaluation of bivariate drought characteristics to assess both past and future drought duration and severity in Bangladesh. The procedure involves applying (1) the standardized precipitation index to identify drought duration and severity, (2) regional frequency analysis to determine the appropriate marginal distributions for both duration and severity, (3) a copula model to estimate the joint probability distribution of drought duration and severity, and (4) precipitation projections from multiple climate models to assess future drought trends. Since drought duration and severity in Bangladesh are often strongly correlated and do not follow the same marginal distributions, the joint and conditional return periods of droughts are characterized using the copula-based joint distribution. The country is divided into three homogeneous regions using fuzzy clustering and multivariate discordancy and homogeneity measures. For given severity and duration values, the joint return periods for a drought to exceed both values are on average 45% larger, while those to exceed either value are 40% smaller, than the return periods from univariate frequency analysis, which treats drought duration and severity independently. This suggests that, compared to bivariate drought frequency analysis, standard univariate frequency analysis may under- or overestimate the frequency and severity of droughts depending on how their duration and severity are related. Overall, more frequent and severe droughts are observed in the west of the country. Future drought trends based on four climate models and two scenarios indicate the possibility of less frequent drought in the future (2020-2100) than in the past (1961-2010).
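
    As a sketch of how copula-based joint return periods differ from univariate ones, the following Python snippet evaluates the "AND" and "OR" return periods under a Gumbel-Hougaard copula; the dependence parameter, marginal CDF values, and interarrival time are illustrative assumptions, not the paper's fitted values.

      import math

      def gumbel_copula(u, v, theta):
          """Gumbel-Hougaard copula C(u, v)."""
          return math.exp(-(((-math.log(u)) ** theta +
                             (-math.log(v)) ** theta) ** (1.0 / theta)))

      def joint_return_periods(u, v, theta, mu=1.0):
          """'AND' and 'OR' return periods for exceeding duration and severity
          levels with marginal CDF values u = F_D(d), v = F_S(s); mu is the
          mean interarrival time of drought events in years."""
          c = gumbel_copula(u, v, theta)
          t_and = mu / (1.0 - u - v + c)   # both thresholds exceeded
          t_or = mu / (1.0 - c)            # at least one threshold exceeded
          return t_and, t_or

      # Illustrative numbers only:
      print(joint_return_periods(u=0.95, v=0.95, theta=2.0))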

  16. Probabilistic graphs as a conceptual and computational tool in hydrology and water management

    NASA Astrophysics Data System (ADS)

    Schoups, Gerrit

    2014-05-01

    Originally developed in the fields of machine learning and artificial intelligence, probabilistic graphs constitute a general framework for modeling complex systems in the presence of uncertainty. The framework consists of three components: 1. Representation of the model as a graph (or network), with nodes depicting random variables in the model (e.g. parameters, states, etc.), joined together by factors. Factors are local probabilistic or deterministic relations between subsets of variables which, when multiplied together, yield the joint distribution over all variables. 2. Consistent use of probability theory for quantifying uncertainty, relying on the basic rules of probability for assimilating data into the model and expressing unknown variables as functions of observations (via the posterior distribution). 3. Efficient, distributed approximation of the posterior distribution using general-purpose algorithms that exploit the model structure encoded in the graph. These attributes make probabilistic graphs potentially useful as a conceptual and computational tool in hydrology and water management (and beyond). Conceptually, they can provide a common framework for existing and new probabilistic modeling approaches (e.g. by drawing inspiration from other fields of application), while computationally they can make probabilistic inference feasible in larger hydrological models. The presentation explores, via examples, some of these benefits.
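
    A minimal Python sketch of the first two components, a two-variable factor graph whose factor product is normalized into a posterior, is shown below; the variables and factor values are invented for illustration.

      # Two binary variables, rain R and wet soil W, with the joint given by
      # a product of local factors. All numbers are invented.
      f_R = {0: 0.8, 1: 0.2}                          # prior factor on rain
      f_WR = {(0, 0): 0.9, (1, 0): 0.1,               # P(W = w | R = r)
              (0, 1): 0.2, (1, 1): 0.8}

      def posterior_rain(w_obs):
          """P(R | W = w_obs) by enumerating the factor product (Bayes' rule)."""
          unnorm = {r: f_R[r] * f_WR[(w_obs, r)] for r in (0, 1)}
          z = sum(unnorm.values())
          return {r: p / z for r, p in unnorm.items()}

      print(posterior_rain(1))   # observing wet soil raises P(rain)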

  17. Time- and temperature-dependent failures of a bonded joint

    NASA Astrophysics Data System (ADS)

    Sihn, Sangwook

    This dissertation summarizes my study of the time- and temperature-dependent behavior of a tubular lap bonded joint, to provide a design methodology for windmill blade structures. The bonded joint is between a cast-iron rod and a GFRP composite pipe. The adhesive material is an epoxy containing chopped glass fibers. We proposed a new fabrication method to make concentric and void-free specimens of the tubular joint with a thick adhesive bondline to simulate the root bond of a blade. The thick bondline facilitates the joint assembly of actual blades. For a better understanding of the behavior of the bonded joint, we studied the viscoelastic behavior of the adhesive materials by measuring creep compliance at several temperatures during the loading period. We observed that the creep compliance depends strongly on the duration of loading and the temperature. We applied time-temperature equivalence to the creep compliance of the adhesive material to obtain time-temperature shift factors. We also performed constant-rate, monotonically increasing uniaxial tensile tests to measure the static strength of the tubular lap joint at several temperatures and strain rates. We observed two failure modes from load-deflection curves and failed specimens. One is the brittle mode, caused by weakness of the interfacial strength, occurring at low temperature and short loading periods. The other is the ductile mode, caused by weakness of the adhesive material at high temperature and long loading periods. A transition from the brittle to the ductile mode appeared as the temperature or the loading period increased. We also performed tests under uniaxial tensile-tensile cyclic loading to measure the fatigue strength of the bonded joint at several temperatures, frequencies, and stress ratios. The fatigue data are analyzed statistically by applying the residual strength degradation model to calculate the statistical distribution of the fatigue life. Combining the time-temperature equivalence and the residual strength degradation model enables us to estimate the fatigue life of the bonded joint at different load levels, frequencies, and temperatures with a given probability. A numerical example shows how to apply the life estimation method to a structure subjected to a random load history via rainflow cycle counting.

  18. Seabed roughness parameters from joint backscatter and reflection inversion at the Malta Plateau.

    PubMed

    Steininger, Gavin; Holland, Charles W; Dosso, Stan E; Dettmer, Jan

    2013-09-01

    This paper presents estimates of seabed roughness and geoacoustic parameters and uncertainties on the Malta Plateau, Mediterranean Sea, by joint Bayesian inversion of mono-static backscatter and spherical wave reflection-coefficient data. The data are modeled using homogeneous fluid sediment layers overlying an elastic basement. The scattering model assumes a randomly rough water-sediment interface with a von Karman roughness power spectrum. Scattering and reflection data are inverted simultaneously using a population of interacting Markov chains to sample roughness and geoacoustic parameters as well as residual error parameters. Trans-dimensional sampling is applied to treat the number of sediment layers and the order (zeroth or first) of an autoregressive error model (to represent potential residual correlation) as unknowns. Results are considered in terms of marginal posterior probability profiles and distributions, which quantify the effective data information content to resolve scattering/geoacoustic structure. Results indicate well-defined scattering (roughness) parameters in good agreement with existing measurements, and a multi-layer sediment profile over a high-speed (elastic) basement, consistent with independent knowledge of sand layers over limestone.

  19. Tackling missing radiographic progression data: multiple imputation technique compared with inverse probability weights and complete case analysis.

    PubMed

    Descalzo, Miguel Á; Garcia, Virginia Villaverde; González-Alvaro, Isidoro; Carbonell, Jordi; Balsa, Alejandro; Sanmartí, Raimon; Lisbona, Pilar; Hernandez-Barrera, Valentín; Jiménez-Garcia, Rodrigo; Carmona, Loreto

    2013-02-01

    To describe the results of different statistical approaches to a radiographic outcome affected by missing data (multiple imputation, inverse probability weights, and complete case analysis), using data from an observational study. A random sample of 96 RA patients was selected for a follow-up study in which radiographs of hands and feet were scored. Radiographic progression was tested by comparing the change in the total Sharp-van der Heijde radiographic score (TSS) and the joint erosion score (JES) from baseline to the end of the second year of follow-up. The MI technique, inverse probability weights in a weighted estimating equation (WEE), and CC analysis were used to fit a negative binomial regression. Major predictors of radiographic progression were JES and joint space narrowing (JSN) at baseline, together with baseline disease activity measured by DAS28 for TSS and MTX use for JES. Results from the CC analysis show larger coefficients and standard errors compared with the MI and weighted techniques. The results from the WEE model were closely in line with those of MI. If it seems plausible that either CC or MI analysis may be valid, then MI should be preferred because of its greater efficiency. CC analysis resulted in inefficient estimates or, translated into non-statistical terminology, could guide us to inaccurate results and unwise conclusions. The methods discussed here will contribute to the use of alternative approaches for tackling missing data in observational studies.

  20. An Improved WiFi Indoor Positioning Algorithm by Weighted Fusion

    PubMed Central

    Ma, Rui; Guo, Qiang; Hu, Changzhen; Xue, Jingfeng

    2015-01-01

    The rapid development of mobile Internet has offered the opportunity for WiFi indoor positioning to come under the spotlight due to its low cost. However, the accuracy of WiFi indoor positioning currently cannot meet the demands of practical applications. To solve this problem, this paper proposes an improved WiFi indoor positioning algorithm based on weighted fusion. The proposed algorithm builds on traditional location fingerprinting algorithms and consists of two stages: offline acquisition and online positioning. The offline acquisition process selects optimal parameters to complete the signal acquisition and forms a database of fingerprints by error classification and handling. To further improve positioning accuracy, the online positioning process first uses a pre-match method to select candidate fingerprints and shorten the positioning time. It then uses the improved Euclidean distance and the improved joint probability to calculate two intermediate results, and calculates the final result from these two intermediate results by weighted fusion. The improved Euclidean distance introduces the standard deviation of WiFi signal strength to smooth the WiFi signal fluctuation, and the improved joint probability introduces a logarithmic calculation to reduce the difference between probability values. Comparing the proposed algorithm with the Euclidean distance based WKNN algorithm and the joint probability algorithm, the experimental results indicate that the proposed algorithm has higher positioning accuracy. PMID:26334278
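
    A rough Python sketch of the two intermediate computations and their weighted fusion is given below; the smoothing constant, the Gaussian form of the joint probability, and all numbers are illustrative assumptions rather than the paper's exact formulas.

      import math

      def improved_euclidean(rssi_obs, fp_mean, fp_std):
          """Distance between an observed RSSI vector and a fingerprint,
          down-weighting access points with volatile signals via their
          standard deviation (a sketch of the smoothing idea)."""
          return math.sqrt(sum(((o - m) / (s + 1.0)) ** 2
                               for o, m, s in zip(rssi_obs, fp_mean, fp_std)))

      def log_joint_probability(rssi_obs, fp_mean, fp_std):
          """Log of a joint Gaussian likelihood across access points; the
          logarithm compresses the spread between probability values."""
          lp = 0.0
          for o, m, s in zip(rssi_obs, fp_mean, fp_std):
              s = max(s, 1e-3)
              lp += -0.5 * ((o - m) / s) ** 2 - math.log(s * math.sqrt(2 * math.pi))
          return lp

      def fuse(pos_dist, pos_prob, w=0.5):
          """Weighted fusion of the two intermediate position estimates."""
          return tuple(w * a + (1 - w) * b for a, b in zip(pos_dist, pos_prob))

      # Hypothetical two-AP example: one fingerprint, one observation.
      obs, mean, std = [-60.0, -72.0], [-58.0, -75.0], [2.0, 6.0]
      print(improved_euclidean(obs, mean, std), log_joint_probability(obs, mean, std))
      print(fuse((3.2, 4.1), (3.6, 3.9)))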

  2. Probabilistic Simulation of Progressive Fracture in Bolted-Joint Composite Laminates

    NASA Technical Reports Server (NTRS)

    Minnetyan, L.; Singhal, S. N.; Chamis, C. C.

    1996-01-01

    This report describes computational methods to probabilistically simulate fracture in bolted composite structures. An innovative approach that is independent of stress intensity factors and fracture toughness was used to simulate progressive fracture. The effect of design variable uncertainties on structural damage was also quantified. A fast probability integrator assessed the scatter in the composite structure response before and after damage. Then the sensitivity of the response to design variables was computed. General-purpose methods, which are applicable to bolted joints in all types of structures and in all fracture processes, from damage initiation to unstable propagation and global structural collapse, were used. These methods were demonstrated for a bolted joint of a polymer matrix composite panel under edge loads. The effects of the fabrication process were included in the simulation of damage in the bolted panel. Results showed that the most effective way to reduce end displacement at fracture is to control both the load and the ply thickness. The cumulative probability for longitudinal stress in all plies was most sensitive to the load; in the 0 deg. plies it was very sensitive to ply thickness. The cumulative probability for transverse stress was most sensitive to the matrix coefficient of thermal expansion. In addition, fiber volume ratio and fiber transverse modulus both contributed significantly to the cumulative probability for the transverse stresses in all the plies.

  3. A Causal Model for Joint Evaluation of Placebo and Treatment-Specific Effects in Clinical Trials

    PubMed Central

    Zhang, Zhiwei; Kotz, Richard M.; Wang, Chenguang; Ruan, Shiling; Ho, Martin

    2014-01-01

    Evaluation of medical treatments is frequently complicated by the presence of substantial placebo effects, especially on relatively subjective endpoints, and the standard solution to this problem is a randomized, double-blinded, placebo-controlled clinical trial. However, effective blinding does not guarantee that all patients have the same belief or mentality about which treatment they have received (or treatmentality, for brevity), making it difficult to interpret the usual intent-to-treat effect as a causal effect. We discuss the causal relationships among treatment, treatmentality and the clinical outcome of interest, and propose a causal model for joint evaluation of placebo and treatment-specific effects. The model highlights the importance of measuring and incorporating patient treatmentality and suggests that each treatment group should be considered a separate observational study with a patient's treatmentality playing the role of an uncontrolled exposure. This perspective allows us to adapt existing methods for dealing with confounding to joint estimation of placebo and treatment-specific effects using measured treatmentality data, commonly known as blinding assessment data. We first apply this approach to the most common type of blinding assessment data, which is categorical, and illustrate the methods using an example from asthma. We then propose that blinding assessment data can be collected as a continuous variable, specifically when a patient's treatmentality is measured as a subjective probability, and describe analytic methods for that case. PMID:23432119

  4. Meta-analysis of studies with bivariate binary outcomes: a marginal beta-binomial model approach.

    PubMed

    Chen, Yong; Hong, Chuan; Ning, Yang; Su, Xiao

    2016-01-15

    When conducting a meta-analysis of studies with bivariate binary outcomes, challenges arise when the within-study correlation and between-study heterogeneity should be taken into account. In this paper, we propose a marginal beta-binomial model for the meta-analysis of studies with binary outcomes. This model is based on the composite likelihood approach and has several attractive features compared with existing models such as the bivariate generalized linear mixed model (Chu and Cole, 2006) and the Sarmanov beta-binomial model (Chen et al., 2012). The advantages of the proposed marginal model include modeling the probabilities on the original scale, not requiring any transformation of probabilities or any link function, having a closed-form likelihood function, and placing no constraints on the correlation parameter. More importantly, because the marginal beta-binomial model is based only on the marginal distributions, it does not suffer from potential misspecification of the joint distribution of bivariate study-specific probabilities. Such misspecification is difficult to detect and can lead to biased inference using current methods. We compare the performance of the marginal beta-binomial model with the bivariate generalized linear mixed model and the Sarmanov beta-binomial model in simulation studies. Interestingly, the results show that the marginal beta-binomial model performs better than the Sarmanov beta-binomial model, whether or not the true model is Sarmanov beta-binomial, and the marginal beta-binomial model is more robust than the bivariate generalized linear mixed model under model misspecification. Two meta-analyses of diagnostic accuracy studies and a meta-analysis of case-control studies are conducted for illustration. Copyright © 2015 John Wiley & Sons, Ltd.
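
    A minimal Python sketch of fitting one margin of such a model is shown below, using SciPy's beta-binomial distribution and a crude grid search; the data are invented, and the full method additionally couples the two outcomes through a composite likelihood.

      import numpy as np
      from scipy.stats import betabinom

      # Each study reports y successes out of n trials; study-specific
      # probabilities follow a Beta(a, b), giving a closed-form beta-binomial
      # marginal likelihood. Data and grid are illustrative, not from the paper.
      y = np.array([8, 15, 22, 30])     # e.g. true positives per study
      n = np.array([10, 20, 25, 40])    # diseased subjects per study

      def neg_loglik(a, b):
          return -np.sum(betabinom.logpmf(y, n, a, b))

      # Crude grid search for the marginal Beta parameters.
      grid = np.linspace(0.5, 20.0, 40)
      a_hat, b_hat = min(((a, b) for a in grid for b in grid),
                         key=lambda ab: neg_loglik(*ab))
      print(a_hat, b_hat, a_hat / (a_hat + b_hat))   # estimated mean probability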

  5. Salient in space, salient in time: Fixation probability predicts fixation duration during natural scene viewing.

    PubMed

    Einhäuser, Wolfgang; Nuthmann, Antje

    2016-09-01

    During natural scene viewing, humans typically attend and fixate selected locations for about 200-400 ms. Two variables characterize such "overt" attention: the probability of a location being fixated, and the fixation's duration. Both variables have been widely researched, but little is known about their relation. We use a two-step approach to investigate the relation between fixation probability and duration. In the first step, we use a large corpus of fixation data. We demonstrate that fixation probability (empirical salience) predicts fixation duration across different observers and tasks. Linear mixed-effects modeling shows that this relation is explained neither by joint dependencies on simple image features (luminance, contrast, edge density) nor by spatial biases (central bias). In the second step, we experimentally manipulate some of these features. We find that fixation probability from the corpus data still predicts fixation duration for this new set of experimental data. This holds even if stimuli are deprived of low-level images features, as long as higher level scene structure remains intact. Together, this shows a robust relation between fixation duration and probability, which does not depend on simple image features. Moreover, the study exemplifies the combination of empirical research on a large corpus of data with targeted experimental manipulations.

  6. HABITAT ASSESSMENT USING A RANDOM PROBABILITY BASED SAMPLING DESIGN: ESCAMBIA RIVER DELTA, FLORIDA

    EPA Science Inventory

    Smith, Lisa M., Darrin D. Dantin and Steve Jordan. In press. Habitat Assessment Using a Random Probability Based Sampling Design: Escambia River Delta, Florida (Abstract). To be presented at the SWS/GERS Fall Joint Society Meeting: Communication and Collaboration: Coastal Systems...

  7. Efficient pairwise RNA structure prediction using probabilistic alignment constraints in Dynalign

    PubMed Central

    2007-01-01

    Background: Joint alignment and secondary structure prediction of two RNA sequences can significantly improve the accuracy of structural predictions. Methods addressing this problem, however, are forced to employ constraints that reduce computation by restricting the alignments and/or structures (i.e. folds) that are permissible. In this paper, a new methodology is presented for establishing alignment constraints based on nucleotide alignment and insertion posterior probabilities. Using a hidden Markov model, posterior probabilities of alignment and insertion are computed for all possible pairings of nucleotide positions from the two sequences. These alignment and insertion posterior probabilities are additively combined to obtain probabilities of co-incidence for nucleotide position pairs. A suitable alignment constraint is obtained by thresholding the co-incidence probabilities. The constraint is integrated with Dynalign, a free energy minimization algorithm for joint alignment and secondary structure prediction. The resulting method is benchmarked against the previous version of Dynalign and against other programs for pairwise RNA structure prediction. Results: The proposed technique eliminates manual parameter selection in Dynalign and provides significant computational time savings over the prior constraints in Dynalign, while simultaneously providing a small improvement in structural prediction accuracy. Savings are also realized in memory. In experiments on a 5S RNA dataset with an average sequence length of approximately 120 nucleotides, the method reduces computation by a factor of 2. The method compares favorably with other programs for pairwise RNA structure prediction: yielding better accuracy, on average, while requiring significantly fewer computational resources. Conclusion: Probabilistic analysis can be utilized to automate the determination of alignment constraints for pairwise RNA structure prediction methods in a principled fashion. These constraints can reduce the computational and memory requirements of these methods while maintaining or improving their accuracy of structural prediction. This extends the practical reach of these methods to longer sequences. The revised Dynalign code is freely available for download. PMID:17445273
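
    The constraint construction can be sketched in a few lines of Python; the toy posterior matrices and the threshold value below are illustrative assumptions, not Dynalign's actual probabilities.

      import numpy as np

      def coincidence_constraint(p_align, p_ins1, p_ins2, threshold=0.01):
          """Build a boolean alignment-constraint mask from posterior
          probabilities, following the additive combination described above.

          p_align[i, j] : posterior probability that position i of sequence 1
                          aligns to position j of sequence 2.
          p_ins1[i, j], p_ins2[i, j] : insertion posteriors placing the two
                          positions adjacent to one another.
          Pairs whose co-incidence probability clears the threshold are
          permitted in the joint fold-and-align search.
          """
          p_coincide = p_align + p_ins1 + p_ins2     # additive combination
          return p_coincide >= threshold

      # Toy posteriors for 4 x 5 nucleotide positions (random, for illustration).
      rng = np.random.default_rng(3)
      raw = rng.random((3, 4, 5))
      raw /= raw.sum()                               # make the numbers probability-like
      mask = coincidence_constraint(*raw)
      print(mask.astype(int))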

  8. Interleaved Training and Training-Based Transmission Design for Hybrid Massive Antenna Downlink

    NASA Astrophysics Data System (ADS)

    Zhang, Cheng; Jing, Yindi; Huang, Yongming; Yang, Luxi

    2018-06-01

    In this paper, we study beam-based training design jointly with transmission design for hybrid massive-antenna single-user (SU) and multi-user (MU) systems, where outage probability is adopted as the performance measure. For SU systems, we propose an interleaved training design that concatenates the feedback and training procedures, making the training length adaptive to the channel realization. Exact analytical expressions are derived for the average training length and the outage probability of the proposed interleaved training. For MU systems, we propose a joint design for the beam-based interleaved training, beam assignment, and MU data transmission. Two solutions for the beam assignment are provided with different complexity-performance tradeoffs. Analytical results and simulations show that for both SU and MU systems, the proposed joint training and transmission designs achieve the same outage performance as the traditional full-training scheme but with significant savings in training overhead.

  9. Stochastic modeling of turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Fox, R. O.; Hill, J. C.; Gao, F.; Moser, R. D.; Rogers, M. M.

    1992-01-01

    Direct numerical simulations of a single-step irreversible chemical reaction with non-premixed reactants in forced isotropic turbulence at R_lambda = 63, Da = 4.0, and Sc = 0.7 were made using 128 Fourier modes to obtain joint probability density functions (pdfs) and other statistical information to parameterize and test a Fokker-Planck turbulent mixing model. Preliminary results indicate that the modeled gradient stretching term for an inert scalar is independent of the initial conditions of the scalar field. The conditional pdf of scalar gradient magnitudes is found to be a function of the scalar until the reaction is largely completed. Alignment of concentration gradients with the local strain rate and other features of the flow were also investigated.

  10. An information-theoretic approach to the modeling and analysis of whole-genome bisulfite sequencing data.

    PubMed

    Jenkinson, Garrett; Abante, Jordi; Feinberg, Andrew P; Goutsias, John

    2018-03-07

    DNA methylation is a stable form of epigenetic memory used by cells to control gene expression. Whole genome bisulfite sequencing (WGBS) has emerged as a gold-standard experimental technique for studying DNA methylation by producing high resolution genome-wide methylation profiles. Statistical modeling and analysis are employed to computationally extract and quantify information from these profiles in an effort to identify regions of the genome that demonstrate crucial or aberrant epigenetic behavior. However, the performance of most currently available methods for methylation analysis is hampered by their inability to directly account for statistical dependencies between neighboring methylation sites, thus ignoring significant information available in WGBS reads. We present a powerful information-theoretic approach for genome-wide modeling and analysis of WGBS data based on the 1D Ising model of statistical physics. This approach takes correlations in methylation into account by utilizing a joint probability model that encapsulates all information available in WGBS methylation reads, and it produces accurate results even when applied to single WGBS samples with low coverage. Using the Shannon entropy, our approach provides a rigorous quantification of methylation stochasticity in individual WGBS samples genome-wide. Furthermore, it utilizes the Jensen-Shannon distance to evaluate differences in methylation distributions between a test and a reference sample. Differential performance assessment using simulated and real human lung normal/cancer data demonstrates a clear superiority of our approach over DSS, a recently proposed method for WGBS data analysis. Critically, these results demonstrate that marginal methods become statistically invalid when correlations are present in the data. This contribution demonstrates the clear benefits and the necessity of modeling joint probability distributions of methylation using the 1D Ising model of statistical physics and of quantifying methylation stochasticity using concepts from information theory. By employing this methodology, substantial improvement of DNA methylation analysis can be achieved by effectively taking into account the massive amount of statistical information available in WGBS data, which is largely ignored by existing methods.
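
    The two information-theoretic quantities are straightforward to compute; here is a small Python sketch with invented methylation-level distributions for a test and a reference region.

      import numpy as np

      def shannon_entropy(p):
          """Shannon entropy (in bits) of a discrete distribution."""
          p = p[p > 0]
          return -np.sum(p * np.log2(p))

      def jensen_shannon_distance(p, q):
          """Jensen-Shannon distance (square root of the JS divergence, in
          bits) between two discrete distributions on the same support."""
          m = 0.5 * (p + q)
          js_div = shannon_entropy(m) - 0.5 * shannon_entropy(p) - 0.5 * shannon_entropy(q)
          return np.sqrt(js_div)

      # Hypothetical methylation-level distributions for a genomic region
      # (probabilities of 0%, 25%, 50%, 75%, 100% methylation).
      p_ref = np.array([0.70, 0.15, 0.05, 0.05, 0.05])
      p_test = np.array([0.10, 0.10, 0.10, 0.20, 0.50])
      print(shannon_entropy(p_test))                 # methylation stochasticity
      print(jensen_shannon_distance(p_ref, p_test))  # differential methylation score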

  11. Modeling Compound Flood Hazards in Coastal Embayments

    NASA Astrophysics Data System (ADS)

    Moftakhari, H.; Schubert, J. E.; AghaKouchak, A.; Luke, A.; Matthew, R.; Sanders, B. F.

    2017-12-01

    Coastal cities around the world are built on lowland topography adjacent to coastal embayments and river estuaries, where multiple factors threaten to increase flood hazards (e.g. sea level rise and river flooding). Quantitative risk assessment is required for the administration of flood insurance programs and the design of cost-effective flood risk reduction measures. This demands a characterization of extreme water levels, such as 100- and 500-year return period events. Furthermore, hydrodynamic flood models are routinely used to characterize localized flood intensities (i.e., local depth and velocity) based on boundary forcing sampled from extreme value distributions. For example, extreme flood discharges in the U.S. are estimated from measured flood peaks using the Log-Pearson Type III distribution. However, configuring hydrodynamic models for coastal embayments is challenging because of compound extreme flood events: events caused by a combination of extreme sea levels, extreme river discharges, and possibly other factors such as extreme waves and precipitation causing pluvial flooding in urban developments. Here, we present an approach to flood risk assessment that coordinates multivariate extreme value analysis with hydrodynamic modeling of coastal embayments. First, we evaluate the significance of the correlation structure between terrestrial freshwater inflow and oceanic variables; second, this correlation structure is described using copula functions in the unit joint probability domain; and third, we choose a series of compound design scenarios for hydrodynamic modeling based on their occurrence likelihood. The design scenarios include the most likely compound event (the one with the highest joint probability density), the preferred marginal scenario, and reproduced time series ensembles based on Monte Carlo sampling of the bivariate hazard domain. The comparison of the resulting extreme water dynamics under these compound hazard scenarios provides insight into the strengths and weaknesses of each approach and helps modelers choose the scenario that best fits the needs of their project. The proposed risk assessment approach can help flood hazard modeling practitioners achieve a more reliable estimate of risk by cautiously reducing the dimensionality of the hazard analysis.

  12. Multivariate flood risk assessment: reinsurance perspective

    NASA Astrophysics Data System (ADS)

    Ghizzoni, Tatiana; Ellenrieder, Tobias

    2013-04-01

    For insurance and reinsurance purposes, knowledge of the spatial characteristics of fluvial flooding is fundamental. The probability of simultaneous flooding at different locations during one event, and the associated severity and losses, have to be estimated in order to assess premiums and for accumulation control (Probable Maximum Loss calculation). Therefore, a statistical model able to describe the multivariate joint distribution of flood events at multiple locations is necessary. In this context, copulas can be viewed as alternative tools for dealing with multivariate simulations, as they allow one to formalize the dependence structures of random vectors. An application of copula functions for flood scenario generation is presented for Australia (Queensland, New South Wales and Victoria), where 100,000 possible flood scenarios covering approximately 15,000 years were simulated.

  13. Finding Bounded Rational Equilibria. Part 1; Iterative Focusing

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.

    2004-01-01

    A long-running difficulty with conventional game theory has been how to modify it to accommodate the bounded rationality characterizing all real-world players. A recurring issue in statistical physics is how best to approximate joint probability distributions with decoupled (and therefore far more tractable) distributions. It has recently been shown that the same information-theoretic mathematical structure, known as Probability Collectives (PC), underlies both issues. This relationship between statistical physics and game theory allows techniques and insights from one field to be applied to the other. In particular, PC provides a formal, model-independent definition of the degree of rationality of a player and of bounded rationality equilibria. This pair of papers extends previous work on PC by introducing new computational approaches to effectively find bounded rationality equilibria of common-interest (team) games.

  14. Probabilistic modelling of drought events in China via 2-dimensional joint copula

    NASA Astrophysics Data System (ADS)

    Ayantobo, Olusola O.; Li, Yi; Song, Songbai; Javed, Tehseen; Yao, Ning

    2018-04-01

    Probabilistic modelling of drought events is a significant aspect of water resources management and planning. In this study, popularly applied and several relatively new bivariate Archimedean copulas were employed to derive regional and spatially based copula models to appraise drought risk in mainland China over 1961-2013. Drought duration (Dd), severity (Ds), and peak (Dp), as indicated by the Standardized Precipitation Evapotranspiration Index (SPEI), were extracted according to run theory and fitted with suitable marginal distributions. Maximum likelihood estimation (MLE) and the curve fitting method (CFM) were used to estimate the copula parameters of nineteen bivariate Archimedean copulas. Drought probabilities and return periods were analysed based on the appropriate bivariate copula in sub-regions I-VII and in mainland China as a whole. The goodness-of-fit tests under the CFM showed that copula NN19 in sub-regions III, IV, V, and VI and in mainland China, NN20 in sub-region I, and NN13 in sub-region VII are the best for modeling the drought variables. Bivariate drought probability across mainland China is relatively high, and the highest drought probabilities are found mainly in northwestern and southwestern China. The results also showed that different sub-regions may face different drought risks: those observed in sub-regions III, VI, and VII are significantly greater than in the other sub-regions. A higher probability of droughts of longer duration in these sub-regions also corresponds to shorter return periods with greater drought severity. These results may imply tremendous challenges for water resources management in the different sub-regions, particularly northwestern and southwestern China.

  15. Improving the accurate assessment of a layered shear-wave velocity model using joint inversion of the effective Rayleigh wave and Love wave dispersion curves

    NASA Astrophysics Data System (ADS)

    Yin, X.; Xia, J.; Xu, H.

    2016-12-01

    Rayleigh and Love waves are two types of surface waves that travel along a free surface. Based on the assumption of horizontally layered homogeneous media, Rayleigh-wave phase velocity can be defined as a function of frequency and four groups of earth parameters: P-wave velocity, SV-wave velocity, density, and thickness of each layer. Unlike Rayleigh waves, Love-wave phase velocities of a layered homogeneous earth model can be calculated using frequency and three groups of earth properties: SH-wave velocity, density, and thickness of each layer. Because the dispersion of Love waves is independent of P-wave velocities, Love-wave dispersion curves are much simpler than those of Rayleigh waves, which motivates research into joint inversion methods for Rayleigh and Love dispersion curves. This dissertation combines theoretical analysis and practical applications. In both laterally homogeneous media and radially anisotropic media, joint inversion approaches for Rayleigh and Love waves are proposed to improve the accuracy of S-wave velocities. A 10% random white noise and a 20% random white noise are added to the synthetic dispersion curves to check the anti-noise ability of the proposed joint inversion method. Considering the influence of an anomalous layer, Rayleigh and Love waves are insensitive to the layers beneath a high-velocity or low-velocity layer, and to the high-velocity layer itself. Low sensitivities give rise to a high degree of uncertainty in the inverted S-wave velocities of these layers. Because the sensitivity peaks of Rayleigh and Love waves fall in different frequency ranges, the theoretical analyses demonstrate that joint inversion of these two types of waves can ameliorate the inverted model. A lack of surface-wave (Rayleigh or Love) dispersion data may lead to inaccurate S-wave velocities from the single inversion of Rayleigh or Love waves, so this dissertation presents a joint inversion method of Rayleigh and Love waves that improves the accuracy of S-wave velocities. Finally, a real-world example is used to verify the accuracy and stability of the proposed joint inversion method. Keywords: Rayleigh wave; Love wave; Sensitivity analysis; Joint inversion method.

  16. A composition joint PDF method for the modeling of spray flames

    NASA Technical Reports Server (NTRS)

    Raju, M. S.

    1995-01-01

    This viewgraph presentation discusses an extension of the probability density function (PDF) method to the modeling of spray flames in order to evaluate the limitations and capabilities of this method in the modeling of gas-turbine combustor flows. The comparisons show that the general features of the flowfield are correctly predicted by the present solution procedure. The present solution appears to provide a better representation of the temperature field, particularly in the reverse-velocity zone. The overpredictions in the centerline velocity could be attributed to the following reasons: (1) the k-epsilon turbulence model is known to be less precise in highly swirling flows, and (2) the swirl number used here is reported to be estimated rather than measured.

  17. The World According to de Finetti: On de Finetti's Theory of Probability and Its Application to Quantum Mechanics

    NASA Astrophysics Data System (ADS)

    Berkovitz, Joseph

    Bruno de Finetti is one of the founding fathers of the subjectivist school of probability, where probabilities are interpreted as rational degrees of belief. His work on the relation between the theorems of probability and rationality is among the cornerstones of modern subjective probability theory. De Finetti maintained that rationality requires that degrees of belief be coherent, and he argued that the whole of probability theory could be derived from these coherence conditions. De Finetti's interpretation of probability has been highly influential in science. This paper focuses on the application of this interpretation to quantum mechanics. We argue that de Finetti held that the coherence conditions of degrees of belief in events depend on their verifiability. Accordingly, the standard coherence conditions of degrees of belief that are familiar from the literature on subjective probability only apply to degrees of belief in events which could (in principle) be jointly verified; and the coherence conditions of degrees of belief in events that cannot be jointly verified are weaker. While the most obvious explanation of de Finetti's verificationism is the influence of positivism, we argue that it could be motivated by the radical subjectivist and instrumental nature of probability in his interpretation; for as it turns out, in this interpretation it is difficult to make sense of the idea of coherent degrees of belief in, and accordingly probabilities of, unverifiable events. We then consider the application of this interpretation to quantum mechanics, concentrating on the Einstein-Podolsky-Rosen experiment and Bell's theorem.

  18. Joint distraction attenuates osteoarthritis by reducing secondary inflammation, cartilage degeneration and subchondral bone aberrant change.

    PubMed

    Chen, Y; Sun, Y; Pan, X; Ho, K; Li, G

    2015-10-01

    Osteoarthritis (OA) is a progressive joint disorder for which, to date, there is no effective medical therapy. Joint distraction has given us hope for slowing down OA progression. In this study, we investigated the benefits of joint distraction in an OA rat model and the probable underlying mechanisms. OA was induced in the right knee joint of rats through anterior cruciate ligament transection (ACLT) plus medial meniscus resection. The animals were randomized into three groups: two groups were treated with an external fixator for a subsequent 3 weeks, one with and one without joint distraction, and one group without an external fixator served as the OA control. The serum interleukin-1β (IL-1β) level was evaluated by ELISA; cartilage quality was assessed by histology examinations (gross appearance, Safranin-O/Fast green stain) and immunohistochemistry examinations (MMP13, Col X); subchondral bone aberrant changes were analyzed by micro-CT and immunohistochemistry (Nestin, Osterix) examinations. The characteristic features of OA were present in the OA group, in contrast to the generally less severe damage after distraction treatment: firstly, the IL-1β level was significantly decreased; secondly, cartilage degeneration was attenuated, with lower histologic damage scores and lower percentages of MMP13- or Col X-positive chondrocytes; finally, subchondral bone abnormal change was attenuated, with reduced bone mineral density (BMD), bone volume/total tissue volume (BV/TV), and numbers of Nestin- or Osterix-positive cells in the subchondral bone. In the present study, we demonstrated that joint distraction reduced the level of secondary inflammation, cartilage degeneration, and subchondral bone aberrant change; joint distraction may thus be a strategy for slowing OA progression. Copyright © 2015 Osteoarthritis Research Society International. Published by Elsevier Ltd. All rights reserved.

  19. Does performance of breast self-exams increase the probability of using mammography: evidence from Malaysia.

    PubMed

    Dunn, Richard A; Tan, Andrew; Samad, Ismail

    2010-01-01

    Breast self-examination (BSE) was evaluated to see whether it is a significant predictor of mammography use. The decisions of females above age 40 in Malaysia to test for breast cancer using BSE and mammography are jointly modeled using a bivariate probit, so that unobserved attributes affecting mammography usage are also allowed to affect BSE. Data come from the Malaysia Non-Communicable Disease Surveillance-1, which was collected between September 2005 and February 2006. Having ever performed BSE is positively associated with having ever undergone mammography among Malay (adjusted OR=7.343, CI=2.686, 20.079) and Chinese (adjusted OR=3.466, CI=1.330, 9.031) females after adjusting for household income, education, marital status, and residential location. Neither relationship is affected by jointly modelling the decision problem. Although the association is also positive for Indian females when mammography is modelled separately (adjusted OR=5.959, CI=1.546, 22.970), the relationship is reversed when both decisions are modelled jointly. De-emphasizing BSE in Malaysia may reduce mammography screening among a large proportion of the population. Previous work on the issue in developed countries may not apply to nations with limited resources.
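
    A rough sketch of the paper's modelling strategy, a bivariate probit, fit by brute-force maximum likelihood on synthetic data. The covariate and coefficients are invented, and the per-observation loop is didactic and slow; a real analysis would use a dedicated routine.

        # Sketch: bivariate probit for two linked binary decisions.
        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import multivariate_normal

        rng = np.random.default_rng(0)
        n = 200
        x = rng.normal(size=n)                            # placeholder covariate
        e = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.5], [0.5, 1.0]], size=n)
        y1 = (0.5 + 0.8 * x + e[:, 0] > 0).astype(int)    # decision 1 (e.g. BSE)
        y2 = (-0.2 + 0.6 * x + e[:, 1] > 0).astype(int)   # decision 2 (e.g. mammography)

        def nll(params):
            a1, b1, a2, b2, r = params
            rho = np.tanh(r)                              # keep correlation in (-1, 1)
            q1, q2 = 2 * y1 - 1, 2 * y2 - 1
            ll = 0.0
            for i in range(n):                            # slow, didactic likelihood
                s = q1[i] * q2[i] * rho
                pt = [q1[i] * (a1 + b1 * x[i]), q2[i] * (a2 + b2 * x[i])]
                ll += np.log(multivariate_normal.cdf(pt, mean=[0.0, 0.0],
                                                     cov=[[1.0, s], [s, 1.0]]))
            return -ll

        fit = minimize(nll, x0=np.zeros(5), method="Nelder-Mead")
        print("estimated rho:", np.tanh(fit.x[-1]))       # ~0.5 given enough data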

  20. Rock Slide Risk Assessment: A Semi-Quantitative Approach

    NASA Astrophysics Data System (ADS)

    Duzgun, H. S. B.

    2009-04-01

    Rock slides can be better managed by systematic risk assessments. Any risk assessment methodology for rock slides involves identification of the rock slide risk components, which are hazard, elements at risk, and vulnerability. For a quantitative/semi-quantitative risk assessment for rock slides, a mathematical value of the risk has to be computed and evaluated. The quantitative evaluation of risk for rock slides enables comparison of the computed risk with the risk of other natural and/or human-made hazards, providing better decision support and easier communication for the decision makers. A quantitative/semi-quantitative risk assessment procedure involves: danger identification, hazard assessment, elements at risk identification, vulnerability assessment, risk computation, and risk evaluation. On the other hand, the steps of this procedure require adaptation of existing implementation methods, or development of new ones, depending on the type of landslide, data availability, investigation scale, and nature of the consequences. In this study, a generic semi-quantitative risk assessment (SQRA) procedure for rock slides is proposed. The procedure has five consecutive stages: data collection and analysis, hazard assessment, analysis of elements at risk, vulnerability assessment, and risk assessment. The implementation of the procedure is illustrated for a single rock slide case, a rock slope in Norway. Rock slides from mountain Ramnefjell to lake Loen are considered to be one of the major geohazards in Norway. Lake Loen is located in the inner part of Nordfjord in Western Norway. Ramnefjell Mountain is heavily jointed, leading to the formation of vertical rock slices with heights between 400-450 m and widths between 7-10 m. These slices threaten the settlements around the Loen Valley and the tourists visiting the fjord during the summer season, as released slides have the potential of creating a tsunami. Several rock slides were recorded from Mountain Ramnefjell between 1905 and 1950. Among them, four of the slides caused tsunami waves which washed up to 74 m above the lake level, and two resulted in many fatalities in the inner part of the Loen Valley as well as great damage. There are three predominant joint structures in Ramnefjell Mountain which control failure and the geometry of the slides. The first joint set is a foliation plane striking northeast-southwest and dipping 35°-40° to the east-southeast. The second and third joint sets are almost perpendicular and parallel to the mountain side and scarp, respectively. These three joint sets form slices of rock columns with widths ranging between 7-10 m and heights of 400-450 m. The joints in set II are reported to be open between 1-2 m, which may bring about the collection of water during heavy rainfall or snowmelt, causing the slices to be pressed out. Water in the vertical joints both reduces the shear strength of the sliding plane and reduces the normal stress on the sliding plane due to the formation of an uplift force. Hence rock slides in Ramnefjell Mountain occur in the plane failure mode. The quantitative evaluation of rock slide risk requires probabilistic analysis of rock slope stability and identification of the consequences if the rock slide occurs. In this study, the failure probability of a rock slice is evaluated by the first-order reliability method (FORM).
Then, in order to use the calculated probability of failure (Pf) in risk analyses, this Pf must be associated with frequency-based probabilities (i.e., Pf/year), since computed failure probabilities are a measure of hazard, not a measure of risk, unless they are associated with the consequences of the failure. This can be done either by considering the time-dependent behavior of the basic variables in the probabilistic models or by associating the computed Pf with the frequency of failures in the region. In this study, the frequency of rock slides in Ramnefjell during the previous century is used to evaluate the frequency-based probability for the risk assessment. The major consequence of a rock slide is the generation of a tsunami in Lake Loen, causing inundation of residential areas around the lake. Risk is assessed by adapting the damage probability matrix approach, which was originally developed for the risk assessment of buildings in earthquakes.
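
    A minimal Monte Carlo counterpart to the FORM calculation for plane failure, with entirely hypothetical input distributions and geometry; FORM approximates the same Pf analytically via the reliability index.

        # Sketch: Monte Carlo estimate of plane-failure probability Pf.
        import numpy as np

        rng = np.random.default_rng(1)
        n = 200_000
        psi = np.radians(38.0)                     # sliding-plane dip (hypothetical)
        W = 1.0e6                                  # block weight per metre of slope (N/m)
        A = 450.0                                  # sliding-plane area per metre (m^2/m)

        c = rng.lognormal(np.log(800.0), 0.4, n)          # cohesion (Pa), hypothetical
        phi = np.radians(rng.normal(30.0, 3.0, n))        # friction angle, hypothetical
        U = rng.uniform(0.0, 0.3, n) * W * np.cos(psi)    # water uplift on the plane

        fs = (c * A + (W * np.cos(psi) - U) * np.tan(phi)) / (W * np.sin(psi))
        pf = np.mean(fs < 1.0)                     # P(factor of safety < 1)
        print(f"Pf ~ {pf:.3f}; multiply by a slide frequency (events/yr) for risk input")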

  1. Independent Events in Elementary Probability Theory

    ERIC Educational Resources Information Center

    Csenki, Attila

    2011-01-01

    In Probability and Statistics taught to mathematicians as a first introduction or to a non-mathematical audience, joint independence of events is introduced by requiring that the multiplication rule is satisfied. The following statement is usually tacitly assumed to hold (and, at best, intuitively motivated): If the n events E_1,…
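
    The point at issue can be shown in a few lines: Bernstein's classic example (our illustration, not from the note) gives three events that are pairwise independent while the triple multiplication rule fails.

        # Sketch: pairwise independence does not imply joint independence.
        from itertools import product

        omega = list(product("HT", repeat=2))          # two fair coin flips
        P = {w: 0.25 for w in omega}

        A = {w for w in omega if w[0] == "H"}          # first flip heads
        B = {w for w in omega if w[1] == "H"}          # second flip heads
        C = {w for w in omega if w[0] == w[1]}         # flips agree

        pr = lambda E: sum(P[w] for w in E)
        for X, Y in [(A, B), (A, C), (B, C)]:
            assert abs(pr(X & Y) - pr(X) * pr(Y)) < 1e-12   # pairwise independent

        print(pr(A & B & C), pr(A) * pr(B) * pr(C))    # 0.25 vs 0.125: rule fails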

  2. Efficient computation of the joint probability of multiple inherited risk alleles from pedigree data.

    PubMed

    Madsen, Thomas; Braun, Danielle; Peng, Gang; Parmigiani, Giovanni; Trippa, Lorenzo

    2018-06-25

    The Elston-Stewart peeling algorithm enables estimation of an individual's probability of harboring germline risk alleles based on pedigree data, and serves as the computational backbone of important genetic counseling tools. However, it remains limited to the analysis of risk alleles at a small number of genetic loci because its computing time grows exponentially with the number of loci considered. We propose a novel, approximate version of this algorithm, dubbed the peeling and paring algorithm, which scales polynomially in the number of loci. This allows extending peeling-based models to include many genetic loci. The algorithm creates a trade-off between accuracy and speed, and allows the user to control this trade-off. We provide exact bounds on the approximation error and evaluate it in realistic simulations. Results show that the loss of accuracy due to the approximation is negligible in important applications. This algorithm will improve genetic counseling tools by increasing the number of pathogenic risk alleles that can be addressed. To illustrate, we create an extended five-gene version of BRCAPRO, a widely used model for estimating the carrier probabilities of BRCA1 and BRCA2 risk alleles, and assess its computational properties. © 2018 WILEY PERIODICALS, INC.

  3. Joint distribution of temperature and precipitation in the Mediterranean, using the Copula method

    NASA Astrophysics Data System (ADS)

    Lazoglou, Georgia; Anagnostopoulou, Christina

    2018-03-01

    This study analyses the temperature and precipitation dependence among stations in the Mediterranean. The first station group is located in the eastern Mediterranean (EM) and includes two stations, Athens and Thessaloniki, while the western (WM) group includes Malaga and Barcelona. The data were organized into two time periods, the hot-dry period and the cold-wet one, each comprising five months. The analysis is based on a statistical technique new to climatology: the copula method. Firstly, the calculation of the Kendall tau correlation index showed that temperatures among stations are dependent during both time periods, whereas precipitation presents dependency only between the stations located in the EM or the WM, and only during the cold-wet period. Accordingly, the marginal distributions were calculated for each studied station, as they are further used by the copula method. Finally, several copula families, both Archimedean and elliptical, were tested in order to choose the most appropriate one to model the relation of the studied data sets. Consequently, this study models the dependence of the main climate parameters (temperature and precipitation) with the copula method. The Frank copula was identified as the best family for describing the joint distribution of temperature for the majority of station groups. For precipitation, the best copula families are BB1 and Survival Gumbel. Using the probability distribution diagrams, the probability of a combination of temperature and precipitation values between stations is estimated.
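
    A sketch of the first steps of such an analysis on synthetic station data: a Kendall tau dependence test, then moment-style copula-parameter estimates obtained from tau via the standard tau-theta relations.

        # Sketch: Kendall tau test and tau-based copula-parameter estimates.
        import numpy as np
        from scipy.stats import kendalltau

        rng = np.random.default_rng(7)
        temp_a = rng.normal(15, 5, 300)                 # stand-ins for two stations'
        temp_b = 0.8 * temp_a + rng.normal(0, 2, 300)   # monthly temperatures

        tau, pval = kendalltau(temp_a, temp_b)
        print(f"tau = {tau:.2f}, p = {pval:.1e}")

        if tau > 0:
            theta_gumbel = 1.0 / (1.0 - tau)            # Gumbel:  tau = 1 - 1/theta
            theta_clayton = 2.0 * tau / (1.0 - tau)     # Clayton: tau = theta/(theta+2)
            print(f"Gumbel theta ~ {theta_gumbel:.2f}, "
                  f"Clayton theta ~ {theta_clayton:.2f}")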

  4. "Combined Diagnostic Tool" APPlication to a Retrospective Series of Patients Undergoing Total Joint Revision Surgery.

    PubMed

    Gallazzi, Enrico; Drago, Lorenzo; Baldini, Andrea; Stockley, Ian; George, David A; Scarponi, Sara; Romanò, Carlo L

    2017-01-01

    Background: Differentiating between septic and aseptic joint prostheses may be challenging, since no single test is able to confirm or rule out infection. The choice and interpretation of the panel of tests performed in any case often rely on empirical evaluation and poorly validated scores. The "Combined Diagnostic Tool" (CDT) App, a smartphone application for iOS, was developed to automatically calculate the probability of a periprosthetic joint infection, on the basis of the relative sensitivity and specificity of the positive and negative diagnostic tests performed in any given patient. Objective: The aim of the present study was to apply the CDT software to investigate the ability of the tests routinely performed in three high-volume European centers to diagnose a periprosthetic infection. Methods: This three-center retrospective study included 120 consecutive patients undergoing total hip or knee revision: 65 infected patients (Group A) and 55 patients without infection (Group B). The following parameters were evaluated: the number and type of positive and negative diagnostic tests performed pre-, intra- and post-operatively, and the resultant CDT App-calculated probability of a peri-prosthetic joint infection based on pre-, intra- and post-operative combined tests. Results: Serological tests were the most commonly performed, with an average of 2.7 tests per patient in Group A and 2.2 in Group B, followed by joint aspiration (0.9 and 0.8 tests per patient, respectively) and imaging techniques (0.5 and 0.2 tests per patient). The mean CDT App-calculated probability of infection based on pre-operative tests was 79.4% for patients in Group A and 35.7% in Group B. Twenty-nine patients in Group A had a > 10% chance of not having an infection, and 29 patients in Group B had a > 10% chance of having an infection. Conclusion: This is the first retrospective study focused on investigating the number and type of tests commonly performed prior to joint revision surgery and aimed at evaluating their combined ability to diagnose a peri-prosthetic infection. The CDT App allowed us to demonstrate that, on average, the routine combination of commonly used tests is unable to diagnose a peri-prosthetic infection pre-operatively with a probability higher than 90%.
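
    The arithmetic behind any such combined-test calculator can be sketched by chaining likelihood ratios, assuming conditionally independent tests; the sensitivities, specificities, and pretest probability below are placeholders, not the CDT App's actual values.

        # Sketch: post-test infection probability from a panel of test results.
        def posttest_probability(pretest, tests):
            """tests: list of (sensitivity, specificity, positive_result) tuples."""
            odds = pretest / (1.0 - pretest)
            for sens, spec, positive in tests:
                lr = sens / (1.0 - spec) if positive else (1.0 - sens) / spec
                odds *= lr                  # assumes conditionally independent tests
            return odds / (1.0 + odds)

        # e.g. a positive serological marker plus a negative aspiration culture
        p = posttest_probability(0.25, [(0.85, 0.70, True), (0.60, 0.95, False)])
        print(f"probability of periprosthetic infection ~ {p:.2f}")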

  5. Bayesian Networks in Educational Assessment

    PubMed Central

    Culbertson, Michael J.

    2015-01-01

    Bayesian networks (BN) provide a convenient and intuitive framework for specifying complex joint probability distributions and are thus well suited for modeling content domains of educational assessments at a diagnostic level. BN have been used extensively in the artificial intelligence community as student models for intelligent tutoring systems (ITS) but have received less attention among psychometricians. This critical review outlines the existing research on BN in educational assessment, providing an introduction to the ITS literature for the psychometric community, and points out several promising research paths. The online appendix lists 40 assessment systems that serve as empirical examples of the use of BN for educational assessment in a variety of domains. PMID:29881033
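
    A toy diagnostic network in the spirit described, with one mastery skill, two items, and made-up conditional probability tables (pure Python, no BN library).

        # Sketch: posterior skill mastery from item responses in a tiny BN.
        P_skill = {True: 0.4, False: 0.6}                  # prior P(mastery)
        P_item = {  # P(correct | mastery) for two items, slip/guess style
            "item1": {True: 0.85, False: 0.20},
            "item2": {True: 0.90, False: 0.30},
        }

        def posterior_mastery(responses):
            """responses: dict item -> bool (correct). Returns P(mastery | data)."""
            joint = {}
            for s in (True, False):
                p = P_skill[s]
                for item, correct in responses.items():
                    pc = P_item[item][s]
                    p *= pc if correct else (1.0 - pc)
                joint[s] = p
            return joint[True] / (joint[True] + joint[False])

        print(posterior_mastery({"item1": True, "item2": False}))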

  6. The difference between two random mixed quantum states: exact and asymptotic spectral analysis

    NASA Astrophysics Data System (ADS)

    Mejía, José; Zapata, Camilo; Botero, Alonso

    2017-01-01

    We investigate the spectral statistics of the difference of two density matrices, each of which is independently obtained by partially tracing a random bipartite pure quantum state. We first show how a closed-form expression for the exact joint eigenvalue probability density function for arbitrary dimensions can be obtained from the joint probability density function of the diagonal elements of the difference matrix, which is straightforward to compute. Subsequently, we use standard results from free probability theory to derive a relatively simple analytic expression for the asymptotic eigenvalue density (AED) of the difference matrix ensemble, and using Carlson’s theorem, we obtain an expression for its absolute moments. These results allow us to quantify the typical asymptotic distance between the two random mixed states using various distance measures; in particular, we obtain the almost sure asymptotic behavior of the operator norm distance and the trace distance.
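
    The ensemble is easy to sample numerically; a sketch (dimensions kept small purely for speed) that draws induced-measure random mixed states by partial tracing Haar-random bipartite pure states and accumulates the spectrum of their difference.

        # Sketch: eigenvalues of the difference of two random mixed states.
        import numpy as np

        rng = np.random.default_rng(3)
        dA, dB = 8, 8            # subsystem and environment dimensions

        def random_mixed_state():
            # Partial-trace a Haar-random bipartite pure state over the 2nd factor
            psi = rng.normal(size=(dA, dB)) + 1j * rng.normal(size=(dA, dB))
            psi /= np.linalg.norm(psi)
            return psi @ psi.conj().T        # rho_A = Tr_B |psi><psi|

        eigs = np.concatenate([
            np.linalg.eigvalsh(random_mixed_state() - random_mixed_state())
            for _ in range(2000)
        ])
        print("mean |lambda|:", np.abs(eigs).mean())   # simple absolute-moment estimate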

  7. A multi-scalar PDF approach for LES of turbulent spray combustion

    NASA Astrophysics Data System (ADS)

    Raman, Venkat; Heye, Colin

    2011-11-01

    A comprehensive joint-scalar probability density function (PDF) approach is proposed for large eddy simulation (LES) of turbulent spray combustion, and tests are conducted to analyze its validity and modeling requirements. The PDF method has the advantage that the chemical source term appears closed, but it requires models for the small-scale mixing process. A stable and consistent numerical algorithm for the LES/PDF approach is presented. To understand the modeling issues in the PDF method, direct numerical simulations of a spray flame at three different fuel-droplet Stokes numbers and of an equivalent gaseous flame are carried out. Assumptions in closing the subfilter conditional diffusion term in the filtered PDF transport equation are evaluated for various model forms. In addition, the validity of evaporation rate models in high Stokes number flows is analyzed.

  8. Uncertainty quantification in LES of channel flow

    DOE PAGES

    Safta, Cosmin; Blaylock, Myra; Templeton, Jeremy; ...

    2016-07-12

    In this paper, we present a Bayesian framework for estimating joint densities for large eddy simulation (LES) sub-grid scale model parameters based on canonical forced isotropic turbulence direct numerical simulation (DNS) data. The framework accounts for noise in the independent variables, and we present alternative formulations for accounting for discrepancies between model and data. To generate probability densities for flow characteristics, posterior densities for sub-grid scale model parameters are propagated forward through LES of channel flow and compared with DNS data. Synthesis of the calibration and prediction results demonstrates that model parameters have an explicit filter-width dependence and are highly correlated. Discrepancies between DNS and calibrated LES results point to additional model-form inadequacies that need to be accounted for.

  9. Quantum Common Causes and Quantum Causal Models

    NASA Astrophysics Data System (ADS)

    Allen, John-Mark A.; Barrett, Jonathan; Horsman, Dominic C.; Lee, Ciarán M.; Spekkens, Robert W.

    2017-07-01

    Reichenbach's principle asserts that if two observed variables are found to be correlated, then there should be a causal explanation of these correlations. Furthermore, if the explanation is in terms of a common cause, then the conditional probability distribution over the variables given the complete common cause should factorize. The principle is generalized by the formalism of causal models, in which the causal relationships among variables constrain the form of their joint probability distribution. In the quantum case, however, the observed correlations in Bell experiments cannot be explained in the manner Reichenbach's principle would seem to demand. Motivated by this, we introduce a quantum counterpart to the principle. We demonstrate that under the assumption that quantum dynamics is fundamentally unitary, if a quantum channel with input A and outputs B and C is compatible with A being a complete common cause of B and C, then it must factorize in a particular way. Finally, we show how to generalize our quantum version of Reichenbach's principle to a formalism for quantum causal models and provide examples of how the formalism works.

  10. Prevalence of different temporomandibular joint sounds, with emphasis on disc-displacement, in patients with temporomandibular disorders and controls.

    PubMed

    Elfving, Lars; Helkimo, Martti; Magnusson, Tomas

    2002-01-01

    Temporomandibular joint (TMJ) sounds are very common among patients with temporomandibular disorders (TMD), but also in non-patient populations. A variety of different causes of TMJ sounds have been suggested, e.g., arthrotic changes in the TMJs, anatomical variations, muscular incoordination, and disc displacement. In the present investigation, the prevalence and type of different joint sounds were registered in 125 consecutive patients with suspected TMD and in 125 matched controls. Some kind of joint sound was recorded in 56% of the TMD patients and in 36% of the controls. The awareness of joint sounds was higher among TMD patients than among controls (88% and 60%, respectively). The most common sound recorded in both groups was reciprocal clicking indicative of a disc displacement, while not a single case fulfilling the criteria for clicking due to muscular incoordination was found. In the TMD group, women with disc displacement reported sleeping on the stomach significantly more often than women without disc displacement did. Increased general joint laxity was found in 39% of the TMD patients with disc displacement, but in only 9% of the controls with disc displacement. To conclude, disc displacement is probably the most common cause of TMJ sounds, while the existence of TMJ sounds due to muscular incoordination can be questioned. Furthermore, sleeping on the stomach might be associated with disc displacement, while general joint laxity is probably not a causative factor, but rather a care-seeking factor, in patients with disc displacement.

  11. Bivariate at-site frequency analysis of simulated flood peak-volume data using copulas

    NASA Astrophysics Data System (ADS)

    Gaál, Ladislav; Viglione, Alberto; Szolgay, Ján; Blöschl, Günter; Bacigál, Tomáš

    2010-05-01

    In frequency analysis of joint hydro-climatological extremes (flood peaks and volumes, low flows and durations, etc.), bivariate distribution functions are usually fitted to the observed data in order to estimate the probability of their occurrence. Bivariate models, however, have a number of limitations; therefore, in the recent past, dependence models based on copulas have gained increased attention for representing the joint probabilities of hydrological characteristics. Regardless of whether standard or copula-based bivariate frequency analysis is carried out, one is generally interested in the extremes corresponding to low probabilities of the fitted joint cumulative distribution functions (CDFs). However, there is usually not enough flood data in the right tail of the empirical CDFs to derive reliable statistical inferences on the behaviour of the extremes. Therefore, different techniques are used to extend the amount of information for the statistical inference, i.e., temporal extension methods that allow for making use of historical data, or spatial extension methods such as regional approaches. In this study, a different approach was adopted, which uses flood data simulated by rainfall-runoff modelling to increase the amount of data in the right tail of the CDFs. In order to generate artificial runoff data (i.e., to simulate flood records of lengths of approximately 10^6 years), a two-step procedure was used. (i) First, the stochastic rainfall generator proposed by Sivapalan et al. (2005) was modified for our purpose. This model is based on the assumption of discrete rainfall events whose arrival times, durations, mean rainfall intensities, and within-storm intensity patterns are all random and can be described by specified distributions. The mean storm rainfall intensity is further disaggregated to hourly intensity patterns. (ii) Secondly, the simulated rainfall data entered a semi-distributed conceptual rainfall-runoff model that consisted of a snow routine, a soil moisture routine, and a flow routing routine (Parajka et al., 2007). The applicability of the proposed method was demonstrated on selected sites in Slovakia and Austria. The pairs of simulated flood volumes and flood peaks were analysed in terms of their dependence structure, and different families of copulas (Archimedean, extreme value, Gumbel-Hougaard, etc.) were fitted to the observed and simulated data. The question of the extent to which measured data can be used to find the right copula was discussed. The study is supported by the Austrian Academy of Sciences and the Austrian-Slovak Co-operation in Science and Education "Aktion". Parajka, J., Merz, R., Blöschl, G., 2007: Uncertainty and multiple objective calibration in regional water balance modeling - Case study in 320 Austrian catchments. Hydrological Processes, 21, 435-446. Sivapalan, M., Blöschl, G., Merz, R., Gutknecht, D., 2005: Linking flood frequency to long-term water balance: incorporating effects of seasonality. Water Resources Research, 41, W06012, doi:10.1029/2004WR003439.

  12. A probabilistic assessment of the likelihood of vegetation drought under varying climate conditions across China.

    PubMed

    Liu, Zhiyong; Li, Chao; Zhou, Ping; Chen, Xiuzhi

    2016-10-07

    Climate change significantly impacts vegetation growth and terrestrial ecosystems. Using satellite remote sensing observations, here we focus on investigating vegetation dynamics and the likelihood of vegetation-related drought under varying climate conditions across China. We first compare temporal trends of the Normalized Difference Vegetation Index (NDVI) and climatic variables over China. We find that in fact there is no significant change in vegetation over the cold regions where warming is significant. Then, we propose a joint probability model to estimate the likelihood of vegetation-related drought conditioned on different precipitation/temperature scenarios in the growing season across China. To the best of our knowledge, this study is the first to examine vegetation-related drought risk over China from a joint-probability perspective. Our results demonstrate risk patterns of vegetation-related drought under both low and high precipitation/temperature conditions. We further identify the variations in vegetation-related drought risk under different climate conditions and the sensitivity of drought risk to climate variability. These findings provide insights for decision makers to evaluate drought risk and develop vegetation-related drought mitigation strategies over China in a warming world. The proposed methodology also has great potential to be applied to vegetation-related drought risk assessment in other regions worldwide.

  13. A probabilistic assessment of the likelihood of vegetation drought under varying climate conditions across China

    PubMed Central

    Liu, Zhiyong; Li, Chao; Zhou, Ping; Chen, Xiuzhi

    2016-01-01

    Climate change significantly impacts vegetation growth and terrestrial ecosystems. Using satellite remote sensing observations, here we focus on investigating vegetation dynamics and the likelihood of vegetation-related drought under varying climate conditions across China. We first compare temporal trends of the Normalized Difference Vegetation Index (NDVI) and climatic variables over China. We find that in fact there is no significant change in vegetation over the cold regions where warming is significant. Then, we propose a joint probability model to estimate the likelihood of vegetation-related drought conditioned on different precipitation/temperature scenarios in the growing season across China. To the best of our knowledge, this study is the first to examine vegetation-related drought risk over China from a joint-probability perspective. Our results demonstrate risk patterns of vegetation-related drought under both low and high precipitation/temperature conditions. We further identify the variations in vegetation-related drought risk under different climate conditions and the sensitivity of drought risk to climate variability. These findings provide insights for decision makers to evaluate drought risk and develop vegetation-related drought mitigation strategies over China in a warming world. The proposed methodology also has great potential to be applied to vegetation-related drought risk assessment in other regions worldwide. PMID:27713530

  14. TH-A-BRF-02: BEST IN PHYSICS (JOINT IMAGING-THERAPY) - Modeling Tumor Evolution for Adaptive Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Y; Lee, CG; Chan, TCY

    2014-06-15

    Purpose: To develop mathematical models of tumor geometry changes under radiotherapy that may support future adaptive paradigms. Methods: A total of 29 cervical patients were scanned using MRI, once for planning and weekly thereafter for treatment monitoring. Using the tumor volumes contoured by a radiologist, three mathematical models were investigated based on the assumption of a stochastic process of tumor evolution. The "weekly MRI" model predicts tumor geometry for the following week from the last two consecutive MRI scans, based on the voxel transition probability. The other two models use only the first pair of consecutive MRI scans, and the transition probabilities were estimated via tumor type classified from the entire data set. The classification is based on either measuring the tumor volume (the "weekly volume" model) or implementing an auxiliary "Markov chain" model. These models were compared to a constant volume approach that represents the current clinical practice, using various model parameters; e.g., the threshold probability β converts the probability map into a tumor shape (a larger threshold implies a smaller tumor). Model performance was measured using the volume conformity index (VCI), i.e., the union of the actual target and modeled target volume squared divided by the product of these two volumes. Results: The "weekly MRI" model outperforms the constant volume model by 26% on average, and by 103% for the worst 10% of cases in terms of VCI, under a wide range of β. The "weekly volume" and "Markov chain" models outperform the constant volume model by 20% and 16% on average, respectively. They also perform better than the "weekly MRI" model when β is large. Conclusion: It has been demonstrated that mathematical models can be developed to predict tumor geometry changes for cervical cancer undergoing radiotherapy. The models can potentially support the adaptive radiotherapy paradigm by reducing normal tissue dose. This research was supported in part by the Ontario Consortium for Adaptive Interventions in Radiation Oncology (OCAIRO) funded by the Ontario Research Fund (ORF) and the MITACS Accelerate Internship Program.

  15. A New Approach in Generating Meteorological Forecasts for Ensemble Streamflow Forecasting using Multivariate Functions

    NASA Astrophysics Data System (ADS)

    Khajehei, S.; Madadgar, S.; Moradkhani, H.

    2014-12-01

    The reliability and accuracy of hydrological predictions are subject to various sources of uncertainty, including meteorological forcing, initial conditions, model parameters, and model structure. To reduce the total uncertainty in hydrological applications, one approach is to reduce the uncertainty in the meteorological forcing by using statistical methods based on conditional probability density functions (pdfs). However, one of the requirements of current methods is to assume a Gaussian distribution for the marginal distributions of the observed and modeled meteorology. Here we propose a Bayesian approach based on copula functions to develop the conditional distribution of precipitation forecasts needed in driving a hydrologic model for a sub-basin in the Columbia River Basin. Copula functions are introduced as an alternative approach for capturing the uncertainties related to meteorological forcing. Copulas are multivariate joint distributions of univariate marginal distributions, which are capable of modeling the joint behavior of variables with any level of correlation and dependency. The method is applied to the monthly CPC forecast with 0.25x0.25 degree resolution to reproduce the PRISM dataset over 1970-2000. Results are compared with the Ensemble Pre-Processor approach, a common procedure used by National Weather Service River Forecast Centers, in reproducing the observed climatology during a ten-year verification period (2000-2010).
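
    A sketch of the conditional-distribution step, with a Gaussian copula standing in for the fitted family; the gamma marginals, dependence parameter, and 35 mm forecast value are all hypothetical.

        # Sketch: conditional ensemble of observed precipitation given a forecast.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(5)
        rho = 0.7                                   # hypothetical forecast-obs dependence
        fcst_marg = stats.gamma(a=2.0, scale=20.0)  # placeholder forecast marginal
        obs_marg = stats.gamma(a=1.8, scale=25.0)   # placeholder observation marginal

        def sample_obs_given_forecast(f, n=1000):
            zf = stats.norm.ppf(fcst_marg.cdf(f))               # forecast -> normal score
            zo = rho * zf + np.sqrt(1 - rho ** 2) * rng.normal(size=n)
            return obs_marg.ppf(stats.norm.cdf(zo))             # score -> obs units

        ens = sample_obs_given_forecast(35.0)       # conditional ensemble, 35 mm forecast
        print(np.percentile(ens, [10, 50, 90]))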

  16. A pitfall of piecewise-polytropic equation of state inference

    NASA Astrophysics Data System (ADS)

    Raaijmakers, Geert; Riley, Thomas E.; Watts, Anna L.

    2018-05-01

    The only messenger radiation in the Universe which one can use to statistically probe the Equation of State (EOS) of cold dense matter is that originating from the near-field vicinities of compact stars. Constraining gravitational masses and equatorial radii of rotating compact stars is a major goal for current and future telescope missions, with a primary purpose of constraining the EOS. From a Bayesian perspective it is necessary to carefully discuss prior definition; in this context a complicating issue is that in practice there exist pathologies in the general relativistic mapping between spaces of local (interior source matter) and global (exterior spacetime) parameters. In a companion paper, these issues were raised on a theoretical basis. In this study we reproduce a probability transformation procedure from the literature in order to map a joint posterior distribution of Schwarzschild gravitational masses and radii into a joint posterior distribution of EOS parameters. We demonstrate computationally that EOS parameter inferences are sensitive to the choice to define a prior on a joint space of these masses and radii, instead of on a joint space of interior source matter parameters. We focus on the piecewise-polytropic EOS model, which is currently standard in the field of astrophysical dense matter study. We discuss the implications of this issue for the field.
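
    For reference, a minimal evaluation of the piecewise-polytropic form itself, P(rho) = K_i * rho**Gamma_i, with the K_i chained so pressure is continuous at the dividing densities; the segment boundaries, Gamma values, and anchor pressure below are placeholders, not a calibrated EOS.

        # Sketch: evaluating a three-segment piecewise-polytropic EOS.
        import numpy as np

        rho_div = np.array([1e17, 5e17])        # dividing densities (kg/m^3), hypothetical
        gammas = np.array([2.0, 2.7, 2.4])      # one Gamma per segment, hypothetical

        def build_K(p_anchor, rho_anchor=1e17):
            K = np.empty(3)
            K[0] = p_anchor / rho_anchor ** gammas[0]
            K[1] = K[0] * rho_div[0] ** (gammas[0] - gammas[1])   # continuity at rho_div[0]
            K[2] = K[1] * rho_div[1] ** (gammas[1] - gammas[2])   # continuity at rho_div[1]
            return K

        K = build_K(p_anchor=1.5e33)            # anchor pressure is a placeholder

        def pressure(rho):
            i = np.searchsorted(rho_div, rho)   # pick the segment containing rho
            return K[i] * rho ** gammas[i]

        print(f"P(3e17) = {pressure(3e17):.3e} Pa")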

  17. A consistent NPMLE of the joint distribution function with competing risks data under the dependent masking and right-censoring model.

    PubMed

    Li, Jiahui; Yu, Qiqing

    2016-01-01

    Dinse (Biometrics, 38:417-431, 1982) provides a special type of right-censored and masked competing risks data and proposes a non-parametric maximum likelihood estimator (NPMLE) and a pseudo MLE of the joint distribution function [Formula: see text] with such data. However, their asymptotic properties have not been studied so far. Under the extension of either the conditional masking probability (CMP) model or the random partition masking (RPM) model (Yu and Li, J Nonparametr Stat 24:753-764, 2012), we show that (1) Dinse's estimators are consistent if [Formula: see text] takes on finitely many values and each point in the support set of [Formula: see text] can be observed; (2) if the failure time is continuous, the NPMLE is not uniquely determined, and the standard approach (which puts weights only on one element in each observed set) leads to an inconsistent NPMLE; (3) in general, Dinse's estimators are not consistent even under the discrete assumption; (4) we construct a consistent NPMLE. The consistency is given under a new model called the dependent masking and right-censoring model. The CMP model and the RPM model are indeed special cases of the new model. We compare our estimator to Dinse's estimators through simulation and real data. The simulation study indicates that the consistent NPMLE is a good approximation to the underlying distribution for moderate sample sizes.

  18. Uncertainty Models for Knowledge-Based Systems

    DTIC Science & Technology

    1991-08-01

    sists of the subsets of a constituting the a-algebra A such that = Pr Note that the sum of (A over A in general exceeds unity or is divergent. (Of...f is a g.d.f. Hence, again PCf = -Pf I - Pf = Cf Now, analogous to Sklar's Theorem mentioned previously, define for any n ≥ 1 C n U (SBVq n ((sBVq [0...then extends the following basic theorem in a natural way to the above example: The entropy of a joint probability function which has independent

  19. Pharmacologic Hemostatic Agents in Total Joint Arthroplasty-A Cost-Effectiveness Analysis.

    PubMed

    Ramkumar, Dipak B; Ramkumar, Niveditta; Tapp, Stephanie J; Moschetti, Wayne E

    2018-03-03

    Total knee and hip arthroplasties can be associated with substantial blood loss, affecting morbidity and even mortality. Two pharmacological antifibrinolytics, ε-aminocaproic acid (EACA) and tranexamic acid (TXA), have been used to minimize perioperative blood loss, but both have associated morbidity. Given the added cost of these medications and the risks associated with them, a cost-effectiveness analysis was undertaken to ascertain the best strategy. A cost-effectiveness model was constructed using the payoffs of cost (in United States dollars) and effectiveness (quality-adjusted life expectancy, in days). The medical literature was used to ascertain various complications, their probabilities, utility values, and the direct medical costs associated with various health states. A time horizon of 10 years and a willingness-to-pay threshold of $100,000 were used. The total cost and effectiveness (quality-adjusted life expectancy, in days) were $459.77, $951.22, and $1174.87 and 3411.19, 3248.02, and 3342.69, respectively, for TXA, no pharmacologic hemostatic agent, and EACA. Because TXA is less expensive and more effective than the competing alternatives, it was the favored strategy. One-way sensitivity analyses for the probability of transfusion and myocardial infarction for all 3 strategies revealed that TXA remains the dominant strategy across all clinically plausible values. TXA, when compared with no pharmacologic hemostatic agent and with EACA, is the most cost-effective strategy to minimize intraoperative blood loss in hip and knee total joint arthroplasties. These findings are robust to sensitivity analyses using clinically plausible probabilities. Copyright © 2018 Elsevier Inc. All rights reserved.

  20. Passage relevance models for genomics search.

    PubMed

    Urbain, Jay; Frieder, Ophir; Goharian, Nazli

    2009-03-19

    We present a passage relevance model for integrating syntactic and semantic evidence of biomedical concepts and topics using a probabilistic graphical model. Component models of topics, concepts, terms, and document are represented as potential functions within a Markov Random Field. The probability of a passage being relevant to a biologist's information need is represented as the joint distribution across all potential functions. Relevance model feedback of top ranked passages is used to improve distributional estimates of query concepts and topics in context, and a dimensional indexing strategy is used for efficient aggregation of concept and term statistics. By integrating multiple sources of evidence including dependencies between topics, concepts, and terms, we seek to improve genomics literature passage retrieval precision. Using this model, we are able to demonstrate statistically significant improvements in retrieval precision using a large genomics literature corpus.
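
    A schematic of the log-linear "product of potentials" scoring that such a Markov Random Field formulation implies; the feature names and weights below are invented stand-ins, not the paper's components.

        # Sketch: unnormalized joint relevance as a product of MRF potentials.
        import math

        def potential(weight, feature_score):
            return math.exp(weight * feature_score)       # log-linear potential

        def passage_relevance(features, weights):
            """Unnormalized joint score = product of per-feature potentials."""
            score = 1.0
            for name, value in features.items():
                score *= potential(weights[name], value)
            return score

        weights = {"term_match": 1.2, "concept_match": 2.0, "topic_match": 1.5}
        passage = {"term_match": 0.6, "concept_match": 0.8, "topic_match": 0.3}
        print(passage_relevance(passage, weights))        # rank passages by this value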

  1. Comparison of Probabilistic Coastal Inundation Maps Based on Historical Storms and Statistically Modeled Storm Ensemble

    NASA Astrophysics Data System (ADS)

    Feng, X.; Sheng, Y.; Condon, A. J.; Paramygin, V. A.; Hall, T.

    2012-12-01

    A cost-effective method, JPM-OS (Joint Probability Method with Optimal Sampling), for determining storm response and inundation return frequencies was developed and applied to quantify the hazard of hurricane storm surges and inundation along the Southwest Florida, US coast (Condon and Sheng 2012). The JPM-OS uses piecewise multivariate regression splines coupled with dimension-adaptive sparse grids to enable the generation of a base flood elevation (BFE) map. Storms are characterized by their landfall characteristics (pressure deficit, radius to maximum winds, forward speed, heading, and landfall location), and a sparse grid algorithm determines the optimal set of storm parameter combinations so that the inundation from any other storm parameter combination can be determined. The end result is a sample of a few hundred (197 for SW FL) optimal storms, which are simulated using the dynamically coupled storm surge / wave modeling system CH3D-SSMS (Sheng et al. 2010). The limited historical climatology (1940-2009) is explored to develop probabilistic characterizations of the five storm parameters. The probability distributions are discretized, and the inundation response of all parameter combinations is determined by interpolation in the five-dimensional space of the optimal storms. The surge response and the associated joint probability of the parameter combination are used to determine the flood elevation with a 1% annual probability of occurrence. The limited historical data constrain the accuracy of the PDFs of the hurricane characteristics, which in turn affects the accuracy of the calculated BFE maps. To offset this deficiency of the limited historical dataset, this study presents a different method for producing coastal inundation maps. Instead of using the historical storm data, here we adopt 33,731 tracks that represent the storm climatology of the North Atlantic basin and the SW Florida coast. This large set of hurricane tracks is generated from a statistical model which had been used for Western North Pacific (WNP) tropical cyclone (TC) genesis (Hall 2011) as well as North Atlantic tropical cyclone genesis (Hall and Jewson 2007). The introduction of these tracks compensates for the shortage of historical samples and allows for more reliable PDFs required for the implementation of JPM-OS. Using the 33,731 tracks and JPM-OS, an optimal storm ensemble is determined. This approach results in different storms/winds for storm surge and inundation modeling, and produces different Base Flood Elevation maps for coastal regions. Coastal inundation maps produced by the two different methods will be discussed in detail in the poster paper.
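
    The JPM bookkeeping both map variants rest on can be sketched in a few lines: the annual exceedance probability of a surge level is the storm-rate-weighted sum of the joint probabilities of all parameter combinations whose (interpolated) surge response exceeds it. All inputs below are synthetic placeholders.

        # Sketch: annual exceedance and a 1%-annual-chance (BFE) level under JPM.
        import numpy as np

        rng = np.random.default_rng(9)
        n_combo = 5000
        joint_p = rng.dirichlet(np.ones(n_combo))   # discretized joint PDF of the params
        surge = rng.gamma(2.0, 0.8, n_combo)        # surrogate surge response (m)
        storms_per_year = 0.35                      # hypothetical local landfall rate

        def annual_exceedance(eta):
            return storms_per_year * np.sum(joint_p[surge > eta])

        # Scan surge levels for the 1% annual-chance elevation
        levels = np.linspace(0.0, 8.0, 801)
        bfe = levels[np.argmax([annual_exceedance(z) <= 0.01 for z in levels])]
        print(f"BFE ~ {bfe:.2f} m")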

  2. Finding Bounded Rational Equilibria. Part 2; Alternative Lagrangians and Uncountable Move Spaces

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.

    2004-01-01

    A long-running difficulty with conventional game theory has been how to modify it to accommodate the bounded rationality characterizing all real-world players. A recurring issue in statistical physics is how best to approximate joint probability distributions with decoupled (and therefore far more tractable) distributions. It has recently been shown that the same information theoretic mathematical structure, known as Probability Collectives (PC), underlies both issues. This relationship between statistical physics and game theory allows techniques and insights from the one field to be applied to the other. In particular, PC provides a formal model-independent definition of the degree of rationality of a player and of bounded rationality equilibria. This pair of papers extends previous work on PC by introducing new computational approaches to effectively find bounded rationality equilibria of common-interest (team) games.

  3. N -tag probability law of the symmetric exclusion process

    NASA Astrophysics Data System (ADS)

    Poncet, Alexis; Bénichou, Olivier; Démery, Vincent; Oshanin, Gleb

    2018-06-01

    The symmetric exclusion process (SEP), in which particles hop symmetrically on a discrete line with hard-core constraints, is a paradigmatic model of subdiffusion in confined systems. This anomalous behavior is a direct consequence of strong spatial correlations induced by the requirement that the particles cannot overtake each other. Even if this fact has been recognized qualitatively for a long time, up to now there has been no full quantitative determination of these correlations. Here we study the joint probability distribution of an arbitrary number of tagged particles in the SEP. We determine analytically its large-time limit for an arbitrary density of particles, and its full dynamics in the high-density limit. In this limit, we obtain the time-dependent large deviation function of the problem and unveil a universal scaling form shared by the cumulants.
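
    The subdiffusion itself is easy to reproduce by direct simulation; a sketch of an SEP on a ring in which the tagged particle's displacement variance grows roughly like the square root of time (system size and sweep counts are chosen only for a fast run).

        # Sketch: tagged-particle subdiffusion in the symmetric exclusion process.
        import numpy as np

        rng = np.random.default_rng(11)
        L, N, T, runs = 200, 100, 500, 100     # ring sites, particles, sweeps, realizations
        var_t = np.zeros(T)

        for _ in range(runs):
            pos = rng.choice(L, size=N, replace=False)
            occ = np.zeros(L, dtype=bool)
            occ[pos] = True
            x = np.zeros(N)                    # net displacements; particle 0 is tagged
            for t in range(T):
                for _ in range(N):             # one sweep = N random hop attempts
                    i = rng.integers(N)
                    step = -1 if rng.random() < 0.5 else 1
                    tgt = (pos[i] + step) % L
                    if not occ[tgt]:           # hard-core exclusion rejects the move
                        occ[pos[i]] = False
                        occ[tgt] = True
                        pos[i] = tgt
                        x[i] += step
                var_t[t] += x[0] ** 2 / runs   # accumulate tagged-particle variance

        print(var_t[99], var_t[399])           # ratio close to sqrt(400/100) = 2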

  4. Braided Carbon Fiber Rope Flow Characteristics. Degree awarded by Utah Univ.

    NASA Technical Reports Server (NTRS)

    Heman, J. R. C.; McCool, A. (Technical Monitor)

    2000-01-01

    I am submitting the following technical subject for consideration as a thesis topic for the master's degree: The reusable solid rocket motor (RSRM) nozzle internal joints are being evaluated for the incorporation of a carbon fiber rope (CFR) as a thermal barrier. The CFR is approximately 0.260 in. in diameter and is composed of approximately 12,000 carbon fibers, woven in ten sheaths or layers. The CFR is manufactured by a sub-tier vendor, and consequently several of its manufacturing details are proprietary to that vendor. The CFR design intent is to prevent hot motor combustion products and slag from intruding into the joint sealing area while still approaching a vented joint design, to avoid the detriments of gas jet impingement. As a member of the Heat Transfer section at Thiokol Propulsion, two main goals exist as part of this NASA-funded design effort: (1) development of a flow model through the CFR and (2) development of a heat transfer model through the CFR. While both models are needed and most probably interrelated, the gas flow model is being targeted as the subject matter. Essentially, the topic would be "Modeling of Gas Flow through a Braided Carbon Fiber Rope". An AIAA journal or conference paper is being considered through Thiokol/NASA as well. A sub-scale CFR flow test fixture was designed to simulate the relative levels of CFR compression. The test fixture provides the means to measure the gas mass flow rate upstream of the CFR and the pressure and temperature both upstream and downstream of the CFR. The test fixture was designed to eliminate the possibility of dynamic gapping at the CFR location and to provide minimal flow resistance to ambient for gases exiting the rope. The data collected in the experiment will be evaluated to define a permeability/flow resistance model. Two possibilities exist for the flow characteristics through the CFR, ranging from choked flow to strictly friction-driven flow. A test matrix for evaluating the CFR has been compiled which addresses both of these characteristics. The range of pressures to be tested covers a relatively low delta pressure, where choked flow is impossible, while the high pressure is dictated by the RSRM joint operating pressure, where choking is possible. The test matrix was also designed for a range of rope compressions, or test fixture gaps, ranging from 0.025" to 0.070". These gaps are controlled by the range of RSRM full-scale hardware joint gaps that will be expected by virtue of the joint design.
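
    A sketch of the criterion that separates the two candidate regimes, treating the rope as an ideal-gas restriction (the friction-driven alternative would need a porous-flow model instead); the pressures below are illustrative, in psia.

        # Sketch: is flow through a restriction choked, for an ideal gas?
        def is_choked(p_up, p_down, gamma=1.4):
            # Flow chokes when the downstream/upstream pressure ratio falls below
            # the critical value (2/(gamma+1))**(gamma/(gamma-1)), ~0.528 for air.
            critical_ratio = (2.0 / (gamma + 1.0)) ** (gamma / (gamma - 1.0))
            return (p_down / p_up) <= critical_ratio

        print(is_choked(p_up=900.0, p_down=14.7))   # high-pressure case: True (choked)
        print(is_choked(p_up=20.0, p_down=14.7))    # low-delta-p case: False (unchoked)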

  5. Qualitative and Quantitative Proofs of Security Properties

    DTIC Science & Technology

    2013-04-01

    Naples, Italy (September 2012) – Australasian Joint Conference on Artificial Intelligence (December 2012). • Causality, Responsibility, and Blame... realistic solution concept, Proceedings of the 21st International Joint Conference on Artificial Intelligence (IJCAI 2009), 2009, pp. 153–158. 17. J... Conference on Artificial Intelligence (AAAI-12), 2012, pp. 1917-1923. 29. J. Y. Halpern and S. Leung, Weighted sets of probabilities and minimax

  6. Tomographic measurement of joint photon statistics of the twin-beam quantum state

    PubMed

    Vasilyev; Choi; Kumar; D'Ariano

    2000-03-13

    We report the first measurement of the joint photon-number probability distribution for a two-mode quantum state created by a nondegenerate optical parametric amplifier. The measured distributions exhibit up to 1.9 dB of quantum correlation between the signal and idler photon numbers, whereas the marginal distributions are thermal as expected for parametric fluorescence.
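
    For comparison, the ideal lossless twin-beam statistics: the joint photon-number distribution of a two-mode squeezed vacuum is perfectly correlated, while each marginal is thermal. The mean photon number below is arbitrary.

        # Sketch: ideal twin-beam joint photon-number distribution.
        import numpy as np

        nbar, nmax = 1.5, 40                    # mean photons per mode (hypothetical)
        n = np.arange(nmax)
        joint = np.zeros((nmax, nmax))
        joint[n, n] = nbar ** n / (1 + nbar) ** (n + 1)   # only n_signal == n_idler

        marginal_signal = joint.sum(axis=1)     # thermal (geometric) marginal
        print(np.allclose(marginal_signal, nbar ** n / (1 + nbar) ** (n + 1)))
        print("P(n_s = n_i) =", joint.trace())  # ~1 up to truncation; losses lower it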

  7. Inverse-Probability-Weighted Estimation for Monotone and Nonmonotone Missing Data.

    PubMed

    Sun, BaoLuo; Perkins, Neil J; Cole, Stephen R; Harel, Ofer; Mitchell, Emily M; Schisterman, Enrique F; Tchetgen Tchetgen, Eric J

    2018-03-01

    Missing data is a common occurrence in epidemiologic research. In this paper, 3 data sets with induced missing values from the Collaborative Perinatal Project, a multisite US study conducted from 1959 to 1974, are provided as examples of prototypical epidemiologic studies with missing data. Our goal was to estimate the association of maternal smoking behavior with spontaneous abortion while adjusting for numerous confounders. At the same time, we did not necessarily wish to evaluate the joint distribution among potentially unobserved covariates, which is seldom the subject of substantive scientific interest. The inverse probability weighting (IPW) approach preserves the semiparametric structure of the underlying model of substantive interest and clearly separates the model of substantive interest from the model used to account for the missing data. However, IPW often will not result in valid inference if the missing-data pattern is nonmonotone, even if the data are missing at random. We describe a recently proposed approach to modeling nonmonotone missing-data mechanisms under missingness at random to use in constructing the weights in IPW complete-case estimation, and we illustrate the approach using 3 data sets described in a companion article (Am J Epidemiol. 2018;187(3):568-575).
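
    A minimal IPW complete-case sketch under missingness at random with a single missingness block, on synthetic stand-ins for the smoking/abortion variables; the nonmonotone machinery the paper describes generalizes this weighting step.

        # Sketch: inverse-probability-weighted complete-case estimation.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(8)
        n = 5000
        conf = rng.normal(size=n)                                # confounder
        smoke = rng.binomial(1, 1 / (1 + np.exp(-conf)))         # exposure
        y = rng.binomial(1, 1 / (1 + np.exp(-(-1.0 + 0.5 * smoke + 0.7 * conf))))
        obs = rng.binomial(1, 1 / (1 + np.exp(-(1.0 + conf))))   # MAR missingness

        # Step 1: model P(observed | fully observed covariates)
        pi = sm.Logit(obs, sm.add_constant(conf)).fit(disp=0).predict()

        # Step 2: weighted complete-case outcome regression
        idx = obs == 1
        X = sm.add_constant(np.column_stack([smoke, conf]))
        fit = sm.GLM(y[idx], X[idx], family=sm.families.Binomial(),
                     freq_weights=1.0 / pi[idx]).fit()
        print(fit.params)     # smoking log-odds ~0.5 recovered despite missingness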

  8. Inverse-Probability-Weighted Estimation for Monotone and Nonmonotone Missing Data

    PubMed Central

    Sun, BaoLuo; Perkins, Neil J; Cole, Stephen R; Harel, Ofer; Mitchell, Emily M; Schisterman, Enrique F; Tchetgen Tchetgen, Eric J

    2018-01-01

    Missing data is a common occurrence in epidemiologic research. In this paper, 3 data sets with induced missing values from the Collaborative Perinatal Project, a multisite US study conducted from 1959 to 1974, are provided as examples of prototypical epidemiologic studies with missing data. Our goal was to estimate the association of maternal smoking behavior with spontaneous abortion while adjusting for numerous confounders. At the same time, we did not necessarily wish to evaluate the joint distribution among potentially unobserved covariates, which is seldom the subject of substantive scientific interest. The inverse probability weighting (IPW) approach preserves the semiparametric structure of the underlying model of substantive interest and clearly separates the model of substantive interest from the model used to account for the missing data. However, IPW often will not result in valid inference if the missing-data pattern is nonmonotone, even if the data are missing at random. We describe a recently proposed approach to modeling nonmonotone missing-data mechanisms under missingness at random to use in constructing the weights in IPW complete-case estimation, and we illustrate the approach using 3 data sets described in a companion article (Am J Epidemiol. 2018;187(3):568–575). PMID:29165557

  9. Training models of anatomic shape variability

    PubMed Central

    Merck, Derek; Tracton, Gregg; Saboo, Rohit; Levy, Joshua; Chaney, Edward; Pizer, Stephen; Joshi, Sarang

    2008-01-01

    Learning probability distributions of the shape of anatomic structures requires fitting shape representations to human expert segmentations from training sets of medical images. The quality of statistical segmentation and registration methods is directly related to the quality of this initial shape fitting, yet the subject is largely overlooked or described in an ad hoc way. This article presents a set of general principles to guide such training. Our novel method is to jointly estimate both the best geometric model for any given image and the shape distribution for the entire population of training images by iteratively relaxing purely geometric constraints in favor of the converging shape probabilities as the fitted objects converge to their target segmentations. The geometric constraints are carefully crafted both to obtain legal, nonself-interpenetrating shapes and to impose the model-to-model correspondences required for useful statistical analysis. The paper closes with example applications of the method to synthetic and real patient CT image sets, including same-patient male pelvis and head-and-neck images, and cross-patient kidney and brain images. Finally, we outline how this shape training serves as the basis for our approach to IGRT/ART. PMID:18777919

  10. Statistics of Optical Coherence Tomography Data From Human Retina

    PubMed Central

    de Juan, Joaquín; Ferrone, Claudia; Giannini, Daniela; Huang, David; Koch, Giorgio; Russo, Valentina; Tan, Ou; Bruni, Carlo

    2010-01-01

    Optical coherence tomography (OCT) has recently become one of the primary methods for noninvasive probing of the human retina. The pseudoimage formed by OCT (the so-called B-scan) varies probabilistically across pixels due to complexities in the measurement technique. Hence, sensitive automatic procedures of diagnosis using OCT may exploit statistical analysis of the spatial distribution of reflectance. In this paper, we perform a statistical study of retinal OCT data. We find that the stretched exponential probability density function can model well the distribution of intensities in OCT pseudoimages. Moreover, we show a small but significant correlation between neighboring pixels when measuring OCT intensities with pixels of about 5 µm. We then develop a simple joint probability model for the OCT data consistent with known retinal features. This model fits well the stretched exponential distribution of intensities and their spatial correlation. In normal retinas, fit parameters of this model are relatively constant along retinal layers but vary across layers. However, in retinas with diabetic retinopathy, large spikes of parameter modulation interrupt the constancy within layers, exactly where pathologies are visible. We argue that these results give hope for improvement in statistical pathology-detection methods even when the disease is in its early stages. PMID:20304733
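
    As a rough illustration of the kind of fit described above (a sketch under stated assumptions, not the authors' implementation), the stretched exponential density f(x) = exp(-(x/lambda)^beta) / (lambda * Gamma(1 + 1/beta)) can be estimated by maximum likelihood; the synthetic "intensities" below merely stand in for OCT pixel values:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(1)
# Stand-in data: synthetic "intensities" in place of OCT pixel values.
data = rng.weibull(0.7, size=20_000) * 50.0

def negloglik(params, x):
    """Negative log-likelihood of the stretched exponential pdf
    f(x) = exp(-(x/lam)**beta) / (lam * Gamma(1 + 1/beta)), x >= 0."""
    lam, beta = params
    if lam <= 0 or beta <= 0:
        return np.inf
    log_norm = np.log(lam) + gammaln(1.0 + 1.0 / beta)
    return np.sum((x / lam) ** beta) + x.size * log_norm

res = minimize(negloglik, x0=[data.mean(), 1.0], args=(data,),
               method="Nelder-Mead")
lam_hat, beta_hat = res.x
print(f"lambda = {lam_hat:.2f}, beta = {beta_hat:.2f}")
```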

  11. Joint System Prognostics For Increased Efficiency And Risk Mitigation In Advanced Nuclear Reactor Instrumentation and Control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Donald D. Dudenhoeffer; Tuan Q. Tran; Ronald L. Boring

    2006-08-01

    The science of prognostics is analogous to a doctor who, based on a set of symptoms and patient tests, assesses a probable cause, the risk to the patient, and a course of action for recovery. While traditional prognostics research has focused on hydraulic and mechanical systems and their associated failures, this project will take a joint view, focusing not only on the digital I&C aspects of reliability and risk, but also on the risks associated with the human element. Model development will include not only an approximation of the control system's physical degradation but also of human performance degradation. Thus the goal of the prognostic system is to evaluate control room operation and to identify and potentially take action when performance degradation reduces plant efficiency, reliability, or safety.

  12. The Probable Explanation for the Low Friction of Natural Joints.

    PubMed

    Pawlak, Zenon; Urbaniak, Wieslaw; Hagner-Derengowska, Magdalena; Hagner, Wojciech

    2015-04-01

    The surface of an articular cartilage, coated with phospholipid (PL) bilayers, plays an important role in its lubrication and movement. Intact (normal) and depleted surfaces of the joint were modelled, and the influence of pH on the surface interfacial energy, wettability, and friction was investigated. In the experiments, the deterioration of the PL bilayer was controlled by its wettability and the applied friction. The surrounding fluid of an undamaged articular cartilage, the synovial fluid, has a pH value of approximately 7.4. Buffer solutions were formulated to represent the synovial fluid with various pH values. It was found that the surface interfacial energy was stabilised at its lowest values when the pH varied between 6.5 and 9.5. These results suggested that as the PL bilayers deteriorated, the hydration repulsion mechanism became less effective and friction increased. The decreased number of bilayers changed the wettability and lowered the lubricant properties of the PL.

  13. New color-based tracking algorithm for joints of the upper extremities

    NASA Astrophysics Data System (ADS)

    Wu, Xiangping; Chow, Daniel H. K.; Zheng, Xiaoxiang

    2007-11-01

    To track the joints of the upper limb of stroke sufferers for rehabilitation assessment, a new tracking algorithm which utilizes a developed color-based particle filter and a novel strategy for handling occlusions is proposed in this paper. Objects are represented by their color histogram models, and a particle filter is introduced to track the objects within a probability framework. A Kalman filter, as a local optimizer, is integrated into the sampling stage of the particle filter; it steers samples to a region with high likelihood, so that fewer samples are required. A color clustering method and anatomic constraints are used in dealing with the occlusion problem. Compared with the basic particle filtering method, the experimental results show that the new algorithm reduces the number of samples and hence the computational cost, and handles complete occlusion over a few frames better.
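
    The particle-filter core of such a tracker can be sketched in one dimension (a toy stand-in: the paper's color-histogram likelihood and Kalman-steered proposal are replaced here by a Gaussian likelihood and a plain bootstrap proposal):

```python
import numpy as np

rng = np.random.default_rng(8)

def particle_filter_track(observations, n_particles=500, obs_sigma=2.0, proc_sigma=1.0):
    """Bootstrap particle filter tracking a 1-D position.

    Per frame: propagate particles with random-walk dynamics, weight them by
    the observation likelihood, form the weighted-mean estimate, then resample.
    """
    particles = rng.normal(observations[0], obs_sigma, n_particles)
    estimates = []
    for z in observations:
        particles = particles + proc_sigma * rng.normal(size=n_particles)  # predict
        w = np.exp(-0.5 * ((z - particles) / obs_sigma) ** 2)              # weight
        w /= w.sum()
        estimates.append(np.sum(w * particles))                            # estimate
        particles = particles[rng.choice(n_particles, n_particles, p=w)]   # resample
    return np.array(estimates)

true_path = np.cumsum(rng.normal(0.0, 1.0, 100)) + 50.0   # simulated joint position
obs = true_path + rng.normal(0.0, 2.0, 100)               # noisy measurements
est = particle_filter_track(obs)
print("RMS tracking error:", np.sqrt(np.mean((est - true_path) ** 2)))
```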

  14. Diagnostics for Confounding of Time-varying and Other Joint Exposures.

    PubMed

    Jackson, John W

    2016-11-01

    The effects of joint exposures (or exposure regimes) include those of adhering to assigned treatment versus placebo in a randomized controlled trial, duration of exposure in a cohort study, interactions between exposures, and direct effects of exposure, among others. Unlike the setting of a single point exposure (e.g., propensity score matching), there are few tools to describe confounding for joint exposures or how well a method resolves it. Investigators need tools that describe confounding in ways that are conceptually grounded and intuitive for those who read, review, and use applied research to guide policy. We revisit the implications of exchangeability conditions that hold in sequentially randomized trials, and the bias structure that motivates the use of g-methods, such as marginal structural models. From these, we develop covariate balance diagnostics for joint exposures that can (1) describe time-varying confounding, (2) assess whether covariates are predicted by prior exposures given their past, the indication for g-methods, and (3) describe residual confounding after inverse probability weighting. For each diagnostic, we present time-specific metrics that encompass a wide class of joint exposures, including regimes of multivariate time-varying exposures in censored data, with multivariate point exposures as a special case. We outline how to estimate these directly or with regression and how to average them over person-time. Using a simulated example, we show how these metrics can be presented graphically. This conceptually grounded framework can potentially aid the transparent design, analysis, and reporting of studies that examine joint exposures. We provide easy-to-use tools to implement it.
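
    A minimal version of one such diagnostic, the standardized mean difference of a covariate across exposure groups before and after inverse probability weighting, might look as follows (a sketch with simulated point-exposure data; the time-specific metrics for joint exposures generalize this quantity):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 4000
l = rng.normal(size=n)                            # baseline confounder
a = rng.binomial(1, 1 / (1 + np.exp(-1.2 * l)))   # exposure depends on l

def smd(x, a, w=None):
    """(Weighted) standardized mean difference of covariate x across exposure a."""
    w = np.ones_like(x) if w is None else w
    m1 = np.average(x[a == 1], weights=w[a == 1])
    m0 = np.average(x[a == 0], weights=w[a == 0])
    pooled_sd = np.sqrt((x[a == 1].var() + x[a == 0].var()) / 2)
    return (m1 - m0) / pooled_sd

# Inverse-probability-of-exposure weights from a propensity score model.
design = sm.add_constant(l)
ps = sm.GLM(a, design, family=sm.families.Binomial()).fit().predict(design)
w = np.where(a == 1, 1 / ps, 1 / (1 - ps))

print(f"SMD before weighting: {smd(l, a):+.3f}")     # imbalance from confounding
print(f"SMD after  weighting: {smd(l, a, w):+.3f}")  # should be near zero
```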

  15. Treatment with unsaponifiable extracts of avocado and soybean increases TGF-beta1 and TGF-beta2 levels in canine joint fluid.

    PubMed

    Altinel, Levent; Saritas, Z Kadir; Kose, Kamil Cagri; Pamuk, Kamuran; Aksoy, Yusuf; Serteser, Mustafa

    2007-02-01

    Avocado and soya unsaponifiables (ASU) are plant extracts used as a slow-acting antiarthritic agent. ASU stimulate the synthesis of matrix components by chondrocytes, probably by increasing the production of transforming growth factor-beta (TGF-beta). TGF-beta is expressed by chondrocytes and osteoblasts and is present in cartilage matrix. This study investigates the effect of ASU treatment on the levels of two isoforms of TGF-beta, TGF-beta1 and TGF-beta2, in the knee joint fluid using a canine model. Twenty-four outbred dogs were divided into three groups. The control animals were given a normal diet, while the treated animals were given 300 mg ASU every three days or every day. Joint fluid samples were obtained prior to treatment, and at the end of every month (up to three months). TGF-beta1 and TGF-beta2 levels were measured using a quantitative sandwich enzyme immunoassay technique. ASU treatment caused an increase in TGF-beta1 and TGF-beta2 levels in the joint fluid when compared to controls. The different doses did not cause a significant difference in joint fluid TGF levels. TGF-beta1 levels in the treated animals reached maximum values at the end of the second month and then decreased after the third month, while TGF-beta2 levels showed a marginal increase during the first two months, followed by a marked increase at the end of the third month. In conclusion, ASU increased both TGF-beta1 and TGF-beta2 levels in knee joint fluid.

  16. A multivariate quadrature based moment method for LES based modeling of supersonic combustion

    NASA Astrophysics Data System (ADS)

    Donde, Pratik; Koo, Heeseok; Raman, Venkat

    2012-07-01

    The transported probability density function (PDF) approach is a powerful technique for large eddy simulation (LES) based modeling of scramjet combustors. In this approach, a high-dimensional transport equation for the joint composition-enthalpy PDF needs to be solved. Quadrature based approaches provide deterministic Eulerian methods for solving the joint-PDF transport equation. In this work, it is first demonstrated that the numerical errors associated with LES require special care in the development of PDF solution algorithms. The direct quadrature method of moments (DQMOM) is one quadrature-based approach developed for supersonic combustion modeling. This approach is shown to generate inconsistent evolution of the scalar moments. Further, gradient-based source terms that appear in the DQMOM transport equations are severely underpredicted in LES leading to artificial mixing of fuel and oxidizer. To overcome these numerical issues, a semi-discrete quadrature method of moments (SeQMOM) is formulated. The performance of the new technique is compared with the DQMOM approach in canonical flow configurations as well as a three-dimensional supersonic cavity stabilized flame configuration. The SeQMOM approach is shown to predict subfilter statistics accurately compared to the DQMOM approach.

  17. Survival behavior in the cyclic Lotka-Volterra model with a randomly switching reaction rate

    NASA Astrophysics Data System (ADS)

    West, Robert; Mobilia, Mauro; Rucklidge, Alastair M.

    2018-02-01

    We study the influence of a randomly switching reproduction-predation rate on the survival behavior of the nonspatial cyclic Lotka-Volterra model, also known as the zero-sum rock-paper-scissors game, used to metaphorically describe the cyclic competition between three species. In large and finite populations, demographic fluctuations (internal noise) drive two species to extinction in a finite time, while the species with the smallest reproduction-predation rate is the most likely to be the surviving one (law of the weakest). Here we model environmental (external) noise by assuming that the reproduction-predation rate of the strongest species (the fastest to reproduce and predate) in a given static environment randomly switches between two values corresponding to more and less favorable external conditions. We study the joint effect of environmental and demographic noise on the species survival probabilities and on the mean extinction time. In particular, we investigate whether the survival probabilities follow the law of the weakest and analyze their dependence on the external noise intensity and switching rate. Remarkably, when, on average, there is a finite number of switches prior to extinction, the survival probability of the predator of the species whose reaction rate switches typically varies nonmonotonically with the external noise intensity (with optimal survival about a critical noise strength). We also outline the relationship with the case where all reaction rates switch on markedly different time scales.
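
    The model invites a short stochastic simulation. The sketch below is an illustrative reimplementation (parameter values are arbitrary, not those of the paper): the rate of the strongest species switches between two values at rate nu, and the surviving species is tallied over many runs:

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate(N=90, k_bc=1.0, k_ca=1.0, k_ab=(2.4, 1.6), nu=1.0):
    """Zero-sum rock-paper-scissors (cyclic Lotka-Volterra) in a finite population.

    The reproduction-predation rate of the strongest species A switches between
    the two values in k_ab at rate nu (dichotomous environmental noise).
    Returns the index of the surviving species (0=A, 1=B, 2=C).
    """
    n = np.array([N // 3, N // 3, N - 2 * (N // 3)])
    env = 0
    while (n == 0).sum() < 2:                       # run until two extinctions
        rates = np.array([
            k_ab[env] * n[0] * n[1] / N,            # A + B -> 2A
            k_bc * n[1] * n[2] / N,                 # B + C -> 2B
            k_ca * n[2] * n[0] / N,                 # C + A -> 2C
            nu,                                     # environmental switch
        ])
        event = rng.choice(4, p=rates / rates.sum())
        if event == 3:
            env = 1 - env
        else:
            n[event] += 1                           # winner reproduces
            n[(event + 1) % 3] -= 1                 # loser is consumed
    return int(np.argmax(n))

survivors = [simulate() for _ in range(200)]
print("survival frequencies (A, B, C):",
      [survivors.count(i) / 200 for i in range(3)])
```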

  18. Three-Dimensional Geometric Nonlinear Contact Stress Analysis of Riveted Joints

    NASA Technical Reports Server (NTRS)

    Shivakumar, Kunigal N.; Ramanujapuram, Vivek

    1998-01-01

    The problems associated with fatigue were brought into the forefront of research by the explosive decompression and structural failure of Aloha Airlines Flight 243 in 1988. The structural failure of this airplane has been attributed to debonding and multiple cracking along the longitudinal lap splice riveted joint in the fuselage. This crash created what may be termed a minor "Structural Integrity Revolution" in the commercial transport industry. Major steps have been taken by the manufacturers, operators, and authorities to improve the structural airworthiness of the aging fleet of airplanes. Notwithstanding this considerable effort, there are still outstanding issues and concerns related to the formation of Widespread Fatigue Damage, which is believed to have been a contributing factor in the probable cause of the Aloha accident. The lesson from this accident was that Multiple-Site Damage (MSD) in "aging" aircraft can lead to extensive aircraft damage. A strong candidate in which MSD is highly probable to occur is the riveted lap joint.

  19. A stochastic storm surge generator for the German North Sea and the multivariate statistical assessment of the simulation results

    NASA Astrophysics Data System (ADS)

    Wahl, Thomas; Jensen, Jürgen; Mudersbach, Christoph

    2010-05-01

    Storm surges along the German North Sea coastline led to major damages in the past, and the risk of inundation is expected to increase in the course of ongoing climate change. Knowledge of the characteristics of possible storm surges is essential for integrated risk analyses, e.g. based on the source-pathway-receptor concept. The latter includes storm surge simulation/analyses (source), modelling of dike/dune breach scenarios (pathway), and the quantification of potential losses (receptor). In subproject 1b of the German joint research project XtremRisK (www.xtremrisk.de), a stochastic storm surge generator for the south-eastern North Sea area is developed. The input data for the multivariate model are high-resolution sea level observations from tide gauges during extreme events. Based on 25 parameters (19 sea level parameters and 6 time parameters), observed storm surge hydrographs consisting of three tides are parameterised. After fitting common parametric probability distributions and running a large number of Monte Carlo simulations, the final reconstruction yields a set of 100,000 (default) synthetic storm surge events with a one-minute resolution. Such a data set can potentially serve as the basis for a large number of applications. For risk analyses, storm surges with peak water levels exceeding the design water levels are of special interest. The occurrence probabilities of the simulated extreme events are estimated based on multivariate statistics, considering the parameters "peak water level" and "fullness/intensity". In the past, most studies considered only the peak water levels during extreme events, which might not be the most important parameter in all cases. Here, a 2D-Archimedean copula model is used for the estimation of the joint probabilities of the selected parameters, accounting for the dependence structure over and above the margins. In coordination with subproject 1a, the results will be used as the input for the XtremRisK subprojects 2 to 4. The project is funded by the German Federal Ministry of Education and Research (BMBF) (Project No. 03 F 0483 B).
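
    The copula step can be sketched compactly. Assuming a Gumbel-Hougaard copula (one Archimedean candidate; the abstract does not single out a family), the joint "AND" exceedance probability, and hence the joint return period, follows directly from the marginal CDF values of the two parameters:

```python
import numpy as np

def gumbel_copula(u, v, theta):
    """Gumbel-Hougaard copula C(u, v); theta >= 1 sets the upper-tail dependence."""
    return np.exp(-(((-np.log(u)) ** theta + (-np.log(v)) ** theta) ** (1 / theta)))

def joint_return_period_and(u, v, theta, mu=1.0):
    """Return period of {X > x AND Y > y}, given marginal CDF values u = F(x),
    v = G(y) and the mean inter-arrival time mu of events (1 yr for annual maxima)."""
    p_exceed = 1.0 - u - v + gumbel_copula(u, v, theta)
    return mu / p_exceed

# Example: both margins at their 100-year level (u = v = 0.99), moderate dependence.
print(joint_return_period_and(0.99, 0.99, theta=2.0))   # ~170 years
```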

  20. Mathematical and physical meaning of the Bell inequalities

    NASA Astrophysics Data System (ADS)

    Santos, Emilio

    2016-09-01

    It is shown that the Bell inequalities are closely related to the triangle inequalities involving distance functions amongst pairs of random variables with values {0,1}. A hidden variables model may be defined as a mapping between a set of quantum projection operators and a set of random variables. The model is noncontextual if there is a joint probability distribution. The Bell inequalities are necessary conditions for its existence. The inequalities are most relevant when measurements are performed at space-like separation, thus showing a conflict between quantum mechanics and local realism (Bell's theorem). The relations of the Bell inequalities with contextuality, the Kochen-Specker theorem, and quantum entanglement are briefly discussed.
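
    The triangle-inequality reading is easy to check numerically. The sketch below verifies d(a,c) <= d(a,b) + d(b,c), with d(x,y) = P(x != y), over all deterministic assignments (and hence over every joint distribution), then evaluates singlet-state predictions at three illustrative angles, using the sign convention in which equal settings give equal outcomes:

```python
import numpy as np
from itertools import product

def classical_check():
    """d(a,c) <= d(a,b) + d(b,c) with d(x,y) = P(x != y) holds for every joint
    distribution of three {0,1} variables: verify it at all deterministic
    assignments (the extreme points); mixtures inherit the inequality."""
    for a, b, c in product([0, 1], repeat=3):
        assert int(a != c) <= int(a != b) + int(b != c)
    print("triangle inequality holds classically")

def quantum_distances():
    """Singlet predictions, with one wing's outcome flipped so that equal
    settings give equal outcomes: P(a != b) = sin^2((alpha - beta) / 2)."""
    d = lambda alpha, beta: np.sin(np.radians(alpha - beta) / 2) ** 2
    lhs, rhs = d(0, 120), d(0, 60) + d(60, 120)
    print(f"quantum: d(a,c) = {lhs:.2f} > {rhs:.2f} = d(a,b) + d(b,c)")

classical_check()
quantum_distances()
```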

  1. A model describing vestibular detection of body sway motion.

    NASA Technical Reports Server (NTRS)

    Nashner, L. M.

    1971-01-01

    An experimental technique was developed which facilitated the formulation of a quantitative model describing vestibular detection of body sway motion in a postural response mode. All cues, except vestibular ones, which gave a subject an indication that he was beginning to sway, were eliminated using a specially designed two-degree-of-freedom platform; body sway was then induced and resulting compensatory responses at the ankle joints measured. Hybrid simulation compared the experimental results with models of the semicircular canals and utricular otolith receptors. Dynamic characteristics of the resulting canal model compared closely with characteristics of models which describe eye movement and subjective responses to body rotational motions. The average threshold level, in the postural response mode, however, was considerably lower. Analysis indicated that the otoliths probably play no role in the initial detection of body sway motion.

  2. A hybrid probabilistic/spectral model of scalar mixing

    NASA Astrophysics Data System (ADS)

    Vaithianathan, T.; Collins, Lance

    2002-11-01

    In the probability density function (PDF) description of a turbulent reacting flow, the local temperature and species concentration are replaced by a high-dimensional joint probability that describes the distribution of states in the fluid. The PDF has the great advantage of rendering the chemical reaction source terms closed, independent of their complexity. However, molecular mixing, which involves two-point information, must be modeled. Indeed, the qualitative shape of the PDF is sensitive to this modeling, hence the reliability of the model to predict even the closed chemical source terms rests heavily on the mixing model. We will present a new closure for the mixing based on a spectral representation of the scalar field. The model is implemented as an ensemble of stochastic particles, each carrying scalar concentrations at different wavenumbers. Scalar exchanges within a given particle represent "transfer" while scalar exchanges between particles represent "mixing." The equations governing the scalar concentrations at each wavenumber are derived from the eddy damped quasi-normal Markovian (EDQNM) theory. The model correctly predicts the evolution of an initial double delta function PDF into a Gaussian, as seen in the numerical study by Eswaran & Pope (1988). Furthermore, the model predicts that the scalar gradient distribution (which is available in this representation) approaches log normal at long times. Comparisons of the model with data derived from direct numerical simulations will be shown.

  3. Investigations of turbulent scalar fields using probability density function approach

    NASA Technical Reports Server (NTRS)

    Gao, Feng

    1991-01-01

    Scalar fields undergoing random advection have attracted much attention from researchers in both the theoretical and practical sectors. Research interest spans from the study of the small scale structures of turbulent scalar fields to the modeling and simulations of turbulent reacting flows. The probability density function (PDF) method is an effective tool in the study of turbulent scalar fields, especially for those which involve chemical reactions. It has been argued that a one-point, joint PDF approach is the one to choose from among many simulation and closure methods for turbulent combustion and chemically reacting flows based on its practical feasibility in the foreseeable future for multiple reactants. Instead of the multi-point PDF, the joint PDF of a scalar and its gradient which represents the roles of both scalar and scalar diffusion is introduced. A proper closure model for the molecular diffusion term in the PDF equation is investigated. Another direction in this research is to study the mapping closure method that has been recently proposed to deal with the PDF's in turbulent fields. This method seems to have captured the physics correctly when applied to diffusion problems. However, if the turbulent stretching is included, the amplitude mapping has to be supplemented by either adjusting the parameters representing turbulent stretching at each time step or by introducing the coordinate mapping. This technique is still under development and seems to be quite promising. The final objective of this project is to understand some fundamental properties of the turbulent scalar fields and to develop practical numerical schemes that are capable of handling turbulent reacting flows.

  4. Time-Series INSAR: An Integer Least-Squares Approach For Distributed Scatterers

    NASA Astrophysics Data System (ADS)

    Samiei-Esfahany, Sami; Hanssen, Ramon F.

    2012-01-01

    The objective of this research is to extend the geodetic mathematical model which was developed for persistent scatterers to a model which can exploit distributed scatterers (DS). The main focus is on the integer least-squares framework, and the main challenge is to include the decorrelation effect in the mathematical model. In order to adapt the integer least-squares mathematical model for DS, we altered the model from a single-master to a multi-master configuration and introduced the decorrelation effect stochastically. This effect is described in our model by a full covariance matrix. We propose to derive this covariance matrix by numerical integration of the (joint) probability distribution function (PDF) of interferometric phases. This PDF is a function of coherence values and can be directly computed from radar data. We show that the use of this model can improve the performance of temporal phase unwrapping of distributed scatterers.

  5. Dependent Neyman type A processes based on common shock Poisson approach

    NASA Astrophysics Data System (ADS)

    Kadilar, Gamze Özel; Kadilar, Cem

    2016-04-01

    The Neyman type A process is used for describing clustered data, since the Poisson process is insufficient for clustering of events. In a multivariate setting, there may be dependencies between multivariate Neyman type A processes. In this study, the dependent form of the Neyman type A process is considered under the common shock approach, and the joint probability function is derived for the dependent Neyman type A Poisson processes. An application based on forest fires in Turkey is then given. The results show that the joint probability function of the dependent Neyman type A processes, which is obtained in this study, can provide a good probabilistic fit for the total number of burned trees in Turkey.
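
    A simulation sketch of the common-shock construction (illustrative parameter values, not values fitted to the forest fire data):

```python
import numpy as np

rng = np.random.default_rng(4)

def dependent_neyman_type_a(lam1, lam2, lam12, nu1, nu2, size=100_000):
    """Two Neyman type A counts whose Poisson numbers of clusters share a
    common shock: M1 = N1 + N12 and M2 = N2 + N12 with independent Poisson
    N1, N2, N12.  Given M clusters of iid Poisson(nu) sizes, the total count
    is Poisson(nu * M)."""
    n12 = rng.poisson(lam12, size)                   # shared shock
    m1 = rng.poisson(lam1, size) + n12
    m2 = rng.poisson(lam2, size) + n12
    x1 = rng.poisson(nu1 * m1)
    x2 = rng.poisson(nu2 * m2)
    return x1, x2

x1, x2 = dependent_neyman_type_a(2.0, 3.0, 1.0, nu1=4.0, nu2=5.0)
print("corr(X1, X2) =", np.corrcoef(x1, x2)[0, 1])
print("P(X1 = 8, X2 = 12) ~", np.mean((x1 == 8) & (x2 == 12)))  # empirical joint pmf
```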

  6. Traumatic synovitis in a classical guitarist: a study of joint laxity.

    PubMed

    Bird, H A; Wright, V

    1981-04-01

    A classical guitarist performing for at least 5 hours each day developed a traumatic synovitis at the left wrist joint that was first erroneously considered to be rheumatoid arthritis. Comparison with members of the same guitar class suggested that unusual joint laxity of the fingers and wrist, probably inherited from the patient's father, was of more importance in the aetiology of the synovitis than a wide range of movement acquired by regular practice. Hyperextension of the metacarpophalangeal joint of the left index finger, quantified by the hyperextensometer, was less marked in the guitarists than in 100 normal individuals. This may be attributed to greater muscular control of the fingers. Lateral instability in the loaded joint may be the most important factor in the aetiology of traumatic synovitis.

  7. Traumatic synovitis in a classical guitarist: a study of joint laxity.

    PubMed Central

    Bird, H A; Wright, V

    1981-01-01

    A classical guitarist performing for at least 5 hours each day developed a traumatic synovitis at the left wrist joint that was first erroneously considered to be rheumatoid arthritis. Comparison with members of the same guitar class suggested that unusual joint laxity of the fingers and wrist, probably inherited from the patient's father, was of more importance in the aetiology of the synovitis than a wide range of movement acquired by regular practice. Hyperextension of the metacarpophalangeal joint of the left index finger, quantified by the hyperextensometer, was less marked in the guitarists than in 100 normal individuals. This may be attributed to greater muscular control of the fingers. Lateral instability in the loaded joint may be the most important factor in the aetiology of traumatic synovitis. PMID:7224687

  8. Quantifying and comparing dynamic predictive accuracy of joint models for longitudinal marker and time-to-event in presence of censoring and competing risks.

    PubMed

    Blanche, Paul; Proust-Lima, Cécile; Loubère, Lucie; Berr, Claudine; Dartigues, Jean-François; Jacqmin-Gadda, Hélène

    2015-03-01

    Thanks to the growing interest in personalized medicine, joint modeling of longitudinal marker and time-to-event data has recently started to be used to derive dynamic individual risk predictions. Individual predictions are called dynamic because they are updated when information on the subject's health profile grows with time. We focus in this work on statistical methods for quantifying and comparing dynamic predictive accuracy of this kind of prognostic models, accounting for right censoring and possibly competing events. Dynamic area under the ROC curve (AUC) and Brier Score (BS) are used to quantify predictive accuracy. Nonparametric inverse probability of censoring weighting is used to estimate dynamic curves of AUC and BS as functions of the time at which predictions are made. Asymptotic results are established and both pointwise confidence intervals and simultaneous confidence bands are derived. Tests are also proposed to compare the dynamic prediction accuracy curves of two prognostic models. The finite sample behavior of the inference procedures is assessed via simulations. We apply the proposed methodology to compare various prediction models using repeated measures of two psychometric tests to predict dementia in the elderly, accounting for the competing risk of death. Models are estimated on the French Paquid cohort and predictive accuracies are evaluated and compared on the French Three-City cohort. © 2014, The International Biometric Society.
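
    The inverse-probability-of-censoring-weighted Brier score at a single prediction horizon can be sketched as follows (a simplified single-risk version with invented data; the paper additionally handles competing risks, dynamic updating, and confidence bands):

```python
import numpy as np

rng = np.random.default_rng(7)

def km_censoring(time, event):
    """Kaplan-Meier estimate of the censoring survival G(t) = P(C > t)
    (censoring treated as the 'event'; assumes G > 0 up to the horizon)."""
    order = np.argsort(time)
    t, cens = time[order], 1 - event[order]
    at_risk = t.size - np.arange(t.size)
    surv = np.cumprod(1.0 - cens / at_risk)
    return lambda s: np.concatenate([[1.0], surv])[np.searchsorted(t, s, side="right")]

def ipcw_brier(time, event, pi, horizon):
    """IPCW Brier score at `horizon` for predicted event probabilities `pi`."""
    G = km_censoring(time, event)
    had_event = (time <= horizon) & (event == 1)
    at_risk = time > horizon
    w = np.where(had_event, 1.0 / G(time - 1e-9), 0.0)   # 1/G(T-) for events
    w = np.where(at_risk, 1.0 / G(horizon), w)           # 1/G(t) if still at risk
    return np.mean(w * (had_event.astype(float) - pi) ** 2)

n = 2000
x = rng.normal(size=n)
t_event = rng.exponential(np.exp(-x))            # event hazard exp(x)
t_cens = rng.exponential(2.0, n)                 # independent censoring
time = np.minimum(t_event, t_cens)
event = (t_event <= t_cens).astype(int)
pi = 1 - np.exp(-np.exp(x))                      # true P(T <= 1 | x)
print("IPCW Brier score at t = 1:", ipcw_brier(time, event, pi, horizon=1.0))
```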

  9. Multiple imputation to account for measurement error in marginal structural models

    PubMed Central

    Edwards, Jessie K.; Cole, Stephen R.; Westreich, Daniel; Crane, Heidi; Eron, Joseph J.; Mathews, W. Christopher; Moore, Richard; Boswell, Stephen L.; Lesko, Catherine R.; Mugavero, Michael J.

    2015-01-01

    Background Marginal structural models are an important tool for observational studies. These models typically assume that variables are measured without error. We describe a method to account for differential and non-differential measurement error in a marginal structural model. Methods We illustrate the method estimating the joint effects of antiretroviral therapy initiation and current smoking on all-cause mortality in a United States cohort of 12,290 patients with HIV followed for up to 5 years between 1998 and 2011. Smoking status was likely measured with error, but a subset of 3686 patients who reported smoking status on separate questionnaires composed an internal validation subgroup. We compared a standard joint marginal structural model fit using inverse probability weights to a model that also accounted for misclassification of smoking status using multiple imputation. Results In the standard analysis, current smoking was not associated with increased risk of mortality. After accounting for misclassification, current smoking without therapy was associated with increased mortality [hazard ratio (HR): 1.2; 95% CI: 0.6, 2.3]. The HR for current smoking and therapy (0.4; 95% CI: 0.2, 0.7) was similar to the HR for no smoking and therapy (0.4; 95% CI: 0.2, 0.6). Conclusions Multiple imputation can be used to account for measurement error in concert with methods for causal inference to strengthen results from observational studies. PMID:26214338

  10. Background for Joint Systems Aspects of AIR 6000

    DTIC Science & Technology

    2000-04-01

    Checkland's Soft Systems Methodology [7, 8, 9]. The analytical techniques that are proposed for joint systems work are based on calculating probability... Pearl, Judea, Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference.

  11. Bayesian calibration of mechanistic aquatic biogeochemical models and benefits for environmental management

    NASA Astrophysics Data System (ADS)

    Arhonditsis, George B.; Papantou, Dimitra; Zhang, Weitao; Perhar, Gurbir; Massos, Evangelia; Shi, Molu

    2008-09-01

    Aquatic biogeochemical models have been an indispensable tool for addressing pressing environmental issues, e.g., understanding oceanic response to climate change, elucidation of the interplay between plankton dynamics and atmospheric CO 2 levels, and examination of alternative management schemes for eutrophication control. Their ability to form the scientific basis for environmental management decisions can be undermined by the underlying structural and parametric uncertainty. In this study, we outline how we can attain realistic predictive links between management actions and ecosystem response through a probabilistic framework that accommodates rigorous uncertainty analysis of a variety of error sources, i.e., measurement error, parameter uncertainty, discrepancy between model and natural system. Because model uncertainty analysis essentially aims to quantify the joint probability distribution of model parameters and to make inference about this distribution, we believe that the iterative nature of Bayes' Theorem is a logical means to incorporate existing knowledge and update the joint distribution as new information becomes available. The statistical methodology begins with the characterization of parameter uncertainty in the form of probability distributions, then water quality data are used to update the distributions, and yield posterior parameter estimates along with predictive uncertainty bounds. Our illustration is based on a six state variable (nitrate, ammonium, dissolved organic nitrogen, phytoplankton, zooplankton, and bacteria) ecological model developed for gaining insight into the mechanisms that drive plankton dynamics in a coastal embayment; the Gulf of Gera, Island of Lesvos, Greece. The lack of analytical expressions for the posterior parameter distributions was overcome using Markov chain Monte Carlo simulations; a convenient way to obtain representative samples of parameter values. The Bayesian calibration resulted in realistic reproduction of the key temporal patterns of the system, offered insights into the degree of information the data contain about model inputs, and also allowed the quantification of the dependence structure among the parameter estimates. Finally, our study uses two synthetic datasets to examine the ability of the updated model to provide estimates of predictive uncertainty for water quality variables of environmental management interest.
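
    The MCMC step can be illustrated with a bare-bones random-walk Metropolis sampler on a toy one-parameter calibration problem (a sketch only; the paper does not specify the sampler variant, and the model and prior below are stand-ins):

```python
import numpy as np

rng = np.random.default_rng(6)

def metropolis(log_post, x0, n_iter=20_000, step=0.1):
    """Random-walk Metropolis: draws a representative sample from the joint
    posterior of the model parameters."""
    x, lp = float(x0), log_post(float(x0))
    chain = np.empty(n_iter)
    for i in range(n_iter):
        prop = x + step * rng.normal()
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept with prob min(1, ratio)
            x, lp = prop, lp_prop
        chain[i] = x
    return chain

# Toy calibration: one growth-rate parameter k, lognormal(0, 1) prior, and a
# Gaussian likelihood around a stand-in process-model prediction 2 * k.
obs, sigma = 2.3, 0.5

def log_post(k):
    if k <= 0:
        return -np.inf
    log_prior = -np.log(k) - 0.5 * np.log(k) ** 2       # lognormal, up to a constant
    log_lik = -0.5 * ((obs - 2.0 * k) / sigma) ** 2
    return log_prior + log_lik

chain = metropolis(log_post, x0=1.0)
print("posterior mean of k:", chain[5000:].mean())      # discard burn-in
```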

  12. Arthroscopic Management of Scaphoid-Trapezium-Trapezoid Joint Arthritis.

    PubMed

    Pegoli, Loris; Pozzi, Alessandro

    2017-11-01

    Scaphoid-trapezium-trapezoid (STT) joint arthritis is a common condition consisting of pain on the radial side of the wrist and base of the thumb, swelling, and tenderness over the STT joint. Common symptoms are loss of grip strength and thumb function. There are several treatments, from symptomatic conservative treatment to surgical solutions, such as arthrodesis, arthroplasties, and prosthesis implants. The role of arthroscopy has grown, and it is probably the best approach to treating this condition. Advantages of arthroscopic management of STT arthritis are faster recovery, a better view of the joint during surgery, and the possibility of creating less damage to the capsular and ligamentous structures. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. Nonstationary envelope process and first excursion probability.

    NASA Technical Reports Server (NTRS)

    Yang, J.-N.

    1972-01-01

    The definition of the stationary random envelope proposed by Cramer and Leadbetter is extended to the envelope of nonstationary random processes possessing evolutionary power spectral densities. The density function, the joint density function, the moment function, and the crossing rate of a level of the nonstationary envelope process are derived. Based on the envelope statistics, approximate solutions to the first excursion probability of nonstationary random processes are obtained. In particular, applications of the first excursion probability to earthquake engineering problems are demonstrated in detail.
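
    Turning a crossing rate into a first excursion probability is commonly done with the independent-crossings (Poisson) approximation P(T) ≈ 1 - exp(-∫_0^T ν(t) dt). A sketch, using Rice's formula for a stationary Gaussian process as the example crossing rate (illustrative values only):

```python
import numpy as np

def first_excursion_poisson(nu_of_t, T, n=1000):
    """Independent-crossings approximation to the first excursion probability:
    P(T) ~ 1 - exp(-integral of nu(t) over [0, T]), where nu(t) is the mean
    rate of up-crossings of the threshold (time-varying if nonstationary)."""
    t = np.linspace(0.0, T, n)
    return 1.0 - np.exp(-np.trapz(nu_of_t(t), t))

# Stationary Gaussian example, Rice's formula: nu(b) = nu0 * exp(-b^2 / (2 sigma^2)).
nu0, sigma, b = 2.0, 1.0, 3.0      # central frequency [Hz], RMS level, threshold
nu = lambda t: nu0 * np.exp(-b ** 2 / (2 * sigma ** 2)) * np.ones_like(t)
print(first_excursion_poisson(nu, T=60.0))   # P(exceed b at least once in 60 s)
```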

  14. On the detection of very high redshift gamma-ray bursts with Swift

    NASA Astrophysics Data System (ADS)

    Salvaterra, R.; Campana, S.; Chincarini, G.; Tagliaferri, G.; Covino, S.

    2007-09-01

    We compute the probability of detecting long gamma-ray bursts (GRBs) at z >= 5 with Swift, assuming that GRBs form preferentially in low-metallicity environments. The model fits both the observed Burst and Transient Source Experiment (BATSE) and Swift GRB differential peak flux distributions well and is consistent with the number of z >= 2.5 detections in the 2-yr Swift data. We find that the probability of observing a burst at z >= 5 becomes larger than 10 per cent for photon fluxes P < 1 ph s^-1 cm^-2, consistent with the number of confirmed detections. The corresponding fraction of z >= 5 bursts in the Swift catalogue is ~10-30 per cent depending on the adopted metallicity threshold for GRB formation. We propose to use the computed probability as a tool to identify high-redshift GRBs. By jointly considering promptly available information provided by Swift and model results, we can select reliable z >= 5 candidates within a few hours of the BAT detection. We test the procedure against the past year of Swift data: only three bursts match all our requirements, two being confirmed at z >= 5. Another three possible candidates are picked up by slightly relaxing the adopted criteria. No low-z interloper is found among the six candidates.

  15. Patient and implant survival following joint replacement because of metastatic bone disease

    PubMed Central

    2013-01-01

    Background Patients suffering from a pathological fracture or painful bony lesion because of metastatic bone disease often benefit from a total joint replacement. However, these are large operations in patients who are often weak. We examined the patient survival and complication rates after total joint replacement as the treatment for bone metastasis or hematological diseases of the extremities. Patients and methods 130 patients (mean age 64 (30–85) years, 76 females) received 140 joint replacements due to skeletal metastases (n = 114) or hematological disease (n = 16) during the period 2003–2008. 21 replaced joints were located in the upper extremities and 119 in the lower extremities. Clinical and survival data were extracted from patient files and various registers. Results The probability of patient survival was 51% (95% CI: 42–59) after 6 months, 39% (CI: 31–48) after 12 months, and 29% (CI: 21–37) after 24 months. The following surgical complications were seen (8 of which led to additional surgery): 2–5 hip dislocations (n = 8), deep infection (n = 3), peroneal palsy (n = 2), a shoulder prosthesis penetrating the skin (n = 1), and disassembly of an elbow prosthesis (n = 1). The probability of avoiding all kinds of surgery related to the implanted prosthesis was 94% (CI: 89–99) after 1 year and 92% (CI: 85–98) after 2 years. Conclusion Joint replacement operations because of metastatic bone disease do not appear to have given a poorer rate of patient survival than other types of surgical treatment, and the reoperation rate was low. PMID:23530874

  16. Response statistics of rotating shaft with non-linear elastic restoring forces by path integration

    NASA Astrophysics Data System (ADS)

    Gaidai, Oleg; Naess, Arvid; Dimentberg, Michael

    2017-07-01

    Extreme statistics of random vibrations is studied for a Jeffcott rotor under uniaxial white noise excitation. The restoring force is modelled as elastic and non-linear; a comparison is made with a linearized restoring force to see the effect of force non-linearity on the response statistics. While analytical solutions and stability conditions are available for the linear model, this is not generally the case for the non-linear system, except in some special cases. The statistics of the non-linear case are studied by applying the path integration (PI) method, which is based on the Markov property of the coupled dynamic system. The Jeffcott rotor response statistics can be obtained by solving the Fokker-Planck (FP) equation of the 4D dynamic system. An efficient implementation of the PI algorithm is applied; namely, the fast Fourier transform (FFT) is used to simulate the dynamic system's additive noise. The latter significantly reduces computational time compared to the classical PI. Excitation is modelled as Gaussian white noise; however, white noise with any distribution can be implemented with the same PI technique. Multidirectional Markov noise can also be modelled with PI in the same way as unidirectional noise. PI is accelerated by using a Monte Carlo (MC) estimated joint probability density function (PDF) as the initial input. Symmetry of the dynamic system was utilized to afford higher mesh resolution. Both internal (rotating) and external damping are included in the mechanical model of the rotor. The main advantage of using PI rather than MC is that PI offers high accuracy in the probability distribution tail. The latter is of critical importance for, e.g., extreme value statistics, system reliability, and first passage probability.

  17. Fracture network of the Ferron Sandstone Member of the Mancos Shale, east-central Utah, USA

    USGS Publications Warehouse

    Condon, S.M.

    2003-01-01

    The fracture network at the outcrop of the Ferron Sandstone Member of the Mancos Shale was studied to gain an understanding of the tectonic history of the region and to contribute data to studies of gas and water transmissivity related to the occurrence and production of coal-bed methane. About 1900 fracture readings were made at 40 coal outcrops and 62 sandstone outcrops in the area from Willow Springs Wash in the south to Farnham dome in the north of the study area in east-central Utah. Two sets of regional, vertical to nearly vertical, systematic face cleats were identified in Ferron coals. A northwest-striking set trends at a mean azimuth of 321°, and a northeast-striking set has a mean azimuth of 55°. Cleats were observed in all coal outcrops examined and are closely spaced and commonly coated with thin films of iron oxide. Two sets of regional, systematic joint sets in sandstone were also identified and have mean azimuths of 321° and 34°. The joints of each set are planar, long, and extend vertically to nearly vertically through multiple beds; the northeast-striking set is more prevalent than the northwest-striking set. In some places, joints of the northeast-striking set occur in closely spaced clusters, or joint zones, flanked by unjointed rock. Both sets are mineralized with iron oxide and calcite, and the northwest-striking set is commonly tightly cemented, which allowed the northeast-striking set to propagate across it. All cleats and joints of these sets are interpreted as opening-mode (mode I) fractures. Abutting relations indicate that the northwest-striking cleats and joints formed first and were later overprinted by the northeast-striking cleats and joints. Burial curves constructed for the Ferron indicate rapid initial burial after deposition. The Ferron reached a depth of 3000 ft (1000 m) within 5.2 million years (m.y.), and this is considered a minimum depth and time for development of cleats and joints. The Sevier orogeny produced southeast-directed compressional stress at this time and is thought to be the likely mechanism for the northwest-striking systematic cleats and joints. The onset of the Laramide orogeny occurred at about 75 Ma, within 13.7 m.y. of burial, and is thought to be the probable mechanism for development of the northeast-striking systematic cleats and joints. Uplift of the Ferron in the late Tertiary contributed to development of butt cleats and secondary cross-joints and probably enhanced previously formed fracture sets. Using a study of the younger Blackhawk Formation as an analogy, the fracture pattern of the Ferron in the subsurface is probably similar to that at the surface, at least as far west as the Paradise fault and Joe's Valley graben. Farther to the west, on the Wasatch Plateau, the orientations of Ferron fractures may diverge from those measured at the outcrop. © 2003 Elsevier B.V. All rights reserved.

  18. Combining Probability Distributions of Wind Waves and Sea Level Variations to Assess Return Periods of Coastal Floods

    NASA Astrophysics Data System (ADS)

    Leijala, U.; Bjorkqvist, J. V.; Pellikka, H.; Johansson, M. M.; Kahma, K. K.

    2017-12-01

    Predicting the behaviour of the joint effect of sea level and wind waves is of great significance due to the major impact of flooding events in densely populated coastal regions. As mean sea level rises, the effect of sea level variations accompanied by waves will be even more harmful in the future. The main challenge when evaluating the joint effect of waves and sea level variations is that long time series of both variables rarely exist. Wave statistics are also highly location-dependent, thus requiring wave buoy measurements and/or high-resolution wave modelling. As an initial approximation of the joint effect, the variables may be treated as independent random variables to obtain the probability distribution of their sum. We present results of a case study based on three probability distributions: 1) wave run-up constructed from individual wave buoy measurements, 2) short-term sea level variability based on tide gauge data, and 3) mean sea level projections based on up-to-date regional scenarios. The wave measurements were conducted during 2012-2014 on the coast of the city of Helsinki, located in the Gulf of Finland in the Baltic Sea. The short-term sea level distribution contains the last 30 years (1986-2015) of hourly data from the Helsinki tide gauge, and the mean sea level projections are scenarios adjusted for the Gulf of Finland. Additionally, we present a sensitivity test based on six different theoretical wave height distributions representing different wave behaviour in relation to sea level variations. As these wave distributions are merged with one common sea level distribution, we can study how the different shapes of the wave height distribution affect the distribution of the sum, and which of the components dominates under different wave conditions. As an outcome of the method, we obtain a probability distribution of the maximum elevation of the continuous water mass, which provides a flexible tool for evaluating different risk levels in the current and future climate.
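
    Under the independence approximation, the distribution of the sum is a numerical convolution of the marginal densities. A sketch with stand-in marginals (the study itself uses measured run-up and tide gauge distributions):

```python
import numpy as np

def pdf_of_sum(pdf_a, pdf_b, dx):
    """PDF of the sum of two independent variables sampled on a common grid:
    a discrete convolution scaled by the grid step."""
    return np.convolve(pdf_a, pdf_b) * dx

dx = 0.01                                            # grid step [m]
x = np.arange(0.0, 4.0, dx)
sea_level = np.exp(-0.5 * ((x - 1.0) / 0.3) ** 2)    # stand-in short-term sea level
sea_level /= np.trapz(sea_level, dx=dx)              # normalize to a density
run_up = 2.0 * np.exp(-2.0 * x)                      # stand-in wave run-up density

total = pdf_of_sum(sea_level, run_up, dx)
grid = np.arange(total.size) * dx
print("P(total > 3 m) ~", np.trapz(total[grid > 3.0], dx=dx))   # exceedance prob.
```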

  19. Modeling of Dissipation Element Statistics in Turbulent Non-Premixed Jet Flames

    NASA Astrophysics Data System (ADS)

    Denker, Dominik; Attili, Antonio; Boschung, Jonas; Hennig, Fabian; Pitsch, Heinz

    2017-11-01

    The dissipation element (DE) analysis is a method for analyzing and compartmentalizing turbulent scalar fields. DEs can be described by two parameters, namely the Euclidean distance l between their extremal points and the scalar difference Δϕ in the respective points. The joint probability density function (jPDF) of these two parameters, P(Δϕ, l), is expected to suffice for a statistical reconstruction of the scalar field. In addition, reacting scalars show a strong correlation with these DE parameters in both premixed and non-premixed flames. Normalized DE statistics show a remarkable invariance towards changes in Reynolds number. This feature of DE statistics was exploited in a Boltzmann-type evolution equation based model for the probability density function (PDF) of the distance between the extremal points, P(l), in isotropic turbulence. Later, this model was extended to the jPDF P(Δϕ, l) and then adapted for use in free shear flows. The effect of heat release on the scalar scales and DE statistics is investigated, and an extended model for non-premixed jet flames is introduced, which accounts for the presence of chemical reactions. This new model is validated against a series of DNS of temporally evolving jet flames. European Research Council Project "Milestone".

  20. Individual-based model for radiation risk assessment

    NASA Astrophysics Data System (ADS)

    Smirnova, O.

    A mathematical model is developed which enables one to predict the life span probability for mammals exposed to radiation. It relates statistical biometric functions with statistical and dynamic characteristics of an organism's critical system. To calculate the dynamics of the latter, the respective mathematical model is used too. This approach is applied to describe the effects of low level chronic irradiation on mice when the hematopoietic system (namely, thrombocytopoiesis) is the critical one. For identification of the joint model, experimental data on hematopoiesis in nonirradiated and irradiated mice, as well as on mortality dynamics of those in the absence of radiation, are utilized. The life span probability and life span shortening predicted by the model agree with corresponding experimental data. Modeling results show the significance of accounting for the variability of the individual radiosensitivity of critical system cells when estimating the radiation risk. These findings are corroborated by clinical data on persons involved in the elimination of the aftereffects of the Chernobyl catastrophe. All this makes it feasible to use the model for radiation risk assessments for cosmonauts and astronauts on long-term missions such as a voyage to Mars or a lunar colony. In this case the model coefficients have to be determined by making use of the available data for humans.

  1. Bayesian network representing system dynamics in risk analysis of nuclear systems

    NASA Astrophysics Data System (ADS)

    Varuttamaseni, Athi

    2011-12-01

    A dynamic Bayesian network (DBN) model is used in conjunction with the alternating conditional expectation (ACE) regression method to analyze the risk associated with the loss of feedwater accident coupled with a subsequent initiation of the feed and bleed operation in the Zion-1 nuclear power plant. The use of the DBN allows the joint probability distribution to be factorized, enabling the analysis to be done on many simpler network structures rather than on one complicated structure. The construction of the DBN model assumes conditional independence relations among certain key reactor parameters. The choice of parameters to model is based on considerations of the macroscopic balance statements governing the behavior of the reactor under a quasi-static assumption. The DBN is used to relate the peak clad temperature to a set of independent variables that are known to be important in determining the success of the feed and bleed operation. A simple linear relationship is then used to relate the clad temperature to the core damage probability. To obtain a quantitative relationship among different nodes in the DBN, surrogates of the RELAP5 reactor transient analysis code are used. These surrogates are generated by applying the ACE algorithm to output data obtained from about 50 RELAP5 cases covering a wide range of the selected independent variables. These surrogates allow important safety parameters such as the fuel clad temperature to be expressed as a function of key reactor parameters such as the coolant temperature and pressure together with important independent variables such as the scram delay time. The time-dependent core damage probability is calculated by sampling the independent variables from their probability distributions and propagating the information up through the Bayesian network to give the clad temperature. With knowledge of the clad temperature and the assumption that the core damage probability has a one-to-one relationship to it, we have calculated the core damage probability as a function of transient time. The use of the DBN model in combination with ACE allows risk analysis to be performed with much less effort than if the analysis were done using standard techniques.
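
    The sampling-and-propagation loop can be caricatured in a few lines (everything below is invented for illustration: the linear "surrogate" stands in for the ACE-derived RELAP5 surrogates, and the clad-temperature-to-damage map is a placeholder for the paper's linear relationship):

```python
import numpy as np

rng = np.random.default_rng(9)

def clad_temp_surrogate(coolant_temp, pressure, scram_delay):
    """Stand-in for an ACE-derived RELAP5 surrogate (an invented linear form;
    the real surrogate is a nonparametric regression fit to ~50 RELAP5 runs)."""
    return 600.0 + 0.8 * coolant_temp + 0.5 * pressure + 12.0 * scram_delay

# Sample the independent variables from assumed distributions (toy values).
n = 100_000
coolant_temp = rng.normal(290.0, 10.0, n)   # degC
pressure = rng.normal(155.0, 5.0, n)        # bar
scram_delay = rng.exponential(2.0, n)       # s

t_clad = clad_temp_surrogate(coolant_temp, pressure, scram_delay)
# Placeholder linear clad-temperature -> core-damage-probability map, clipped to [0, 1].
p_damage = np.clip((t_clad - 900.0) / 300.0, 0.0, 1.0)
print("mean core damage probability:", p_damage.mean())
```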

  2. Potential Use of a Bayesian Network for Discriminating Flash Type from Future GOES-R Geostationary Lightning Mapper (GLM) data

    NASA Technical Reports Server (NTRS)

    Solakiewiz, Richard; Koshak, William

    2008-01-01

    Continuous monitoring of the ratio of cloud flashes to ground flashes may provide a better understanding of thunderstorm dynamics, intensification, and evolution, and it may be useful in severe weather warning. The National Lightning Detection Network™ (NLDN) senses ground flashes with exceptional detection efficiency and accuracy over most of the continental United States. A proposed Geostationary Lightning Mapper (GLM) aboard the Geostationary Operational Environmental Satellite (GOES-R) will look at the western hemisphere, and among the lightning data products to be made available will be the fundamental optical flash parameters for both cloud and ground flashes: radiance, area, duration, number of optical groups, and number of optical events. Previous studies have demonstrated that the optical flash parameter statistics of ground and cloud lightning, which are observable from space, are significantly different. This study investigates a Bayesian network methodology for discriminating lightning flash type (ground or cloud) using the lightning optical data and ancillary GOES-R data. A Directed Acyclic Graph (DAG) is set up with lightning as a "root" and data observed by GLM as the "leaves." This allows for a direct calculation of the joint probability distribution function for the lightning type and radiance, area, etc. Initially, the conditional probabilities that will be required can be estimated from the Lightning Imaging Sensor (LIS) and the Optical Transient Detector (OTD) together with NLDN data. Directly manipulating the joint distribution will yield the conditional probability that a lightning flash is a ground flash given the evidence, which consists of the observed lightning optical data [and possibly cloud data retrieved from the GOES-R Advanced Baseline Imager (ABI) in a more mature Bayesian network configuration]. Later, actual GLM and NLDN data can be used to refine the estimates of the conditional probabilities used in the model; i.e., the Bayesian network is a learning network. Methods for efficient calculation of the conditional probabilities (e.g., an algorithm using junction trees), finding data conflicts, goodness of fit, and dealing with missing data will also be addressed.
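
    For a single query, a root-to-leaves DAG with conditionally independent leaves reduces to a naive-Bayes posterior computation (a sketch; the class-conditional densities below are invented placeholders for tables that would be estimated from LIS/OTD and NLDN data):

```python
import numpy as np
from scipy.stats import lognorm, poisson

def p_ground(evidence, priors, likelihoods):
    """Posterior P(class = ground | evidence) for a two-class root-to-leaves
    network with conditionally independent leaves."""
    log_post = {c: np.log(p) for c, p in priors.items()}
    for feature, value in evidence.items():
        for c in log_post:
            log_post[c] += np.log(likelihoods[c][feature](value))
    m = max(log_post.values())
    w = {c: np.exp(v - m) for c, v in log_post.items()}   # avoid underflow
    return w["ground"] / sum(w.values())

# Invented class-conditional densities: lognormal radiance, Poisson group counts.
likelihoods = {
    "ground": {"radiance": lognorm(s=0.8, scale=10.0).pdf,
               "n_groups": poisson(12).pmf},
    "cloud":  {"radiance": lognorm(s=0.8, scale=4.0).pdf,
               "n_groups": poisson(5).pmf},
}
priors = {"ground": 0.25, "cloud": 0.75}   # cloud flashes typically dominate
print(p_ground({"radiance": 8.0, "n_groups": 10}, priors, likelihoods))
```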

  3. Seismic velocity structure of the crust and upper mantle beneath the Texas-Gulf of Mexico margin from joint inversion of Ps and Sp receiver functions and surface wave dispersion

    NASA Astrophysics Data System (ADS)

    Agrawal, M.; Pulliam, J.; Sen, M. K.

    2013-12-01

    The seismic structure beneath the Texas Gulf Coast Plain (GCP) is determined via velocity analysis of stacked common conversion point (CCP) Ps and Sp receiver functions and surface wave dispersion. The GCP is a portion of an ocean-continent transition zone, or 'passive margin', where seismic imaging of lithospheric Earth structure via passive seismic techniques has been rare. Seismic data from a temporary array of 22 broadband stations, spaced 16-20 km apart, on a ~380-km-long profile from Matagorda Island, a barrier island in the Gulf of Mexico, to Johnson City, Texas were employed to construct a coherent image of the crust and uppermost mantle. CCP stacking was applied to data from teleseismic earthquakes to enhance the signal-to-noise ratios of converted phases, such as Ps phases. An inaccurate velocity model, used for time-to-depth conversion in CCP stacking, may produce large errors, especially in a region of substantial lateral velocity variations. An accurate velocity model is therefore essential to constructing high quality depth-domain images. To find accurate P- and S-wave velocity models, we applied a joint modeling approach that searches for best-fitting models via simulated annealing. This joint inversion approach, which we call 'multi objective optimization in seismology' (MOOS), simultaneously models Ps receiver functions, Sp receiver functions, and group velocity surface wave dispersion curves after assigning relative weights to each objective function. Weights are computed from the standard deviations of the data. Statistical tools such as the posterior parameter correlation matrix and the posterior probability density (PPD) function are used to evaluate the constraints that each data type places on model parameters. They allow us to identify portions of the model that are well or poorly constrained.

  4. Identifying Changes in the Probability of High Temperature, High Humidity Heat Wave Events

    NASA Astrophysics Data System (ADS)

    Ballard, T.; Diffenbaugh, N. S.

    2016-12-01

    Understanding how heat waves will respond to climate change is critical for adequate planning and adaptation. While temperature is the primary determinant of heat wave severity, humidity has been shown to play a key role in heat wave intensity with direct links to human health and safety. Here we investigate the individual contributions of temperature and specific humidity to extreme heat wave conditions in recent decades. Using global NCEP-DOE Reanalysis II daily data, we identify regional variability in the joint probability distribution of humidity and temperature. We also identify a statistically significant positive trend in humidity over the eastern U.S. during heat wave events, leading to an increased probability of high humidity, high temperature events. The extent to which we can expect this trend to continue under climate change is complicated due to variability between CMIP5 models, in particular among projections of humidity. However, our results support the notion that heat wave dynamics are characterized by more than high temperatures alone, and understanding and quantifying the various components of the heat wave system is crucial for forecasting future impacts.
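
    The joint-probability bookkeeping can be emulated directly. The sketch below builds synthetic daily temperature and humidity series with a humidity trend and tracks the empirical probability of jointly hot and humid days per decade (toy numbers, not reanalysis output):

```python
import numpy as np

rng = np.random.default_rng(5)
# Synthetic daily summer series for one grid cell (stand-in for reanalysis data).
years = np.repeat(np.arange(1979, 2017), 92)                    # 92 summer days/year
t = rng.normal(30.0, 3.0, years.size) + 0.02 * (years - 1979)   # temperature [C]
q = rng.normal(12.0, 2.0, years.size) + 0.15 * (t - 30.0)       # humidity [g/kg]

hot, humid = t > 35.0, q > 15.0
for y0 in (1979, 1989, 1999, 2009):                             # per-decade joint prob.
    sel = (years >= y0) & (years < y0 + 10)
    print(y0, "P(hot & humid) =", float(np.mean(hot[sel] & humid[sel])))
```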

  5. Joint-bounded crescentic scars formed by subglacial clast-bed contact forces: Implications for bedrock failure beneath glaciers

    NASA Astrophysics Data System (ADS)

    Krabbendam, M.; Bradwell, T.; Everest, J. D.; Eyles, N.

    2017-08-01

    Glaciers and ice sheets are important agents of bedrock erosion, yet the precise processes of bedrock failure beneath glacier ice are incompletely known. Subglacially formed erosional crescentic markings (crescentic gouges, lunate fractures) on bedrock surfaces occur locally in glaciated areas and comprise a conchoidal fracture dipping down-ice and a steep fracture that faces up-ice. Here we report morphologically distinct crescentic scars that are closely associated with preexisting joints, termed here joint-bounded crescentic scars. These hitherto unreported features are ca. 50-200 mm deep and involve considerably more rock removal than previously described crescentic markings. The joint-bounded crescentic scars were found on abraded rhyolite surfaces recently exposed (< 20 years) beneath a retreating glacier in Iceland, as well as on glacially sculpted Precambrian gneisses in NW Scotland and various Precambrian rocks in Ontario, glaciated during the Late Pleistocene. We suggest a common formation mechanism for these contemporary and relict features, whereby a boulder embedded in basal ice produces a continuously migrating clast-bed contact force as it is dragged over the hard (bedrock) bed. As the ice-embedded boulder approaches a preexisting joint in the bedrock, stress concentrations build up in the bed that exceed the intact rock strength, resulting in conchoidal fracturing and detachment of a crescentic wedge-shaped rock fragment. Subsequent removal of the rock fragment probably involves further fracturing or crushing (comminution) under high contact forces. Formation of joint-bounded crescentic scars is favoured by large boulders at the base of the ice, high basal melting rates, and the presence of preexisting subvertical joints in the bedrock bed. We infer that the relative scarcity of crescentic markings in general on deglaciated surfaces shows that fracturing of intact bedrock below ice is difficult, but that preexisting weaknesses such as joints greatly facilitate rock failure. This implies that models of glacial erosion need to take fracture patterns of bedrock into account.

  6. Snake River fall Chinook salmon life history investigations, annual report 2008

    USGS Publications Warehouse

    Tiffan, Kenneth F.; Connor, William P.; Bellgraph, Brian J.; Buchanan, Rebecca A.

    2010-01-01

    In 2009, we used radio and acoustic telemetry to evaluate the migratory behavior, survival, mortality, and delay of subyearling fall Chinook salmon in the Clearwater River and Lower Granite Reservoir. We released a total of 1,000 tagged hatchery subyearlings at Cherry Lane on the Clearwater River in mid-August and monitored them as they passed downstream through various river and reservoir reaches. Survival through the free-flowing river was high (>0.85) for both radio- and acoustic-tagged fish, but dropped substantially as fish delayed in the Transition Zone and Confluence areas. Estimates of the joint probability of migration and survival through the Transition Zone and Confluence reaches combined were similar for both radio- and acoustic-tagged fish, and ranged from about 0.30 to 0.35. Estimates of the joint probability of delaying and surviving in the combined Transition Zone and Confluence peaked at the beginning of the study, ranging from 0.323 (SE = NA; radio-telemetry data) to 0.466 (SE = 0.024; acoustic-telemetry data), and then steadily declined throughout the remainder of the study. By the end of October, no live tagged juvenile salmon were detected in either the Transition Zone or the Confluence. As estimates of the probability of delay decreased throughout the study, estimates of the probability of mortality increased, as evidenced by the survival estimate of 0.650 (SE = 0.025) at the end of October (acoustic-telemetry data). Few fish were detected at Lower Granite Dam during our study, and even fewer fish passed the dam before PIT-tag monitoring ended at the end of October. Five acoustic-tagged fish passed Lower Granite Dam in October and 12 passed the dam in November based on detections in the dam tailrace; however, too few detections were available to calculate the joint probabilities of migrating and surviving or delaying and surviving. Estimates of the joint probability of migrating and surviving through the reservoir were less than 0.2 based on acoustic-tagged fish. Migration rates of tagged fish were highest in the free-flowing river (median range = 36 to 43 km/d) but were generally less than 6 km/d in the reservoir reaches. In particular, median migration rates of radio-tagged fish through the Transition Zone and Confluence were 3.4 and 5.2 km/d, respectively. The median migration rate for acoustic-tagged fish through the Transition Zone and Confluence combined was 1 km/d.

  7. A second-order closure analysis of turbulent diffusion flames. [combustion physics

    NASA Technical Reports Server (NTRS)

    Varma, A. K.; Fishburne, E. S.; Beddini, R. A.

    1977-01-01

    A complete second-order closure computer program for the investigation of compressible, turbulent, reacting shear layers was developed. The equations for the means and the second-order correlations were derived from the time-averaged Navier-Stokes equations and contain third-order and higher-order correlations, which have to be modeled in terms of the lower-order correlations to close the system of equations. In addition to the fluid mechanical turbulence models and parameters used in previous studies of a variety of incompressible and compressible shear flows, a number of additional scalar correlations were modeled for chemically reacting flows, and a typical-eddy model was developed for the joint probability density function of all the scalars. The program, which is capable of handling multi-species, multi-step chemical reactions, was used to calculate nonreacting and reacting flows in a hydrogen-air diffusion flame.

  8. Maximum a posteriori joint source/channel coding

    NASA Technical Reports Server (NTRS)

    Sayood, Khalid; Gibson, Jerry D.

    1991-01-01

    A maximum a posteriori probability (MAP) approach to joint source/channel coder design is presented in this paper. The method explores a technique for designing joint source/channel codes, rather than ways of distributing bits between source coders and channel coders. For a nonideal source coder, MAP arguments are used to design a decoder which takes advantage of redundancy in the source coder output to perform error correction. Once the decoder is obtained, it is analyzed with the purpose of obtaining 'desirable properties' of the channel input sequence for improving overall system performance. Finally, an encoder design which incorporates these properties is proposed.
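
    A toy illustration of the MAP idea, under assumed numbers: the source coder output retains Markov redundancy (a bias toward repeated symbols), and a brute-force decoder picks the sequence maximizing P(x | y) ∝ P(y | x)P(x) over a binary symmetric channel. This is a sketch of the principle, not the paper's coder design.

    ```python
    # Toy MAP decoder exploiting residual Markov redundancy in the source
    # coder output; all probabilities are illustrative, not from the paper.
    import itertools
    import numpy as np

    p_flip = 0.1                      # BSC crossover probability
    p_stay = 0.9                      # source redundancy: P(x_t = x_{t-1})

    def log_prior(x):
        lp = np.log(0.5)
        for a, b in zip(x, x[1:]):
            lp += np.log(p_stay if a == b else 1 - p_stay)
        return lp

    def log_lik(y, x):
        return sum(np.log(1 - p_flip if yi == xi else p_flip) for yi, xi in zip(y, x))

    y = (0, 1, 1, 1, 0, 1, 1, 1)      # received (noisy) sequence
    x_map = max(itertools.product((0, 1), repeat=len(y)),
                key=lambda x: log_prior(x) + log_lik(y, x))
    print("MAP estimate:", x_map)     # redundancy pulls isolated flips back
    ```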

  9. Experimental joint weak measurement on a photon pair as a probe of Hardy's paradox.

    PubMed

    Lundeen, J S; Steinberg, A M

    2009-01-16

    It has been proposed that the ability to perform joint weak measurements on postselected systems would allow us to study quantum paradoxes. These measurements can investigate the history of those particles that contribute to the paradoxical outcome. Here we experimentally perform weak measurements of joint (i.e., nonlocal) observables. In an implementation of Hardy's paradox, we weakly measure the locations of two photons, the subject of the conflicting statements behind the paradox. Remarkably, the resulting weak probabilities verify all of these statements but, at the same time, resolve the paradox.

  10. Structural and mechanical properties of cardiolipin lipid bilayers determined using neutron spin echo, small angle neutron and X-ray scattering, and molecular dynamics simulations

    DOE PAGES

    Pan, Jianjun; Cheng, Xiaolin; Sharp, Melissa; ...

    2014-10-29

    We report the detailed structural and mechanical properties of a tetraoleoyl cardiolipin (TOCL) bilayer, determined using neutron spin echo (NSE) spectroscopy, small-angle neutron and X-ray scattering (SANS and SAXS, respectively), and molecular dynamics (MD) simulations. We used MD simulations to develop a scattering density profile (SDP) model, which was then utilized to jointly refine the SANS and SAXS data. In addition to commonly reported lipid bilayer structural parameters, component distributions were obtained, including the volume probability, electron density, and neutron scattering length density.

  11. Double ionization of neon in elliptically polarized femtosecond laser fields

    NASA Astrophysics Data System (ADS)

    Kang, HuiPeng; Henrichs, Kevin; Wang, YanLan; Hao, XiaoLei; Eckart, Sebastian; Kunitski, Maksim; Schöffler, Markus; Jahnke, Till; Liu, XiaoJun; Dörner, Reinhard

    2018-06-01

    We present a joint experimental and theoretical investigation of the correlated electron momentum spectra from strong-field double ionization of neon induced by elliptically polarized laser pulses. A significant asymmetry of the electron momentum distributions along the major polarization axis is reported. This asymmetry depends sensitively on the laser ellipticity. Using a three-dimensional semiclassical model, we attribute this asymmetry pattern to the ellipticity-dependent probability distributions of recollision time. Our work demonstrates that, by simply varying the ellipticity, the correlated electron emission can be two-dimensionally controlled and the recolliding electron trajectories can be steered on a subcycle time scale.

  12. Experimental and Coupled-channels Investigation of the Radiative Properties of the N2 $c'_4\,{}^1\Sigma_u^+ - X\,{}^1\Sigma_g^+$ Band System

    NASA Technical Reports Server (NTRS)

    Liu, Xianming; Shemansky, Donald E.; Malone, Charles P.; Johnson, Paul V.; Ajello, Joseph M.; Kanik, Isik; Heays, Alan N.; Lewis, Brenton R.; Gibson, Stephen T.; Stark, Glenn

    2008-01-01

    The emission properties of the N2 $c'_4\,{}^1\Sigma_u^+ - X\,{}^1\Sigma_g^+$ band system have been investigated in a joint experimental and coupled-channels theoretical study. Relative intensities of the $c'_4\,{}^1\Sigma_u^+(0) - X\,{}^1\Sigma_g^+(v_i)$ transitions, measured via electron-impact-induced emission spectroscopy, are combined with a coupled-channel Schrödinger equation (CSE) model of the N2 molecule, enabling determination of the diabatic electronic transition moment for the $c'_4\,{}^1\Sigma_u^+ - X\,{}^1\Sigma_g^+$ system as a function of internuclear distance. The CSE transition probabilities are further verified by comparison with a high-resolution experimental spectrum. Spontaneous transition probabilities of the $c'_4\,{}^1\Sigma_u^+ - X\,{}^1\Sigma_g^+$ band system, needed for modeling atmospheric emission, can now be calculated reliably.

  13. Hereditary hemochromatosis is characterized by a clinically definable arthropathy that correlates with iron load.

    PubMed

    Carroll, G J; Breidahl, W H; Bulsara, M K; Olynyk, J K

    2011-01-01

    To determine the frequency and character of arthropathy in hereditary hemochromatosis (HH) and to investigate the relationship between this arthropathy, nodal interphalangeal osteoarthritis, and iron load. Participants were recruited from the community by newspaper advertisement and assigned to diagnostic confidence categories for HH (definite/probable or possible/unlikely). Arthropathy was determined by use of a predetermined clinical protocol, radiographs of the hands of all participants, and radiographs of other joints in which clinical criteria were met. An arthropathy considered typical for HH, involving metacarpophalangeal joints 2-5 and bilateral specified large joints, was observed in 10 of 41 patients with definite or probable HH (24%), all of whom were homozygous for the C282Y mutation in the HFE gene, while only 2 of 62 patients with possible/unlikely HH had such an arthropathy (P=0.0024). Arthropathy in definite/probable HH was more common with increasing age and was associated with ferritin concentrations>1,000 μg/liter at the time of diagnosis (odds ratio 14.0 [95% confidence interval 1.30-150.89], P=0.03). A trend toward more episodes requiring phlebotomy was also observed among those with arthropathy, but this was not statistically significant (odds ratio 1.03 [95% confidence interval 0.99-1.06], P=0.097). There was no significant association between arthropathy in definite/probable HH and a history of intensive physical labor (P=0.12). An arthropathy consistent with that commonly attributed to HH was found to occur in 24% of patients with definite/probable HH. The association observed between this arthropathy, homozygosity for C282Y, and serum ferritin concentrations at the time of diagnosis suggests that iron load is likely to be a major determinant of arthropathy in HH and to be more important than occupational factors. Copyright © 2011 by the American College of Rheumatology.

  14. SU-F-T-450: The Investigation of Radiotherapy Quality Assurance and Automatic Treatment Planning Based On the Kernel Density Estimation Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fan, J; Fan, J; Hu, W

    Purpose: To develop a fast automatic algorithm based on two-dimensional kernel density estimation (2D KDE) to predict the dose-volume histogram (DVH), which can be employed for the investigation of radiotherapy quality assurance and automatic treatment planning. Methods: We propose a machine learning method that uses previous treatment plans to predict the DVH. The key to the approach is the framing of the DVH in a probabilistic setting. The training consists of estimating, from the patients in the training set, the joint probability distribution of the dose and the predictive features. The joint distribution provides an estimation of the conditional probability of the dose given the values of the predictive features. For a new patient, the prediction consists of estimating the distribution of the predictive features and marginalizing the conditional probability from the training over this distribution. Integrating the resulting probability distribution for the dose yields an estimation of the DVH. The 2D KDE is implemented to predict the joint probability distribution of the training set and the distribution of the predictive features for the new patient. Two variables, the signed minimal distance from each OAR (organ at risk) voxel to the target boundary and its opening angle with respect to the voxel coordinate origin, are considered as the predictive features representing the OAR-target spatial relationship. The feasibility of our method has been demonstrated with rectum, breast, and head-and-neck cancer cases by comparing the predicted DVHs with the planned ones. Results: Consistent results were found between the two DVHs for each cancer site, and the average relative point-wise difference is about 5%, within a clinically acceptable extent. Conclusion: According to the results of this study, our method can be used to predict clinically acceptable DVHs and has the ability to evaluate the quality and consistency of treatment planning.
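
    A condensed sketch of the pipeline just described, with synthetic training data and a single distance feature standing in for the two features used in the study: estimate the joint density p(dose, feature) with a KDE, condition on the feature, marginalize over the new patient's feature distribution, and integrate to a cumulative DVH.

    ```python
    # 2D-KDE DVH prediction sketch with synthetic data and one assumed feature.
    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(0)
    # Training voxels: distance to target (cm, assumed feature) and dose (Gy).
    dist_train = rng.uniform(0, 5, 5000)
    dose_train = 60 * np.exp(-0.8 * dist_train) + rng.normal(0, 2, 5000)
    joint_kde = gaussian_kde(np.vstack([dose_train, dist_train]))  # p(dose, feature)

    dose_grid = np.linspace(0, 70, 141)
    dx = dose_grid[1] - dose_grid[0]
    dist_new = rng.uniform(0, 5, 2000)        # new patient's feature samples

    # p(dose) = expectation of p(dose | feature) over the feature distribution.
    p_dose = np.zeros_like(dose_grid)
    n_mc = 200
    for dn in rng.choice(dist_new, n_mc):     # Monte Carlo over the features
        cond = joint_kde(np.vstack([dose_grid, np.full_like(dose_grid, dn)]))
        p_dose += cond / (cond.sum() * dx)    # normalize p(dose | feature = dn)
    p_dose /= n_mc

    # Cumulative DVH: fraction of OAR volume receiving at least each dose level.
    dvh = 1 - np.cumsum(p_dose) * dx
    print("V20 (fraction >= 20 Gy):", round(dvh[np.searchsorted(dose_grid, 20)], 3))
    ```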

  15. The Combined Use of Correlative and Mechanistic Species Distribution Models Benefits Low Conservation Status Species.

    PubMed

    Rougier, Thibaud; Lassalle, Géraldine; Drouineau, Hilaire; Dumoulin, Nicolas; Faure, Thierry; Deffuant, Guillaume; Rochard, Eric; Lambert, Patrick

    2015-01-01

    Species can respond to climate change by tracking appropriate environmental conditions in space, resulting in a range shift. Species Distribution Models (SDMs) can help forecast such range shift responses. Both correlative and mechanistic SDMs have been built for only a few species, but allis shad (Alosa alosa), an endangered anadromous fish species, is one of them. The main purpose of this study was to provide a framework for the joint analysis of correlative and mechanistic SDM projections in order to strengthen conservation measures for species of conservation concern. Guidelines for the joint representation and subsequent interpretation of model outputs were defined and applied. The present joint analysis was based on the novel mechanistic model GR3D (Global Repositioning Dynamics of Diadromous fish Distribution), which was parameterized for allis shad and then used to predict its future distribution along the European Atlantic coast under different climate change scenarios (RCP 4.5 and RCP 8.5). We then used a correlative SDM for this species to forecast its distribution across the same geographic area and under the same climate change scenarios. First, projections from the correlative and mechanistic models provided congruent trends in the probability of habitat suitability and population dynamics. This agreement was preferentially interpreted as referring to the species' vulnerability to climate change. Accordingly, climate change could not be listed as a major threat to allis shad. The congruence in predicted range limits between the SDM projections was the next point of interest. The differences, when noticed, required a deeper understanding of the niche modelled by each approach. In this respect, the relative position of the northern range limit between the two methods strongly suggested that a key biological process related to intraspecific variability was potentially lacking in the mechanistic SDM. Based on our knowledge, we hypothesized that local adaptations to cold temperatures deserve more attention not only in modelling, but in conservation planning as well.

  16. Characterization of friction stir welded joint of low nickel austenitic stainless steel and modified ferritic stainless steel

    NASA Astrophysics Data System (ADS)

    Mondal, Mounarik; Das, Hrishikesh; Ahn, Eun Yeong; Hong, Sung Tae; Kim, Moon-Jo; Han, Heung Nam; Pal, Tapan Kumar

    2017-09-01

    Friction stir welding (FSW) of dissimilar stainless steels, a low-nickel austenitic stainless steel and 409M ferritic stainless steel, is experimentally investigated. Process responses during FSW and the microstructures of the resultant dissimilar joints are evaluated. Material flow in the stir zone is investigated in detail by elemental mapping. Elemental mapping of the dissimilar joints clearly indicates that the material flow pattern during FSW depends on the process parameter combination. Dynamic recrystallization and recovery are also observed in the dissimilar joints. Of the two stainless steels selected in the present study, the ferritic stainless steel shows more severe dynamic recrystallization, resulting in a very fine microstructure, probably due to its higher stacking fault energy.

  17. Lithospheric architecture of NE China from joint inversions of receiver functions and surface wave dispersion through Bayesian optimisation

    NASA Astrophysics Data System (ADS)

    Sebastian, Nita; Kim, Seongryong; Tkalčić, Hrvoje; Sippl, Christian

    2017-04-01

    The purpose of this study is to develop an integrated inference on the lithospheric structure of NE China using three passive seismic networks comprising 92 stations. The NE China plain consists of complex lithospheric domains characterised by the co-existence of geodynamic processes such as crustal thinning, active intraplate Cenozoic volcanism, and low-velocity anomalies. To estimate lithospheric structures in greater detail, we chose to perform the joint inversion of independent data sets, namely receiver functions and surface wave dispersion curves (group and phase velocity). We perform a joint inversion based on principles of Bayesian transdimensional optimisation techniques (Kim et al., 2016). Unlike in previous studies of NE China, the complexity of the model is determined from the data in the first stage of the inversion, and the data uncertainty is computed based on Bayesian statistics in the second stage. The computed crustal properties are retrieved from an ensemble of probable models. We obtain major structural inferences with well-constrained absolute velocity estimates, which are vital for inferring properties of the lithosphere and the bulk crustal Vp/Vs ratio. The Vp/Vs estimate obtained from the joint inversions confirms the high Vp/Vs ratio (~1.98) obtained using the H-Kappa method beneath some stations. Moreover, we could confirm the existence of low lower-crustal velocities beneath several stations (e.g., station SHS) within the NE China plain. Based on these findings we attempt to identify a plausible origin for the structural complexity. We compile a high-resolution 3D image of the lithospheric architecture of the NE China plain.

  18. Cultural Consensus Theory: Aggregating Continuous Responses in a Finite Interval

    NASA Astrophysics Data System (ADS)

    Batchelder, William H.; Strashny, Alex; Romney, A. Kimball

    Cultural consensus theory (CCT) consists of cognitive models for aggregating responses of "informants" to test items about some domain of their shared cultural knowledge. This paper develops a CCT model for items requiring bounded numerical responses, e.g. probability estimates, confidence judgments, or similarity judgments. The model assumes that each item generates a latent random representation in each informant, with mean equal to the consensus answer and variance depending jointly on the informant and the location of the consensus answer. The manifest responses may reflect biases of the informants. Markov Chain Monte Carlo (MCMC) methods were used to estimate the model, and simulation studies validated the approach. The model was applied to an existing cross-cultural dataset involving native Japanese and English speakers judging the similarity of emotion terms. The results sharpened earlier studies that showed that both cultures appear to have very similar cognitive representations of emotion terms.
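
    The generative assumptions can be illustrated with a short simulation (this is not the paper's MCMC estimation, and all parameter choices are hypothetical): responses are noisy draws around the latent consensus answer, with variance depending on informant competence and on the location of the answer in the bounded interval, and precision-weighted aggregation recovers the consensus better than a plain mean.

    ```python
    # Illustrative generative sketch of the CCT model; hypothetical parameters.
    import numpy as np

    rng = np.random.default_rng(7)
    n_informants, n_items = 20, 15
    consensus = rng.uniform(0.1, 0.9, n_items)           # latent answers in (0, 1)
    informant_sd = rng.uniform(0.02, 0.15, n_informants) # competence (smaller = better)

    # One simple way to encode location-dependent precision: variance shrinks
    # near the interval boundaries, where answers are easier to pin down.
    scale = np.sqrt(consensus * (1 - consensus))
    responses = np.clip(consensus + rng.normal(size=(n_informants, n_items))
                        * informant_sd[:, None] * scale, 0, 1)

    # Precision-weighted aggregation recovers the consensus better than the mean.
    w = 1 / informant_sd**2
    estimate = (w[:, None] * responses).sum(axis=0) / w.sum()
    print("mean abs error, weighted:", np.abs(estimate - consensus).mean())
    print("mean abs error, plain:   ", np.abs(responses.mean(0) - consensus).mean())
    ```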

  19. Factors related to the joint probability of flooding on paired streams

    USGS Publications Warehouse

    Koltun, G.F.; Sherwood, J.M.

    1998-01-01

    The factors related to the joint probability of flooding on paired streams were investigated and quantified to provide information to aid in the design of hydraulic structures where the joint probability of flooding is an element of the design criteria. Stream pairs were considered to have flooded jointly at the design-year flood threshold (corresponding to the 2-, 10-, 25-, or 50-year instantaneous peak streamflow) if peak streamflows at both streams in the pair were observed or predicted to have equaled or exceeded the threshold on a given calendar day. Daily mean streamflow data were used as a substitute for instantaneous peak streamflow data to determine which flood thresholds were equaled or exceeded on any given day. Instantaneous peak streamflow data, when available, were used preferentially to assess flood-threshold exceedance. Daily mean streamflow data for each stream were paired with concurrent daily mean streamflow data at the other streams. Observed probabilities of joint flooding, determined for the 2-, 10-, 25-, and 50-year flood thresholds, were computed as the ratios of the total number of days when streamflows at both streams concurrently equaled or exceeded their flood thresholds (events) to the total number of days when streamflows at either stream equaled or exceeded its flood threshold (trials). A combination of correlation analyses, graphical analyses, and logistic-regression analyses was used to identify and quantify factors associated with the observed probabilities of joint flooding (event-trial ratios). The analyses indicated that the distance between drainage area centroids, the ratio of the smaller to larger drainage area, the mean drainage area, and the centroid angle adjusted 30 degrees were the basin characteristics most closely associated with the joint probability of flooding on paired streams in Ohio. In general, the analyses indicated that the joint probability of flooding decreases with an increase in centroid distance and increases with increases in drainage area ratio, mean drainage area, and centroid angle adjusted 30 degrees. Logistic-regression equations were developed, which can be used to estimate the probability that streamflows at two streams jointly equal or exceed the 2-year flood threshold given that the streamflow at one of the two streams equals or exceeds the 2-year flood threshold. The logistic-regression equations are applicable to stream pairs in Ohio (and border areas of adjacent states) that are unregulated, free of significant urban influences, and have characteristics similar to those of the 304 gaged stream pairs used in the logistic-regression analyses. Contingency tables were constructed and analyzed to provide information about the bivariate distribution of floods on paired streams. The contingency tables showed that the percentage of trials in which both streams in the pair concurrently flood at identical recurrence-interval ranges generally increased as centroid distances decreased and was greatest for stream pairs with adjusted centroid angles greater than or equal to 60 degrees and drainage area ratios greater than or equal to 0.01. Also, as centroid distance increased, streamflow at one stream in the pair was more likely to be in a less than 2-year recurrence-interval range when streamflow at the second stream was in a 2-year or greater recurrence-interval range.
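
    The logistic-regression step can be sketched as a binomial GLM on events (days of joint flooding) out of trials (days when either stream floods), as below. The stream pairs, predictors, and coefficients are synthetic placeholders chosen only to match the reported directions of effect.

    ```python
    # Event-trial logistic regression sketch; synthetic stand-ins for the
    # 304 Ohio stream pairs. statsmodels' Binomial GLM accepts (events, non-events).
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(3)
    n_pairs = 304
    centroid_dist = rng.uniform(5, 150, n_pairs)        # km
    area_ratio = rng.uniform(0.01, 1.0, n_pairs)        # smaller/larger drainage area

    # Hypothetical true relationship: joint flooding less likely with distance,
    # more likely with area ratio (the directions reported above).
    logit_p = 1.0 - 0.03 * centroid_dist + 1.5 * area_ratio
    p = 1 / (1 + np.exp(-logit_p))
    trials = rng.integers(20, 200, n_pairs)
    events = rng.binomial(trials, p)

    X = sm.add_constant(np.column_stack([centroid_dist, area_ratio]))
    model = sm.GLM(np.column_stack([events, trials - events]), X,
                   family=sm.families.Binomial()).fit()
    print(model.params)   # expect: negative for distance, positive for area ratio
    ```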

  1. Quantum probability assignment limited by relativistic causality.

    PubMed

    Han, Yeong Deok; Choi, Taeseung

    2016-03-14

    Quantum theory has nonlocal correlations, which bothered Einstein, but which were found to satisfy relativistic causality. Correlation for a shared quantum state manifests itself, in the standard quantum framework, through joint probability distributions that can be obtained by applying state reduction and the probability assignment rule known as the Born rule. Quantum correlations, which show nonlocality when the shared state is entangled, can be changed if we apply a different probability assignment rule. As a result, the amount of nonlocality in the quantum correlation will change. The issue is whether changing the rule of quantum probability assignment breaks relativistic causality. We have shown that the Born rule on quantum measurement is derived by requiring the relativistic causality condition. This shows how relativistic causality limits the upper bound of quantum nonlocality through the quantum probability assignment.

  2. Water shortage risk assessment considering large-scale regional transfers: a copula-based uncertainty case study in Lunan, China.

    PubMed

    Gao, Xueping; Liu, Yinzhu; Sun, Bowen

    2018-06-05

    The risk of water shortage caused by uncertainties, such as frequent drought, varied precipitation, multiple water resources, and different water demands, brings new challenges to water transfer projects. Uncertainties exist in both transferred water and local surface water; therefore, the relationship between them should be thoroughly studied to prevent water shortage. For more effective water management, an uncertainty-based water shortage risk assessment model (UWSRAM) is developed to study the combined effect of multiple water resources and to analyze the degree of shortage under uncertainty. The UWSRAM combines copula-based Monte Carlo stochastic simulation with a chance-constrained programming-stochastic multiobjective optimization model, using the Lunan water-receiving area in China as an example. Statistical copula functions are employed to estimate the joint probability of available transferred water and local surface water and to sample from the multivariate probability distribution; these samples are used as inputs for the optimization model. The approach reveals the distribution of water shortage, emphasizes the importance of improving and updating the management of transferred water and local surface water, and examines their combined influence on water shortage risk assessment. The possible available water and shortages can be calculated by applying the UWSRAM, along with the corresponding allocation measures under different water availability levels and violation probabilities. The UWSRAM is valuable for assessing the overall multi-source water availability and degree of water shortage, adapting to the uncertainty surrounding water resources, establishing effective water resource planning policies for managers, and achieving sustainable development.
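
    A minimal sketch of the copula-based Monte Carlo step: draw dependent uniforms from a copula, push them through fitted marginals for the two supplies, and count shortage events against a demand. The Gaussian copula, gamma marginals, and all numbers here are assumptions for illustration; the study's copula and marginals would be fitted to Lunan data.

    ```python
    # Copula-based Monte Carlo sketch for joint water availability.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(11)
    n = 100_000
    rho = 0.4                                    # dependence between the two supplies
    z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
    u = stats.norm.cdf(z)                        # uniform margins (the copula sample)

    transfer = stats.gamma(a=8, scale=12).ppf(u[:, 0])   # hypothetical marginal fits
    local = stats.gamma(a=5, scale=20).ppf(u[:, 1])

    demand = 220.0
    shortage = np.maximum(demand - (transfer + local), 0)
    print("P(shortage) =", np.mean(shortage > 0))
    print("mean shortage given shortage =", shortage[shortage > 0].mean())
    ```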

  3. Uncertainty quantification of reaction mechanisms accounting for correlations introduced by rate rules and fitted Arrhenius parameters

    DOE PAGES

    Prager, Jens; Najm, Habib N.; Sargsyan, Khachik; ...

    2013-02-23

    We study correlations among uncertain Arrhenius rate parameters in a chemical model for hydrocarbon fuel-air combustion. We consider correlations induced by the use of rate rules for modeling reaction rate constants, as well as those resulting from fitting rate expressions to empirical measurements, arriving at a joint probability density for all Arrhenius parameters. We focus on homogeneous ignition in a fuel-air mixture at constant pressure. We also outline a general methodology for this analysis using polynomial chaos and Bayesian inference methods. Finally, we examine the uncertainties in both the Arrhenius parameters and the predicted ignition time, outlining the role of correlations and considering both accuracy and computational efficiency.

  4. Contributory fault and level of personal injury to drivers involved in head-on collisions: Application of copula-based bivariate ordinal models.

    PubMed

    Wali, Behram; Khattak, Asad J; Xu, Jingjing

    2018-01-01

    The main objective of this study is to simultaneously investigate the degree of injury severity sustained by drivers involved in head-on collisions with respect to fault status designation. This is complicated to answer due to many issues, one of which is the potential presence of correlation between the injury outcomes of drivers involved in the same head-on collision. To address this concern, we present seemingly unrelated bivariate ordered response models analyzing the joint injury severity probability distribution of at-fault and not-at-fault drivers. Moreover, the assumption of bivariate normality of residuals and the linear form of stochastic dependence implied by such models may be unduly restrictive. To test this, Archimedean copula structures and normal mixture marginals are integrated into the joint estimation framework, which can characterize complex forms of stochastic dependence and non-normality in the residual terms. The models are estimated using 2013 Virginia police-reported two-vehicle head-on collision data, where exactly one driver is at fault. The results suggest that both at-fault and not-at-fault drivers sustained serious/fatal injuries in 8% of crashes, whereas in 4% of the cases the not-at-fault driver sustained a serious/fatal injury with no injury to the at-fault driver at all. Furthermore, if the at-fault driver is fatigued, apparently asleep, or has been drinking, the not-at-fault driver is more likely to sustain a severe/fatal injury, controlling for other factors and potential correlations between the injury outcomes. While not-at-fault vehicle speed affects the injury severity of the at-fault driver, the effect is smaller than the effect of at-fault vehicle speed on the at-fault injury outcome. Conversely, and importantly, the effect of at-fault vehicle speed on the injury severity of the not-at-fault driver is almost equal to the effect of not-at-fault vehicle speed on the injury outcome of the not-at-fault driver. Compared to traditional ordered probability models, the study provides evidence that copula-based bivariate models can provide more reliable estimates and richer insights. Practical implications of the results are discussed. Published by Elsevier Ltd.

  5. An integrated logit model for contamination event detection in water distribution systems.

    PubMed

    Housh, Mashor; Ostfeld, Avi

    2015-05-15

    The problem of contamination event detection in water distribution systems has become one of the most challenging research topics in water distribution systems analysis. Current attempts at event detection utilize a variety of approaches, including statistical, heuristic, machine learning, and optimization methods. Several existing event detection systems share a common feature in which alarms are obtained separately for each of the water quality indicators. Unifying those single alarms from different indicators is usually performed by means of simple heuristics. A salient feature of the approach developed here is the use of a statistically oriented model for discrete choice prediction, estimated using the maximum likelihood method, to integrate the single alarms. The discrete choice model is jointly calibrated with the other components of the event detection framework on a training data set using genetic algorithms. The process of fusing the individual indicator probabilities, which is left out of focus in many existing event detection models, is confirmed to be a crucial part of the system, and modelling it with a discrete choice model improves performance. The developed methodology is tested on real water quality data, showing improved performance in decreasing the number of false positive alarms and in its ability to detect events with higher probabilities, compared to previous studies. Copyright © 2015 Elsevier Ltd. All rights reserved.
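
    The fusion idea reduces to a discrete choice (logit) model mapping per-indicator anomaly scores to a single event probability, as in this sketch. The indicator names and coefficients are made up; in the paper the model is calibrated jointly with the rest of the system by genetic algorithms.

    ```python
    # Logit fusion of per-indicator anomaly scores into one event probability.
    # Coefficients are illustrative, not calibrated values from the paper.
    import numpy as np

    def event_probability(scores, beta0, beta):
        """Discrete-choice (logit) fusion of indicator anomaly scores."""
        z = beta0 + np.dot(beta, scores)
        return 1 / (1 + np.exp(-z))

    # Scores for, e.g., chlorine residual, turbidity, pH, conductivity anomalies.
    scores = np.array([2.1, 0.4, 1.7, 0.2])
    beta0, beta = -4.0, np.array([1.2, 0.5, 0.9, 0.3])
    p = event_probability(scores, beta0, beta)
    print(f"P(event) = {p:.3f}; alarm = {p > 0.5}")
    ```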

  6. Posterior probability of linkage and maximal lod score.

    PubMed

    Génin, E; Martinez, M; Clerget-Darpoux, F

    1995-01-01

    To detect linkage between a trait and a marker, Morton (1955) proposed calculating the lod score z(θ1) at a given value θ1 of the recombination fraction. If z(θ1) reaches +3, then linkage is concluded. In practice, however, lod scores are calculated for different values of the recombination fraction between 0 and 0.5, and the test is based on the maximum value of the lod score, Zmax. Here we document the impact of this deviation on the probability that linkage does not in fact exist when linkage was concluded. This posterior probability of no linkage can be derived using Bayes' theorem. It is less than 5% when the lod score at a predetermined θ1 is used for the test. But for a Zmax of +3, we showed that it can reach 16.4%. Thus, considering a composite alternative hypothesis instead of a simple one decreases the reliability of the test. The reliability decreases rapidly when Zmax is less than +3: given a Zmax of +2.5, there is a 33% chance that linkage does not exist. Moreover, the posterior probability depends not only on the value of Zmax but also jointly on the family structures and on the genetic model. For a given Zmax, the chance that linkage exists may then vary.
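
    In symbols, writing π for the prior probability of linkage and f0, f1 for the distributions of Zmax under no linkage and under linkage (both of which depend on the family structures and the genetic model), Bayes' theorem gives the posterior probability discussed above:

    ```latex
    P(\text{no linkage} \mid Z_{\max} = z)
      = \frac{(1-\pi)\, f_0(z)}{(1-\pi)\, f_0(z) + \pi\, f_1(z)}
    ```

    The 16.4% figure then corresponds to this ratio evaluated at z = 3 for the situations considered in the paper.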

  7. Solid images generated from UAVs to analyze areas affected by rock falls

    NASA Astrophysics Data System (ADS)

    Giordan, Daniele; Manconi, Andrea; Allasia, Paolo; Baldo, Marco

    2015-04-01

    The study of areas affected by rock falls is usually based on the recognition of the principal joint families and the localization of potentially unstable sectors. This requires the acquisition of field data, even though such areas are often barely accessible and field inspections can be very dangerous. For this reason, remote sensing systems can be considered a suitable alternative. Recently, Unmanned Aerial Vehicles (UAVs) have been proposed as platforms to acquire the necessary information. Indeed, mini UAVs (in particular in the multi-rotor configuration) provide the versatility to acquire, from different points of view, a large number of high-resolution optical images, which can be used to generate high-resolution digital models of the study area. Considering the recent development of powerful user-friendly software and algorithms to process images acquired from UAVs, there is now a need to establish robust methodologies and best-practice guidelines for the correct use of 3D models generated in the context of rock fall scenarios. In this work, we show how multi-rotor UAVs can be used to survey areas affected by rock falls during real emergencies. We present two examples of application located in northwestern Italy: the San Germano rock fall (Piemonte region) and the Moneglia rock fall (Liguria region). We acquired data from both terrestrial LiDAR and UAVs in order to compare digital elevation models generated with different remote sensing approaches. We evaluated the volume of the rock falls, identified the potentially unstable areas, and recognized the main joint families. The use of UAVs for this purpose is not yet widespread, but this approach can probably be considered the best solution for the structural investigation of large rock walls. We propose a methodology that jointly considers the Structure from Motion (SfM) approach for the generation of 3D solid images and a geotechnical analysis for the identification of joint families and potential failure planes.

  8. Automatic lung tumor segmentation on PET/CT images using fuzzy Markov random field model.

    PubMed

    Guo, Yu; Feng, Yuanming; Sun, Jian; Zhang, Ning; Lin, Wang; Sa, Yu; Wang, Ping

    2014-01-01

    The combination of positron emission tomography (PET) and CT images provides complementary functional and anatomical information about human tissues, and it has been used for better tumor volume definition in lung cancer. This paper proposes a robust method for automatic lung tumor segmentation on PET/CT images. The new method is based on a fuzzy Markov random field (MRF) model. The combination of PET and CT image information is achieved by using a proper joint posterior probability distribution of the observed features in the fuzzy MRF model, which performs better than the commonly used Gaussian joint distribution. In this study, the PET and CT simulation images of 7 non-small cell lung cancer (NSCLC) patients were used to evaluate the proposed method. Tumor segmentations with the proposed method and manual segmentation by an experienced radiation oncologist on the fused images were performed, respectively. Segmentation results obtained with the two methods were similar, and the Dice similarity coefficient (DSC) was 0.85 ± 0.013. It has been shown that effective and automatic segmentation can be achieved with this method for lung tumors that are located near other organs with similar intensities in PET and CT images, such as when tumors extend into the chest wall or mediastinum.

  9. 3D joint inversion modeling of the lithospheric density structure based on gravity, geoid and topography data — Application to the Alborz Mountains (Iran) and South Caspian Basin region

    NASA Astrophysics Data System (ADS)

    Motavalli-Anbaran, Seyed-Hani; Zeyen, Hermann; Ebrahimzadeh Ardestani, Vahid

    2013-02-01

    We present a 3D algorithm to obtain the density structure of the lithosphere from joint inversion of free air gravity, geoid and topography data based on a Bayesian approach with Gaussian probability density functions. The algorithm delivers the crustal and lithospheric thicknesses and the average crustal density. Stabilization of the inversion process may be obtained through parameter damping and smoothing as well as use of a priori information like crustal thicknesses from seismic profiles. The algorithm is applied to synthetic models in order to demonstrate its usefulness. A real data application is presented for the area of northern Iran (with the Alborz Mountains as main target) and the South Caspian Basin. The resulting model shows an important crustal root (up to 55 km) under the Alborz Mountains and a thin crust (ca. 30 km) under the southernmost South Caspian Basin thickening northward to the Apsheron-Balkan Sill to 45 km. Central and NW Iran is underlain by a thin lithosphere (ca. 90-100 km). The lithosphere thickens under the South Caspian Basin until the Apsheron-Balkan Sill where it reaches more than 240 km. Under the stable Turan platform, we find a lithospheric thickness of 160-180 km.

  10. Three-Dimensional Structural and Hydrologic Evolution of Sant Corneli Anticline, a Fault-Cored Fold in the Central Spanish Pyrenees

    NASA Astrophysics Data System (ADS)

    Shackleton, J. R.; Cooke, M. L.

    2005-12-01

    The Sant Corneli Anticline is a well-exposed example of a fault-cored fold whose hydrologic evolution and structural development are directly linked. The E-W striking anticline is ~5 km wide with an abrupt westerly plunge, and formed in response to thrusting associated with the upper Cretaceous to Miocene collision of Iberia with Europe. The fold's core of fractured carbonates contains a variety of west-dipping normal faults with meter- to decameter-scale displacement and abundant calcite fill. This carbonate unit is capped by a marl unit with low-angle, calcite-filled normal faults. The marl unit is overlain by clastic syn-tectonic strata whose sedimentary architecture records limb rotation during the evolution of the fold. The syn-tectonic strata contain a variety of joint sets that record the stresses before, during, and possibly after fold growth. Faulting in the marl and calcite-filled joints in the syn-tectonic strata suggest that normal faults within the carbonate core of the fold eventually breached the overlying marl unit. This breach may have connected the joints of the syn-tectonic strata to the underlying carbonate reservoir and eliminated previous compartmentalization of fluids. Furthermore, breaching of the marl unit probably enhanced joint formation in the overlying syn-tectonic strata. Future geochemical studies of calcite compositions in the three units will address this hypothesis. Preliminary mapping of joint sets in the syn-tectonic strata reveals a multistage history of jointing. Early bed-perpendicular joints healed by calcite strike NE-SW, parallel to normal faults in the underlying carbonates, and may be related to an early regional extensional event. Younger healed bed-perpendicular joints crosscut the NE-SW striking set and are closer to N-S in strike; these joints are interpreted to represent the initial stages of folding. Decameter-scale, bed-perpendicular, unfilled fractures that are sub-parallel to strike probably represent small joints and faults that formed in response to outer-arc extension during folding. Many filled, late-stage joints strike sub-parallel to, and increase in frequency near, normal faults and transverse structures observed in the carbonate fold core. This suggests that faulting in the underlying carbonates and marls significantly affected the joint patterns in the syn-tectonic strata. Preliminary three-dimensional finite element restorations using Dynel have allowed us to test our hypotheses and constrain the timing of jointing and marl breach.

  11. Comparison of Bootstrapping and Markov Chain Monte Carlo for Copula Analysis of Hydrological Droughts

    NASA Astrophysics Data System (ADS)

    Yang, P.; Ng, T. L.; Yang, W.

    2015-12-01

    Effective water resources management depends on reliable estimation of the uncertainty of drought events. Confidence intervals (CIs) are commonly applied to quantify this uncertainty. A CI seeks the minimal length necessary to cover the true value of the estimated variable with the desired probability. In drought analysis, where two or more variables (e.g., duration and severity) are often used to describe a drought, copulas have been found suitable for representing the joint probability behavior of these variables. However, comprehensive assessment of the parameter uncertainties of drought copulas has been largely ignored, and the few studies that have recognized this issue have not explicitly compared the various methods for producing the best CIs. Thus, the objective of this study is to compare the CIs generated using two widely applied uncertainty estimation methods, bootstrapping and Markov Chain Monte Carlo (MCMC). To achieve this objective, (1) the marginal distributions lognormal, Gamma, and Generalized Extreme Value, and the copula functions Clayton, Frank, and Plackett, are selected to construct joint probability functions of two drought-related variables; (2) the resulting joint functions are then fitted to 200 sets of simulated realizations of drought events with known distribution and extreme parameters; and (3) from there, using bootstrapping and MCMC, CIs of the parameters are generated and compared. The effect of an informative prior on the CIs generated by MCMC is also evaluated. CIs are produced for different sample sizes (50, 100, and 200) of the simulated drought events used for fitting the joint probability functions. Preliminary results assuming lognormal marginal distributions and the Clayton copula function suggest that for cases with small or medium sample sizes (~50-100), MCMC is the superior method if an informative prior exists. Where an informative prior is unavailable, for small sample sizes (~50), bootstrapping and MCMC yield the same level of performance, and for medium sample sizes (~100), bootstrapping is better. For cases with a large sample size (~200), there is little difference between the CIs generated using bootstrapping and MCMC, regardless of whether an informative prior exists.
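
    The bootstrapping side of the comparison can be sketched compactly: estimate the Clayton parameter by inverting Kendall's tau (θ = 2τ/(1-τ)) and resample the pairs to get a percentile CI. The sampler, sample size, and true θ below are illustrative, not the study's simulation design.

    ```python
    # Bootstrap percentile CI for a Clayton copula parameter.
    import numpy as np
    from scipy.stats import kendalltau

    rng = np.random.default_rng(5)

    def sample_clayton(theta, n):
        """Conditional-inversion sampler for the bivariate Clayton copula."""
        u = rng.uniform(size=n)
        v = rng.uniform(size=n)
        u2 = (u**(-theta) * (v**(-theta / (1 + theta)) - 1) + 1) ** (-1 / theta)
        return u, u2

    def theta_hat(x, y):
        tau, _ = kendalltau(x, y)          # invert Kendall's tau for Clayton
        return 2 * tau / (1 - tau)

    x, y = sample_clayton(theta=2.0, n=100)  # "medium" sample size case
    boot = np.array([theta_hat(x[idx], y[idx])
                     for idx in rng.integers(0, len(x), (2000, len(x)))])
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"theta_hat = {theta_hat(x, y):.2f}, 95% bootstrap CI = ({lo:.2f}, {hi:.2f})")
    ```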

  12. Malignant Peritoneal Mesothelioma: Prognostic Factors and Oncologic Outcome Analysis

    PubMed Central

    Magge, Deepa; Zenati, Mazen S.; Austin, Frances; Mavanur, Arun; Sathaiah, Magesh; Ramalingam, Lekshmi; Jones, Heather; Zureikat, Amer H.; Holtzman, Matthew; Ahrendt, Steven; Pingpank, James; Zeh, Herbert J.; Bartlett, David L.; Choudry, Haroon A.

    2014-01-01

    Background Most patients with malignant peritoneal mesothelioma (MPM) present with late-stage, unresectable disease that responds poorly to systemic chemotherapy while, at the same time, effective targeted therapies are lacking. We assessed the efficacy of cytoreductive surgery (CRS) and hyperthermic intraperitoneal chemoperfusion (HIPEC) in MPM. Methods We prospectively analyzed 65 patients with MPM undergoing CRS/HIPEC between 2001 and 2010. Kaplan–Meier survival curves and multivariate Cox-regression models identified prognostic factors affecting oncologic outcomes. Results Adequate CRS was achieved in 56 patients (CC-0 = 35; CC-1 = 21), and median simplified peritoneal cancer index (SPCI) was 12. Pathologic assessment revealed predominantly epithelioid histology (81 %) and biphasic histology (8 %), while lymph node involvement was uncommon (8 %). Major postoperative morbidity (grade III/IV) occurred in 23 patients (35 %), and 60-day mortality rate was 6 %. With median follow-up of 37 months, median overall survival was 46.2 months, with 1-, 2-, and 5-year overall survival probability of 77, 57, and 39 %, respectively. Median progression-free survival was 13.9 months, with 1-, 2-, and 5-year disease failure probability of 47, 68, and 83 %, respectively. In a multivariate Cox-regression model, age at surgery, SPCI >15, incomplete cytoreduction (CC-2/3), aggressive histology (epithelioid, biphasic), and postoperative sepsis were joint significant predictors of poor survival (chi-square = 42.8; p = 0.00001), while age at surgery, SPCI >15, incomplete cytoreduction (CC-2/3), and aggressive histology (epithelioid, biphasic) were joint significant predictors of disease progression (chi-square = 30.6; p = 0.00001). Conclusions Tumor histology, disease burden, and the ability to achieve adequate surgical cytoreduction are essential prognostic factors in MPM patients undergoing CRS/HIPEC. PMID:24322529

  13. Integrated drought risk assessment of multi-hazard-affected bodies based on copulas in the Taoerhe Basin, China

    NASA Astrophysics Data System (ADS)

    Wang, Rui; Zhang, Jiquan; Guo, Enliang; Alu, Si; Li, Danjun; Ha, Si; Dong, Zhenhua

    2018-02-01

    Along with global warming, drought disasters are occurring more frequently and are seriously affecting normal life and food security in China. Drought risk assessments are necessary to provide support for local governments. This study aimed to establish an integrated drought risk model based on the relation curve between drought joint probabilities and the drought losses of multi-hazard-affected bodies. First, drought characteristics, including duration and severity, were classified using the 1953-2010 precipitation anomaly in the Taoerhe Basin based on run theory, and their marginal distributions were identified as exponential and Gamma distributions, respectively. Then, drought duration and severity were related to construct a joint probability distribution based on a copula function. We used the EPIC (Environmental Policy Integrated Climate) model to simulate maize yield, and historical data to calculate the loss rates of agriculture, industry, and animal husbandry in the study area. Next, we constructed vulnerability curves. Finally, the spatial distributions of drought risk for 10-, 20-, and 50-year return periods were mapped using inverse distance weighting. Our results indicate that the spatial distributions for the three return periods are consistent. The highest drought risk is in Ulanhot, where both duration and severity were highest. This means that higher drought risk corresponds to longer drought duration and larger drought severity, providing useful information for drought and water resource management. For the 10-, 20-, and 50-year return periods, the drought risk values ranged from 0.41 to 0.53, 0.45 to 0.59, and 0.50 to 0.67, respectively. Therefore, as the return period increases, the drought risk increases.
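
    The run-theory step can be sketched as follows: drought events are runs of the anomaly series below a threshold, with duration the run length and severity the accumulated deficit. The synthetic series and threshold are placeholders; the resulting duration-severity pairs would then be fitted with the exponential and Gamma marginals and coupled through a copula as described above.

    ```python
    # Run-theory drought identification on a synthetic anomaly series.
    import numpy as np

    rng = np.random.default_rng(8)
    anomaly = rng.normal(0, 1, 696)          # monthly precipitation anomaly
    threshold = -0.5

    events = []
    run_len, deficit = 0, 0.0
    for a in anomaly:
        if a < threshold:                     # in drought
            run_len += 1
            deficit += threshold - a
        elif run_len > 0:                     # drought just ended
            events.append((run_len, deficit)) # (duration, severity)
            run_len, deficit = 0, 0.0
    if run_len > 0:
        events.append((run_len, deficit))

    durations, severities = np.array(events).T
    print(f"{len(events)} droughts; mean duration {durations.mean():.1f} months, "
          f"mean severity {severities.mean():.2f}")
    ```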

  14. A joint latent class model for classifying severely hemorrhaging trauma patients.

    PubMed

    Rahbar, Mohammad H; Ning, Jing; Choi, Sangbum; Piao, Jin; Hong, Chuan; Huang, Hanwen; Del Junco, Deborah J; Fox, Erin E; Rahbar, Elaheh; Holcomb, John B

    2015-10-24

    In trauma research, "massive transfusion" (MT), historically defined as receiving ≥10 units of red blood cells (RBCs) within 24 h of admission, has routinely been used as a "gold standard" for quantifying bleeding severity. Due to early in-hospital mortality, however, MT is subject to survivor bias and is thus a poorly defined criterion for classifying bleeding trauma patients. Using data from a retrospective trauma transfusion study, we applied a latent-class (LC) mixture model to identify severely hemorrhaging (SH) patients. Based on the joint distribution of cumulative units of RBCs and the binary survival outcome at 24 h of admission, we applied an expectation-maximization (EM) algorithm to obtain model parameters. Estimated posterior probabilities were used for patient classification and compared with the MT rule. To evaluate the predictive performance of the LC-based classification, we examined the role of six clinical variables as predictors using two separate logistic regression models. Of 471 trauma patients, 211 (45%) were MT, while our latent SH classifier identified only 127 (27%) of patients as SH. The agreement between the two classification methods was 73%. A non-negligible portion of patients (17 of 68, 25%) who died within 24 h were not classified as MT, whereas the SH group included 62 patients (91%) who died during the same period. Our comparison of the predictive models based on MT and SH revealed significant differences between the coefficients of potential predictors of patients who may be in need of activation of the massive transfusion protocol. The traditional MT classification does not adequately reflect transfusion practices and outcomes during the trauma reception and initial resuscitation phase. Although we have demonstrated that joint latent class modeling could be used to correct for potential bias caused by misclassification of severely bleeding patients, this approach could be improved in the presence of time-to-event data from prospective studies.
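
    An illustrative two-class EM in the spirit of the joint latent-class model: cumulative RBC units are treated as Poisson and 24-h survival as Bernoulli, conditionally independent given the latent class. The data are simulated, and the distributional choices are assumptions of this sketch rather than necessarily those of the paper.

    ```python
    # Two-class EM sketch on joint (RBC units, 24-h survival) data.
    import numpy as np
    from scipy.stats import poisson

    rng = np.random.default_rng(2)
    n = 471
    z = rng.uniform(size=n) < 0.27                      # true latent SH indicator
    rbc = np.where(z, rng.poisson(14, n), rng.poisson(4, n))
    alive = np.where(z, rng.uniform(size=n) < 0.6, rng.uniform(size=n) < 0.97)

    # EM for mixture weight, Poisson means, and survival probabilities.
    pi, lam, surv = 0.5, np.array([3.0, 10.0]), np.array([0.9, 0.7])
    for _ in range(200):
        # E-step: posterior probability of the SH class (index 1) per patient.
        lik = np.stack([poisson.pmf(rbc, lam[k])
                        * np.where(alive, surv[k], 1 - surv[k]) for k in (0, 1)])
        w = np.array([1 - pi, pi])[:, None] * lik
        resp = w[1] / w.sum(axis=0)
        # M-step: responsibility-weighted parameter updates.
        pi = resp.mean()
        for k, r in enumerate([1 - resp, resp]):
            lam[k] = (r * rbc).sum() / r.sum()
            surv[k] = (r * alive).sum() / r.sum()

    print(f"estimated SH fraction: {pi:.2f}; Poisson means: {lam.round(1)}")
    ```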

  15. [Conditional probability analysis between tinnitus and comorbidities in patients attending the National Rehabilitation Institute-LGII in the period 2012-2013].

    PubMed

    Gómez Toledo, Verónica; Gutiérrez Farfán, Ileana; Verduzco-Mendoza, Antonio; Arch-Tirado, Emilio

    Tinnitus is defined as the conscious perception of a sound sensation that occurs in the absence of an external stimulus. This audiological symptom affects 7% to 19% of the adult population. The aim of this study is to describe the comorbidities present in patients with tinnitus using joint and conditional probability analysis. Patients of both genders, diagnosed with unilateral or bilateral tinnitus, aged between 20 and 45 years, with a full computerised medical record, were selected. Study groups were formed on the basis of the following clinical aspects: 1) audiological findings; 2) vestibular findings; 3) comorbidities such as temporomandibular dysfunction, tubal dysfunction, and otosclerosis; and 4) triggering factors of tinnitus: noise exposure, respiratory tract infection, and use of ototoxic and/or other drugs. Of the 42 patients with tinnitus, 27 (65%) reported hearing loss, 11 (26.19%) temporomandibular dysfunction, and 11 (26.19%) vestibular disorders. The joint probability analysis found that the probability of a patient with tinnitus having hearing loss was 27/42 = 0.65, and 20/42 = 0.47 for the bilateral type; the result for P(A ∩ B) was 30%. Bayes' theorem, P(Ai|B) = P(Ai ∩ B)/P(B), was used to calculate various probabilities. For patients with temporomandibular dysfunction and vestibular disorders, a posterior probability of P(Ai|B) = 31.44% was calculated. Consideration should be given to the joint and conditional probability approach as a tool for the study of different pathologies. Copyright © 2016 Academia Mexicana de Cirugía A.C. Publicado por Masson Doyma México S.A. All rights reserved.
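
    The reported joint and conditional computations can be reproduced from the counts. In the sketch below, the hearing-loss figures use the paper's counts, while the overlap count feeding the Bayes step is hypothetical, for illustration only.

    ```python
    # Joint and conditional probabilities from patient counts.
    n_tinnitus = 42
    n_hearing_loss = 27          # P(hearing loss | tinnitus) = 27/42 ~ 0.65
    n_bilateral = 20             # P(bilateral loss | tinnitus) = 20/42 ~ 0.47

    p_hl = n_hearing_loss / n_tinnitus
    p_bi = n_bilateral / n_tinnitus
    print(f"P(hearing loss) = {p_hl:.2f}, P(bilateral) = {p_bi:.2f}")

    # Bayes' theorem P(A_i | B) = P(A_i and B) / P(B), with a hypothetical overlap:
    n_tmd = 11                   # temporomandibular dysfunction
    n_tmd_and_vestibular = 4     # assumed joint count, for illustration only
    p_cond = (n_tmd_and_vestibular / n_tinnitus) / (n_tmd / n_tinnitus)
    print(f"P(vestibular | TMD) = {p_cond:.2%}")
    ```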

  16. An ensemble-based dynamic Bayesian averaging approach for discharge simulations using multiple global precipitation products and hydrological models

    NASA Astrophysics Data System (ADS)

    Qi, Wei; Liu, Junguo; Yang, Hong; Sweetapple, Chris

    2018-03-01

    Global precipitation products are very important datasets for flow simulations, especially in poorly gauged regions. Uncertainties resulting from precipitation products, hydrological models, and their combinations vary with time and data magnitude, and undermine their application to flow simulations. However, previous studies have not quantified these uncertainties individually and explicitly. This study developed an ensemble-based dynamic Bayesian averaging approach (e-Bay) for deterministic discharge simulations using multiple global precipitation products and hydrological models. In this approach, the joint probability of precipitation products and hydrological models being correct is quantified based on uncertainties in maximum and mean estimation, the posterior probability is quantified as a function of the magnitude and timing of discharges, and the law of total probability is implemented to calculate expected discharges. Six global fine-resolution precipitation products and two hydrological models of different complexities are included in an illustrative application. e-Bay can effectively quantify uncertainties and therefore generates better deterministic discharges than traditional approaches (weighted average methods with equal and varying weights and a maximum likelihood approach). The mean Nash-Sutcliffe Efficiency values of e-Bay are up to 0.97 and 0.85 in the training and validation periods respectively, at least 0.06 and 0.13 higher than the traditional approaches. In addition, with increased training data, the assessment criteria of e-Bay fluctuate less than those of the traditional approaches and its performance remains consistently strong. The proposed e-Bay approach bridges the gap between global precipitation products and their pragmatic application to discharge simulations, and is beneficial to water resources management in ungauged or poorly gauged regions across the world.
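
    The e-Bay weighting scheme itself is not reproduced in this record; a minimal sketch of the final law-of-total-probability step, assuming posterior weights for each precipitation-product/model combination are already available, might be:

      import numpy as np

      def expected_discharge(q_k, w_k):
          """Law of total probability: E[Q_t] = sum_k Q_{k,t} * P(combination k | data).
          q_k: (K, T) discharges simulated by K product/model combinations
          w_k: (K,) static or (K, T) time-varying posterior probabilities."""
          q = np.asarray(q_k, dtype=float)
          w = np.asarray(w_k, dtype=float)
          if w.ndim == 1:
              w = w[:, None]            # broadcast static weights over time
          w = w / w.sum(axis=0)         # normalise over combinations, defensively
          return (w * q).sum(axis=0)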

  17. Incorporating sequence information into the scoring function: a hidden Markov model for improved peptide identification.

    PubMed

    Khatun, Jainab; Hamlett, Eric; Giddings, Morgan C

    2008-03-01

    The identification of peptides by tandem mass spectrometry (MS/MS) is a central method of proteomics research, but due to the complexity of MS/MS data and the large databases searched, the accuracy of peptide identification algorithms remains limited. To improve the accuracy of identification we applied a machine-learning approach using a hidden Markov model (HMM) to capture the complex and often subtle links between a peptide sequence and its MS/MS spectrum. Our model, HMM_Score, represents ion types as HMM states and calculates the maximum joint probability for a peptide/spectrum pair using emission probabilities from three factors: the amino acids adjacent to each fragmentation site, the mass dependence of ion types and the intensity dependence of ion types. The Viterbi algorithm is used to calculate the most probable assignment between ion types in a spectrum and a peptide sequence, then a correction factor is added to account for the propensity of the model to favor longer peptides. An expectation value is calculated based on the model score to assess the significance of each peptide/spectrum match. We trained and tested HMM_Score on three data sets generated by two different mass spectrometer types. For a reference data set recently reported in the literature and validated using seven identification algorithms, HMM_Score produced 43% more positive identification results at a 1% false positive rate than the best of two other commonly used algorithms, Mascot and X!Tandem. HMM_Score is a highly accurate platform for peptide identification that works well for a variety of mass spectrometer and biological sample types. The program is freely available on ProteomeCommons via an OpenSource license. See http://bioinfo.unc.edu/downloads/ for the download link.
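
    HMM_Score's ion-type states and three-factor emission model are specific to the paper; a generic log-space Viterbi decoder of the kind it builds on (all names are illustrative) might look like:

      import numpy as np

      def viterbi(obs, log_init, log_trans, log_emit):
          """Most probable state path for an HMM, computed in log space.
          obs: observation indices (T,); log_init: (S,);
          log_trans: (S, S) from->to; log_emit: (S, O)."""
          T, S = len(obs), len(log_init)
          delta = log_init + log_emit[:, obs[0]]
          back = np.zeros((T, S), dtype=int)
          for t in range(1, T):
              scores = delta[:, None] + log_trans       # (from, to)
              back[t] = scores.argmax(axis=0)           # best predecessor
              delta = scores.max(axis=0) + log_emit[:, obs[t]]
          path = [int(delta.argmax())]
          for t in range(T - 1, 0, -1):                 # trace back
              path.append(int(back[t, path[-1]]))
          return path[::-1], float(delta.max())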

  18. Multi-variate joint PDF for non-Gaussianities: exact formulation and generic approximations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Verde, Licia; Jimenez, Raul; Alvarez-Gaume, Luis

    2013-06-01

    We provide an exact expression for the multi-variate joint probability distribution function of non-Gaussian fields primordially arising from local transformations of a Gaussian field. This kind of non-Gaussianity is generated in many models of inflation. We apply our expression to the non-Gaussianity estimation from Cosmic Microwave Background maps and the halo mass function where we obtain analytical expressions. We also provide analytic approximations and their range of validity. For the Cosmic Microwave Background we give a fast way to compute the PDF which is valid up to more than 7σ for f_NL values (both true and sampled) not ruled out by current observations, which consists of expressing the PDF as a combination of bispectrum and trispectrum of the temperature maps. The resulting expression is valid for any kind of non-Gaussianity and is not limited to the local type. The above results may serve as the basis for a fully Bayesian analysis of the non-Gaussianity parameter.
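
    For reference, the local transformation referred to is conventionally written (standard inflation-literature notation, not quoted from this record) as

        \Phi(\mathbf{x}) = \phi(\mathbf{x}) + f_{\mathrm{NL}} \left( \phi^2(\mathbf{x}) - \langle \phi^2 \rangle \right),

    where \phi is a Gaussian field and f_NL sets the amplitude of the quadratic (local) non-Gaussian correction.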

  19. Feature Screening in Ultrahigh Dimensional Cox's Model.

    PubMed

    Yang, Guangren; Yu, Ye; Li, Runze; Buu, Anne

    Survival data with ultrahigh dimensional covariates, such as genetic markers, have been collected in medical studies and other fields. In this work, we propose a feature screening procedure for the Cox model with ultrahigh dimensional covariates. The proposed procedure is distinguished from the existing sure independence screening (SIS) procedures (Fan, Feng, and Wu, 2010; Zhao and Li, 2012) in that it is based on the joint likelihood of potential active predictors, and therefore is not a marginal screening procedure. The proposed procedure can effectively identify active predictors that are jointly dependent but marginally independent of the response, without performing an iterative procedure. We develop a computationally effective algorithm to carry out the proposed procedure and establish its ascent property. We further prove that the proposed procedure possesses the sure screening property: with probability tending to one, the selected variable set includes the actual active predictors. We conduct Monte Carlo simulations to evaluate the finite sample performance of the proposed procedure and to compare it with existing SIS procedures. The methodology is also demonstrated through an empirical analysis of a real data example.
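
    As background (standard notation, not taken from this record), the Cox proportional hazards model and the partial likelihood on which such joint screening operates are

        \lambda(t \mid x_i) = \lambda_0(t) \exp(x_i^{\top} \beta), \qquad
        L(\beta) = \prod_{i : \delta_i = 1} \frac{\exp(x_i^{\top} \beta)}{\sum_{j \in R(t_i)} \exp(x_j^{\top} \beta)},

    where \delta_i indicates an observed event and R(t_i) is the risk set at time t_i; joint screening ranks predictors by their contribution to this joint likelihood rather than by marginal fits.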

  20. Joint Clustering and Component Analysis of Correspondenceless Point Sets: Application to Cardiac Statistical Modeling.

    PubMed

    Gooya, Ali; Lekadir, Karim; Alba, Xenia; Swift, Andrew J; Wild, Jim M; Frangi, Alejandro F

    2015-01-01

    Construction of Statistical Shape Models (SSMs) from arbitrary point sets is a challenging problem due to significant shape variation and the lack of explicit point correspondence across the training data set. In medical imaging, point sets can generally represent different shape classes that span healthy and pathological exemplars. In such cases, the constructed SSM may not generalize well, largely because the probability density function (pdf) of the point sets deviates from the underlying assumption of Gaussian statistics. To this end, we propose a generative model for unsupervised learning of the pdf of point sets as a mixture of distinctive classes. A Variational Bayesian (VB) method is proposed for making joint inferences on the labels of point sets and the principal modes of variation in each cluster. The method provides a flexible framework to handle point sets with no explicit point-to-point correspondences. We also show that, by maximizing the marginalized likelihood of the model, the optimal number of clusters of point sets can be determined. We illustrate this work in the context of understanding the anatomical phenotype of the left and right ventricles of the heart. To this end, we use a database containing hearts of healthy subjects, patients with Pulmonary Hypertension (PH), and patients with Hypertrophic Cardiomyopathy (HCM). We demonstrate that our method can outperform traditional PCA in both generalization and specificity measures.
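
    Schematically, the generative density underlying such a model can be written as a mixture whose components carry their own principal modes (the paper's exact parameterisation may differ; this is the standard mixture-of-probabilistic-PCA form):

        p(x) = \sum_{k=1}^{K} \pi_k \, \mathcal{N}\!\left(x \mid \mu_k, \; W_k W_k^{\top} + \sigma_k^2 I\right),

    where \pi_k are mixing weights and the columns of W_k span the principal modes of variation of cluster k.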

  1. Optimal Joint Remote State Preparation of Arbitrary Equatorial Multi-qudit States

    NASA Astrophysics Data System (ADS)

    Cai, Tao; Jiang, Min

    2017-03-01

    As an important communication technology, quantum information transmission plays an important role in future network communication. It involves two kinds of transmission: quantum teleportation and remote state preparation. In this paper, we put forward a new scheme for optimal joint remote state preparation (JRSP) of an arbitrary equatorial two-qudit state with hybrid dimensions. The receiver can reconstruct the target state deterministically, with 100% success probability, via two spatially separated senders. We then extend the scheme, using the same strategy, to joint remote preparation of arbitrary equatorial multi-qudit states with hybrid dimensions.
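
    For reference, an equatorial qudit state (standard definition, not quoted from this record) has uniform amplitudes, with all information carried in the phases:

        |\psi\rangle = \frac{1}{\sqrt{d}} \sum_{j=0}^{d-1} e^{i \theta_j} |j\rangle,

    which is why the senders can jointly encode the phase set \{\theta_j\} while still leaving the receiver a deterministic reconstruction.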

  2. Implicit Learning of Predictive Relationships in Three-element Visual Sequences by Young and Old Adults

    PubMed Central

    Howard, James H.; Howard, Darlene V.; Dennis, Nancy A.; Kelly, Andrew J.

    2008-01-01

    Knowledge of sequential relationships enables future events to be anticipated and processed efficiently. Research with the serial reaction time task (SRTT) has shown that sequence learning often occurs implicitly without effort or awareness. Here we report four experiments that use a triplet-learning task (TLT) to investigate sequence learning in young and older adults. In the TLT people respond only to the last target event in a series of discrete, three-event sequences or triplets. Target predictability is manipulated by varying the triplet frequency (joint probability) and/or the statistical relationships (conditional probabilities) among events within the triplets. Results revealed that both groups learned, though older adults showed less learning of both joint and conditional probabilities. Young people used the statistical information in both cues, but older adults relied primarily on information in the second cue alone. We conclude that the TLT complements and extends the SRTT and other tasks by offering flexibility in the kinds of sequential statistical regularities that may be studied as well as by controlling event timing and eliminating motor response sequencing. PMID:18763897
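
    A hedged sketch of the two statistics manipulated in the TLT, computed from a discrete event stream (the events and stream below are toy examples, not the study's stimuli):

      from collections import Counter

      events = list("ABCABCABDABCABD")     # toy event stream
      triplets = [tuple(events[i:i + 3]) for i in range(len(events) - 2)]

      joint = Counter(triplets)                        # triplet frequency
      n = sum(joint.values())
      p_joint = {t: c / n for t, c in joint.items()}   # joint probabilities

      # conditional probability of the target given the two preceding cues
      pairs = Counter(t[:2] for t in triplets)
      p_cond = {t: c / pairs[t[:2]] for t, c in joint.items()}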

  3. Understanding seasonal variability of uncertainty in hydrological prediction

    NASA Astrophysics Data System (ADS)

    Li, M.; Wang, Q. J.

    2012-04-01

    Understanding uncertainty in hydrological prediction can be highly valuable for improving the reliability of streamflow prediction. In this study, a monthly water balance model, WAPABA, is combined with a Bayesian joint probability framework and three alternative error models to investigate the seasonal dependence of the prediction error structure. A seasonally invariant error model, analogous to traditional time series analysis, uses constant parameters for the model error and allows no seasonal variation. In contrast, a seasonally variant error model uses a different set of bias, variance, and autocorrelation parameters for each calendar month. Potential connections among model parameters from similar months are not considered within the seasonally variant model, which can result in over-fitting and over-parameterization. A hierarchical error model instead places distributional restrictions on the monthly parameters within a Bayesian hierarchical framework. An iterative algorithm is implemented to expedite the maximum a posteriori (MAP) estimation of the hierarchical error model. The three error models are applied to forecasting streamflow at a catchment in southeast Australia in a cross-validation analysis. This study also presents a number of statistical measures and graphical tools to compare the predictive skills of the error models. Probability integral transform histograms and other diagnostic graphs show that the hierarchical error model is more reliable than the seasonally invariant error model. The hierarchical error model also generally provides the most accurate mean prediction in terms of the Nash-Sutcliffe model efficiency coefficient and the best probabilistic prediction in terms of the continuous ranked probability score (CRPS). The parameters of the seasonally variant error model are very sensitive to each cross-validation fold, while the hierarchical error model produces much more robust and reliable parameters. Furthermore, the hierarchical error model indicates that most model parameters are not seasonally varying, with the exception of the error bias. The seasonally variant error model is likely to use more parameters than necessary to maximize the posterior likelihood. This flexibility and robustness indicate that the hierarchical error model has great potential for future streamflow predictions.

  4. Evidence-based Diagnostics: Adult Septic Arthritis

    PubMed Central

    Carpenter, Christopher R.; Schuur, Jeremiah D.; Everett, Worth W.; Pines, Jesse M.

    2011-01-01

    Background Acutely swollen or painful joints are common complaints in the emergency department (ED). Septic arthritis in adults is a challenging diagnosis, but prompt differentiation of a bacterial etiology is crucial to minimize morbidity and mortality. Objectives The objective was to perform a systematic review describing the diagnostic characteristics of history, physical examination, and bedside laboratory tests for nongonococcal septic arthritis. A secondary objective was to quantify test and treatment thresholds using derived estimates of sensitivity and specificity, as well as best-evidence diagnostic and treatment risks and anticipated benefits from appropriate therapy. Methods Two electronic search engines (PUBMED and EMBASE) were used in conjunction with a selected bibliography and scientific abstract hand search. Inclusion criteria included adult trials of patients presenting with monoarticular complaints if they reported sufficient detail to reconstruct partial or complete 2 × 2 contingency tables for experimental diagnostic test characteristics using an acceptable criterion standard. Evidence was rated by two investigators using the Quality Assessment Tool for Diagnostic Accuracy Studies (QUADAS). When more than one similarly designed trial existed for a diagnostic test, meta-analysis was conducted using a random effects model. Interval likelihood ratios (LRs) were computed when possible. To illustrate one method to quantify theoretical points in the probability of disease whereby clinicians might cease testing altogether and either withhold treatment (test threshold) or initiate definitive therapy in lieu of further diagnostics (treatment threshold), an interactive spreadsheet was designed and sample calculations were provided based on research estimates of diagnostic accuracy, diagnostic risk, and therapeutic risk/benefits. Results The prevalence of nongonococcal septic arthritis in ED patients with a single acutely painful joint is approximately 27% (95% confidence interval [CI] = 17% to 38%). With the exception of joint surgery (positive likelihood ratio [+LR] = 6.9) or skin infection overlying a prosthetic joint (+LR = 15.0), history, physical examination, and serum tests do not significantly alter posttest probability. Serum inflammatory markers such as white blood cell (WBC) counts, erythrocyte sedimentation rate (ESR), and C-reactive protein (CRP) are not useful acutely. The interval LR for synovial white blood cell (sWBC) counts of 0 × 10⁹–25 × 10⁹/L was 0.33; for 25 × 10⁹–50 × 10⁹/L, 1.06; for 50 × 10⁹–100 × 10⁹/L, 3.59; and exceeding 100 × 10⁹/L, infinity. Synovial lactate may be useful to rule in or rule out the diagnosis of septic arthritis with a +LR ranging from 2.4 to infinity, and negative likelihood ratio (−LR) ranging from 0 to 0.46. Rapid polymerase chain reaction (PCR) of synovial fluid may identify the causative organism within 3 hours. Based on 56% sensitivity and 90% specificity for sWBC counts of >50 × 10⁹/L in conjunction with best-evidence estimates for diagnosis-related risk and treatment-related risk/benefit, the arthrocentesis test threshold is 5%, with a treatment threshold of 39%. Conclusions Recent joint surgery or cellulitis overlying a prosthetic hip or knee were the only findings on history or physical examination that significantly alter the probability of nongonococcal septic arthritis. Extreme values of sWBC (>50 × 10⁹/L) can increase, but not decrease, the probability of septic arthritis. 
Future ED-based diagnostic trials are needed to evaluate the role of clinical gestalt and the efficacy of nontraditional synovial markers such as lactate. PMID:21843213
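
    The 5% test threshold and 39% treatment threshold quoted above come from an expected-utility calculus of the Pauker-Kassirer type; a minimal numeric sketch follows (the benefit, harm, and test-risk utilities are placeholder values, not the paper's inputs):

      import numpy as np

      def thresholds(sens, spec, benefit, harm, test_risk):
          """Disease probabilities at which testing first beats withholding
          treatment (test threshold) and then stops beating empiric treatment
          (treatment threshold). Utilities are relative to a do-nothing baseline."""
          p = np.linspace(0.0, 1.0, 100001)
          treat = p * benefit - (1 - p) * harm          # treat everyone
          test = (p * sens * benefit                    # treat true positives
                  - (1 - p) * (1 - spec) * harm         # harm false positives
                  - test_risk)                          # risk of the test itself
          test_thr = p[np.argmin(np.abs(test))]         # test vs. no treatment
          treat_thr = p[np.argmin(np.abs(test - treat))]  # test vs. treat
          return test_thr, treat_thr

      # e.g. thresholds(sens=0.56, spec=0.90, benefit=1.0, harm=0.10, test_risk=0.002)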

  5. Evidence-based diagnostics: adult septic arthritis.

    PubMed

    Carpenter, Christopher R; Schuur, Jeremiah D; Everett, Worth W; Pines, Jesse M

    2011-08-01

    Acutely swollen or painful joints are common complaints in the emergency department (ED). Septic arthritis in adults is a challenging diagnosis, but prompt differentiation of a bacterial etiology is crucial to minimize morbidity and mortality. The objective was to perform a systematic review describing the diagnostic characteristics of history, physical examination, and bedside laboratory tests for nongonococcal septic arthritis. A secondary objective was to quantify test and treatment thresholds using derived estimates of sensitivity and specificity, as well as best-evidence diagnostic and treatment risks and anticipated benefits from appropriate therapy. Two electronic search engines (PUBMED and EMBASE) were used in conjunction with a selected bibliography and scientific abstract hand search. Inclusion criteria included adult trials of patients presenting with monoarticular complaints if they reported sufficient detail to reconstruct partial or complete 2 × 2 contingency tables for experimental diagnostic test characteristics using an acceptable criterion standard. Evidence was rated by two investigators using the Quality Assessment Tool for Diagnostic Accuracy Studies (QUADAS). When more than one similarly designed trial existed for a diagnostic test, meta-analysis was conducted using a random effects model. Interval likelihood ratios (LRs) were computed when possible. To illustrate one method to quantify theoretical points in the probability of disease whereby clinicians might cease testing altogether and either withhold treatment (test threshold) or initiate definitive therapy in lieu of further diagnostics (treatment threshold), an interactive spreadsheet was designed and sample calculations were provided based on research estimates of diagnostic accuracy, diagnostic risk, and therapeutic risk/benefits. The prevalence of nongonococcal septic arthritis in ED patients with a single acutely painful joint is approximately 27% (95% confidence interval [CI] = 17% to 38%). With the exception of joint surgery (positive likelihood ratio [+LR] = 6.9) or skin infection overlying a prosthetic joint (+LR = 15.0), history, physical examination, and serum tests do not significantly alter posttest probability. Serum inflammatory markers such as white blood cell (WBC) counts, erythrocyte sedimentation rate (ESR), and C-reactive protein (CRP) are not useful acutely. The interval LR for synovial white blood cell (sWBC) counts of 0 × 10(9)-25 × 10(9)/L was 0.33; for 25 × 10(9)-50 × 10(9)/L, 1.06; for 50 × 10(9)-100 × 10(9)/L, 3.59; and exceeding 100 × 10(9)/L, infinity. Synovial lactate may be useful to rule in or rule out the diagnosis of septic arthritis with a +LR ranging from 2.4 to infinity, and negative likelihood ratio (-LR) ranging from 0 to 0.46. Rapid polymerase chain reaction (PCR) of synovial fluid may identify the causative organism within 3 hours. Based on 56% sensitivity and 90% specificity for sWBC counts of >50 × 10(9)/L in conjunction with best-evidence estimates for diagnosis-related risk and treatment-related risk/benefit, the arthrocentesis test threshold is 5%, with a treatment threshold of 39%. Recent joint surgery or cellulitis overlying a prosthetic hip or knee were the only findings on history or physical examination that significantly alter the probability of nongonococcal septic arthritis. Extreme values of sWBC (>50 × 10(9)/L) can increase, but not decrease, the probability of septic arthritis. 
Future ED-based diagnostic trials are needed to evaluate the role of clinical gestalt and the efficacy of nontraditional synovial markers such as lactate. © 2011 by the Society for Academic Emergency Medicine.

  6. Evaluation of a Consistent LES/PDF Method Using a Series of Experimental Spray Flames

    NASA Astrophysics Data System (ADS)

    Heye, Colin; Raman, Venkat

    2012-11-01

    A consistent method for the evolution of the joint-scalar probability density function (PDF) transport equation is proposed for application to large eddy simulation (LES) of turbulent reacting flows containing evaporating spray droplets. PDF transport equations provide the benefit of including the chemical source term in closed form; however, additional terms describing LES subfilter mixing must be modeled. Recently available detailed experimental measurements provide model validation data for a wide range of evaporation rates and combustion regimes, which are well known to occur in spray flames. In this work, the experimental data will be used to investigate the impact of droplet mass loading and evaporation rates on the subfilter scalar PDF shape, in comparison with conventional flamelet models. In addition, existing closures for the model terms in the PDF transport equations are evaluated, with a focus on their validity in the presence of regime changes.

  7. [Endoprostheses in geriatric traumatology].

    PubMed

    Buecking, B; Eschbach, D; Bliemel, C; Knobe, M; Aigner, R; Ruchholtz, S

    2017-01-01

    Geriatric traumatology is increasing in importance due to the demographic transition. In cases of fractures close to large joints, it is questionable whether primary joint replacement is advantageous compared to joint-preserving internal fixation. The aim of this study was to describe the importance of prosthetic joint replacement in the treatment of geriatric patients suffering from frequent periarticular fractures, in comparison to osteosynthetic joint reconstruction and conservative methods. A selective search of the literature was carried out to identify studies and recommendations concerned with primary arthroplasty for fractures in the region of the various joints (hip, shoulder, elbow, and knee). The importance of primary arthroplasty in geriatric traumatology differs greatly between the various joints. Implantation of a prosthesis has now become the gold standard for displaced fractures of the femoral neck. In addition, reverse shoulder arthroplasty has become an established alternative to osteosynthesis in the treatment of complex proximal humeral fractures. Due to a lack of large studies, definitive recommendations cannot yet be given for fractures around the elbow and the knee. Nowadays, joint replacement for these fractures is recommended only if reconstruction of the joint surface is not possible. The importance of primary joint replacement for geriatric fractures will probably increase in the future. Further studies with larger patient numbers must be conducted to achieve more confidence in the decision between joint replacement and internal fixation, especially for the shoulder, elbow, and knee joints.

  8. Morphine Injection

    MedlinePlus

    ... back or joint pain; widening of the pupils; irritability; anxiety; weakness; stomach cramps; difficulty falling asleep or staying asleep; nausea; loss of appetite; vomiting; diarrhea; fast breathing; or fast heartbeat. Your doctor will probably decrease your dose gradually.

  9. Meperidine Injection

    MedlinePlus

    ... back or joint pain; widening of the pupils; irritability; anxiety; weakness; stomach cramps; difficulty falling asleep or staying asleep; nausea; loss of appetite; vomiting; diarrhea; fast breathing; or fast heartbeat. Your doctor will probably decrease your dose gradually.

  10. Glossary of Foot and Ankle Terms

    MedlinePlus

    ... or she will probably outgrow the condition naturally. Inversion - Twisting in toward the midline of the body. ... with the leg; the subtalar joint, which allows inversion and eversion of the foot with the leg; ...

  11. A joint probability approach for coincidental flood frequency analysis at ungauged basin confluences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Cheng

    2016-03-12

    A reliable and accurate flood frequency analysis at the confluence of streams is of importance. Given that long-term peak flow observations are often unavailable at tributary confluences, at a practical level, this paper presents a joint probability approach (JPA) to address the coincidental flood frequency analysis at the ungauged confluence of two streams based on the flow rate data from the upstream tributaries. One case study is performed for comparison against several traditional approaches, including the position-plotting formula, the univariate flood frequency analysis, and the National Flood Frequency Program developed by US Geological Survey. It shows that the results generated by the JPA approach agree well with the floods estimated by the plotting position and univariate flood frequency analysis based on the observation data.

  12. A Modelling Method of Bolt Joints Based on Basic Characteristic Parameters of Joint Surfaces

    NASA Astrophysics Data System (ADS)

    Yuansheng, Li; Guangpeng, Zhang; Zhen, Zhang; Ping, Wang

    2018-02-01

    Bolt joints are common in machine tools and have a direct impact on the overall performance of the tools. Therefore, an understanding of bolt joint characteristics is essential for improving machine design and assembly. First, a stiffness curve formula was fitted to the experimental data. Second, finite element models of unit bolt joints, such as bolt flange joints, bolt head joints, and thread joints, were constructed. Lastly, the stiffness parameters of the joint surfaces were implemented in the model through secondary development of ABAQUS. The finite element model of the bolt joint established by this method can simulate the contact state very well.

  13. Impact of multiple joint impairments on the energetics and mechanics of walking in patients with haemophilia.

    PubMed

    Lobet, S; Detrembleur, C; Hermans, C

    2013-03-01

    Few studies have assessed the changes produced by multiple joint impairments (MJI) of the lower limbs on gait in patients with haemophilia (PWH). In patients with MJI, quantifiable outcome measures are necessary if treatment benefits are to be compared. This study aimed to observe the metabolic cost, mechanical work, and efficiency of walking among PWH with MJI, and to investigate the relationship between joint damage and any changes in mechanical and energetic variables. Three-dimensional gait analysis was used to investigate the kinematics, cost, mechanical work, and efficiency of walking in 31 PWH with MJI, with the results compared with speed-matched values from a database of healthy subjects. Regarding energetics, the mass-specific net cost of transport (C(net)) was significantly higher for PWH with MJI than for controls and was directly related to a loss in dynamic joint range of motion. Surprisingly, however, there was no substantial increase in mechanical work, with PWH able to adopt a walking strategy that improves energy recovery via the pendulum mechanism. This probable compensatory mechanism to economize energy counterbalances the supplementary work associated with an increased vertical excursion of the centre of mass (CoM) and the lower muscle efficiency of locomotion. Metabolic variables were probably the most representative variables of gait disability for these subjects with complex orthopaedic degenerative disorders. © 2012 Blackwell Publishing Ltd.

  14. Dynamic analysis of clamp band joint system subjected to axial vibration

    NASA Astrophysics Data System (ADS)

    Qin, Z. Y.; Yan, S. Z.; Chu, F. L.

    2010-10-01

    Clamp band joints are commonly used in industry for connecting circular components together. Some systems joined by clamp bands are subjected to dynamic loads. However, very little research on the dynamic characteristics of this kind of joint can be found in the literature. In this paper, a dynamic model for a clamp band joint system is developed. Contact and frictional slip between the components are accommodated in this model. Nonlinear finite element analysis is conducted to identify the model parameters. Static experiments are then carried out on a scaled model of the clamp band joint to validate the joint model. Finally, the model is used to study the dynamic characteristics of the clamp band joint system subjected to axial harmonic excitation, and the effects of the wedge angle of the clamp band joint and the preload on the response. The proposed model can represent the nonlinearity of the clamp band joint and can conveniently be used to investigate the effects of structural and loading parameters on the dynamic characteristics of this type of joint system.

  15. Non-Kolmogorovian Approach to the Context-Dependent Systems Breaking the Classical Probability Law

    NASA Astrophysics Data System (ADS)

    Asano, Masanari; Basieva, Irina; Khrennikov, Andrei; Ohya, Masanori; Yamato, Ichiro

    2013-07-01

    There exist several phenomena breaking the classical probability laws. The systems related to such phenomena are context-dependent, so that they are adaptive to other systems. In this paper, we present a new mathematical formalism to compute the joint probability distribution for two event-systems by using concepts of the adaptive dynamics and quantum information theory, e.g., quantum channels and liftings. In physics the basic example of the context-dependent phenomena is the famous double-slit experiment. Recently similar examples have been found in biological and psychological sciences. Our approach is an extension of traditional quantum probability theory, and it is general enough to describe aforementioned contextual phenomena outside of quantum physics.

  16. A likelihood framework for joint estimation of salmon abundance and migratory timing using telemetric mark-recapture

    USGS Publications Warehouse

    Bromaghin, Jeffrey F.; Gates, Kenneth S.; Palmer, Douglas E.

    2010-01-01

    Many fisheries for Pacific salmon Oncorhynchus spp. are actively managed to meet escapement goal objectives. In fisheries where the demand for surplus production is high, an extensive assessment program is needed to achieve the opposing objectives of allowing adequate escapement and fully exploiting the available surplus. Knowledge of abundance is a critical element of such assessment programs. Abundance estimation using mark-recapture experiments in combination with telemetry has become common in recent years, particularly within Alaskan river systems. Fish are typically captured and marked in the lower river while migrating in aggregations of individuals from multiple populations. Recapture data are obtained using telemetry receivers that are co-located with abundance assessment projects near spawning areas, which provide large sample sizes and information on population-specific mark rates. When recapture data are obtained from multiple populations, unequal mark rates may reflect a violation of the assumption of homogeneous capture probabilities. A common analytical strategy is to test the hypothesis that mark rates are homogeneous and combine all recapture data if the test is not significant. However, mark rates are often low, and a test of homogeneity may lack sufficient power to detect meaningful differences among populations. In addition, differences among mark rates may provide information that could be exploited during parameter estimation. We present a temporally stratified mark-recapture model that permits capture probabilities and migratory timing through the capture area to vary among strata. Abundance information obtained from a subset of populations after the populations have segregated for spawning is jointly modeled with telemetry distribution data by use of a likelihood function. Maximization of the likelihood produces estimates of the abundance and timing of individual populations migrating through the capture area, thus yielding substantially more information than the total abundance estimate provided by the conventional approach. The utility of the model is illustrated with data for coho salmon O. kisutch from the Kasilof River in south-central Alaska.

  17. Flood Damages- savings potential for Austrian municipalities and evidence of adaptation

    NASA Astrophysics Data System (ADS)

    Unterberger, C.

    2016-12-01

    Recent studies show that the number of extreme precipitation events has increased globally and will continue to do so in the future. These observations are particularly true for central, northern, and north-eastern Europe. These changes in the patterns of extreme events have direct repercussions for policy makers. Rojas et al. (2013) find that, in the event that no adaptation measures are taken, annual damages could increase by a factor of 17 by 2080 (from €5.5 bn/year today to €98 bn/year in 2080). Steininger et al. (2015) find that climate and weather induced extreme events account for a current annual welfare loss of about €1 billion in Austria. As a result, policy makers need to understand the interaction between hazard, exposure, and vulnerability, with the goal of achieving flood risk reduction. A better understanding is needed of where exposure, vulnerability, and ultimately flood risk are highest, i.e. where to reduce risk first, and of which factors drive existing flood risk. This article analyzes direct flood losses as reported by 1153 Austrian municipalities between 2005 and 2013. To achieve comparability between flood damages and municipalities' ordinary spending, a "vulnerability threshold" is introduced, suggesting that flood damages should be lower than 5% of a municipality's average annual ordinary spending. It is found that the probability that flood damages exceed this vulnerability threshold is 12%. To provide a reliable estimate of that exceedance probability, the joint distribution of damages and spending is modelled by means of a copula approach. Based on the joint distribution, a Monte Carlo simulation is conducted to derive uncertainty ranges for the exceedance probability. To analyze the drivers of flood damages and their effect on municipalities' spending, two linear regression models are estimated. The results suggest that damages increase significantly for municipalities located along the shores of the river Danube and decrease significantly for municipalities that experienced floods in the past, indicating successful adaptation. As for the relationship between flood damages and municipalities' spending, the regression results indicate that flood damages have a significant positive impact on spending.
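
    A hedged sketch of the copula-plus-Monte-Carlo step described above (a Gaussian copula and rank-based empirical margins are assumed here purely for illustration; the record does not state which copula family was fitted):

      import numpy as np
      from scipy import stats

      def exceedance_probability(damage_sample, spend_sample, rho=0.3,
                                 n=100000, threshold_frac=0.05, seed=0):
          """P(damage > threshold_frac * spending) under a Gaussian copula
          with empirical margins taken from observed samples."""
          rng = np.random.default_rng(seed)
          z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
          u = stats.norm.cdf(z)                          # copula sample on [0,1]^2
          damage = np.quantile(damage_sample, u[:, 0])   # invert empirical margins
          spend = np.quantile(spend_sample, u[:, 1])
          return np.mean(damage > threshold_frac * spend)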

  18. Evaluating marginal likelihood with thermodynamic integration method and comparison with several other numerical methods

    DOE PAGES

    Liu, Peigui; Elshall, Ahmed S.; Ye, Ming; ...

    2016-02-05

    Evaluating marginal likelihood is the most critical and computationally expensive task when conducting Bayesian model averaging to quantify parametric and model uncertainties. The evaluation is commonly done by using Laplace approximations to evaluate semianalytical expressions of the marginal likelihood or by using Monte Carlo (MC) methods to evaluate the arithmetic or harmonic mean of a joint likelihood function. This study introduces a new MC method, i.e., thermodynamic integration, which has not been attempted in environmental modeling. Instead of using samples only from the prior parameter space (as in arithmetic mean evaluation) or the posterior parameter space (as in harmonic mean evaluation), the thermodynamic integration method uses samples generated gradually from the prior to the posterior parameter space. This is done through a path sampling that conducts Markov chain Monte Carlo simulation with different power coefficient values applied to the joint likelihood function. The thermodynamic integration method is evaluated using three analytical functions by comparing the method with two variants of the Laplace approximation method and three MC methods, including the nested sampling method that was recently introduced into environmental modeling. The thermodynamic integration method outperforms the other methods in terms of accuracy, convergence, and consistency. The thermodynamic integration method is also applied to a synthetic case of groundwater modeling with four alternative models. The application shows that model probabilities obtained using the thermodynamic integration method improve the predictive performance of Bayesian model averaging. As a result, the thermodynamic integration method is mathematically rigorous, and its MC implementation is computationally general for a wide range of environmental problems.
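
    The path-sampling identity at the heart of the method (standard form, not quoted from this record) links the marginal likelihood to expectations under power posteriors p_\beta(\theta) \propto p(\theta)\, p(D \mid \theta)^{\beta}:

        \log p(D) = \int_0^1 \mathbb{E}_{\theta \sim p_\beta} \left[ \log p(D \mid \theta) \right] d\beta,

    so that MCMC chains run at a ladder of \beta values between 0 (the prior) and 1 (the posterior), followed by numerical quadrature over \beta, yield the marginal likelihood.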

  19. A Bayesian Hierarchical Modeling Approach to Predicting Flow in Ungauged Basins

    NASA Astrophysics Data System (ADS)

    Gronewold, A.; Alameddine, I.; Anderson, R. M.

    2009-12-01

    Recent innovative approaches to identifying and applying regression-based relationships between land use patterns (such as increasing impervious surface area and decreasing vegetative cover) and rainfall-runoff model parameters represent novel and promising improvements to predicting flow from ungauged basins. In particular, these approaches allow for predicting flows under uncertain and potentially variable future conditions due to rapid land cover changes, variable climate conditions, and other factors. Despite the broad range of literature on estimating rainfall-runoff model parameters, however, the absence of a robust set of modeling tools for identifying and quantifying uncertainties in (and correlation between) rainfall-runoff model parameters represents a significant gap in current hydrological modeling research. Here, we build upon a series of recent publications promoting novel Bayesian and probabilistic modeling strategies for quantifying rainfall-runoff model parameter estimation uncertainty. Our approach applies alternative measures of rainfall-runoff model parameter joint likelihood (including Nash-Sutcliffe efficiency, among others) to simulate samples from the joint parameter posterior probability density function. We then use these correlated samples as response variables in a Bayesian hierarchical model with land use coverage data as predictor variables in order to develop a robust land use-based tool for forecasting flow in ungauged basins while accounting for, and explicitly acknowledging, parameter estimation uncertainty. We apply this modeling strategy to low-relief coastal watersheds of Eastern North Carolina, an area representative of coastal resource waters throughout the world because of its sensitive embayments and because of the abundant (but currently threatened) natural resources it hosts. Consequently, this area is the subject of several ongoing studies and large-scale planning initiatives, including those conducted through the United States Environmental Protection Agency (USEPA) total maximum daily load (TMDL) program, as well as those addressing coastal population dynamics and sea level rise. Our approach has several advantages, including the propagation of parameter uncertainty through a nonparametric probability distribution which avoids common pitfalls of fitting parameters and model error structure to a predetermined parametric distribution function. In addition, by explicitly acknowledging correlation between model parameters (and reflecting those correlations in our predictive model) our model yields relatively efficient prediction intervals (unlike those in the current literature which are often unnecessarily large, and may lead to overly-conservative management actions). Finally, our model helps improve understanding of the rainfall-runoff process by identifying model parameters (and associated catchment attributes) which are most sensitive to current and future land use change patterns. Disclaimer: Although this work was reviewed by EPA and approved for publication, it may not necessarily reflect official Agency policy.

  20. A PDF closure model for compressible turbulent chemically reacting flows

    NASA Technical Reports Server (NTRS)

    Kollmann, W.

    1992-01-01

    The objective of the proposed research project was the analysis of single-point closures based on probability density functions (pdf) and characteristic functions, and the development of a prediction method for the joint velocity-scalar pdf in turbulent reacting flows. Turbulent flows of boundary layer type and stagnation point flows, with and without chemical reactions, were calculated as the principal applications. Pdf methods for compressible reacting flows were developed and tested in comparison with available experimental data. The research work carried out in this project concentrated on the closure of pdf equations for incompressible and compressible turbulent flows with and without chemical reactions.

  1. Regularized joint inverse estimation of extreme rainfall amounts in ungauged coastal basins of El Salvador

    USGS Publications Warehouse

    Friedel, M.J.

    2008-01-01

    A regularized joint inverse procedure is presented and used to estimate the magnitude of extreme rainfall events in ungauged coastal river basins of El Salvador: Paz, Jiboa, Grande de San Miguel, and Goascoran. Since streamflow measurements reflect temporal and spatial rainfall information, peak-flow discharge is hypothesized to represent a similarity measure suitable for regionalization. To test this hypothesis, peak-flow discharge values determined from streamflow recurrence information (10-year, 25-year, and 100-year) collected outside the study basins are used to develop regional (country-wide) regression equations. Peak-flow discharges derived from these equations, together with preferred spatial parameter relations as soft prior information, are used to constrain the simultaneous calibration of 20 tributary basin models. The nonlinear range of uncertainty in estimated parameter values (1 curve number and 3 recurrent rainfall amounts for each model) is determined using an inverse calibration-constrained Monte Carlo approach. Cumulative probability distributions for rainfall amounts indicate differences among basins for a given return period and an increase in magnitude and range among basins with increasing return interval. The estimated median rainfall amounts for all return periods were reasonable but larger (3.2-26%) than rainfall estimates computed using the frequency-duration (traditional) approach and individual rain gauge data. The observed 25-year recurrence rainfall amount at La Hachadura in the Paz River basin during Hurricane Mitch (1998) is similar in value to, but outside and slightly less than, the estimated rainfall confidence limits. The similarity between the joint inverse and traditionally computed rainfall events, however, suggests that the rainfall observation is likely due to under-catch and not model bias. © Springer Science+Business Media B.V. 2007.

  2. Performance analysis of simultaneous dense coding protocol under decoherence

    NASA Astrophysics Data System (ADS)

    Huang, Zhiming; Zhang, Cai; Situ, Haozhen

    2017-09-01

    The simultaneous dense coding (SDC) protocol is useful in designing quantum protocols. We analyze the performance of the SDC protocol under the influence of noisy quantum channels. Six kinds of paradigmatic Markovian noise along with one kind of non-Markovian noise are considered. The joint success probability of both receivers and the success probabilities of one receiver are calculated for three different locking operators. Some interesting properties have been found, such as invariance and symmetry. Among the three locking operators we consider, the SWAP gate is most resistant to noise and results in the same success probabilities for both receivers.

  3. Predicting tibiotalar and subtalar joint angles from skin-marker data with dual-fluoroscopy as a reference standard.

    PubMed

    Nichols, Jennifer A; Roach, Koren E; Fiorentino, Niccolo M; Anderson, Andrew E

    2016-09-01

    Evidence suggests that the tibiotalar and subtalar joints provide near six degree-of-freedom (DOF) motion. Yet, kinematic models frequently assume one DOF at each of these joints. In this study, we quantified the accuracy of kinematic models to predict joint angles at the tibiotalar and subtalar joints from skin-marker data. Models included 1 or 3 DOF at each joint. Ten asymptomatic subjects, screened for deformities, performed 1.0m/s treadmill walking and a balanced, single-leg heel-rise. Tibiotalar and subtalar joint angles calculated by inverse kinematics for the 1 and 3 DOF models were compared to those measured directly in vivo using dual-fluoroscopy. Results demonstrated that, for each activity, the average error in tibiotalar joint angles predicted by the 1 DOF model were significantly smaller than those predicted by the 3 DOF model for inversion/eversion and internal/external rotation. In contrast, neither model consistently demonstrated smaller errors when predicting subtalar joint angles. Additionally, neither model could accurately predict discrete angles for the tibiotalar and subtalar joints on a per-subject basis. Differences between model predictions and dual-fluoroscopy measurements were highly variable across subjects, with joint angle errors in at least one rotation direction surpassing 10° for 9 out of 10 subjects. Our results suggest that both the 1 and 3 DOF models can predict trends in tibiotalar joint angles on a limited basis. However, as currently implemented, neither model can predict discrete tibiotalar or subtalar joint angles for individual subjects. Inclusion of subject-specific attributes may improve the accuracy of these models. Copyright © 2016 Elsevier B.V. All rights reserved.

  4. Flight 20 (STS-45) polysulfide gas path investigation

    NASA Technical Reports Server (NTRS)

    Bjorkman, Rey C.; Bown, Charles W.; Smith, Scott D.; Walters, Jerry L.; Kulkarni, Suresh B.; Cook, Roger V.; Sebahar, David A.; Walker, Craig S.; Haddock, M. Reed; Lindstrom, Robert E.

    1992-01-01

    This report documents the results of the investigation into the causes of gas paths on the 20A and 20B case-to-nozzle joints on STS-42. The investigation was conducted by the Investigation Board appointed by the senior vice president and general manager of Space Operations, Mr. R. E. Lindstrom, on 7 Feb. 1992. The probability of gas path occurrence in the nozzle-to-case-joint polysulfide had been identified during joint redesign. However, actual flight gas path incidence had been limited to RSRM-11 and the 20A and 20B segments. The blow-by condition on the 20A segment was a first-time occurrence, which was a special concern. The investigation covered all technical aspects associated with the gas path and blow-by conditions: materials and processing history, design requirements and as-built compliance with the design, thermal and structural analyses, computer modeling, and laboratory experimentation with the materials involved. The investigation was coordinated with Mr. Ken Jones at NASA Marshall in bi-weekly teleconferences. The Board also supported Dr. James C. Blair's independent NASA investigation team by providing copies of collected data, conducting requested analyses, and supporting several all-day teleconferences to provide understanding and resolve issues. The support requirement for Dr. Blair was successfully concluded on 4 Mar. 1992.

  5. Antioxidant Effect of Spirulina (Arthrospira) maxima on Chronic Inflammation Induced by Freund's Complete Adjuvant in Rats

    PubMed Central

    Gutiérrez-Rebolledo, Gabriel Alfonso; Galar-Martínez, Marcela; García-Rodríguez, Rosa Virginia; Chamorro-Cevallos, Germán A.; Hernández-Reyes, Ana Gabriela

    2015-01-01

    One of the major mechanisms in the pathogenesis of chronic inflammation is the excessive production of reactive oxygen and reactive nitrogen species, and therefore, oxidative stress. Spirulina (Arthrospira) maxima has marked antioxidant activity in vivo and in vitro, as well as anti-inflammatory activity in certain experimental models, the latter activity being mediated probably by the antioxidant activity of this cyanobacterium. In the present study, chronic inflammation was induced through injection of Freund's complete adjuvant (CFA) in rats treated daily with Spirulina (Arthrospira) maxima for 2 weeks beginning on day 14. Joint diameter, body temperature, and motor capacity were assessed each week. On days 0 and 28, total and differential leukocyte counts and serum oxidative damage were determined, the latter by assessing lipid peroxidation and protein carbonyl content. At the end of the study, oxidative damage to joints was likewise evaluated. Results show that S. maxima favors increased mobility, as well as body temperature regulation, and a number of circulating leukocytes, lymphocytes, and monocytes in specimens with CFA-induced chronic inflammation and also protects against oxidative damage in joint tissue as well as serum. In conclusion, the protection afforded by S. maxima against development of chronic inflammation is due to its antioxidant activity. PMID:25599112

  6. Statistical Analysis of Stress Signals from Bridge Monitoring by FBG System.

    PubMed

    Ye, Xiao-Wei; Su, You-Hua; Xi, Pei-Sen

    2018-02-07

    In this paper, a fiber Bragg grating (FBG)-based stress monitoring system instrumented on an orthotropic steel deck arch bridge is demonstrated. The FBG sensors are installed at two types of critical fatigue-prone welded joints to measure the strain and temperature signals. A total of 64 FBG sensors are deployed around the rib-to-deck and rib-to-diaphragm areas at the mid-span and quarter-span of the investigated orthotropic steel bridge. The local stress behaviors caused by the highway loading and temperature effects during the construction and operation periods are presented with the aid of a wavelet multi-resolution analysis approach. In addition, the multi-modal characteristic of the rainflow-counted stress spectrum is modeled by a finite mixture distribution together with a genetic algorithm (GA)-based parameter estimation approach. The optimal probability distribution of the stress spectrum is determined by use of the Bayesian information criterion (BIC). Furthermore, the hot spot stress of the welded joint is calculated by an extrapolation method recommended in the specification of the International Institute of Welding (IIW). The stochastic characteristic of the stress concentration factor (SCF) of the concerned welded joint is addressed. The proposed FBG-based stress monitoring system and probabilistic stress evaluation methods can provide an effective tool for structural monitoring and condition assessment of orthotropic steel bridges.
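
    A hedged sketch of the mixture-fitting and BIC selection step (using an EM-fitted Gaussian mixture in place of the paper's GA-based estimation; the stress data and component counts are illustrative):

      import numpy as np
      from sklearn.mixture import GaussianMixture

      def best_mixture(stress_ranges, max_components=5):
          """Fit finite Gaussian mixtures to a rainflow-counted stress spectrum
          and pick the component count that minimises BIC."""
          x = np.asarray(stress_ranges, dtype=float).reshape(-1, 1)
          fits = [GaussianMixture(n_components=k, random_state=0).fit(x)
                  for k in range(1, max_components + 1)]
          return min(fits, key=lambda g: g.bic(x))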

  7. Antioxidant Effect of Spirulina (Arthrospira) maxima on Chronic Inflammation Induced by Freund's Complete Adjuvant in Rats.

    PubMed

    Gutiérrez-Rebolledo, Gabriel Alfonso; Galar-Martínez, Marcela; García-Rodríguez, Rosa Virginia; Chamorro-Cevallos, Germán A; Hernández-Reyes, Ana Gabriela; Martínez-Galero, Elizdath

    2015-08-01

    One of the major mechanisms in the pathogenesis of chronic inflammation is the excessive production of reactive oxygen and reactive nitrogen species, and therefore, oxidative stress. Spirulina (Arthrospira) maxima has marked antioxidant activity in vivo and in vitro, as well as anti-inflammatory activity in certain experimental models, the latter activity being mediated probably by the antioxidant activity of this cyanobacterium. In the present study, chronic inflammation was induced through injection of Freund's complete adjuvant (CFA) in rats treated daily with Spirulina (Arthrospira) maxima for 2 weeks beginning on day 14. Joint diameter, body temperature, and motor capacity were assessed each week. On days 0 and 28, total and differential leukocyte counts and serum oxidative damage were determined, the latter by assessing lipid peroxidation and protein carbonyl content. At the end of the study, oxidative damage to joints was likewise evaluated. Results show that S. maxima favors increased mobility, as well as body temperature regulation, and a number of circulating leukocytes, lymphocytes, and monocytes in specimens with CFA-induced chronic inflammation and also protects against oxidative damage in joint tissue as well as serum. In conclusion, the protection afforded by S. maxima against development of chronic inflammation is due to its antioxidant activity.

  8. Experimental measurement and modeling analysis on mechanical properties of incudostapedial joint

    PubMed Central

    Zhang, Xiangming

    2011-01-01

    The incudostapedial (IS) joint between the incus and stapes is a synovial joint consisting of joint capsule, cartilage, and synovial fluid. The mechanical properties of the IS joint directly affect the middle ear transfer function for sound transmission. However, due to the complexity and small size of the joint, the mechanical properties of the IS joint have not been reported in the literature. In this paper, we report our current study on mechanical properties of human IS joint using both experimental measurement and finite element (FE) modeling analysis. Eight IS joint samples with the incus and stapes attached were harvested from human cadaver temporal bones. Tension, compression, stress relaxation and failure tests were performed on those samples in a micro-material testing system. An analytical approach with the hyperelastic Ogden model and a 3D FE model of the IS joint including the cartilage, joint capsule, and synovial fluid were employed to derive mechanical parameters of the IS joint. The comparison of measurements and modeling results reveals the relationship between the mechanical properties and structure of the IS joint. PMID:21061141

  9. Experimental measurement and modeling analysis on mechanical properties of incudostapedial joint.

    PubMed

    Zhang, Xiangming; Gan, Rong Z

    2011-10-01

    The incudostapedial (IS) joint between the incus and stapes is a synovial joint consisting of joint capsule, cartilage, and synovial fluid. The mechanical properties of the IS joint directly affect the middle ear transfer function for sound transmission. However, due to the complexity and small size of the joint, the mechanical properties of the IS joint have not been reported in the literature. In this paper, we report our current study on mechanical properties of human IS joint using both experimental measurement and finite element (FE) modeling analysis. Eight IS joint samples with the incus and stapes attached were harvested from human cadaver temporal bones. Tension, compression, stress relaxation and failure tests were performed on those samples in a micro-material testing system. An analytical approach with the hyperelastic Ogden model and a 3D FE model of the IS joint including the cartilage, joint capsule, and synovial fluid were employed to derive mechanical parameters of the IS joint. The comparison of measurements and modeling results reveals the relationship between the mechanical properties and structure of the IS joint.

  10. A spray flamelet/progress variable approach combined with a transported joint PDF model for turbulent spray flames

    NASA Astrophysics Data System (ADS)

    Hu, Yong; Olguin, Hernan; Gutheil, Eva

    2017-05-01

    A spray flamelet/progress variable approach is developed for use in spray combustion with partly pre-vaporised liquid fuel, where a laminar spray flamelet library accounts for evaporation within the laminar flame structures. For this purpose, the standard spray flamelet formulation for pure evaporating liquid fuel and oxidiser is extended by a chemical reaction progress variable in both the turbulent spray flame model and the laminar spray flame structures, in order to account for the effect of pre-vaporised liquid fuel for instance through use of a pilot flame. This new approach is combined with a transported joint probability density function (PDF) method for the simulation of a turbulent piloted ethanol/air spray flame, and the extension requires the formulation of a joint three-variate PDF depending on the gas phase mixture fraction, the chemical reaction progress variable, and gas enthalpy. The molecular mixing is modelled with the extended interaction-by-exchange-with-the-mean (IEM) model, where source terms account for spray evaporation and heat exchange due to evaporation as well as the chemical reaction rate for the chemical reaction progress variable. This is the first formulation using a spray flamelet model considering both evaporation and partly pre-vaporised liquid fuel within the laminar spray flamelets. Results with this new formulation show good agreement with the experimental data provided by A.R. Masri, Sydney, Australia. The analysis of the Lagrangian statistics of the gas temperature and the OH mass fraction indicates that partially premixed combustion prevails near the nozzle exit of the spray, whereas further downstream, the non-premixed flame is promoted towards the inner rich-side of the spray jet since the pilot flame heats up the premixed inner spray zone. In summary, the simulation with the new formulation considering the reaction progress variable shows good performance, greatly improving the standard formulation, and it provides new insight into the local structure of this complex spray flame.
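
    The extended IEM mixing model mentioned above starts from the standard closed form (written here without the paper's evaporation, heat-exchange, and reaction source terms):

        \frac{d\phi^*}{dt} = -\frac{C_\phi}{2\,\tau} \left( \phi^* - \widetilde{\phi} \right),

    where \phi^* is the scalar value carried by a stochastic particle, \widetilde{\phi} the Favre-averaged mean, \tau a turbulence time scale, and C_\phi a model constant; the paper adds source terms for spray evaporation, evaporative heat exchange, and the chemical reaction rate to this equation.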

  11. The pdf approach to turbulent polydispersed two-phase flows

    NASA Astrophysics Data System (ADS)

    Minier, Jean-Pierre; Peirano, Eric

    2001-10-01

    The purpose of this paper is to develop a probabilistic approach to turbulent polydispersed two-phase flows. The two-phase flows considered are composed of a continuous phase, which is a turbulent fluid, and a dispersed phase, which represents an ensemble of discrete particles (solid particles, droplets or bubbles). Gathering the difficulties of turbulent flows and of particle motion, the challenge is to work out a general modelling approach that meets three requirements: to treat accurately the physically relevant phenomena, to provide enough information to address issues of complex physics (combustion, polydispersed particle flows, …) and to remain tractable for general non-homogeneous flows. The present probabilistic approach models the statistical dynamics of the system and consists in simulating the joint probability density function (pdf) of a number of fluid and discrete particle properties. A new point is that both the fluid and the particles are included in the pdf description. The derivation of the joint pdf model for the fluid and for the discrete particles is worked out in several steps. The mathematical properties of stochastic processes are first recalled. The various hierarchies of pdf descriptions are detailed and the physical principles that are used in the construction of the models are explained. The Lagrangian one-particle probabilistic description is developed first for the fluid alone, then for the discrete particles and finally for the joint fluid and particle turbulent systems. In the case of the probabilistic description for the fluid alone or for the discrete particles alone, numerical computations are presented and discussed to illustrate how the method works in practice and the kind of information that can be extracted from it. Comments on the current modelling state and propositions for future investigations which try to link the present work with other ideas in physics are made at the end of the paper.
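
    A one-point Lagrangian PDF description of the fluid velocity is typically built from a Langevin-type stochastic model. The following minimal Python sketch integrates an Ornstein-Uhlenbeck model for a fluctuating velocity sample by Euler-Maruyama; the time scale and rms level are illustrative assumptions.

    ```python
    # Sketch: simplified Langevin model for the fluid velocity sampled along
    # particle paths, the kind of one-point PDF building block the paper uses.
    import numpy as np

    rng = np.random.default_rng(1)
    n_particles, n_steps, dt = 5_000, 2_000, 1e-3
    t_l, sigma = 0.1, 1.0                # integral time scale, velocity rms

    u = np.zeros(n_particles)            # fluctuating velocity samples
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), n_particles)
        u += -u / t_l * dt + np.sqrt(2.0 * sigma**2 / t_l) * dW  # Euler-Maruyama

    print(f"stationary rms = {u.std():.3f} (target {sigma})")
    ```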

  12. Probability density function evolution of power systems subject to stochastic variation of renewable energy

    NASA Astrophysics Data System (ADS)

    Wei, J. Q.; Cong, Y. C.; Xiao, M. Q.

    2018-05-01

    As renewable energies are increasingly integrated into power systems, there is increasing interest in the stochastic analysis of power systems. Better techniques should be developed to account for the uncertainty caused by the penetration of renewables and consequently to analyse its impact on the stochastic stability of power systems. In this paper, Stochastic Differential Equations (SDEs) are used to represent the evolutionary behaviour of the power systems. The stationary Probability Density Function (PDF) solution to SDEs modelling power systems excited by Gaussian white noise is analysed. Subject to such random excitation, the Joint Probability Density Function (JPDF) solution for the phase angle and angular velocity is governed by the generalized Fokker-Planck-Kolmogorov (FPK) equation. To solve this equation, a numerical method is adopted: a special measure is taken such that the generalized FPK equation is satisfied in the average sense of integration with the assumed PDF. Both weak and strong intensities of the stochastic excitations are considered in a single machine infinite bus power system. The numerical analysis gives the same result as Monte Carlo simulation. Potential studies on the stochastic behaviour of multi-machine power systems with random excitations are discussed at the end.
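
    The stationary JPDF obtained from the generalized FPK equation can be cross-checked by direct Monte Carlo integration of the underlying SDE, as the paper does. The sketch below simulates a noisy single-machine-infinite-bus swing equation in Python and histograms the joint angle/speed samples; all parameter values are illustrative, not the paper's.

    ```python
    # Sketch: Monte Carlo estimate of the stationary joint PDF of rotor angle
    # and speed deviation for a noisy swing equation (illustrative parameters).
    import numpy as np

    rng = np.random.default_rng(2)
    M, D, Pm, Pmax, eps = 10.0, 2.0, 0.9, 1.5, 0.1  # inertia, damping, powers, noise
    dt, n_steps, n_traj = 1e-2, 20_000, 200

    delta = np.full(n_traj, np.arcsin(Pm / Pmax))   # start at the stable equilibrium
    omega = np.zeros(n_traj)
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), n_traj)
        delta = delta + omega * dt
        omega = omega + (Pm - Pmax * np.sin(delta) - D * omega) / M * dt + (eps / M) * dW

    pdf, d_edges, w_edges = np.histogram2d(delta, omega, bins=40, density=True)
    print("peak of the empirical joint PDF:", pdf.max())
    ```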

  13. Recent advances in computational mechanics of the human knee joint.

    PubMed

    Kazemi, M; Dabiri, Y; Li, L P

    2013-01-01

    Computational mechanics has been advanced in every area of orthopedic biomechanics. The objective of this paper is to provide a general review of the computational models used in the analysis of the mechanical function of the knee joint in different loading and pathological conditions. Major review articles published in related areas are summarized first. The constitutive models for soft tissues of the knee are briefly discussed to facilitate understanding the joint modeling. A detailed review of the tibiofemoral joint models is presented thereafter. The geometry reconstruction procedures as well as some critical issues in finite element modeling are also discussed. Computational modeling can be a reliable and effective method for the study of mechanical behavior of the knee joint, if the model is constructed correctly. Single-phase material models have been used to predict the instantaneous load response for the healthy knees and repaired joints, such as total and partial meniscectomies, ACL and PCL reconstructions, and joint replacements. Recently, poromechanical models accounting for fluid pressurization in soft tissues have been proposed to study the viscoelastic response of the healthy and impaired knee joints. While the constitutive modeling has been considerably advanced at the tissue level, many challenges still exist in applying a good material model to three-dimensional joint simulations. A complete model validation at the joint level seems impossible presently, because only simple data can be obtained experimentally. Therefore, model validation may be concentrated on the constitutive laws using multiple mechanical tests of the tissues. Extensive model verifications at the joint level are still crucial for the accuracy of the modeling.

  14. Recent Advances in Computational Mechanics of the Human Knee Joint

    PubMed Central

    Kazemi, M.; Dabiri, Y.; Li, L. P.

    2013-01-01

    Computational mechanics has been advanced in every area of orthopedic biomechanics. The objective of this paper is to provide a general review of the computational models used in the analysis of the mechanical function of the knee joint in different loading and pathological conditions. Major review articles published in related areas are summarized first. The constitutive models for soft tissues of the knee are briefly discussed to facilitate understanding the joint modeling. A detailed review of the tibiofemoral joint models is presented thereafter. The geometry reconstruction procedures as well as some critical issues in finite element modeling are also discussed. Computational modeling can be a reliable and effective method for the study of mechanical behavior of the knee joint, if the model is constructed correctly. Single-phase material models have been used to predict the instantaneous load response for the healthy knees and repaired joints, such as total and partial meniscectomies, ACL and PCL reconstructions, and joint replacements. Recently, poromechanical models accounting for fluid pressurization in soft tissues have been proposed to study the viscoelastic response of the healthy and impaired knee joints. While the constitutive modeling has been considerably advanced at the tissue level, many challenges still exist in applying a good material model to three-dimensional joint simulations. A complete model validation at the joint level seems impossible presently, because only simple data can be obtained experimentally. Therefore, model validation may be concentrated on the constitutive laws using multiple mechanical tests of the tissues. Extensive model verifications at the joint level are still crucial for the accuracy of the modeling. PMID:23509602

  15. Preliminary results of fisheries investigation associated with Skylab-3

    NASA Technical Reports Server (NTRS)

    Savastano, K.; Pastula, E., Jr.; Woods, G.; Faller, K.

    1974-01-01

    The purpose of the 15-month investigation now in the analysis phase is to establish the feasibility of utilizing remotely sensed data acquired from aircraft and satellite platforms to provide information concerning the distribution and abundance of oceanic gamefish. Data from the test area in the northeastern Gulf of Mexico, jointly acquired by private and professional fishermen and NASA and NOAA/NMFS elements, have made possible the identification of significant environmental parameters for white marlin. Predictive models based on catch data and surface truth information have been developed and have demonstrated potential for reducing search significantly by identifying areas which have a high probability of being productive. Three of the parameters utilized by the model, chlorophyll-a, sea surface temperature, and turbidity, have been inferred from aircraft sensor data.

  16. Impact of meteorological inflow uncertainty on tracer transport and source estimation in urban atmospheres

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lucas, Donald D.; Gowardhan, Akshay; Cameron-Smith, Philip

    2015-08-08

    Here, a computational Bayesian inverse technique is used to quantify the effects of meteorological inflow uncertainty on tracer transport and source estimation in a complex urban environment. We estimate a probability distribution of meteorological inflow by comparing wind observations to Monte Carlo simulations from the Aeolus model. Aeolus is a computational fluid dynamics model that simulates atmospheric and tracer flow around buildings and structures at meter-scale resolution. Uncertainty in the inflow is propagated through forward and backward Lagrangian dispersion calculations to determine the impact on tracer transport and the ability to estimate the release location of an unknown source. Our uncertainty methods are compared against measurements from an intensive observation period during the Joint Urban 2003 tracer release experiment conducted in Oklahoma City.

  17. Do Financial Incentives Influence GPs' Decisions to Do After-hours Work? A Discrete Choice Labour Supply Model.

    PubMed

    Broadway, Barbara; Kalb, Guyonne; Li, Jinhu; Scott, Anthony

    2017-12-01

    This paper analyses doctors' supply of after-hours care (AHC), and how it is affected by personal and family circumstances as well as the earnings structure. We use detailed survey data from a large sample of Australian General Practitioners (GPs) to estimate a structural, discrete choice model of labour supply and AHC. This allows us to jointly model GPs' decisions on the number of daytime-weekday working hours and the probability of providing AHC. We simulate GPs' labour supply responses to an increase in hourly earnings, both in a daytime-weekday setting and for AHC. GPs increase their daytime-weekday working hours if their hourly earnings in this setting increase, but only to a very small extent. GPs are somewhat more likely to provide AHC if their hourly earnings in that setting increase, but again, the effect is very small and only evident in some subgroups. Moreover, higher earnings in weekday-daytime practice reduce the probability of providing AHC, particularly for men. Increasing GPs' earnings appears to be at best relatively ineffective in encouraging increased provision of AHC and may even prove harmful if incentives are not well targeted. Copyright © 2017 John Wiley & Sons, Ltd.

  18. A theoretical basis for the analysis of redundant software subject to coincident errors

    NASA Technical Reports Server (NTRS)

    Eckhardt, D. E., Jr.; Lee, L. D.

    1985-01-01

    Fundamental to the development of redundant software techniques, such as fault-tolerant software, is an understanding of the impact of joint occurrences of coincident errors. A theoretical basis for the study of redundant software is developed which provides a probabilistic framework for empirically evaluating the effectiveness of the general (N-Version) strategy when component versions are subject to coincident errors, and permits an analytical study of the effects of these errors. The basic assumptions of the model are: (1) independently designed software components are chosen in a random sample; and (2) in the user environment, the system is required to execute on a stationary input series. The intensity of coincident errors has a central role in the model. This function describes the propensity to introduce design faults in such a way that software components fail together when executing in the user environment. The model is used to give conditions under which an N-Version system is a better strategy for reducing system failure probability than relying on a single version of software. A condition which limits the effectiveness of a fault-tolerant strategy is studied, and it is posed whether system failure probability varies monotonically with increasing N or whether an optimal choice of N exists.
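
    The key comparison in this framework, single-version failure probability versus an N-version majority vote under a common "intensity of coincident errors", can be sketched numerically. In the Python sketch below, the intensity theta(x) is sampled from an assumed Beta distribution over inputs; that distribution is an illustrative stand-in, not part of the model itself.

    ```python
    # Sketch of the Eckhardt-Lee style calculation: given an intensity theta(x)
    # (the chance that a randomly sampled version fails on input x), compare a
    # single version against N-version majority voting.
    import numpy as np
    from scipy.stats import binom

    def majority_failure(theta, n_versions):
        """P(system failure) when, conditional on the input, versions fail
        independently with probability theta and a majority must fail."""
        k = (n_versions + 1) // 2          # smallest failing majority
        return binom.sf(k - 1, n_versions, theta).mean()

    rng = np.random.default_rng(3)
    theta = rng.beta(0.1, 20.0, size=100_000)  # mostly easy inputs, a few hard ones

    print("single version :", theta.mean())
    for n in (3, 5, 7):
        print(f"{n}-version vote:", majority_failure(theta, n))
    ```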

  19. An empirical probability density distribution of planetary ionosphere storms with geomagnetic precursors

    NASA Astrophysics Data System (ADS)

    Gulyaeva, Tamara; Stanislawska, Iwona; Arikan, Feza; Arikan, Orhan

    The probability of occurrence of positive and negative planetary ionosphere storms is evaluated using the W index maps produced from Global Ionospheric Maps of Total Electron Content, GIM-TEC, provided by the Jet Propulsion Laboratory, and transformed from geographic coordinates to the magnetic coordinate frame. The auroral electrojet AE index and the equatorial disturbance storm time Dst index are investigated as precursors of the global ionosphere storm. The superposed epoch analysis is performed for 77 intense storms (Dst≤-100 nT) and 227 moderate storms (-100

  20. Hot news recommendation system from heterogeneous websites based on Bayesian model.

    PubMed

    Xia, Zhengyou; Xu, Shengwu; Liu, Ningzhong; Zhao, Zhengkang

    2014-01-01

    Most current news recommendation methods are suitable for news that comes from a single news website, not for news from different heterogeneous news websites. Previous research has proposed news recommender systems based on different strategies to provide news personalization services for online news readers. However, little research work has been reported on utilizing hundreds of heterogeneous news websites to provide top hot news services for group customers (e.g., government staff). In this paper, we propose a hot news recommendation model based on a Bayesian model, which draws on hundreds of different news websites. In the model, we determine whether the news is hot news by calculating the joint probability of the news. We evaluate and compare our proposed recommendation model with the results of human experts on real data sets. Experimental results demonstrate the reliability and effectiveness of our method. We also implemented this model in the hot news recommendation system of the Hangzhou city government in 2013, which achieved very good results.
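
    A minimal sketch of the idea of scoring hotness through a joint probability is given below, using a naive Bayes factorisation over binary news features in Python; the features, probabilities, and prior are hypothetical stand-ins, and the paper's actual model may differ.

    ```python
    # Sketch: scoring "hotness" via a joint probability of simple news features
    # under a naive Bayes factorisation (hypothetical features and numbers).
    from math import prod

    # P(feature | hot) and P(feature | not hot), assumed estimated from
    # training data collected across many websites.
    p_given_hot     = {"many_sites": 0.8, "burst_in_time": 0.7, "front_page": 0.6}
    p_given_not_hot = {"many_sites": 0.1, "burst_in_time": 0.2, "front_page": 0.3}
    p_hot = 0.05                                   # assumed prior

    def hot_posterior(features):
        """P(hot | features) for binary features, naive Bayes."""
        like_hot = prod(p_given_hot[f] if on else 1 - p_given_hot[f]
                        for f, on in features.items())
        like_not = prod(p_given_not_hot[f] if on else 1 - p_given_not_hot[f]
                        for f, on in features.items())
        joint_hot, joint_not = p_hot * like_hot, (1 - p_hot) * like_not
        return joint_hot / (joint_hot + joint_not)

    print(hot_posterior({"many_sites": True, "burst_in_time": True, "front_page": False}))
    ```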

  1. Hot News Recommendation System from Heterogeneous Websites Based on Bayesian Model

    PubMed Central

    Xia, Zhengyou; Xu, Shengwu; Liu, Ningzhong; Zhao, Zhengkang

    2014-01-01

    Most current news recommendation methods are suitable for news that comes from a single news website, not for news from different heterogeneous news websites. Previous research has proposed news recommender systems based on different strategies to provide news personalization services for online news readers. However, little research work has been reported on utilizing hundreds of heterogeneous news websites to provide top hot news services for group customers (e.g., government staff). In this paper, we propose a hot news recommendation model based on a Bayesian model, which draws on hundreds of different news websites. In the model, we determine whether the news is hot news by calculating the joint probability of the news. We evaluate and compare our proposed recommendation model with the results of human experts on real data sets. Experimental results demonstrate the reliability and effectiveness of our method. We also implemented this model in the hot news recommendation system of the Hangzhou city government in 2013, which achieved very good results. PMID:25093207

  2. Comments on "interpretations of quantum mechanics, joint measurement of incompatible observables, and counterfactual definiteness"

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stapp, H.P.

    1994-12-01

    Some seeming logical deficiencies in a recent paper are described. The author responds to the arguments of the work by de Muynck, De Baere, and Martens (MDM), who argue against the view, widely accepted today, that some sort of nonlocal effect is needed to resolve the problems raised by the works of Einstein, Podolsky, and Rosen (EPR) and John Bell. In MDM a variety of arguments are set forth that aim to invalidate the existing purported proofs of nonlocality and to provide, moreover, a local solution to the problems uncovered by EPR and Bell. Much of the argumentation in MDM is based on the idea of introducing 'nonideal' measurements, which, according to MDM, allow one to construct joint probability distributions for incompatible observables. The existence of a bona fide joint probability distribution for the incompatible observables occurring in the EPRB experiments would entail that Bell's inequalities can be satisfied, and hence that the mathematical basis for the nonlocal effects would disappear. This result would apparently allow one to eliminate the need for nonlocal effects by considering experiments of this new kind.

  3. Proximal interphalangeal joint ankylosis in an early medieval horse from Wrocław Cathedral Island, Poland.

    PubMed

    Janeczek, Maciej; Chrószcz, Aleksander; Onar, Vedat; Henklewski, Radomir; Skalec, Aleksandra

    2017-06-01

    Animal remains that are unearthed during archaeological excavations often provide useful information about socio-cultural context, including human habits, beliefs, and ancestral relationships. In this report, we present pathologically altered equine first and second phalanges from an 11th century specimen that was excavated at Wrocław Cathedral Island, Poland. The results of gross examination, radiography, and computed tomography, indicate osteoarthritis of the proximal interphalangeal joint, with partial ankylosis. Based on comparison with living modern horses undergoing lameness examination, as well as with recent literature, we conclude that the horse likely was lame for at least several months prior to death. The ability of this horse to work probably was reduced, but the degree of compromise during life cannot be stated precisely. Present day medical knowledge indicates that there was little likelihood of successful treatment for this condition during the middle ages. However, modern horses with similar pathology can function reasonably well with appropriate treatment and management, particularly following joint ankylosis. Thus, we approach the cultural question of why such an individual would have been maintained with limitations, for a probably-significant period of time. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. Microbial Performance of Food Safety Control and Assurance Activities in a Fresh Produce Processing Sector Measured Using a Microbial Assessment Scheme and Statistical Modeling.

    PubMed

    Njage, Patrick Murigu Kamau; Sawe, Chemutai Tonui; Onyango, Cecilia Moraa; Habib, I; Njagi, Edmund Njeru; Aerts, Marc; Molenberghs, Geert

    2017-01-01

    Current approaches such as inspections, audits, and end product testing cannot detect the distribution and dynamics of microbial contamination. Despite the implementation of current food safety management systems, foodborne outbreaks linked to fresh produce continue to be reported. A microbial assessment scheme and statistical modeling were used to systematically assess the microbial performance of core control and assurance activities in five Kenyan fresh produce processing and export companies. Generalized linear mixed models and correlated random-effects joint models for multivariate clustered data followed by empirical Bayes estimates enabled the analysis of the probability of contamination across critical sampling locations (CSLs) and factories as a random effect. Salmonella spp. and Listeria monocytogenes were not detected in the final products. However, none of the processors attained the maximum safety level for environmental samples. Escherichia coli was detected in five of the six CSLs, including the final product. Among the processing-environment samples, the hand or glove swabs of personnel revealed a higher level of predicted contamination with E. coli, and 80% of the factories were E. coli positive at this CSL. End products showed higher predicted probabilities of having the lowest level of food safety compared with raw materials. The final products were E. coli positive despite the raw materials being E. coli negative for 60% of the processors. There was a higher probability of contamination with coliforms in water at the inlet than in the final rinse water. Four (80%) of the five assessed processors had poor to unacceptable counts of Enterobacteriaceae on processing surfaces. Personnel-, equipment-, and product-related hygiene measures to improve the performance of preventive and intervention measures are recommended.

  5. A quantum probability framework for human probabilistic inference.

    PubMed

    Trueblood, Jennifer S; Yearsley, James M; Pothos, Emmanuel M

    2017-09-01

    There is considerable variety in human inference (e.g., a doctor inferring the presence of a disease, a juror inferring the guilt of a defendant, or someone inferring future weight loss based on diet and exercise). As such, people display a wide range of behaviors when making inference judgments. Sometimes, people's judgments appear Bayesian (i.e., normative), but in other cases, judgments deviate from the normative prescription of classical probability theory. How can we combine both Bayesian and non-Bayesian influences in a principled way? We propose a unified explanation of human inference using quantum probability theory. In our approach, we postulate a hierarchy of mental representations, from 'fully' quantum to 'fully' classical, which could be adopted in different situations. In our hierarchy of models, moving from the lowest level to the highest involves changing assumptions about compatibility (i.e., how joint events are represented). Using results from 3 experiments, we show that our modeling approach explains 5 key phenomena in human inference including order effects, reciprocity (i.e., the inverse fallacy), memorylessness, violations of the Markov condition, and antidiscounting. As far as we are aware, no existing theory or model can explain all 5 phenomena. We also explore transitions in our hierarchy, examining how representations change from more quantum to more classical. We show that classical representations provide a better account of data as individuals gain familiarity with a task. We also show that representations vary between individuals, in a way that relates to a simple measure of cognitive style, the Cognitive Reflection Test. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  6. Development of a clinical prediction model to calculate patient life expectancy: the measure of actuarial life expectancy (MALE).

    PubMed

    Clarke, M G; Kennedy, K P; MacDonagh, R P

    2009-01-01

    To develop a clinical prediction model enabling the calculation of an individual patient's life expectancy (LE) and survival probability based on age, sex, and comorbidity for use in the joint decision-making process regarding medical treatment. A computer software program was developed with a team of 3 clinicians, 2 professional actuaries, and 2 professional computer programmers. This incorporated statistical spreadsheet and database access design methods. Data sources included life insurance industry actuarial rating factor tables (public and private domain), Government Actuary Department UK life tables, professional actuarial sources, and evidence-based medical literature. The main outcome measures were numerical and graphical display of comorbidity-adjusted LE; 5-, 10-, and 15-year survival probability; in addition to generic UK population LE. Nineteen medical conditions, which impacted significantly on LE in actuarial terms and were commonly encountered in clinical practice, were incorporated in the final model. Numerical and graphical representations of statistical predictions of LE and survival probability were successfully generated for patients with either no comorbidity or a combination of the 19 medical conditions included. Validation and testing, including actuarial peer review, confirmed consistency with the data sources utilized. The evidence-based actuarial data utilized in this computer program design represent a valuable resource for use in the clinical decision-making process, where an accurate objective assessment of patient LE can so often make the difference between patients being offered or denied medical and surgical treatment. Ongoing development to incorporate additional comorbidities and enable Web-based access will enhance its use further.
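
    The general mechanism described here, a baseline life table adjusted by multiplicative comorbidity rating factors, can be sketched as follows in Python; the mortality rates and rating factors are made-up illustrations, not the actuarial data sources used by the tool.

    ```python
    # Sketch: comorbidity-adjusted survival from a life table plus
    # multiplicative hazard ("rating") factors; all numbers are hypothetical.
    import numpy as np

    ages = np.arange(60, 101)
    qx = 0.005 * np.exp(0.09 * (ages - 60))       # toy baseline mortality rates
    rating = {"diabetes": 1.8, "prior_MI": 2.2}   # hypothetical hazard multipliers

    def survival(qx, factors):
        """Cumulative survival curve with all rating factors applied."""
        q_adj = np.clip(qx * np.prod(list(factors)), 0.0, 1.0)
        return np.cumprod(1.0 - q_adj)

    s = survival(qx, rating.values())
    life_expectancy = s.sum()                     # curtate LE in years from age 60
    print(f"10-year survival: {s[9]:.2f}, LE = {life_expectancy:.1f} years")
    ```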

  7. ADS-B within a Multi-Aircraft Simulation for Distributed Air-Ground Traffic Management

    NASA Technical Reports Server (NTRS)

    Barhydt, Richard; Palmer, Michael T.; Chung, William W.; Loveness, Ghyrn W.

    2004-01-01

    Automatic Dependent Surveillance Broadcast (ADS-B) is an enabling technology for NASA's Distributed Air-Ground Traffic Management (DAG-TM) concept. DAG-TM has the goal of significantly increasing capacity within the National Airspace System, while maintaining or improving safety. Under DAG-TM, aircraft exchange state and intent information over ADS-B with other aircraft and ground stations. This information supports various surveillance functions including conflict detection and resolution, scheduling, and conformance monitoring. To conduct more rigorous concept feasibility studies, NASA Langley Research Center's PC-based Air Traffic Operations Simulation models a 1090 MHz ADS-B communication structure, based on industry standards for message content, range, and reception probability. The current ADS-B model reflects a mature operating environment and message interference effects are limited to Mode S transponder replies and ADS-B squitters. This model was recently evaluated in a Joint DAG-TM Air/Ground Coordination Experiment with NASA Ames Research Center. Message probability of reception vs. range was lower at higher traffic levels. The highest message collision probability occurred near the meter fix serving as the confluence for two arrival streams. Even the highest traffic level encountered in the experiment was significantly less than the industry standard "LA Basin 2020" scenario. Future studies will account for Mode A and C message interference (a major effect in several industry studies) and will include Mode A and C aircraft in the simulation, thereby increasing the total traffic level. These changes will support ongoing enhancements to separation assurance functions that focus on accommodating longer ADS-B information update intervals.

  8. Lessons from a non-domestic canid: joint disease in captive raccoon dogs (Nyctereutes procyonoides).

    PubMed

    Lawler, Dennis F; Evans, Richard H; Nieminen, Petteri; Mustonen, Anne-Mari; Smith, Gail K

    2012-01-01

    The purpose of this study was to describe pathological changes of the shoulder, elbow, hip and stifle joints of 16 museum skeletons of the raccoon dog (Nyctereutes procyonoides). The subjects had been held in long-term captivity and were probably used for fur farming or research, thus allowing sufficient longevity for joint disease to become recognisable. The prevalence of disorders that include osteochondrosis, osteoarthritis and changes compatible with hip dysplasia was surprisingly high. Other changes that reflect near-normal or mild pathological conditions, including prominent articular margins and mild bony periarticular rim, were also prevalent. Our data form a basis for comparing joint pathology of captive raccoon dogs with other mammals and also suggest that contributing roles of captivity and genetic predisposition should be explored further in non-domestic canids.

  9. Improved decryption quality and security of a joint transform correlator-based encryption system

    NASA Astrophysics Data System (ADS)

    Vilardy, Juan M.; Millán, María S.; Pérez-Cabré, Elisabet

    2013-02-01

    Some image encryption systems based on modified double random phase encoding and joint transform correlator architecture produce low quality decrypted images and are vulnerable to a variety of attacks. In this work, we analyse the algorithm of some reported methods that optically implement the double random phase encryption in a joint transform correlator. We show that it is possible to significantly improve the quality of the decrypted image by introducing a simple nonlinear operation in the encrypted function that contains the joint power spectrum. This nonlinearity also makes the system more resistant to chosen-plaintext attacks. We additionally explore the system resistance against this type of attack when a variety of probability density functions are used to generate the two random phase masks of the encryption-decryption process. Numerical results are presented and discussed.
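
    To make the role of the nonlinearity concrete, the Python sketch below forms a joint power spectrum from two side-by-side inputs and applies a pointwise power-law nonlinearity to compress its dynamic range; the input-plane layout, mask generation, and exponent are illustrative assumptions, not the exact scheme analysed in the paper.

    ```python
    # Sketch: joint power spectrum of a phase-encoded image and a key mask,
    # followed by a pointwise power-law nonlinearity (schematic JTC setup).
    import numpy as np

    rng = np.random.default_rng(8)
    img = rng.random((64, 64))                        # stand-in input image
    m1 = np.exp(2j * np.pi * rng.random((64, 64)))    # random phase mask on the object
    m2 = np.exp(2j * np.pi * rng.random((64, 64)))    # key mask

    plane = np.zeros((64, 192), dtype=complex)        # joint input plane
    plane[:, :64] = img * m1                          # object, left half
    plane[:, 128:] = m2                               # key, right half (with a gap)

    jps = np.abs(np.fft.fft2(plane)) ** 2             # joint power spectrum

    k = 0.3                                           # k < 1 compresses dynamic range
    jps_nl = jps ** k                                 # pointwise nonlinearity

    print(f"dynamic range: {jps.max() / (jps.min() + 1e-30):.2e} -> "
          f"{jps_nl.max() / (jps_nl.min() + 1e-30):.2e}")
    ```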

  10. Joint analysis of air pollution in street canyons in St. Petersburg and Copenhagen

    NASA Astrophysics Data System (ADS)

    Genikhovich, E. L.; Ziv, A. D.; Iakovleva, E. A.; Palmgren, F.; Berkowicz, R.

    The bi-annual data set of concentrations of several traffic-related air pollutants, measured continuously in street canyons in St. Petersburg and Copenhagen, is analysed jointly using different statistical techniques. Annual mean concentrations of NO 2, NO x and, especially, benzene are found systematically higher in St. Petersburg than in Copenhagen but for ozone the situation is opposite. In both cities probability distribution functions (PDFs) of concentrations and their daily or weekly extrema are fitted with the Weibull and double exponential distributions, respectively. Sample estimates of bi-variate distributions of concentrations, concentration roses, and probabilities of concentration of one pollutant being extreme given that another one reaches its extremum are presented in this paper as well as auto- and co-spectra. It is demonstrated that there is a reasonably high correlation between seasonally averaged concentrations of pollutants in St. Petersburg and Copenhagen.
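
    Fitting a Weibull distribution to a concentration sample, as done for the PDFs in both cities, can be sketched in a few lines of Python; the synthetic sample and the exceedance threshold below are illustrative.

    ```python
    # Sketch: Weibull fit to a pollutant concentration sample (synthetic data).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    conc = rng.weibull(1.6, size=2_000) * 40.0       # synthetic NOx-like sample

    shape, loc, scale = stats.weibull_min.fit(conc, floc=0.0)
    print(f"Weibull shape = {shape:.2f}, scale = {scale:.1f}")

    # e.g. probability of exceeding a threshold of 100 (same units as sample)
    print("P(C > 100) =", stats.weibull_min.sf(100.0, shape, loc, scale))
    ```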

  11. Probabilistic DHP adaptive critic for nonlinear stochastic control systems.

    PubMed

    Herzallah, Randa

    2013-06-01

    Following the recently developed algorithms for fully probabilistic control design for general dynamic stochastic systems (Herzallah & Kárný, 2011; Kárný, 1996), this paper presents the solution to the probabilistic dual heuristic programming (DHP) adaptive critic method (Herzallah & Kárný, 2011) and a randomized control algorithm for stochastic nonlinear dynamical systems. The purpose of the randomized control input design is to make the joint probability density function of the closed loop system as close as possible to a predetermined ideal joint probability density function. This paper completes the previous work (Herzallah & Kárný, 2011; Kárný, 1996) by formulating and solving the fully probabilistic control design problem for the more general case of nonlinear stochastic discrete time systems. A simulated example is used to demonstrate the use of the algorithm and encouraging results have been obtained. Copyright © 2013 Elsevier Ltd. All rights reserved.

  12. Generalized monogamy of contextual inequalities from the no-disturbance principle.

    PubMed

    Ramanathan, Ravishankar; Soeda, Akihito; Kurzyński, Paweł; Kaszlikowski, Dagomir

    2012-08-03

    In this Letter, we demonstrate that the property of monogamy of Bell violations seen for no-signaling correlations in composite systems can be generalized to the monogamy of contextuality in single systems obeying the Gleason property of no disturbance. We show how one can construct monogamies for contextual inequalities by using the graph-theoretic technique of vertex decomposition of a graph representing a set of measurements into subgraphs of suitable independence numbers that themselves admit a joint probability distribution. After establishing that all the subgraphs that are chordal graphs admit a joint probability distribution, we formulate a precise graph-theoretic condition that gives rise to the monogamy of contextuality. We also show how such monogamies arise within quantum theory for a single four-dimensional system and interpret violation of these relations in terms of a violation of causality. These monogamies can be tested with current experimental techniques.

  13. Crustal and uppermost mantle structure in the Middle East: assessing constraints provided by jointly modelling Ps and Sp receiver functions and Rayleigh wave group velocity dispersion curves

    NASA Astrophysics Data System (ADS)

    Agrawal, Mohit; Pulliam, Jay; Sen, Mrinal K.; Dutta, Utpal; Pasyanos, Michael E.; Mellors, Robert

    2015-05-01

    Seismic velocity models are found, along with uncertainty estimates, for 11 sites in the Middle East by jointly modelling Ps and Sp receiver functions and surface (Rayleigh) wave group velocity dispersion. The approach performs a search for models that satisfy goodness-of-fit criteria guided by a variant of simulated annealing and uses statistical tools to assess these products of searches. These tools, a parameter correlation matrix and marginal posterior probability density (PPD) function, allow us to evaluate quantitatively the constraints that each data type imposes on model parameters and to identify portions of each model that are well-constrained relative to other portions. This joint modelling technique, which we call "multi-objective optimization for seismology", does not require a good starting solution, although such a model can be incorporated easily, if available, and can reduce the computation time significantly. Applying the process described above to broadband seismic data reveals that crustal thickness varies from 15 km beneath Djibouti (station ATD) to 45 km beneath Saudi Arabia (station RAYN). A pronounced low velocity zone for both Vp and Vs is present at a depth of ~12 km beneath station KIV located in the northern part of the greater Caucasus, which may be due to the presence of a relatively young volcano. Similarly, we also noticed a 6-km-thick low velocity zone for Vp beginning at 20 km depth beneath seismic station AGIN, on the Anatolian plateau, while positive velocity gradients prevail elsewhere in eastern Turkey. Beneath station CSS, located in Cyprus, an anomalously slow layer is found in the uppermost mantle, which may indicate the presence of altered lithospheric material. Crustal P- and S-wave velocities beneath station D2, located in the northeastern portion of central Zagros, range between 5.2-6.2 km/s and 3.2-3.8 km/s, respectively. In Oman, we find a Moho depth of 34.0 ± 1.0 km and 25.0 ± 1.0 to 30.0 ± 1.0 km beneath stations S02 and S04, respectively.

  14. Probability matching in risky choice: the interplay of feedback and strategy availability.

    PubMed

    Newell, Ben R; Koehler, Derek J; James, Greta; Rakow, Tim; van Ravenzwaaij, Don

    2013-04-01

    Probability matching in sequential decision making is a striking violation of rational choice that has been observed in hundreds of experiments. Recent studies have demonstrated that matching persists even in described tasks in which all the information required for identifying a superior alternative strategy (maximizing) is present before the first choice is made. These studies have also indicated that maximizing increases when (1) the asymmetry in the availability of matching and maximizing strategies is reduced and (2) normatively irrelevant outcome feedback is provided. In the two experiments reported here, we examined the joint influences of these factors, revealing that strategy availability and outcome feedback operate on different time courses. Both behavioral and modeling results showed that while availability of the maximizing strategy increases the choice of maximizing early during the task, feedback appears to act more slowly to erode misconceptions about the task and to reinforce optimal responding. The results illuminate the interplay between "top-down" identification of choice strategies and "bottom-up" discovery of those strategies via feedback.

  15. Back to Normal! Gaussianizing posterior distributions for cosmological probes

    NASA Astrophysics Data System (ADS)

    Schuhmann, Robert L.; Joachimi, Benjamin; Peiris, Hiranya V.

    2014-05-01

    We present a method to map multivariate non-Gaussian posterior probability densities into Gaussian ones via nonlinear Box-Cox transformations, and generalizations thereof. This is analogous to the search for normal parameters in the CMB, but can in principle be applied to any probability density that is continuous and unimodal. The search for the optimally Gaussianizing transformation amongst the Box-Cox family is performed via a maximum likelihood formalism. We can judge the quality of the found transformation a posteriori: qualitatively via statistical tests of Gaussianity, and more illustratively by how well it reproduces the credible regions. The method permits an analytical reconstruction of the posterior from a sample, e.g. a Markov chain, and simplifies the subsequent joint analysis with other experiments. Furthermore, it permits the characterization of a non-Gaussian posterior in a compact and efficient way. The expression for the non-Gaussian posterior can be employed to find analytic formulae for the Bayesian evidence, and consequently be used for model comparison.
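
    For the one-dimensional case, the Box-Cox Gaussianization step can be sketched directly with scipy; the lognormal "posterior" sample below is an illustrative stand-in for a Markov chain from a real analysis.

    ```python
    # Sketch: Gaussianizing a skewed one-dimensional posterior sample with a
    # Box-Cox transform, the building block the paper generalises.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    sample = rng.lognormal(mean=0.0, sigma=0.5, size=5_000)  # skewed "posterior"

    transformed, lam = stats.boxcox(sample)   # maximum-likelihood lambda
    print(f"optimal lambda = {lam:.3f}")
    print("skewness before:", stats.skew(sample))
    print("skewness after :", stats.skew(transformed))
    ```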

  16. Developing a Methodology for Eliciting Subjective Probability Estimates During Expert Evaluations of Safety Interventions: Application for Bayesian Belief Networks

    NASA Technical Reports Server (NTRS)

    Wiegmann, Douglas A.

    2005-01-01

    The NASA Aviation Safety Program (AvSP) has defined several products that will potentially modify airline and/or ATC operations, enhance aircraft systems, and improve the identification of potential hazardous situations within the National Airspace System (NAS). Consequently, there is a need to develop methods for evaluating the potential safety benefit of each of these intervention products so that resources can be effectively invested. One approach uses expert judgments to develop Bayesian Belief Networks (BBNs) that model the potential impact that specific interventions may have. Specifically, the present report summarizes methodologies for improving the elicitation of probability estimates during expert evaluations of AvSP products for use in BBNs. The work involved joint efforts between Professor James Luxhoj from Rutgers University and researchers at the University of Illinois. The Rutgers project to develop BBNs received funding from NASA under the title "Probabilistic Decision Support for Evaluating Technology Insertion and Assessing Aviation Safety System Risk." The proposed project was funded separately but supported the existing Rutgers program.

  17. Superstatistical generalised Langevin equation: non-Gaussian viscoelastic anomalous diffusion

    NASA Astrophysics Data System (ADS)

    Ślęzak, Jakub; Metzler, Ralf; Magdziarz, Marcin

    2018-02-01

    Recent advances in single particle tracking and supercomputing techniques demonstrate the emergence of normal or anomalous, viscoelastic diffusion in conjunction with non-Gaussian distributions in soft, biological, and active matter systems. We here formulate a stochastic model based on a generalised Langevin equation in which non-Gaussian shapes of the probability density function and normal or anomalous diffusion have a common origin, namely a random parametrisation of the stochastic force. We perform a detailed analysis demonstrating how various types of parameter distributions for the memory kernel result in exponential, power law, or power-log law tails of the memory functions. The studied system is also shown to exhibit a further unusual property: the velocity has a Gaussian one point probability density but non-Gaussian joint distributions. This behaviour is reflected in the relaxation from a Gaussian to a non-Gaussian distribution observed for the position variable. We show that our theoretical results are in excellent agreement with stochastic simulations.

  18. The Analysis of Adhesively Bonded Advanced Composite Joints Using Joint Finite Elements

    NASA Technical Reports Server (NTRS)

    Stapleton, Scott E.; Waas, Anthony M.

    2012-01-01

    The design and sizing of adhesively bonded joints has always been a major bottleneck in the design of composite vehicles. Dense finite element (FE) meshes are required to capture the full behavior of a joint numerically, but these dense meshes are impractical in vehicle-scale models where a coarse mesh is more desirable to make quick assessments and comparisons of different joint geometries. Analytical models are often helpful in sizing, but difficulties arise in coupling these models with full-vehicle FE models. Therefore, a joint FE was created which can be used within structural FE models to make quick assessments of bonded composite joints. The shape functions of the joint FE were found by solving the governing equations for a structural model for a joint. By analytically determining the shape functions of the joint FE, the complex joint behavior can be captured with very few elements. This joint FE was modified and used to consider adhesives with functionally graded material properties to reduce the peel stress concentrations located near adherend discontinuities. Several practical concerns impede the actual use of such adhesives. These include increased manufacturing complications, alterations to the grading due to adhesive flow during manufacturing, and whether changing the loading conditions significantly impacts the effectiveness of the grading. An analytical study is conducted to address these three concerns. Furthermore, proof-of-concept testing is conducted to show the potential advantages of functionally graded adhesives. In this study, grading is achieved by strategically placing glass beads within the adhesive layer at different densities along the joint. Furthermore, the capability to model non-linear adhesive constitutive behavior with large rotations was developed, and progressive failure of the adhesive was modeled by re-meshing the joint as the adhesive fails. Results predicted using the joint FE were compared with experimental results for various joint configurations, including double cantilever beam and single lap joints.

  19. Stochastic transport models for mixing in variable-density turbulence

    NASA Astrophysics Data System (ADS)

    Bakosi, J.; Ristorcelli, J. R.

    2011-11-01

    In variable-density (VD) turbulent mixing, where very-different- density materials coexist, the density fluctuations can be an order of magnitude larger than their mean. Density fluctuations are non-negligible in the inertia terms of the Navier-Stokes equation which has both quadratic and cubic nonlinearities. Very different mixing rates of different materials give rise to large differential accelerations and some fundamentally new physics that is not seen in constant-density turbulence. In VD flows material mixing is active in a sense far stronger than that applied in the Boussinesq approximation of buoyantly-driven flows: the mass fraction fluctuations are coupled to each other and to the fluid momentum. Statistical modeling of VD mixing requires accounting for basic constraints that are not important in the small-density-fluctuation passive-scalar-mixing approximation: the unit-sum of mass fractions, bounded sample space, and the highly skewed nature of the probability densities become essential. We derive a transport equation for the joint probability of mass fractions, equivalent to a system of stochastic differential equations, that is consistent with VD mixing in multi-component turbulence and consistently reduces to passive scalar mixing in constant-density flows.

  20. Estimating Density and Temperature Dependence of Juvenile Vital Rates Using a Hidden Markov Model

    PubMed Central

    McElderry, Robert M.

    2017-01-01

    Organisms in the wild have cryptic life stages that are sensitive to changing environmental conditions and can be difficult to survey. In this study, I used mark-recapture methods to repeatedly survey Anaea aidea (Nymphalidae) caterpillars in nature, then modeled caterpillar demography as a hidden Markov process to assess if temporal variability in temperature and density influence the survival and growth of A. aidea over time. Individual encounter histories result from the joint likelihood of being alive and observed in a particular stage, and I have included hidden states by separating demography and observations into parallel and independent processes. I constructed a demographic matrix containing the probabilities of all possible fates for each stage, including hidden states, e.g., eggs and pupae. I observed both dead and live caterpillars with high probability. Peak caterpillar abundance attracted multiple predators, and survival of fifth instars declined as per capita predation rate increased through spring. A time lag between predator and prey abundance was likely the cause of improved fifth instar survival estimated at high density. Growth rates showed an increase with temperature, but the preferred model did not include temperature. This work illustrates how state-space models can include unobservable stages and hidden state processes to evaluate how environmental factors influence vital rates of cryptic life stages in the wild. PMID:28505138
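
    The joint likelihood of being alive and observed can be sketched with a small forward-algorithm computation. In the Python sketch below, the stage-transition matrix includes an unobservable pupal stage and an absorbing dead state; all transition and detection values are illustrative, not the paper's estimates.

    ```python
    # Sketch: forward-algorithm likelihood for one capture history in a
    # stage-structured hidden Markov model with a hidden (pupal) stage.
    import numpy as np

    # Hidden states: 0 = early instar, 1 = fifth instar, 2 = pupa (never seen), 3 = dead
    A = np.array([[0.60, 0.25, 0.00, 0.15],   # stay / grow / - / die
                  [0.00, 0.55, 0.30, 0.15],   # fifth instar may pupate
                  [0.00, 0.00, 0.90, 0.10],
                  [0.00, 0.00, 0.00, 1.00]])
    p_detect = np.array([0.5, 0.7, 0.0, 0.0]) # P(observed | state)

    def history_likelihood(history, start_state=0):
        """history: tuple of 0/1 detections after the first capture."""
        alpha = np.zeros(4)
        alpha[start_state] = 1.0
        for seen in history:
            alpha = alpha @ A                         # demographic transition
            alpha *= p_detect if seen else (1.0 - p_detect)  # observation step
        return alpha.sum()

    print(history_likelihood((1, 0, 1)))  # seen, missed, seen again
    ```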

  1. Gaussian closure technique applied to the hysteretic Bouc model with non-zero mean white noise excitation

    NASA Astrophysics Data System (ADS)

    Waubke, Holger; Kasess, Christian H.

    2016-11-01

    Devices that emit structure-borne sound are commonly decoupled by elastic components to shield the environment from acoustical noise and vibrations. The elastic elements often have a hysteretic behavior that is typically neglected. In order to take hysteretic behavior into account, Bouc developed a differential equation for such materials, especially joints made of rubber or equipped with dampers. In this work, the Bouc model is solved by means of the Gaussian closure technique based on the Kolmogorov equation. Kolmogorov developed a method to derive probability density functions for arbitrary explicit first-order vector differential equations under white noise excitation, using a partial differential equation of a multivariate conditional probability distribution. Up to now no analytical solution of the Kolmogorov equation in conjunction with the Bouc model exists; therefore a wide range of approximate solutions, especially the statistical linearization, were developed. Using the Gaussian closure technique, an approximation to the Kolmogorov equation that assumes a multivariate Gaussian distribution, an analytic solution is derived in this paper for the Bouc model. For the stationary case the two methods yield equivalent results; however, in contrast to statistical linearization, the presented solution allows the transient behavior to be calculated explicitly. Further, the stationary case leads to an implicit set of equations that can be solved iteratively with a small number of iterations and without instabilities for specific parameter sets.
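
    For orientation, the transient moments that the Gaussian closure predicts can be checked against direct Monte Carlo simulation of the hysteretic oscillator. The Python sketch below integrates a Bouc-Wen single-degree-of-freedom system under non-zero-mean white noise; all parameter values are illustrative.

    ```python
    # Sketch: Monte Carlo simulation of an oscillator with Bouc-Wen hysteresis
    # under non-zero-mean white noise (illustrative parameters).
    import numpy as np

    rng = np.random.default_rng(6)
    m, c, k = 1.0, 0.1, 1.0                  # mass, damping, stiffness
    A_bw, beta, gamma, n = 1.0, 0.5, 0.5, 1  # Bouc-Wen parameters
    mu_f, s_f = 0.2, 0.5                     # mean and intensity of the excitation
    dt, n_steps, n_traj = 1e-3, 20_000, 2_000

    x = np.zeros(n_traj); v = np.zeros(n_traj); z = np.zeros(n_traj)
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), n_traj)
        dz = A_bw * v - beta * np.abs(v) * np.abs(z)**(n - 1) * z - gamma * v * np.abs(z)**n
        x, v, z = (x + v * dt,
                   v + ((mu_f - c * v - k * z) / m) * dt + (s_f / m) * dW,
                   z + dz * dt)
    print(f"E[x] = {x.mean():.3f}, Var[x] = {x.var():.3f}")
    ```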

  2. Present developments in reaching an international consensus for a model-based approach to particle beam therapy.

    PubMed

    Prayongrat, Anussara; Umegaki, Kikuo; van der Schaaf, Arjen; Koong, Albert C; Lin, Steven H; Whitaker, Thomas; McNutt, Todd; Matsufuji, Naruhiro; Graves, Edward; Mizuta, Masahiko; Ogawa, Kazuhiko; Date, Hiroyuki; Moriwaki, Kensuke; Ito, Yoichi M; Kobashi, Keiji; Dekura, Yasuhiro; Shimizu, Shinichi; Shirato, Hiroki

    2018-03-01

    Particle beam therapy (PBT), including proton and carbon ion therapy, is an emerging innovative treatment for cancer patients. Due to the high cost of and limited access to treatment, meticulous selection of patients who would benefit most from PBT, when compared with standard X-ray therapy (XRT), is necessary. Due to the cost and labor involved in randomized controlled trials, the model-based approach (MBA) is used as an alternative means of establishing scientific evidence in medicine, and it can be improved continuously. Good databases and reasonable models are crucial for the reliability of this approach. The tumor control probability and normal tissue complication probability models are good illustrations of the advantages of PBT, but pre-existing NTCP models have been derived from historical patient treatments from the XRT era. This highlights the necessity of prospectively analyzing specific treatment-related toxicities in order to develop PBT-compatible models. An international consensus has been reached at the Global Institution for Collaborative Research and Education (GI-CoRE) joint symposium, concluding that a systematically developed model is required for model accuracy and performance. Six important steps that need to be observed in these considerations include patient selection, treatment planning, beam delivery, dose verification, response assessment, and data analysis. Advanced technologies in radiotherapy and computer science can be integrated to improve the efficacy of a treatment. Model validation and appropriately defined thresholds in a cost-effectiveness centered manner, together with quality assurance in the treatment planning, have to be achieved prior to clinical implementation.

  3. Generative adversarial networks for brain lesion detection

    NASA Astrophysics Data System (ADS)

    Alex, Varghese; Safwan, K. P. Mohammed; Chennamsetty, Sai Saketh; Krishnamurthi, Ganapathy

    2017-02-01

    Manual segmentation of brain lesions from Magnetic Resonance Images (MRI) is cumbersome and introduces errors due to inter-rater variability. This paper introduces a semi-supervised technique for detection of brain lesions from MRI using Generative Adversarial Networks (GANs). A GAN comprises a Generator network and a Discriminator network which are trained simultaneously with the objective of one bettering the other. The networks were trained using non-lesion patches (n=13,000) from 4 different MR sequences. The network was trained on the BraTS dataset and patches were extracted from regions excluding the tumor region. The Generator network generates data by modeling the underlying probability distribution of the training data, P(Data). The Discriminator learns the posterior probability P(Label | Data) by classifying training data and generated data as "Real" or "Fake", respectively. The Generator, upon learning the joint distribution, produces images/patches such that the performance of the Discriminator on them is random, i.e. P(Label | Data = GeneratedData) = 0.5. During testing, the Discriminator assigns posterior probability values close to 0.5 for patches from non-lesion regions, while patches centered on a lesion arise from a different distribution, P(Lesion), and hence are assigned a lower posterior probability value by the Discriminator. On the test set (n=14), the proposed technique achieves a whole tumor dice score of 0.69, sensitivity of 91% and specificity of 59%. Additionally, the generator network was capable of generating non-lesion patches from various MR sequences.
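
    The adversarial training loop described here can be sketched compactly. The PyTorch sketch below trains a toy generator/discriminator pair on 1-D stand-in "patches" and shows the discriminator output drifting towards 0.5 on data it has modelled; network sizes and data are illustrative, not the paper's architecture.

    ```python
    # Sketch: the core GAN objective on toy 1-D "patches" (PyTorch).
    import torch
    from torch import nn

    g = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 16))
    d = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
    opt_g = torch.optim.Adam(g.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(d.parameters(), lr=1e-3)
    bce = nn.BCELoss()

    for step in range(2_000):
        real = torch.randn(64, 16) * 0.5 + 2.0   # stand-in "non-lesion" data
        fake = g(torch.randn(64, 8))
        # Discriminator step: real -> 1, fake -> 0
        loss_d = bce(d(real), torch.ones(64, 1)) + bce(d(fake.detach()), torch.zeros(64, 1))
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()
        # Generator step: try to fool the discriminator
        loss_g = bce(d(fake), torch.ones(64, 1))
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()

    # Near equilibrium, D outputs drift towards 0.5 on both real and generated
    # data; samples from a different distribution would score lower.
    print(d(real).mean().item(), d(g(torch.randn(64, 8))).mean().item())
    ```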

  4. [Influence of Restricting the Ankle Joint Complex Motions on Gait Stability of Human Body].

    PubMed

    Li, Yang; Zhang, Junxia; Su, Hailong; Wang, Xinting; Zhang, Yan

    2016-10-01

    The purpose of this study is to determine how restricting inversion-eversion and pronation-supination motions of the ankle joint complex influences the stability of human gait. The experiment was carried out on a slippery level ground walkway. Spatiotemporal gait parameters, kinematics and kinetics data, as well as the utilized coefficient of friction (UCOF), were compared between two conditions, i.e. with restriction of the ankle joint complex inversion-eversion and pronation-supination motions (FIXED) and without restriction (FREE). The results showed that FIXED could lead to a significant increase in velocity and stride length and an obvious decrease in double support time. Furthermore, FIXED might affect the motion angle range of the knee joint and ankle joint in the sagittal plane. In the FIXED condition, UCOF was significantly increased, which could lead to an increase in slip probability and a decrease in gait stability. Hence, in the design of a walker, bipedal robot or prosthetic, a structure which achieves the ankle joint complex inversion-eversion and pronation-supination motions should be implemented.

  5. Species abundance distribution and population dynamics in a two-community model of neutral ecology

    NASA Astrophysics Data System (ADS)

    Vallade, M.; Houchmandzadeh, B.

    2006-11-01

    Explicit formulas for the steady-state distribution of species in two interconnected communities of arbitrary sizes are derived in the framework of Hubbell’s neutral model of biodiversity. Migrations of seeds from both communities as well as mutations in both of them are taken into account. These results generalize those previously obtained for the “island-continent” model and they allow an analysis of the influence of the ratio of the sizes of the two communities on the dominance/diversity equilibrium. Exact expressions for species abundance distributions are deduced from a master equation for the joint probability distribution of species in the two communities. Moreover, an approximate self-consistent solution is derived. It corresponds to a generalization of previous results and it proves to be accurate over a broad range of parameters. The dynamical correlations between the abundances of a species in both communities are also discussed.

  6. NESTOR: A Computer-Based Medical Diagnostic Aid That Integrates Causal and Probabilistic Knowledge.

    DTIC Science & Technology

    1984-11-01

    …individual conditional probabilities between one cause node and its effect node, but less common to know a joint conditional probability between a… [remainder of this record is OCR residue from a DTIC report documentation form; recoverable details: author Gregory F. Cooper, Department of Computer Science, Stanford University, Stanford, CA 94305, ONR contract N00014-81-K-0004]

  7. Biomechanical Tolerance of Calcaneal Fractures

    PubMed Central

    Yoganandan, Narayan; Pintar, Frank A.; Gennarelli, Thomas A.; Seipel, Robert; Marks, Richard

    1999-01-01

    Biomechanical studies have been conducted in the past to understand the mechanisms of injury to the foot-ankle complex. However, statistically based tolerance criteria for calcaneal complex injuries are lacking. Consequently, this research was designed to derive a probability distribution that represents human calcaneal tolerance under impact loading such as that encountered in vehicular collisions. Information for deriving the distribution was obtained by experiments on unembalmed human cadaver lower extremities. Briefly, the protocol included the following. The knee joint was disarticulated such that the entire lower extremity distal to the knee joint remained intact. The proximal tibia was fixed in polymethylmethacrylate. The specimens were aligned and impact loading was applied using mini-sled pendulum equipment. The pendulum impactor dynamically loaded the plantar aspect of the foot once. Following the test, specimens were palpated and radiographs in multiple planes were obtained. Injuries were classified into no fracture, and extra- and intra-articular fractures of the calcaneus. There were 14 cases of no injury and 12 cases of calcaneal fracture. The fracture forces (mean: 7802 N) were significantly different (p<0.01) from the forces in the no injury (mean: 4144 N) group. The probability of calcaneal fracture determined using logistic regression indicated that a force of 6.2 kN corresponds to 50 percent probability of calcaneal fracture. The derived probability distribution is useful in the design of dummies and vehicular surfaces.
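
    The logistic-regression step that yields the 50% tolerance force can be sketched as follows in Python; the force/outcome data are synthetic, generated around the paper's reported 6.2 kN midpoint for illustration.

    ```python
    # Sketch: logistic regression of fracture outcome on impact force and the
    # force at 50% fracture probability (synthetic data).
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(7)
    force = rng.uniform(2.0, 12.0, 26)                 # impact force, kN
    p_true = 1 / (1 + np.exp(-(force - 6.2) / 1.0))    # assumed true dose-response
    fracture = rng.random(26) < p_true                 # synthetic outcomes

    def neg_log_lik(params):
        b0, b1 = params
        p = 1 / (1 + np.exp(-(b0 + b1 * force)))
        p = np.clip(p, 1e-9, 1 - 1e-9)
        return -np.sum(fracture * np.log(p) + (~fracture) * np.log(1 - p))

    b0, b1 = minimize(neg_log_lik, x0=[0.0, 1.0]).x
    print(f"50% fracture probability at {-b0 / b1:.1f} kN")
    ```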

  8. A new method of scoring radiographic change in rheumatoid arthritis.

    PubMed

    Rau, R; Wassenberg, S; Herborn, G; Stucki, G; Gebler, A

    1998-11-01

    To test the reliability and to define the minimal detectable change of a new radiographic scoring method in rheumatoid arthritis (RA). Following the recommendations of an expert panel, a new radiographic scoring method was defined. It scores 38 joints [all proximal interphalangeal (PIP) and metacarpophalangeal joints, 4 sites in the wrists, the IP joints of the great toes, and metatarsophalangeals 2 to 5], grading only the amount of joint surface destruction on a 0 to 5 scale for each joint; each grade represents 20% of joint surface destruction. The method was tested by 5 readers on a set of 7 serial radiographs of the hands and forefeet of 20 patients with progressive and destructive RA. Analysis of variance was performed, as it provides the best information about the capability of a method to detect real change and defines its sensitivity in terms of the minimal detectable change. Analysis of variance showed a high probability that the readers found real change, with a ratio of intrapatient to intrareader standard deviation of 2.6. It also confirmed that one reader could detect a change of 3.5% of the total score with a probability of 95% and that different readers agreed upon a change of 4.6%. Inexperienced readers performed with results comparable to experienced readers. The time required for the reading averaged less than 10 minutes per set. The new radiographic scoring method proved to be reliable, precise, and easy to learn, at reasonable cost. Compared to published data, it may provide better results than the widely used Larsen score. These features favor our new method for use in clinical trials and in long-term observational studies in RA.
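
    As a concrete illustration of the scoring arithmetic described above (38 joints, grades 0 to 5, each grade covering 20% of joint surface destruction), this hypothetical tally converts a set of per-joint grades into a percentage of the maximum score of 190.

      # Hypothetical per-joint grades for the 38 scored joints (0-5 each).
      joint_grades = [0] * 20 + [1] * 10 + [2] * 5 + [3] * 3
      assert len(joint_grades) == 38
      total = sum(joint_grades)
      percent = 100 * total / (38 * 5)  # maximum possible score is 190
      print(f"score {total}/190 = {percent:.1f}% of maximum")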

  9. Bilateral coxofemoral degenerative joint disease in a juvenile male yellow-eyed penguin (Megadyptes antipodes).

    PubMed

    Buckle, Kelly N; Alley, Maurice R

    2011-08-01

    A juvenile, male, yellow-eyed penguin (Megadyptes antipodes) with abnormal stance and decreased mobility was captured, held in captivity for approximately 6 weeks, and euthanized due to continued clinical signs. Radiographically, there was bilateral degenerative joint disease with coxofemoral periarticular osteophyte formation. Grossly, the bird had bilaterally distended, thickened coxofemoral joints with increased laxity, and small, roughened and angular femoral heads. Histologically, the left femoral articular cartilage and subchondral bone were absent, and the remaining femoral head consisted of trabecular bone overlain by fibrin and granulation tissue. There was no gross or histological evidence of infection. The historic, gross, radiographic, and histopathologic findings were most consistent with bilateral aseptic femoral head degeneration resulting in degenerative joint disease. Although the chronicity of the lesions masked the initiating cause, the probable underlying causes of aseptic bilateral femoral head degeneration in a young animal are osteonecrosis and osteochondrosis of the femoral head. To our knowledge, this is the first reported case of bilateral coxofemoral degenerative joint disease in a penguin.

  10. Finite element model updating of riveted joints of simplified model aircraft structure

    NASA Astrophysics Data System (ADS)

    Yunus, M. A.; Rani, M. N. Abdul; Sani, M. S. M.; Shah, M. A. S. Aziz

    2018-04-01

    Thin metal sheets are widely used to fabricate various types of aerospace structures because of their flexibility and the ease with which they can be formed into almost any shape. The riveted joint has become one of the most popular joint types for assembling aerospace structures because riveted assemblies can easily be disassembled, maintained and inspected. In this paper, thin metal sheet components are assembled via riveted joints to form a simplified model of an aerospace structure. Modelling structures that are attached together via mechanical joints such as rivets is, however, very difficult because of local effects at the mating areas of the joints: the dynamic characteristics of the joined structure can be significantly affected by surface contact, clamping force and slip. Several types of element connectors available in MSC NASTRAN/PATRAN were investigated as representations of the rivet joints. The natural frequencies and mode shapes obtained were then contrasted with their experimental counterparts in order to assess the accuracy of the element connectors used in modelling the rivet joints of the riveted structure. Reconciliation via finite element model updating is used to minimise the discrepancy between the initial finite element model of the riveted structure and the experimental data, and the results are discussed.

  11. On Models for Binomial Data with Random Numbers of Trials

    PubMed Central

    Comulada, W. Scott; Weiss, Robert E.

    2010-01-01

    Summary A binomial outcome is a count s of the number of successes out of the total number of independent trials n = s + f, where f is a count of the failures. The n are random variables not fixed by design in many studies. Joint modeling of (s, f) can provide additional insight into the science and into the probability π of success that cannot be directly incorporated by the logistic regression model. Observations where n = 0 are excluded from the binomial analysis yet may be important to understanding how π is influenced by covariates. Correlation between s and f may exist and be of direct interest. We propose Bayesian multivariate Poisson models for the bivariate response (s, f), correlated through random effects. We extend our models to the analysis of longitudinal and multivariate longitudinal binomial outcomes. Our methodology was motivated by two disparate examples, one from teratology and one from an HIV tertiary intervention study. PMID:17688514
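
    A minimal sketch of the modeling idea under assumed rates: treating successes s and failures f as Poisson counts that share a common random effect makes n = s + f random and induces correlation between s and f.

      # Successes and failures as Poisson counts coupled by a shared
      # lognormal random effect; all rates are assumed for illustration.
      import numpy as np

      rng = np.random.default_rng(4)
      b = rng.normal(0.0, 0.5, 10_000)      # shared random effect per subject
      s = rng.poisson(np.exp(0.5 + b))      # successes
      f = rng.poisson(np.exp(1.0 + b))      # failures; n = s + f is random
      print("corr(s, f) =", float(np.corrcoef(s, f)[0, 1]))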

  12. Partial least squares density modeling (PLS-DM) - a new class-modeling strategy applied to the authentication of olives in brine by near-infrared spectroscopy.

    PubMed

    Oliveri, Paolo; López, M Isabel; Casolino, M Chiara; Ruisánchez, Itziar; Callao, M Pilar; Medini, Luca; Lanteri, Silvia

    2014-12-03

    A new class-modeling method, referred to as partial least squares density modeling (PLS-DM), is presented. The method is based on partial least squares (PLS), using a distance-based sample density measurement as the response variable. A potential-function probability density is subsequently calculated on the PLS scores and used, jointly with residual Q statistics, to develop efficient class models. The influence of adjustable model parameters on the resulting performance has been critically studied by means of cross-validation and application of the Pareto optimality criterion. The method has been applied to verify the authenticity of olives in brine from the cultivar Taggiasca, based on near-infrared (NIR) spectra recorded on homogenized solid samples. Two independent test sets were used for model validation. The final optimal model was characterized by high efficiency and a well-equilibrated balance between sensitivity and specificity values, compared with those obtained by application of well-established class-modeling methods such as soft independent modeling of class analogy (SIMCA) and unequal dispersed classes (UNEQ). Copyright © 2014 Elsevier B.V. All rights reserved.

  13. Comparison of different statistical methods for estimation of extreme sea levels with wave set-up contribution

    NASA Astrophysics Data System (ADS)

    Kergadallan, Xavier; Bernardara, Pietro; Benoit, Michel; Andreewsky, Marc; Weiss, Jérôme

    2013-04-01

    Estimating the probability of occurrence of extreme sea levels is a central issue for the protection of the coast. Return periods of sea level with the wave set-up contribution are estimated here for one site: Cherbourg, France, in the English Channel. The methodology follows two steps: the first is computation of the joint probability of simultaneous wave height and still sea level; the second is interpretation of that joint probability to assess the sea level for a given return period. Two different approaches were evaluated to compute the joint probability of simultaneous wave height and still sea level: the first uses multivariate extreme value distributions of logistic type, in which all components of the variables become large simultaneously; the second is a conditional approach for multivariate extreme values, in which only one component of the variables has to be large. Two different methods were applied to estimate the sea level with wave set-up contribution for a given return period: Monte Carlo simulation, which is more accurate but requires more computation time, and classical ocean engineering design contours of inverse-FORM type, which are simpler and allow a more complex estimation of the wave set-up part (for example, wave propagation to the coast). We compare results from the two approaches combined with the two methods. To be able to use both the Monte Carlo simulation and design contours methods, wave set-up is estimated with a simple empirical formula. We show the advantages of the conditional approach over the multivariate extreme value approach when extreme sea levels occur when either the surge or the wave height is large. We discuss the validity of the ocean engineering design contours method, which is an alternative when the computation of sea levels is too complex for the Monte Carlo simulation method.

  14. Predicting coastal cliff erosion using a Bayesian probabilistic model

    USGS Publications Warehouse

    Hapke, Cheryl J.; Plant, Nathaniel G.

    2010-01-01

    Regional coastal cliff retreat is difficult to model due to the episodic nature of failures and the along-shore variability of retreat events. There is a growing demand, however, for predictive models that can be used to forecast areas vulnerable to coastal erosion hazards. Increasingly, probabilistic models are being employed that require data sets of high temporal density to define the joint probability density function that relates forcing variables (e.g. wave conditions) and initial conditions (e.g. cliff geometry) to erosion events. In this study we use a multi-parameter Bayesian network to investigate correlations between key variables that control and influence variations in cliff retreat processes. The network uses Bayesian statistical methods to estimate event probabilities using existing observations. Within this framework, we forecast the spatial distribution of cliff retreat along two stretches of cliffed coast in Southern California. The input parameters are the height and slope of the cliff, a descriptor of material strength based on the dominant cliff-forming lithology, and the long-term cliff erosion rate that represents prior behavior. The model is forced using predicted wave impact hours. Results demonstrate that the Bayesian approach is well-suited to the forward modeling of coastal cliff retreat, with the correct outcomes forecast in 70–90% of the modeled transects. The model also performs well in identifying specific locations of high cliff erosion, thus providing a foundation for hazard mapping. This approach can be employed to predict cliff erosion at time-scales ranging from storm events to the impacts of sea-level rise at the century-scale.
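
    The Bayesian-network machinery the study describes ultimately reduces to estimating conditional event probabilities from co-occurrence counts of discretized variables; the toy sketch below illustrates that counting step with hypothetical bins and records, not the study's data.

      # Estimate P(retreat | height bin, strength bin) by counting
      # co-occurrences in hypothetical discretized observations.
      from collections import Counter

      obs = [("high", "weak", 1), ("high", "weak", 1), ("high", "strong", 0),
             ("low", "weak", 1), ("low", "strong", 0), ("low", "strong", 0)]
      totals = Counter((h, s) for h, s, _ in obs)
      events = Counter((h, s) for h, s, r in obs if r == 1)
      for key in sorted(totals):
          print(key, "P(retreat) =", events[key] / totals[key])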

  15. [Comments on the use of the "life-table method" in orthopedics].

    PubMed

    Hassenpflug, J; Hahne, H J; Hedderich, J

    1992-01-01

    In the description of long-term results, e.g. of joint replacements, survivorship analysis is used increasingly in orthopaedic surgery. Survivorship analysis describes the frequency of failure over time and is therefore more informative than global percentage statements. The relative probability of failure for fixed intervals is derived from the number of patients under observation and the frequency of failure in each interval. The complementary probabilities of success are then linked in their temporal sequence, representing the probability of survival up to a fixed endpoint. A necessary condition for the use of this procedure is an exact definition of the moment and manner of failure. We describe how to establish survivorship tables.
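
    The chaining of complementary probabilities described above is the standard life-table computation; a minimal sketch with hypothetical follow-up numbers:

      # Life-table survivorship: per-interval failure probability from the
      # number under observation, complementary probabilities chained over
      # time. Counts are hypothetical joint-replacement follow-up data.
      at_risk = [100, 90, 70, 50]   # implants under observation per interval
      failures = [2, 3, 2, 1]       # failures (e.g. revisions) per interval

      survival = 1.0
      for n, d in zip(at_risk, failures):
          survival *= 1.0 - d / n   # chain the complementary probabilities
          print(f"interval survival {1 - d / n:.3f}, cumulative {survival:.3f}")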

  16. Movement patterns and study area boundaries: Influences on survival estimation in capture-mark-recapture studies

    USGS Publications Warehouse

    Horton, G.E.; Letcher, B.H.

    2008-01-01

    The inability to account for the availability of individuals in the study area during capture-mark-recapture (CMR) studies and the resultant confounding of parameter estimates can make correct interpretation of CMR model parameter estimates difficult. Although important advances based on the Cormack-Jolly-Seber (CJS) model have resulted in estimators of true survival that work by unconfounding either death or recapture probability from availability for capture in the study area, these methods rely on the researcher's ability to select a method that is correctly matched to emigration patterns in the population. If incorrect assumptions regarding site fidelity (non-movement) are made, it may be difficult or impossible as well as costly to change the study design once the incorrect assumption is discovered. Subtleties in characteristics of movement (e.g. life history-dependent emigration, nomads vs territory holders) can lead to mixtures in the probability of being available for capture among members of the same population. The result of these mixtures may be only a partial unconfounding of emigration from other CMR model parameters. Biologically-based differences in individual movement can combine with constraints on study design to further complicate the problem. Because of the intricacies of movement and its interaction with other parameters in CMR models, quantification of and solutions to these problems are needed. Based on our work with stream-dwelling populations of Atlantic salmon Salmo salar, we used a simulation approach to evaluate existing CMR models under various mixtures of movement probabilities. The Barker joint data model provided unbiased estimates of true survival under all conditions tested. The CJS and robust design models provided similarly unbiased estimates of true survival but only when emigration information could be incorporated directly into individual encounter histories. For the robust design model, Markovian emigration (future availability for capture depends on an individual's current location) was a difficult emigration pattern to detect unless survival and especially recapture probability were high. Additionally, when local movement was high relative to study area boundaries and movement became more diffuse (e.g. a random walk), local movement and permanent emigration were difficult to distinguish and had consequences for correctly interpreting the survival parameter being estimated (apparent survival vs true survival). © 2008 The Authors.

  17. Relative efficiency of joint-model and full-conditional-specification multiple imputation when conditional models are compatible: The general location model.

    PubMed

    Seaman, Shaun R; Hughes, Rachael A

    2018-06-01

    Estimating the parameters of a regression model of interest is complicated by missing data on the variables in that model. Multiple imputation is commonly used to handle these missing data. Joint model multiple imputation and full-conditional specification multiple imputation are known to yield imputed data with the same asymptotic distribution when the conditional models of full-conditional specification are compatible with that joint model. We show that this asymptotic equivalence of imputation distributions does not imply that joint model multiple imputation and full-conditional specification multiple imputation will also yield asymptotically equally efficient inference about the parameters of the model of interest, nor that they will be equally robust to misspecification of the joint model. When the conditional models used by full-conditional specification multiple imputation are linear, logistic and multinomial regressions, these are compatible with a restricted general location joint model. We show that multiple imputation using the restricted general location joint model can be substantially more asymptotically efficient than full-conditional specification multiple imputation, but this typically requires very strong associations between variables. When associations are weaker, the efficiency gain is small. Moreover, full-conditional specification multiple imputation is shown to be potentially much more robust than joint model multiple imputation using the restricted general location model to misspecification of that model when there is substantial missingness in the outcome variable.

  18. LOW ACTIVATION JOINING OF SIC/SIC COMPOSITES FOR FUSION APPLICATIONS: MODELING DUAL-PHASE MICROSTRUCTURES AND DISSIMILAR MATERIAL JOINTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Henager, Charles H.; Nguyen, Ba Nghiep; Kurtz, Richard J.

    2016-03-31

    Finite element continuum damage models (FE-CDM) have been developed to simulate and model dual-phase joints and cracked joints for improved analysis of SiC materials in nuclear environments. This report extends the analysis from the last reporting cycle by including results from dual-phase models and from cracked joint models.

  19. Linking morphology and motion: a test of a four-bar mechanism in seahorses.

    PubMed

    Roos, Gert; Leysen, Heleen; Van Wassenbergh, Sam; Herrel, Anthony; Jacobs, Patric; Dierick, Manuel; Aerts, Peter; Adriaens, Dominique

    2009-01-01

    Syngnathid fishes (seahorses, pipefish, and sea dragons) possess a highly modified cranium characterized by a long and tubular snout with minute jaws at its end. Previous studies indicated that these species are extremely fast suction feeders with their feeding strike characterized by a rapid elevation of the head accompanied by rotation of the hyoid. A planar four-bar model is proposed to explain the coupled motion of the neurocranium and the hyoid. Because neurocranial elevation as well as hyoid rotation are crucial for the feeding mechanism in previously studied Syngnathidae, a detailed evaluation of this model is needed. In this study, we present kinematic data of the feeding strike in the seahorse Hippocampus reidi. We combined these data with a detailed morphological analysis of the important linkages and joints involved in rotation of the neurocranium and the hyoid, and we compared the kinematic measurements with output of a theoretical four-bar model. The kinematic analysis shows that neurocranial rotation never preceded hyoid rotation, thus indicating that hyoid rotation triggers the explosive feeding strike. Our data suggest that while neurocranium and hyoid initially (first 1.5 ms) behave as predicted by the four-bar model, eventually, the hyoid rotation is underestimated by the model. Shortening, or a posterior displacement of the sternohyoid muscle (of which the posterior end is confluent with the hypaxial muscles in H. reidi), probably explains the discrepancy between the model and our kinematic measurements. As a result, while four-bar modeling indicates a clear coupling between hyoid rotation and neurocranial elevation, the detailed morphological determination of the linkages and joints of this four-bar model remain crucial in order to fully understand this mechanism in seahorse feeding.

  20. Maintaining homeostasis by decision-making.

    PubMed

    Korn, Christoph W; Bach, Dominik R

    2015-05-01

    Living organisms need to maintain energetic homeostasis. For many species, this implies taking actions with delayed consequences. For example, humans may have to decide between foraging for high-calorie but hard-to-get, and low-calorie but easy-to-get food, under threat of starvation. Homeostatic principles prescribe decisions that maximize the probability of sustaining appropriate energy levels across the entire foraging trajectory. Here, predictions from biological principles contrast with predictions from economic decision-making models based on maximizing the utility of the endpoint outcome of a choice. To empirically arbitrate between the predictions of biological and economic models for individual human decision-making, we devised a virtual foraging task in which players chose repeatedly between two foraging environments, lost energy by the passage of time, and gained energy probabilistically according to the statistics of the environment they chose. Reaching zero energy was framed as starvation. We used the mathematics of random walks to derive endpoint outcome distributions of the choices. This also furnished equivalent lotteries, presented in a purely economic, casino-like frame, in which starvation corresponded to winning nothing. Bayesian model comparison showed that--in both the foraging and the casino frames--participants' choices depended jointly on the probability of starvation and the expected endpoint value of the outcome, but could not be explained by economic models based on combinations of statistical moments or on rank-dependent utility. This implies that under precisely defined constraints biological principles are better suited to explain human decision-making than economic models based on endpoint utility maximization.
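
    A minimal simulation of the random-walk framing used above, with assumed energy dynamics: repeatedly apply one environment's probabilistic gain and a deterministic loss, and estimate the probability of hitting zero energy (starvation) before the horizon.

      # Probability of starvation (energy reaching zero) under one foraging
      # environment's statistics; all parameter values are assumptions.
      import numpy as np

      rng = np.random.default_rng(3)

      def starvation_prob(p_gain, gain, loss, start=10.0, steps=50, sims=20_000):
          energy = np.full(sims, start)
          alive = np.ones(sims, dtype=bool)
          for _ in range(steps):
              delta = np.where(rng.uniform(size=sims) < p_gain, gain, 0.0) - loss
              energy = np.where(alive, energy + delta, energy)
              alive &= energy > 0          # once starved, stay starved
          return 1.0 - alive.mean()

      print(starvation_prob(p_gain=0.4, gain=3.0, loss=1.0))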

  2. Joint Interpretation of Magnetotelluric and Gravimetric Data from the South American Paraná Basin

    NASA Astrophysics Data System (ADS)

    Santos, E. B.; Santos, H. B.; Vitorello, I.; Pádua, M. B.

    2013-05-01

    The Paraná Basin is a large sedimentary basin in central-eastern South America that extends through Brazil, Paraguay, Uruguay and Argentina. Developed entirely on South American continental crust, this Paleozoic basin is filled with sedimentary and volcanic rocks deposited from the Silurian to the Cretaceous, when a significant basaltic effusion covered almost the entire area of the basin. A series of superposed sedimentary and volcanic rock layers were laid down under the influence of different tectonic settings, probably originating from the distant collisional dynamics of continental blocks that led to the amalgamation of Gondwanaland. The current boundaries of the basin can be the result of ensuing erosion or of tectonic features, such as the building up of large arches and faults. Evaluating the deep structural architecture of the lithosphere under a sedimentary basin is a great challenge, requiring the integration of different geophysical and geological studies. In this paper, we present the resulting Paraná Basin lithospheric model, obtained from processing and inversion of broadband and long-period magnetotelluric soundings along an E-W profile across the central part of the basin, complemented by a qualitative joint interpretation of gravimetric data, in order to obtain a more precise geoelectric model of the deep structure of the region.

  3. Modeling Progressive Failure of Bonded Joints Using a Single Joint Finite Element

    NASA Technical Reports Server (NTRS)

    Stapleton, Scott E.; Waas, Anthony M.; Bednarcyk, Brett A.

    2010-01-01

    Enhanced finite elements are elements with an embedded analytical solution which can capture detailed local fields, enabling more efficient, mesh-independent finite element analysis. In the present study, an enhanced finite element is applied to generate a general framework capable of modeling an array of joint types. The joint field equations are derived using the principle of minimum potential energy, and the resulting solutions for the displacement fields are used to generate shape functions and a stiffness matrix for a single joint finite element. This single finite element thus captures the detailed stress and strain fields within the bonded joint, but it can function within a broader structural finite element model. The costs associated with a fine mesh of the joint can thus be avoided while still obtaining a detailed solution for the joint. Additionally, the capability to model non-linear adhesive constitutive behavior has been included within the method, and progressive failure of the adhesive can be modeled by using a strain-based failure criterion and re-sizing the joint as the adhesive fails. Results of the model compare favorably with experimental and finite element results.

  4. Estimate of Probability of Crack Detection from Service Difficulty Report Data.

    DOT National Transportation Integrated Search

    1995-09-01

    The initiation and growth of cracks in a fuselage lap joint were simulated. Stochastic distribution of crack initiation and rivet interference were included. The simulation also contained a simplified crack growth. Nominal crack growth behavior of la...

  5. Estimate of probability of crack detection from service difficulty report data

    DOT National Transportation Integrated Search

    1994-09-01

    The initiation and growth of cracks in a fuselage lap joint were simulated. Stochastic distribution of crack initiation and rivet interference were included. The simulation also contained a simplified crack growth. Nominal crack growth behavior of la...

  6. A short walk in quantum probability

    NASA Astrophysics Data System (ADS)

    Hudson, Robin

    2018-04-01

    This is a personal survey of aspects of quantum probability related to the Heisenberg commutation relation for canonical pairs. Using the failure, in general, of non-negativity of the Wigner distribution for canonical pairs to motivate a more satisfactory quantum notion of joint distribution, we visit a central limit theorem for such pairs and a resulting family of quantum planar Brownian motions which deform the classical planar Brownian motion, together with a corresponding family of quantum stochastic areas. This article is part of the themed issue `Hilbert's sixth problem'.

  7. Joint Information Theoretic and Differential Geometrical Approach for Robust Automated Target Recognition

    DTIC Science & Technology

    2012-02-29

    [Only fragments of this report survive extraction:] Experiments use synthetic data sets (e.g., "... surface and Swiss roll") and real-world data sets (UCI Machine Learning Repository [12] and USPS digit handwriting data). "...less than µn (say µ = 0.8), we can first use a screening technique to select µn candidate nodes, and then apply BIPS on them for further selection and ... identified from node j to node i. So we can say the probability for the existence of this connection is approximately 82%. Given the probability matrix ..."

  8. Transport of passive scalars in a turbulent channel flow

    NASA Technical Reports Server (NTRS)

    Kim, John; Moin, Parviz

    1987-01-01

    A direct numerical simulation of a turbulent channel flow with three passive scalars at different molecular Prandtl numbers is performed. Computed statistics including the turbulent Prandtl numbers are compared with existing experimental data. The computed fields are also examined to investigate the spatial structure of the scalar fields. The scalar fields are highly correlated with the streamwise velocity; the correlation coefficient between the temperature and the streamwise velocity is as high as 0.95 in the wall region. The joint probability distributions between the temperature and velocity fluctuations are also examined; they suggest that it might be possible to model the scalar fluxes in the wall region in a manner similar to the Reynolds stresses.

  9. Classification with spatio-temporal interpixel class dependency contexts

    NASA Technical Reports Server (NTRS)

    Jeon, Byeungwoo; Landgrebe, David A.

    1992-01-01

    A contextual classifier which can utilize both spatial and temporal interpixel dependency contexts is investigated. After spatial and temporal neighbors are defined, a general form of maximum a posteriori spatiotemporal contextual classifier is derived. This contextual classifier is then simplified under several assumptions. Joint prior probabilities of the classes of each pixel and its spatial neighbors are modeled by the Gibbs random field. The classification is performed in a recursive manner to allow a computationally efficient contextual classification. Experimental results with bitemporal TM data show significant improvement of classification accuracy over noncontextual pixelwise classifiers. This spatiotemporal contextual classifier should find use in many applications of remote sensing, especially when classification accuracy is important.

  10. Lithological and Surface Geometry Joint Inversions Using Multi-Objective Global Optimization Methods

    NASA Astrophysics Data System (ADS)

    Lelièvre, Peter; Bijani, Rodrigo; Farquharson, Colin

    2016-04-01

    Geologists' interpretations about the Earth typically involve distinct rock units with contacts (interfaces) between them. In contrast, standard minimum-structure geophysical inversions are performed on meshes of space-filling cells (typically prisms or tetrahedra) and recover smoothly varying physical property distributions that are inconsistent with typical geological interpretations. There are several approaches through which mesh-based minimum-structure geophysical inversion can help recover models with some of the desired characteristics. However, a more effective strategy may be to consider two fundamentally different types of inversions: lithological and surface geometry inversions. A major advantage of these two inversion approaches is that joint inversion of multiple types of geophysical data is greatly simplified. In a lithological inversion, the subsurface is discretized into a mesh and each cell contains a particular rock type. A lithological model must be translated to a physical property model before geophysical data simulation. Each lithology may map to discrete property values or there may be some a priori probability density function associated with the mapping. Through this mapping, lithological inverse problems limit the parameter domain and consequently reduce the non-uniqueness from that presented by standard mesh-based inversions that allow physical property values on continuous ranges. Furthermore, joint inversion is greatly simplified because no additional mathematical coupling measure is required in the objective function to link multiple physical property models. In a surface geometry inversion, the model comprises wireframe surfaces representing contacts between rock units. This parameterization is then fully consistent with Earth models built by geologists, which in 3D typically comprise wireframe contact surfaces of tessellated triangles. As for the lithological case, the physical properties of the units lying between the contact surfaces are set to a priori values. The inversion is tasked with calculating the geometry of the contact surfaces instead of some piecewise distribution of properties in a mesh. Again, no coupling measure is required and joint inversion is simplified. Both of these inverse problems involve high nonlinearity and discontinuous or non-obtainable derivatives. They can also involve the existence of multiple minima. Hence, one can not apply the standard descent-based local minimization methods used to solve typical minimum-structure inversions. Instead, we are applying Pareto multi-objective global optimization (PMOGO) methods, which generate a suite of solutions that minimize multiple objectives (e.g. data misfits and regularization terms) in a Pareto-optimal sense. Providing a suite of models, as opposed to a single model that minimizes a weighted sum of objectives, allows a more complete assessment of the possibilities and avoids the often difficult choice of how to weight each objective. While there are definite advantages to PMOGO joint inversion approaches, the methods come with significantly increased computational requirements. We are researching various strategies to ameliorate these computational issues including parallelization and problem dimension reduction.

  11. Numerical Model for the Study of the Strength and Failure Modes of Rock Containing Non-Persistent Joints

    NASA Astrophysics Data System (ADS)

    Vergara, Maximiliano R.; Van Sint Jan, Michel; Lorig, Loren

    2016-04-01

    The mechanical behavior of rock containing parallel non-persistent joint sets was studied using a numerical model. The numerical analysis was performed using the discrete element software UDEC. The use of fictitious joints allowed the inclusion of non-persistent joints in the model domain and the simulation of progressive failure due to propagation of existing fractures. The material and joint mechanical parameters used in the model were obtained from experimental results. The results of the numerical model showed good agreement with the strength and failure modes observed in the laboratory, and they showed the large anisotropy in strength resulting from variation of the joint orientation. Lower specimen strength was caused by the coalescence of fractures belonging to parallel joint sets. A correlation was found between the geometrical parameters of the joint sets and the contribution of the joint-set strength to the global strength of the specimen. The results suggest that, for the same dip angle with respect to the principal stresses, the uniaxial strength depends primarily on the joint spacing and the angle between joint tips, and less on the length of the rock bridges (persistence). A relation between joint geometrical parameters was found from which the resulting failure mode can be predicted.

  12. Stochastic Inversion of 2D Magnetotelluric Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Jinsong

    2010-07-01

    The algorithm is developed to invert 2D magnetotelluric (MT) data based on a sharp-boundary parametrization using a Bayesian framework. Within the algorithm, we consider the locations of the interfaces and the resistivity of the regions they form as unknowns. We use a parallel, adaptive finite-element algorithm to forward simulate frequency-domain MT responses of the 2D conductivity structure. The unknown parameters are spatially correlated and are described by a geostatistical model. The joint posterior probability distribution function is explored by Markov chain Monte Carlo (MCMC) sampling methods. The developed stochastic model is effective for estimating the interface locations and resistivity. Most importantly, it provides detailed uncertainty information on each unknown parameter. Hardware requirements: PC, supercomputer, multi-platform, workstation. Software requirements: C and Fortran. Operating systems: Linux/Unix or Windows.
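
    The MCMC exploration mentioned above is, at its core, a Metropolis-style random walk over the posterior; the one-parameter sketch below illustrates the accept/reject mechanics with a stand-in log-posterior, not the report's MT forward model.

      # Random-walk Metropolis sampling of a stand-in 1-D log-posterior.
      import numpy as np

      def log_posterior(theta):
          return -0.5 * (theta - 2.0) ** 2   # placeholder for misfit + prior

      rng = np.random.default_rng(1)
      theta, chain = 0.0, []
      for _ in range(5000):
          proposal = theta + rng.normal(0.0, 0.5)
          if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(theta):
              theta = proposal               # accept; otherwise keep current
          chain.append(theta)
      print("posterior mean ~", np.mean(chain[1000:]))   # ~2.0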

  13. A Bayesian method for inferring transmission chains in a partially observed epidemic.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marzouk, Youssef M.; Ray, Jaideep

    2008-10-01

    We present a Bayesian approach for estimating transmission chains and rates in the Abakaliki smallpox epidemic of 1967. The epidemic affected 30 individuals in a community of 74; only the dates of appearance of symptoms were recorded. Our model assumes stochastic transmission of the infections over a social network. Distinct binomial random graphs model intra- and inter-compound social connections, while disease transmission over each link is treated as a Poisson process. Link probabilities and rate parameters are objects of inference. Dates of infection and recovery comprise the remaining unknowns. Distributions for smallpox incubation and recovery periods are obtained from historical data. Using Markov chain Monte Carlo, we explore the joint posterior distribution of the scalar parameters and provide an expected connectivity pattern for the social graph and infection pathway.

  14. Integration within the Felsenstein equation for improved Markov chain Monte Carlo methods in population genetics

    PubMed Central

    Hey, Jody; Nielsen, Rasmus

    2007-01-01

    In 1988, Felsenstein described a framework for assessing the likelihood of a genetic data set in which all of the possible genealogical histories of the data are considered, each in proportion to their probability. Although not analytically solvable, several approaches, including Markov chain Monte Carlo methods, have been developed to find approximate solutions. Here, we describe an approach in which Markov chain Monte Carlo simulations are used to integrate over the space of genealogies, whereas other parameters are integrated out analytically. The result is an approximation to the full joint posterior density of the model parameters. For many purposes, this function can be treated as a likelihood, thereby permitting likelihood-based analyses, including likelihood ratio tests of nested models. Several examples, including an application to the divergence of chimpanzee subspecies, are provided. PMID:17301231

  15. Computational Modelling and Movement Analysis of Hip Joint with Muscles

    NASA Astrophysics Data System (ADS)

    Siswanto, W. A.; Yoon, C. C.; Salleh, S. Md.; Ngali, M. Z.; Yusup, Eliza M.

    2017-01-01

    In this study, the hip joint and its main muscles are modelled with finite elements. The parts included in the model are the hip joint, hemipelvis, gluteus maximus, quadratus femoris and gemellus inferior. The material models used are isotropic elastic, Mooney-Rivlin and neo-Hookean. The hip resultant forces of normal gait and stair climbing are applied to the hip joint model, and the displacement, stress and strain responses of the muscles are recorded. The FEBio non-linear solver for biomechanics is employed to run the simulation of the hip joint model with muscles. The contact interfaces used in this model are sliding contact and tied contact. The analysis results show that the gluteus maximus has the maximum displacement, stress and strain in stair climbing. The quadratus femoris and gemellus inferior have the maximum displacement and strain in normal gait but the maximum stress in stair climbing. In addition, the computational model of the hip joint with muscles provides a platform for research and investigation, and can be used as a visualization platform for the hip joint.

  16. A parameters optimization method for planar joint clearance model and its application for dynamics simulation of reciprocating compressor

    NASA Astrophysics Data System (ADS)

    Hai-yang, Zhao; Min-qiang, Xu; Jin-dong, Wang; Yong-bo, Li

    2015-05-01

    In order to improve the accuracy of dynamic response simulation for mechanisms with joint clearance, a parameter optimization method for a planar joint clearance contact force model is presented in this paper, and the optimized parameters are applied to the simulation of the dynamic response of a mechanism with an oversized-joint-clearance fault. By studying the effect of increased clearance on the parameters of the joint clearance contact force model, a relation between the model parameters at different clearances was derived. The dynamic equation of a two-stage reciprocating compressor with four joint clearances was then developed using the Lagrange method, and a multi-body dynamic model built in ADAMS software was used to solve this equation. To obtain a simulated dynamic response much closer to that of experimental tests, the parameters of the joint clearance model, instead of taking their designed values, were optimized by a genetic algorithm. Finally, the optimized parameters were applied, according to the derived parameter relation, to simulate the dynamic response of the model with an oversized-joint-clearance fault. The dynamic response of the experimental test verified the effectiveness of this application.

  17. The quest for conditional independence in prospectivity modeling: weights-of-evidence, boost weights-of-evidence, and logistic regression

    NASA Astrophysics Data System (ADS)

    Schaeben, Helmut; Semmler, Georg

    2016-09-01

    The objective of prospectivity modeling is prediction of the conditional probability of the presence (T = 1) or absence (T = 0) of a target T given favorable or prohibitive predictors B, or construction of a two-class {0,1} classification of T. A special case of logistic regression called weights-of-evidence (WofE) is geologists' favorite method of prospectivity modeling due to its apparent simplicity. However, the numerical simplicity is deceiving, as it rests on the severe modeling assumption of joint conditional independence of all predictors given the target. General weights of evidence are explicitly introduced which are as simple to estimate as conventional weights, i.e., by counting, but do not require conditional independence. Complementary to the regression view is the classification view on prospectivity modeling. Boosting is the construction of a strong classifier from a set of weak classifiers. From the regression point of view it is closely related to logistic regression. Boost weights-of-evidence (BoostWofE) was introduced into prospectivity modeling to counterbalance violations of the assumption of conditional independence, even though relaxing modeling assumptions with respect to weak classifiers was not the (initial) purpose of boosting. In the original publication of BoostWofE a fabricated dataset was used to "validate" this approach. Using the same fabricated dataset, it is shown here that BoostWofE cannot generally compensate for lacking conditional independence, whatever the processing order of the predictors. Thus the alleged features of BoostWofE are disproved by way of counterexamples, while the theoretical finding is confirmed that logistic regression including interaction terms can exactly compensate for violations of joint conditional independence if the predictors are indicators.
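
    As the abstract notes, conventional weights of evidence are as simple to estimate as counting; the sketch below computes W+ and W- from tiny hypothetical binary predictor/target arrays.

      # Weights of evidence by counting:
      #   W+ = ln P(B=1|T=1) / P(B=1|T=0),  W- = ln P(B=0|T=1) / P(B=0|T=0).
      import numpy as np

      B = np.array([1, 1, 0, 1, 0, 0, 1, 0])   # hypothetical binary predictor
      T = np.array([1, 1, 0, 1, 0, 0, 0, 1])   # hypothetical binary target
      w_plus = np.log(B[T == 1].mean() / B[T == 0].mean())
      w_minus = np.log((1 - B[T == 1]).mean() / (1 - B[T == 0]).mean())
      print("W+ =", round(w_plus, 3), " W- =", round(w_minus, 3))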

  18. Developing the fuzzy c-means clustering algorithm based on maximum entropy for multitarget tracking in a cluttered environment

    NASA Astrophysics Data System (ADS)

    Chen, Xiao; Li, Yaan; Yu, Jing; Li, Yuxing

    2018-01-01

    For fast and more effective tracking of multiple targets in a cluttered environment, we propose a multiple target tracking (MTT) algorithm called maximum entropy fuzzy c-means clustering joint probabilistic data association, which combines fuzzy c-means clustering with the joint probabilistic data association (JPDA) algorithm. The algorithm uses the membership value to express the probability of a measurement originating from a target. The membership value is obtained by optimizing the fuzzy c-means clustering objective function under the maximum entropy principle. To account for the effect of shared measurements, a correction factor is used to adjust the association probability matrix before estimating the state of each target. As this algorithm avoids confirmation-matrix splitting, it can solve the high computational load problem of the JPDA algorithm. The results of simulations and analysis conducted for tracking neighboring parallel targets and crossing targets in cluttered environments of different densities show that the proposed algorithm can realize MTT quickly and efficiently in a cluttered environment. Further, the performance of the proposed algorithm remains constant with increasing process noise variance. The proposed algorithm has the advantages of efficiency and low computational load, which can ensure optimum performance when tracking multiple targets in a dense cluttered environment.
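
    Under the maximum entropy principle, the fuzzy membership of measurement j in target i takes a softmax form over negative squared distances; the sketch below shows that membership computation with toy positions and an assumed temperature beta, omitting the gating and correction-factor steps of the full algorithm.

      # Maximum-entropy memberships: softmax over negative squared distances
      # between measurements and predicted target positions (toy values).
      import numpy as np

      targets = np.array([[0.0, 0.0], [5.0, 5.0]])            # predicted positions
      measurements = np.array([[0.4, -0.2], [4.8, 5.1], [2.5, 2.4]])
      beta = 1.0                                              # assumed temperature
      d2 = ((measurements[:, None, :] - targets[None, :, :]) ** 2).sum(axis=-1)
      u = np.exp(-beta * d2)
      u /= u.sum(axis=1, keepdims=True)   # rows: association probabilities
      print(u.round(3))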

  19. Reducing computation in an i-vector speaker recognition system using a tree-structured universal background model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McClanahan, Richard; De Leon, Phillip L.

    The majority of state-of-the-art speaker recognition (SR) systems utilize speaker models that are derived from an adapted universal background model (UBM) in the form of a Gaussian mixture model (GMM). This is true for GMM supervector systems, joint factor analysis systems, and most recently i-vector systems. In all of these systems, the posterior probability and sufficient statistics calculations represent a computational bottleneck in both enrollment and testing. We propose a multi-layered hash system, employing a tree-structured GMM-UBM which uses Runnalls' Gaussian mixture reduction technique, in order to reduce the number of these calculations. With this tree-structured hash, we can trade off a reduction in computation against a corresponding degradation of equal error rate (EER). As an example, we reduce this computation by a factor of 15× while incurring less than 10% relative degradation of EER (or 0.3% absolute EER) when evaluated with NIST 2010 speaker recognition evaluation (SRE) telephone data.

  1. Mutation-selection balance in mixed mating populations.

    PubMed

    Kelly, John K

    2007-05-21

    An approximation to the average number of deleterious mutations per gamete, Q, is derived from a model allowing selection on both zygotes and male gametes. Progeny are produced by either outcrossing or self-fertilization with fixed probabilities. The genetic model is a standard in evolutionary biology: mutations occur at unlinked loci, have equivalent effects, and combine multiplicatively to determine fitness. The approximation developed here treats individual mutation counts with a generalized Poisson model conditioned on the distribution of selfing histories in the population. The approximation is accurate across the range of parameter sets considered and provides both analytical insights and greatly increased computational speed. Model predictions are discussed in relation to several outstanding problems, including the estimation of the genomic deleterious mutation rates (U), the generality of "selective interference" among loci, and the consequences of gametic selection for the joint distribution of inbreeding depression and mating system across species. Finally, conflicting results from previous analytical treatments of mutation-selection balance are resolved to assumptions about the life-cycle and the initial fate of mutations.

  2. Hydrologic risk analysis in the Yangtze River basin through coupling Gaussian mixtures into copulas

    NASA Astrophysics Data System (ADS)

    Fan, Y. R.; Huang, W. W.; Huang, G. H.; Li, Y. P.; Huang, K.; Li, Z.

    2016-02-01

    In this study, a bivariate hydrologic risk framework is proposed through coupling Gaussian mixtures into copulas, leading to a coupled GMM-copula method. In the coupled GMM-copula method, the marginal distributions of flood peak, volume and duration are quantified through Gaussian mixture models, and the joint probability distributions of flood peak-volume, peak-duration and volume-duration are established through copulas. The bivariate hydrologic risk is then derived based on the joint return period of flood variable pairs. The proposed method is applied to the risk analysis for the Yichang station on the main stream of the Yangtze River, China. The results indicate that (i) the bivariate risk for flood peak-volume would remain constant for flood volumes less than 1.0 × 10^5 m3/s day, but would present a significant decreasing trend for flood volumes larger than 1.7 × 10^5 m3/s day; and (ii) the bivariate risk for flood peak-duration would not change significantly for flood durations less than 8 days, and would then decrease significantly as the duration becomes larger. The probability density functions (pdfs) of flood volume and duration conditional on flood peak can also be generated through the fitted copulas. The results indicate that the conditional pdfs of flood volume and duration follow bimodal distributions, with the occurrence frequency of the first mode decreasing and that of the second increasing as the flood peak increases. The conclusions obtained from the bivariate hydrologic analysis can provide decision support for flood control and mitigation.
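
    For the joint-return-period step, a common construction is T_or = mu / (1 - C(u, v)), where C is the fitted copula and u, v are the marginal non-exceedance probabilities. The sketch below evaluates this with a Gumbel-Hougaard copula; theta, mu and the quantiles are assumed values, not fitted ones from the study.

      # "OR" joint return period from a Gumbel-Hougaard copula (assumed theta).
      import numpy as np

      def gumbel_copula(u, v, theta):
          s = (-np.log(u)) ** theta + (-np.log(v)) ** theta
          return np.exp(-s ** (1.0 / theta))

      mu = 1.0            # mean interarrival time of flood events (years)
      u, v = 0.99, 0.99   # marginal non-exceedance probs of peak and volume
      T_or = mu / (1.0 - gumbel_copula(u, v, theta=2.0))
      print("joint 'OR' return period ~", round(float(T_or), 1), "years")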

  3. Risk of false decision on conformity of a multicomponent material when test results of the components' content are correlated.

    PubMed

    Kuselman, Ilya; Pennecchi, Francesca R; da Silva, Ricardo J N B; Hibbert, D Brynn

    2017-11-01

    The probability of a false decision on the conformity of a multicomponent material due to measurement uncertainty is discussed for the case when test results are correlated. The specification limits on the components' content of such a material generate a multivariate specification interval/domain. When the true values of the components' content and the corresponding test results are modelled by multivariate distributions (e.g. by multivariate normal distributions), a total global risk of a false decision on the material's conformity can be evaluated by calculating integrals of their joint probability density function. No transformation of the raw data is required for that. A total specific risk can be evaluated as the joint posterior cumulative probability that the true values of a specific batch or lot lie outside the multivariate specification domain when the vector of test results obtained for the lot is inside this domain. It was shown, using a case study of four components under control in a drug, that the influence of correlation on the risk value is not easily predictable. To assess this influence, the evaluated total risk values were compared with those calculated for independent test results and also with those assuming much stronger correlation than that observed. While the observed statistically significant correlation did not lead to a visible difference in the total risk values in comparison to independent test results, stronger correlation among the variables caused either a decrease or an increase in the total risk, depending on the actual values of the test results. Copyright © 2017 Elsevier B.V. All rights reserved.
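
    The integrals of the joint density over the specification domain can be approximated straightforwardly by Monte Carlo; the sketch below estimates a total global risk for a hypothetical two-component material (means, covariance, measurement error and limits are all assumed).

      # Monte Carlo estimate of the total global risk: probability that a lot
      # is accepted (all test results in spec) while truly non-conforming.
      import numpy as np

      rng = np.random.default_rng(2)
      mean = np.array([0.50, 0.50])                 # true contents, 2 components
      cov = np.array([[4e-4, 1.6e-4],               # correlated true values
                      [1.6e-4, 4e-4]])
      lo, hi = 0.45, 0.55                           # specification limits

      true_vals = rng.multivariate_normal(mean, cov, 200_000)
      test_vals = true_vals + rng.normal(0.0, 0.01, true_vals.shape)  # meas. error
      accepted = ((test_vals > lo) & (test_vals < hi)).all(axis=1)
      conforming = ((true_vals > lo) & (true_vals < hi)).all(axis=1)
      print("total global risk ~", (accepted & ~conforming).mean())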

  4. Meta-analysis of the effect of natural frequencies on Bayesian reasoning.

    PubMed

    McDowell, Michelle; Jacobs, Perke

    2017-12-01

    The natural frequency facilitation effect describes the finding that people are better able to solve descriptive Bayesian inference tasks when represented as joint frequencies obtained through natural sampling, known as natural frequencies, than as conditional probabilities. The present meta-analysis reviews 20 years of research seeking to address when, why, and for whom natural frequency formats are most effective. We review contributions from research associated with the 2 dominant theoretical perspectives, the ecological rationality framework and nested-sets theory, and test potential moderators of the effect. A systematic review of relevant literature yielded 35 articles representing 226 performance estimates. These estimates were statistically integrated using a bivariate mixed-effects model that yields summary estimates of average performances across the 2 formats and estimates of the effects of different study characteristics on performance. These study characteristics range from moderators representing individual characteristics (e.g., numeracy, expertise), to methodological differences (e.g., use of incentives, scoring criteria) and features of problem representation (e.g., short menu format, visual aid). Short menu formats (less computationally complex representations showing joint-events) and visual aids demonstrated some of the strongest moderation effects, improving performance for both conditional probability and natural frequency formats. A number of methodological factors (e.g., exposure to both problem formats) were also found to affect performance rates, emphasizing the importance of a systematic approach. We suggest how research on Bayesian reasoning can be strengthened by broadening the definition of successful Bayesian reasoning to incorporate choice and process and by applying different research methodologies. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
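
    To make the format contrast concrete, the classic mammography numbers (1% prevalence, 80% sensitivity, 9.6% false-positive rate, which are not from this meta-analysis) give the same posterior in both formats, but the natural-frequency route needs only two counts:

      # Conditional-probability format: Bayes' rule with P(D), P(+|D), P(+|~D).
      p_d, sens, fpr = 0.01, 0.8, 0.096
      p_pos = p_d * sens + (1 - p_d) * fpr
      print("conditional format:  P(D|+) =", round(p_d * sens / p_pos, 3))

      # Natural-frequency format: of 1000 people, 10 have D, 8 of them test
      # positive; ~95 of the 990 without D also test positive.
      print("natural frequencies: P(D|+) =", round(8 / (8 + 95), 3))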

  5. Aquatic predicted no-effect concentrations of 16 polycyclic aromatic hydrocarbons and their ecological risks in surface seawater of Liaodong Bay, China.

    PubMed

    Wang, Ying; Wang, Juying; Mu, Jingli; Wang, Zhen; Cong, Yi; Yao, Ziwei; Lin, Zhongsheng

    2016-06-01

    Polycyclic aromatic hydrocarbons (PAHs), a class of ubiquitous pollutants in marine environments, exhibit moderate to high adverse effects on aquatic organisms and humans. However, the lack of PAH toxicity data for aquatic organisms has limited evaluation of their ecological risks. In the present study, aquatic predicted no-effect concentrations (PNECs) of 16 priority PAHs were derived based on species sensitivity distribution models, and their probabilistic ecological risks in seawater of Liaodong Bay, Bohai Sea, China, were assessed. A quantitative structure-activity relationship method was adopted to obtain the predicted chronic toxicity data for the PNEC derivation. Good agreement between aquatic PNECs of 8 PAHs based on predicted and experimental chronic toxicity data was observed (R² = 0.746), and the calculated PNECs ranged from 0.011 µg/L to 205.3 µg/L. A significant log-linear relationship also existed between the octanol-water partition coefficient and the PNECs derived from experimental toxicity data (R² = 0.757). A similar ordering of ecological risks for the 16 PAH species in seawater of Liaodong Bay was found by the probabilistic risk quotient and joint probability curve methods. The individually high ecological risks of benzo[a]pyrene, benzo[b]fluoranthene, and benz[a]anthracene warrant particular attention. The combined ecological risk of PAHs in seawater of Liaodong Bay calculated by the joint probability curve method was 13.9%, indicating a high risk as a result of co-exposure to PAHs. Environ Toxicol Chem 2016;35:1587-1593. © 2015 SETAC.

  6. A case cluster of variant Creutzfeldt-Jakob disease linked to the Kingdom of Saudi Arabia.

    PubMed

    Coulthart, Michael B; Geschwind, Michael D; Qureshi, Shireen; Phielipp, Nicolas; Demarsh, Alex; Abrams, Joseph Y; Belay, Ermias; Gambetti, Pierluigi; Jansen, Gerard H; Lang, Anthony E; Schonberger, Lawrence B

    2016-10-01

    As of mid-2016, 231 cases of variant Creutzfeldt-Jakob disease, the human form of bovine spongiform encephalopathy (a prion disease of cattle), have been reported from 12 countries. With few exceptions, the affected individuals had histories of extended residence in the UK or other Western European countries during the period (1980-96) of maximum global risk for human exposure to bovine spongiform encephalopathy. However, the possibility remains that other geographic foci of human infection exist, identification of which may help to foreshadow the future of the epidemic. We report results of a quantitative analysis of country-specific relative risks of infection for three individuals diagnosed with variant Creutzfeldt-Jakob disease in the USA and Canada. All were born and raised in Saudi Arabia, but had histories of residence and travel in other countries. To calculate country-specific relative probabilities of infection, we aligned each patient's life history with published estimates of probability distributions of the incubation period and age at infection parameters from a UK cohort of 171 variant Creutzfeldt-Jakob disease cases. The distributions were then partitioned into probability density fractions according to the time intervals of the patient's residence and travel history, and the density fractions were combined by country. This calculation was performed for incubation period alone, age at infection alone, and jointly for incubation period and age at infection. Country-specific fractions were normalized either to the total density between the individual's dates of birth and symptom onset ('lifetime') or to that between 1980 and 1996, for a total of six combinations of parameter and interval. The country-specific relative probability of infection for Saudi Arabia clearly ranked highest under each of the six parameter × interval combinations for Patients 1 and 2, with values ranging from 0.572 for Patient 2 (age at infection × lifetime) to 0.998 for Patient 1 (joint incubation and age at infection × 1980-96). For Patient 3, relative probabilities for Saudi Arabia were less distinct from those for other countries over the lifetime interval: 0.394, 0.360 and 0.378 for incubation period, age at infection, and the joint parameter, respectively. However, for this patient Saudi Arabia clearly ranked highest within the 1980-96 interval: 0.859, 0.871 and 0.865, respectively. These findings support the hypothesis that human infection with bovine spongiform encephalopathy occurred in Saudi Arabia. © Her Majesty the Queen in Right of Canada 2016. Reproduced with the permission of the Minister of Public Health.
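
    The density-partitioning step can be illustrated with a minimal sketch; the gamma incubation distribution and the residence history below are invented stand-ins for the published UK estimates and the patients' actual histories:

```python
import numpy as np
from scipy import stats

# Sketch of the partitioning step, with assumed numbers throughout: a gamma
# incubation-period distribution (the published UK estimates are not used
# here) and a made-up residence history.
incubation = stats.gamma(a=6.0, scale=2.0)   # years since infection, hypothetical

# (country, years_before_onset_start, years_before_onset_end)
history = [("Saudi Arabia", 8, 30), ("UK", 5, 8), ("USA", 0, 5)]

fractions = {}
for country, lo, hi in history:
    # Probability density falling in the interval of residence in this country.
    fractions[country] = incubation.cdf(hi) - incubation.cdf(lo)

total = sum(fractions.values())  # normalize over the lifetime window
for country, f in fractions.items():
    print(f"{country}: relative probability of infection {f / total:.3f}")
```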

  7. Statistical Analysis of Stress Signals from Bridge Monitoring by FBG System

    PubMed Central

    Ye, Xiao-Wei; Xi, Pei-Sen

    2018-01-01

    In this paper, a fiber Bragg grating (FBG)-based stress monitoring system instrumented on an orthotropic steel deck arch bridge is demonstrated. The FBG sensors are installed at two types of critical fatigue-prone welded joints to measure the strain and temperature signals. A total of 64 FBG sensors are deployed around the rib-to-deck and rib-to-diaphragm areas at the mid-span and quarter-span of the investigated orthotropic steel bridge. The local stress behaviors caused by the highway loading and temperature effect during the construction and operation periods are presented with the aid of a wavelet multi-resolution analysis approach. In addition, the multi-modal characteristic of the rainflow-counted stress spectrum is modeled by the method of finite mixture distributions together with a genetic algorithm (GA)-based parameter estimation approach. The optimal probability distribution of the stress spectrum is determined using the Bayesian information criterion (BIC). Furthermore, the hot spot stress of the welded joint is calculated by an extrapolation method recommended in the specification of the International Institute of Welding (IIW). The stochastic characteristic of the stress concentration factor (SCF) of the concerned welded joint is addressed. The proposed FBG-based stress monitoring system and probabilistic stress evaluation methods can provide an effective tool for structural monitoring and condition assessment of orthotropic steel bridges. PMID:29414850
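
    The mixture-plus-BIC step might look like the following sketch, which uses EM (scikit-learn) in place of the paper's GA-based parameter estimation and synthetic data in place of the rainflow-counted spectrum:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic multi-modal stress-range spectrum (MPa), standing in for
# rainflow-counted monitoring data.
stress = np.concatenate([rng.normal(10, 2, 4000),
                         rng.normal(35, 5, 1500),
                         rng.normal(60, 4, 500)]).reshape(-1, 1)

# Fit finite Gaussian mixtures (EM here, rather than the paper's GA) and
# pick the component count by minimum BIC.
models = [GaussianMixture(n_components=k, random_state=0).fit(stress)
          for k in range(1, 6)]
best = min(models, key=lambda m: m.bic(stress))
print("components chosen by BIC:", best.n_components)
```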

  8. An equivalent viscoelastic model for rock mass with parallel joints

    NASA Astrophysics Data System (ADS)

    Li, Jianchun; Ma, Guowei; Zhao, Jian

    2010-03-01

    An equivalent viscoelastic medium model is proposed for rock mass with parallel joints. A concept of "virtual wave source (VWS)" is proposed to take into account the wave reflections between the joints. The equivalent model can be effectively applied to analyze longitudinal wave propagation through discontinuous media with parallel joints. Parameters in the equivalent viscoelastic model are derived analytically based on longitudinal wave propagation across a single rock joint. The proposed model is then verified by applying identical incident waves to the discontinuous and equivalent viscoelastic media at one end to compare the output waves at the other end. When the wavelength of the incident wave is sufficiently long compared to the joint spacing, the effect of the VWS on wave propagation in rock mass is prominent. The results from the equivalent viscoelastic medium model are very similar to those determined from the displacement discontinuity method. Frequency dependence and joint spacing effect on the equivalent viscoelastic model and the VWS method are discussed.
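
    For context, here is a minimal sketch of the single-joint transmission coefficient in the displacement discontinuity model, the building block from which such equivalent-medium parameters are derived; the formula is the standard linear-slip result (e.g., Pyrak-Nolte et al.) rather than anything specific to this paper, and the property values are illustrative:

```python
import numpy as np

# Transmission magnitude of a normally incident P-wave across a single
# linear joint in the displacement discontinuity model:
# |T(w)| = 1 / sqrt(1 + (Z*w / (2*k_n))^2), with Z = rho*c the seismic
# impedance and k_n the specific joint stiffness. Values are illustrative.
rho, c = 2650.0, 5800.0          # rock density (kg/m^3), P-wave speed (m/s)
k_n = 3.5e9                      # joint specific stiffness (Pa/m)
Z = rho * c

freq = np.linspace(1.0, 2000.0, 200)          # Hz
omega = 2.0 * np.pi * freq
T = 1.0 / np.sqrt(1.0 + (Z * omega / (2.0 * k_n)) ** 2)

# Long wavelengths pass almost unattenuated, consistent with the abstract's
# remark about wavelength relative to joint spacing.
print(f"|T| at {freq[0]:.0f} Hz: {T[0]:.3f}; at {freq[-1]:.0f} Hz: {T[-1]:.3f}")
```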

  9. General methods for sensitivity analysis of equilibrium dynamics in patch occupancy models

    USGS Publications Warehouse

    Miller, David A.W.

    2012-01-01

    Sensitivity analysis is a useful tool for the study of ecological models that has many potential applications for patch occupancy modeling. Drawing from the rich foundation of existing methods for Markov chain models, I demonstrate new methods for sensitivity analysis of the equilibrium state dynamics of occupancy models. Estimates from three previous studies are used to illustrate the utility of the sensitivity calculations: a joint occupancy model for a prey species, its predators, and habitat used by both; occurrence dynamics from a well-known metapopulation study of three butterfly species; and Golden Eagle occupancy and reproductive dynamics. I show how to deal efficiently with multistate models and how to calculate sensitivities involving derived state variables and lower-level parameters. In addition, I extend methods to incorporate environmental variation by allowing for spatial and temporal variability in transition probabilities. The approach used here is concise and general and can fully account for environmental variability in transition parameters. The methods can be used to improve inferences in occupancy studies by quantifying the effects of underlying parameters, aiding prediction of future system states, and identifying priorities for sampling effort.
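
    To make the idea concrete, here is a toy sketch for the simplest special case, the single-species dynamic occupancy model, rather than the paper's general multistate machinery; parameter values are invented:

```python
# Equilibrium occupancy for the basic single-species dynamic occupancy model
# (colonization gamma, local extinction eps): psi* = gamma / (gamma + eps).
# This classic special case is only a toy stand-in for the paper's general
# Markov-chain methods; parameter values are invented.
gamma, eps = 0.30, 0.10

def psi_star(g, e):
    return g / (g + e)

# Analytic sensitivities of the equilibrium state.
d_psi_d_gamma = eps / (gamma + eps) ** 2
d_psi_d_eps = -gamma / (gamma + eps) ** 2

# Finite-difference check.
h = 1e-6
fd_gamma = (psi_star(gamma + h, eps) - psi_star(gamma, eps)) / h
print(f"d psi*/d gamma: analytic {d_psi_d_gamma:.4f}, numeric {fd_gamma:.4f}")
```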

  10. Investigation of 2‐stage meta‐analysis methods for joint longitudinal and time‐to‐event data through simulation and real data application

    PubMed Central

    Tudur Smith, Catrin; Gueyffier, François; Kolamunnage‐Dona, Ruwanthi

    2017-01-01

    Background Joint modelling of longitudinal and time‐to‐event data is often preferred over separate longitudinal or time‐to‐event analyses, as it can account for study dropout, error in longitudinally measured covariates, and correlation between longitudinal and time‐to‐event outcomes. The joint modelling literature focuses mainly on the analysis of single studies, with no methods currently available for the meta‐analysis of joint model estimates from multiple studies. Methods We propose a 2‐stage method for the meta‐analysis of joint model estimates. The method is applied to the INDANA dataset to combine joint model estimates of systolic blood pressure with time to death, time to myocardial infarction, and time to stroke. Results are compared to meta‐analyses of separate longitudinal or time‐to‐event models. A simulation study is conducted to contrast separate versus joint analyses over a range of scenarios. Results Using the real dataset, similar results were obtained from the separate and joint analyses. However, the simulation study indicated a benefit of joint rather than separate methods in a meta‐analytic setting where association exists between the longitudinal and time‐to‐event outcomes. Conclusions Where evidence of association between longitudinal and time‐to‐event outcomes exists, estimates from joint models, rather than from standalone analyses, should be pooled in 2‐stage meta‐analyses. PMID:29250814
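
    A minimal sketch of the second stage, pooling per-study joint-model estimates with a standard DerSimonian-Laird random-effects meta-analysis (the input numbers are hypothetical):

```python
import numpy as np

def dersimonian_laird(estimates, variances):
    """Second-stage random-effects pooling of per-study joint-model
    association estimates (DerSimonian-Laird). Inputs are hypothetical."""
    y = np.asarray(estimates, float)
    v = np.asarray(variances, float)
    w = 1.0 / v
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)                 # Cochran's Q
    df = len(y) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                      # between-study variance
    w_star = 1.0 / (v + tau2)
    pooled = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, se

# E.g., log hazard ratios linking the longitudinal marker to the event:
print(dersimonian_laird([0.21, 0.35, 0.18, 0.40], [0.010, 0.020, 0.015, 0.030]))
```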

  11. Implementation of a gait cycle loading into healthy and meniscectomised knee joint models with fibril-reinforced articular cartilage.

    PubMed

    Mononen, Mika E; Jurvelin, Jukka S; Korhonen, Rami K

    2015-01-01

    Computational models can be used to evaluate the functional properties of knee joints and possible risk locations within joints. Current models with fibril-reinforced cartilage layers do not provide information about realistic human movement during walking. This study aimed to evaluate stresses and strains within a knee joint by implementing load data from a gait cycle in healthy and meniscectomised knee joint models with fibril-reinforced cartilages. A 3D finite element model of a knee joint with cartilages and menisci was created from magnetic resonance images. The gait cycle data from varying joint rotations, translations and axial forces were taken from experimental studies and implemented into the model. Cartilage layers were modelled as a fibril-reinforced poroviscoelastic material with the menisci considered as a transversely isotropic elastic material. In the normal knee joint model, relatively high maximum principal stresses were specifically predicted to occur in the medial condyle of the knee joint during the loading response. Bilateral meniscectomy increased stresses, strains and fluid pressures in cartilage on the lateral side, especially during the first 50% of the stance phase of the gait cycle. During the entire stance phase, the superficial collagen fibrils modulated stresses of cartilage, especially in the medial tibial cartilage. The present computational model with a gait cycle and fibril-reinforced biphasic cartilage revealed time- and location-dependent differences in stresses, strains and fluid pressures occurring in cartilage during walking. The lateral meniscus was observed to have a more significant role in distributing loads across the knee joint than the medial meniscus, suggesting that meniscectomy might initiate a post-traumatic process leading to osteoarthritis at the lateral compartment of the knee joint.

  12. Qualitative and quantitative descriptions of glenohumeral motion.

    PubMed

    Hill, A M; Bull, A M J; Wallace, A L; Johnson, G R

    2008-02-01

    Joint modelling plays an important role in qualitative and quantitative descriptions of both normal and abnormal joints, as well as in predicting the outcomes of alterations to joints in orthopaedic practice and research. Contemporary efforts in modelling have focussed upon the major articulations of the lower limb. Well-constrained arthrokinematics can form the basis of manageable kinetic and dynamic mathematical predictions. In order to keep the computation of shoulder complex modelling manageable, glenohumeral joint representations in both limited and complete shoulder girdle models have undergone a generic simplification. As such, glenohumeral joint models are often based upon kinematic descriptions with inadequate degrees of freedom (DOF) for clinical purposes and applications. Qualitative descriptions of glenohumeral motion range from the parody of a hinge joint to the complex realism of a spatial joint. In developing a model, a clear idea of its intention is required in order to achieve the required application. Clinical applicability of a model requires both descriptive and predictive output potentials, and as such, a high level of validation is required. Without sufficient appreciation of the clinical intention of the arthrokinematic foundation of a model, error is all too easily introduced. Mathematical description of joint motion serves to quantify all relevant clinical parameters. Commonly, both the Euler angle and helical (screw) axis methods have been applied to the glenohumeral joint, although concordance between these methods and classical anatomical appreciation of joint motion is limited, resulting in miscommunication between clinician and engineer. Compounding these inconsistencies in motion quantification are gimbal lock and sequence dependency.
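
    The sequence dependency is easy to demonstrate numerically; the sketch below (arbitrary example angles, not data from the paper) decomposes one and the same rotation under two Euler conventions:

```python
from scipy.spatial.transform import Rotation as R

# One and the same rotation, decomposed with two different Euler sequences:
# the angle triplets disagree, which is the sequence dependency noted above.
rot = R.from_euler("zyx", [40, 25, 10], degrees=True)

print(rot.as_euler("zyx", degrees=True))  # [40. 25. 10.]
print(rot.as_euler("xyz", degrees=True))  # a different triplet, same rotation
```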

  13. Low Activation Joining of SiC/SiC Composites for Fusion Applications: Modeling Miniature Torsion Tests with Elastic and Elastic-Plastic Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Henager, Charles H.; Nguyen, Ba Nghiep; Kurtz, Richard J.

    2015-06-30

    The international fusion community designed miniature torsion specimens for joint testing and irradiation in test reactors with limited irradiation volumes, since SiC and SiC-composites used in fission or fusion environments require joining methods for assembling systems. Torsion specimens fail out-of-plane when joints are strong and when elastic moduli are comparable to SiC, which causes difficulties in determining shear strengths for many joints or for comparing unirradiated and irradiated joints. A finite element damage model was developed to treat elastic joints such as SiC/Ti3SiC2+SiC and elastic-plastic joints such as SiC/epoxy and steel/epoxy. The model uses constitutive shear data and is validated using epoxy joint data. The elastic model indicates fracture is likely to occur within the joined pieces, causing out-of-plane failures for miniature torsion specimens, when a certain modulus and strength ratio between the joint material and the joined material exists. Lower modulus epoxy joints always fail in plane and provide good model validation.

  14. Maximum voluntary joint torque as a function of joint angle and angular velocity: model development and application to the lower limb.

    PubMed

    Anderson, Dennis E; Madigan, Michael L; Nussbaum, Maury A

    2007-01-01

    Measurements of human strength can be important during analyses of physical activities. Such measurements have often taken the form of the maximum voluntary torque at a single joint angle and angular velocity. However, the available strength varies substantially with joint position and velocity. When examining dynamic activities, strength measurements should account for these variations. A model is presented of maximum voluntary joint torque as a function of joint angle and angular velocity. The model is based on well-known physiological relationships between muscle force and length and between muscle force and velocity and was tested by fitting it to maximum voluntary joint torque data from six different exertions in the lower limb. Isometric, concentric and eccentric maximum voluntary contractions were collected during hip extension, hip flexion, knee extension, knee flexion, ankle plantar flexion and dorsiflexion. Model parameters are reported for each of these exertion directions by gender and age group. This model provides an efficient method by which strength variations with joint angle and angular velocity may be incorporated into comparisons between joint torques calculated by inverse dynamics and the maximum available joint torques.
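
    A generic sketch of such a torque surface, combining a Gaussian force-length factor with a Hill-type force-velocity factor; the functional forms and every coefficient below are illustrative assumptions, not the authors' fitted model:

```python
import numpy as np

# Generic torque surface T(theta, omega) = Tmax * f_l(theta) * f_v(omega),
# combining a Gaussian force-length factor with a Hill-type force-velocity
# factor. A sketch in the spirit of the model described above; the authors'
# actual parameterization and fitted coefficients are not used.
def torque(theta, omega, t_max=150.0, theta_opt=1.2, width=0.5,
           omega_max=10.0, curvature=0.25):
    f_l = np.exp(-((theta - theta_opt) / width) ** 2)       # angle effect
    if omega >= 0:                                          # concentric
        f_v = max((omega_max - omega) / (omega_max + omega / curvature), 0.0)
    else:                                                   # eccentric plateau
        f_v = 1.3
    return t_max * f_l * f_v

print(f"{torque(1.2, 0.0):.1f} N*m at optimum angle, isometric")
print(f"{torque(1.2, 5.0):.1f} N*m concentric at 5 rad/s")
```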

  15. Structural behavior of the space shuttle SRM Tang-Clevis joint

    NASA Technical Reports Server (NTRS)

    Greene, W. H.; Knight, N. F., Jr.; Stockwell, A. E.

    1986-01-01

    The space shuttle Challenger accident investigation focused on the failure of a tang-clevis joint on the right solid rocket motor. The existence of relative motion between the inner arm of the clevis and the O-ring sealing surface on the tang has been identified as a potential contributor to this failure. This motion can cause the O-rings to become unseated and therefore lose their sealing capability. Finite element structural analyses have been performed to predict both deflections and stresses in the joint under the primary, pressure loading condition. These analyses have demonstrated the difficulty of accurately predicting the structural behavior of the tang-clevis joint. Stresses in the vicinity of the connecting pins, obtained from elastic analyses, considerably exceed the material yield allowables indicating that inelastic analyses are probably necessary. Two modifications have been proposed to control the relative motion between the inner clevis arm and the tang at the O-ring sealing surface. One modification, referred to as the capture feature, uses additional material on the inside of the tang to restrict motion of the inner clevis arm. The other modification uses external stiffening rings above and below the joint to control the local bending in the shell near the joint. Both of these modifications are shown to be effective in controlling the relative motion in the joint.

  16. Structural behavior of the space shuttle SRM tang-clevis joint

    NASA Technical Reports Server (NTRS)

    Greene, William H.; Knight, Norman F., Jr.; Stockwell, Alan E.

    1988-01-01

    The space shuttle Challenger accident investigation focused on the failure of a tang-clevis joint on the right solid rocket motor. The existence of relative motion between the inner arm of the clevis and the O-ring sealing surface on the tang has been identified as a potential contributor to this failure. This motion can cause the O-rings to become unseated and therefore lose their sealing capability. Finite element structural analyses have been performed to predict both deflections and stresses in the joint under the primary, pressure loading condition. These analyses have demonstrated the difficulty of accurately predicting the structural behavior of the tang-clevis joint. Stresses in the vicinity of the connecting pins, obtained from elastic analyses, considerably exceed the material yield allowables indicating that inelastic analyses are probably necessary. Two modifications have been proposed to control the relative motion between the inner clevis arm and the tang at the O-ring sealing surface. One modification, referred to as the capture feature, uses additional material on the inside of the tang to restrict motion of the inner clevis arm. The other modification uses external stiffening rings above and below the joint to control the local bending in the shell near the joint. Both of these modifications are shown to be effective in controlling the relative motion in the joint.

  17. Probability distribution and statistical properties of spherically compensated cosmic regions in ΛCDM cosmology

    NASA Astrophysics Data System (ADS)

    Alimi, Jean-Michel; de Fromont, Paul

    2018-04-01

    The statistical properties of cosmic structures are well known to be strong probes for cosmology. In particular, several studies have tried to use cosmic void counts to obtain tight constraints on dark energy. In this paper, we model the statistical properties of these regions using the CoSphere formalism (de Fromont & Alimi) in both the primordial and the non-linearly evolved Universe in the standard Λ cold dark matter model. This formalism applies similarly to minima (voids) and maxima (such as DM haloes), which are here considered symmetrically. We first derive the full joint Gaussian distribution of CoSphere's parameters in the Gaussian random field. We recover the results of Bardeen et al. only in the limit where the compensation radius becomes very large, i.e. when the central extremum decouples from its cosmic environment. We compute the probability distribution of the compensation size in this primordial field. We show that this distribution is redshift independent and can be used to model the cosmic void size distribution. We also derive the statistical distribution of the peak parameters introduced by Bardeen et al. and discuss their correlation with the cosmic environment. We show that small central extrema with low density are associated with narrow compensation regions with deep compensation density, while higher central extrema are preferentially located in larger but smoother over/under massive regions.

  18. Characterization of the spatial variability of channel morphology

    USGS Publications Warehouse

    Moody, J.A.; Troutman, B.M.

    2002-01-01

    The spatial variability of two fundamental morphological variables is investigated for rivers having a wide range of discharge (five orders of magnitude). The variables, water-surface width and average depth, were measured at 58 to 888 equally spaced cross-sections in channel links (river reaches between major tributaries). These measurements provide data to characterize the two-dimensional structure of a channel link, which is the fundamental unit of a channel network. The morphological variables have nearly log-normal probability distributions. A general relation was determined which relates the means of the log-transformed variables to the logarithm of discharge, similar to previously published downstream hydraulic geometry relations. The spatial variability of the variables is described by two properties: (1) the coefficient of variation, which was nearly constant (0.13-0.42) over a wide range of discharge; and (2) the integral length scale in the downstream direction, which was approximately equal to one to two mean channel widths. The joint probability distribution of the morphological variables in the downstream direction was modelled as a first-order, bivariate autoregressive process. This model accounted for up to 76 per cent of the total variance. The two-dimensional morphological variables can be scaled such that the channel width-depth process is independent of discharge. The scaling properties will be valuable to modellers of both basin and channel dynamics. Published in 2002 John Wiley and Sons, Ltd.
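
    A minimal sketch of such a first-order bivariate autoregressive model for log width and log depth along the channel; the coefficient matrix and noise covariance are invented, not the fitted values from the study:

```python
import numpy as np

rng = np.random.default_rng(1)

# First-order bivariate autoregressive (VAR(1)) sketch for log width w and
# log depth d along the channel: x_{k+1} = A @ x_k + e_k. The coefficient
# matrix and noise covariance are invented, not the fitted values.
A = np.array([[0.6, 0.2],
              [0.1, 0.5]])
noise_cov = np.array([[0.02, 0.005],
                      [0.005, 0.01]])

n_sections = 500
x = np.zeros((n_sections, 2))
chol = np.linalg.cholesky(noise_cov)
for k in range(n_sections - 1):
    x[k + 1] = A @ x[k] + chol @ rng.standard_normal(2)

# Downstream autocorrelation of log width decays approximately geometrically,
# giving an integral length scale of order one channel width.
w = x[:, 0]
lag1 = np.corrcoef(w[:-1], w[1:])[0, 1]
print(f"lag-1 autocorrelation of log width: {lag1:.2f}")
```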

  19. The influence of coordinated defects on inhomogeneous broadening in cubic lattices

    NASA Astrophysics Data System (ADS)

    Matheson, P. L.; Sullivan, Francis P.; Evenson, William E.

    2016-12-01

    The joint probability distribution function (JPDF) of electric field gradient (EFG) tensor components in cubic materials is dominated by coordinated pairings of defects in shells near probe nuclei. The contributions from these inner-shell combinations and their surrounding structures contain the essential physics that determines the PAC-relevant quantities derived from them. The JPDF can be used to predict the nature of inhomogeneous broadening (IHB) in perturbed angular correlation (PAC) experiments by modeling the G2 spectrum and finding expectation values for Vzz and η. The ease with which this can be done depends upon the representation of the JPDF. Expanding on an earlier work by Czjzek et al. (Hyperfine Interact. 14, 189-194, 1983), Evenson et al. (Hyperfine Interact. 237, 119, 2016) provide a set of coordinates constructed from the EFG tensor invariants they named W1 and W2. Using this parameterization, the JPDF in cubic structures was constructed using a point charge model in which a single trapped defect (TD) is the nearest neighbor to a probe nucleus. Individual defects on nearby lattice sites pair with the TD to provide a locus of points in the W1-W2 plane around which an amorphous-like distribution of probability density grows. Interestingly, however, marginal, separable PDFs appear adequate for modeling the IHB-relevant cases. We present cases from simulations in cubic materials illustrating the importance of these near-shell coordinations.

  20. Bayesian model averaging using particle filtering and Gaussian mixture modeling: Theory, concepts, and simulation experiments

    NASA Astrophysics Data System (ADS)

    Rings, Joerg; Vrugt, Jasper A.; Schoups, Gerrit; Huisman, Johan A.; Vereecken, Harry

    2012-05-01

    Bayesian model averaging (BMA) is a standard method for combining predictive distributions from different models. In recent years, this method has enjoyed widespread application and use in many fields of study to improve the spread-skill relationship of forecast ensembles. The BMA predictive probability density function (pdf) of any quantity of interest is a weighted average of pdfs centered around the individual (possibly bias-corrected) forecasts, where the weights are equal to the posterior probabilities of the models generating the forecasts and reflect each model's skill over a training (calibration) period. The original BMA approach presented by Raftery et al. (2005) assumes that the conditional pdf of each individual model is adequately described by a rather standard Gaussian or Gamma statistical distribution, possibly with a heteroscedastic variance. Here we analyze the advantages of using BMA with a flexible representation of the conditional pdf. A joint particle filtering and Gaussian mixture modeling framework is presented to derive analytically, as closely and consistently as possible, the evolving forecast density (conditional pdf) of each constituent ensemble member. The median forecasts and evolving conditional pdfs of the constituent models are subsequently combined using BMA to derive one overall predictive distribution. This paper introduces the theory and concepts of this new ensemble postprocessing method, and demonstrates its usefulness and applicability through numerical simulation of the rainfall-runoff transformation using discharge data from three different catchments in the contiguous United States. The revised BMA method achieves significantly lower prediction errors than the original default BMA method (due to filtering), with predictive uncertainty intervals that are substantially smaller but still statistically coherent (due to the use of a time-variant conditional pdf).
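
    The core BMA construction, the predictive pdf as a weighted mixture of member pdfs, can be sketched as follows; Gaussian members with a common spread are used for simplicity, as in the original default formulation, and the forecasts, weights, and sigma are made up:

```python
import numpy as np
from scipy import stats

# BMA predictive density as a weighted mixture of member pdfs:
# p(y | f_1..f_K) = sum_k w_k * g_k(y | f_k). Gaussian members with a
# common spread; forecasts, weights and sigma below are made up.
forecasts = np.array([2.1, 2.6, 1.8])   # bias-corrected member forecasts
weights = np.array([0.5, 0.3, 0.2])     # posterior model probabilities
sigma = 0.4

def bma_pdf(y):
    return np.sum(weights * stats.norm.pdf(y, loc=forecasts, scale=sigma))

ys = np.linspace(0.0, 4.0, 401)
dens = np.array([bma_pdf(y) for y in ys])
print(f"BMA predictive mean: {np.sum(weights * forecasts):.2f}")
print(f"mode near y = {ys[np.argmax(dens)]:.2f}")
```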

  1. Partially linear mixed-effects joint models for skewed and missing longitudinal competing risks outcomes.

    PubMed

    Lu, Tao; Lu, Minggen; Wang, Min; Zhang, Jun; Dong, Guang-Hui; Xu, Yong

    2017-12-18

    Longitudinal competing risks data frequently arise in clinical studies. Skewness and missingness are commonly observed for these data in practice. However, most joint models do not account for these data features. In this article, we propose partially linear mixed-effects joint models to analyze skewed longitudinal competing risks data with missingness. In particular, to account for skewness, we replace the commonly assumed symmetric distributions with asymmetric distributions for the model errors. To deal with missingness, we employ an informative missing data model. We develop joint models that couple the partially linear mixed-effects model for the longitudinal process, the cause-specific proportional hazards model for the competing risks process, and the missing data process. To estimate the parameters in the joint models, we propose a fully Bayesian approach based on the joint likelihood. To illustrate the proposed model and method, we apply them to an AIDS clinical study, and some interesting findings are reported. We also conduct simulation studies to validate the proposed method.

  2. Efficient finite element modelling for the investigation of the dynamic behaviour of a structure with bolted joints

    NASA Astrophysics Data System (ADS)

    Omar, R.; Rani, M. N. Abdul; Yunus, M. A.; Mirza, W. I. I. Wan Iskandar; Zin, M. S. Mohd

    2018-04-01

    A simple structure with bolted joints consists of the structural components, bolts and nuts. There are several methods of modelling structures with bolted joints; however, there is no reliable, efficient and economical modelling method that can accurately predict their dynamic behaviour. Explained in this paper is an investigation that was conducted to obtain an appropriate modelling method for bolted joints. This was carried out by evaluating four different finite element (FE) models of the assembled plates and bolts, namely the solid plates-bolts model, the plates without bolts model, the hybrid plates-bolts model and the simplified plates-bolts model. FE modal analysis was conducted for all four initial FE models of the bolted joints. Results of the FE modal analysis were compared with the experimental modal analysis (EMA) results. EMA was performed to extract the natural frequencies and mode shapes of the physical test structure with bolted joints. The evaluation was made by comparing the number of nodes, the number of elements, the elapsed computer processing unit (CPU) time, and the total percentage of errors of each initial FE model when compared with the EMA result. The evaluation showed that the simplified plates-bolts model could most accurately predict the dynamic behaviour of the structure with bolted joints. This study proved that reliable, efficient and economical modelling of bolted joints, mainly the representation of the bolting, plays a crucial role in ensuring the accuracy of the dynamic behaviour prediction.

  3. Dynamic Analyses Including Joints Of Truss Structures

    NASA Technical Reports Server (NTRS)

    Belvin, W. Keith

    1991-01-01

    Method for mathematically modeling joints to assess influences of joints on dynamic response of truss structures developed in study. Only structures with low-frequency oscillations considered; only Coulomb friction and viscous damping included in analysis. Focus of effort to obtain finite-element mathematical models of joints exhibiting load-vs.-deflection behavior similar to measured load-vs.-deflection behavior of real joints. Experiments performed to determine stiffness and damping nonlinearities typical of joint hardware. Algorithm for computing coefficients of analytical joint models based on test data developed to enable study of linear and nonlinear effects of joints on global structural response. Besides intended application to large space structures, applications in nonaerospace community include ground-based antennas and earthquake-resistant steel-framed buildings.

  4. Association between disk position and degenerative bone changes of the temporomandibular joints: an imaging study in subjects with TMD.

    PubMed

    Cortés, Daniel; Sylvester, Daniel Cortés; Exss, Eduardo; Marholz, Carlos; Millas, Rodrigo; Moncada, Gustavo

    2011-04-01

    The aim of this study was to determine the frequency of, and relationship between, disk position and degenerative bone changes in the temporomandibular joints (TMJ) in subjects with internal derangement (ID). MRI and CT scans of 180 subjects with temporomandibular disorders (TMD) were studied. Several image parameters or characteristics were observed, such as disk position, joint effusion, condyle movement, degenerative bone changes (flattening, cortical erosions and irregularities), osteophytes, subchondral cysts and idiopathic condyle resorption. The present study concluded that there is a significant association between disk displacement without reduction and degenerative bone changes in patients with TMD. The study also found a high probability of degenerative bone changes when disk displacement without reduction is present. No association was found between TMD and condyle range of motion, joint effusion and/or degenerative bone changes. The most frequent morphological changes observed, in decreasing order, were flattening of the anterior surface of the condyle; erosions and irregularities of the joint surfaces; flattening of the articular surface of the temporal eminence; subchondral cysts; osteophytes; and idiopathic condyle resorption.

  5. Diffusion welding of MA 6000 and a conventional nickel-base superalloy

    NASA Technical Reports Server (NTRS)

    Moore, T. J.; Glasgow, T. K.

    1985-01-01

    A feasibility study of diffusion welding the oxide dispersion strengthened (ODS) alloy MA 6000 to itself and to the conventional Ni-base superalloy Udimet 700 was conducted. Butt joints between MA 6000 pieces and lap joints between Udimet 700 and the ODS alloy were produced by hot pressing for 1.25 hr at temperatures ranging from 1000 to 1200 C (1832-2192 F) in vacuum. Following pressing, all weldments were heat treated and machined into mechanical property test specimens. While three different combinations of recrystallized and unrecrystallized MA 6000 butt joints were produced, the unrecrystallized-to-unrecrystallized joint was the most successful, as determined by mechanical properties and microstructural examination. Failure to weld the recrystallized material was probably related to a lack of adequate deformation at the weld interface. While recrystallized MA 6000 could be diffusion welded to Udimet 700 in places, complete welding over the entire lap joint was not achieved, again due to the lack of sufficient deformation at the faying surfaces. Several methods are proposed to promote the intimate contact necessary for diffusion welding MA 6000 to itself and to superalloys.

  6. Analytical and Experimental Assessment of Seismic Vulnerability of Beam-Column Joints without Transverse Reinforcement in Concrete Buildings

    NASA Astrophysics Data System (ADS)

    Hassan, Wael Mohammed

    Beam-column joints in concrete buildings are key components to ensure structural integrity of building performance under seismic loading. Earthquake reconnaissance has reported the substantial damage that can result from inadequate beam-column joints. In some cases, failure of older-type corner joints appears to have led to building collapse. Since the 1960s, many advances have been made to improve the seismic performance of building components, including beam-column joints. New design and detailing approaches are expected to produce new construction that will perform satisfactorily during strong earthquake shaking. Much less attention has been focused on the beam-column joints of older construction that may be seismically vulnerable. Concrete buildings constructed prior to the development of details for ductility in the 1970s normally lack joint transverse reinforcement. The available literature concerning the performance of such joints is relatively limited, but concerns about their performance exist. The current study aimed to improve understanding and assessment of the seismic performance of unconfined exterior and corner beam-column joints in existing buildings. An extensive literature survey was performed, leading to the development of a database of about a hundred tests. Study of the data enabled identification of the most important parameters and the effect of each parameter on seismic performance. The available analytical models and guidelines for strength and deformability assessment of unconfined joints were surveyed and evaluated. In particular, the ASCE 41 existing-building document proved to be substantially conservative in its estimates of joint shear strength. Upon identifying deficiencies in these models, two new joint shear strength models, a bond capacity model, and two axial capacity models designed and tailored specifically for unconfined beam-column joints were developed. The proposed models correlated strongly with previous test results. In the laboratory testing phase of the current study, four full-scale corner beam-column joint subassemblies, with slab included, were designed, built, instrumented, tested, and analyzed. The specimens were tested under unidirectional and bidirectional displacement-controlled quasi-static loading that incorporated varying axial loads simulating overturning seismic moment effects. The axial loads varied between tension and high compression loads reaching about 50% of the column axial capacity. The test parameters were axial load level, loading history, joint aspect ratio, and beam reinforcement ratio. The test results showed that high axial load increases joint shear strength and decreases the deformability of joints failing in a pure shear mode without beam yielding. On the contrary, high axial load did not affect the strength of joints failing in shear after significant beam yielding; however, it substantially increased their displacement ductility. Joint aspect ratio proved to be instrumental in determining joint shear strength; that is, the deeper the joint, the lower the shear strength. Bidirectional loading reduced the apparent strength of the joint along the uniaxial principal axes; however, a circular shear strength interaction is an appropriate approximation for predicting the biaxial strength. The developed shear strength models successfully predicted the strength of the test specimens.
Based on the literature database investigation, the shear and axial capacity models developed, and the test results of the current study, an analytical finite element component model based on a proposed joint shear stress-rotation backbone constitutive curve was developed to represent the behavior of unconfined beam-column joints in computer numerical simulations of concrete frame buildings. The proposed finite element model includes the effects of axial load, mode of joint failure, joint aspect ratio and axial capacity of the joint. The proposed backbone curve, along with the developed joint element, exhibited high accuracy in simulating the test response of the current test specimens as well as previous test joints. Finally, a parametric study was conducted to assess the axial failure vulnerability of unconfined beam-column joints based on the developed shear and axial capacity models. This parametric study compared the axial failure potential of unconfined beam-column joints with that of shear-critical columns to provide preliminary insight into the axial collapse vulnerability of older-type buildings during intense ground shaking.

  7. Joint Control for Dummies: An Elaboration of Lowenkron's Model of Joint (Stimulus) Control

    ERIC Educational Resources Information Center

    Sidener, David W.

    2006-01-01

    The following paper describes Lowenkron's model of joint (stimulus) control. Joint control is described as a means of accounting for performances, especially generalized performances, for which a history of contingency control does not provide an adequate account. Examples are provided to illustrate instances in which joint control may facilitate…

  8. The evolution and consequences of sex-specific reproductive variance.

    PubMed

    Mullon, Charles; Reuter, Max; Lehmann, Laurent

    2014-01-01

    Natural selection favors alleles that increase the number of offspring produced by their carriers. But in a world that is inherently uncertain within generations, selection also favors alleles that reduce the variance in the number of offspring produced. If previous studies have established this principle, they have largely ignored fundamental aspects of sexual reproduction and therefore how selection on sex-specific reproductive variance operates. To study the evolution and consequences of sex-specific reproductive variance, we present a population-genetic model of phenotypic evolution in a dioecious population that incorporates previously neglected components of reproductive variance. First, we derive the probability of fixation for mutations that affect male and/or female reproductive phenotypes under sex-specific selection. We find that even in the simplest scenarios, the direction of selection is altered when reproductive variance is taken into account. In particular, previously unaccounted for covariances between the reproductive outputs of different individuals are expected to play a significant role in determining the direction of selection. Then, the probability of fixation is used to develop a stochastic model of joint male and female phenotypic evolution. We find that sex-specific reproductive variance can be responsible for changes in the course of long-term evolution. Finally, the model is applied to an example of parental-care evolution. Overall, our model allows for the evolutionary analysis of social traits in finite and dioecious populations, where interactions can occur within and between sexes under a realistic scenario of reproduction.

  9. The Evolution and Consequences of Sex-Specific Reproductive Variance

    PubMed Central

    Mullon, Charles; Reuter, Max; Lehmann, Laurent

    2014-01-01

    Natural selection favors alleles that increase the number of offspring produced by their carriers. But in a world that is inherently uncertain within generations, selection also favors alleles that reduce the variance in the number of offspring produced. If previous studies have established this principle, they have largely ignored fundamental aspects of sexual reproduction and therefore how selection on sex-specific reproductive variance operates. To study the evolution and consequences of sex-specific reproductive variance, we present a population-genetic model of phenotypic evolution in a dioecious population that incorporates previously neglected components of reproductive variance. First, we derive the probability of fixation for mutations that affect male and/or female reproductive phenotypes under sex-specific selection. We find that even in the simplest scenarios, the direction of selection is altered when reproductive variance is taken into account. In particular, previously unaccounted for covariances between the reproductive outputs of different individuals are expected to play a significant role in determining the direction of selection. Then, the probability of fixation is used to develop a stochastic model of joint male and female phenotypic evolution. We find that sex-specific reproductive variance can be responsible for changes in the course of long-term evolution. Finally, the model is applied to an example of parental-care evolution. Overall, our model allows for the evolutionary analysis of social traits in finite and dioecious populations, where interactions can occur within and between sexes under a realistic scenario of reproduction. PMID:24172130

  10. Making Peace Pay: Post-Conflict Economic and Infrastructure Development in Kosovo and Iraq

    DTIC Science & Technology

    probability that a post-conflict state will return to war. Additionally, consecutive presidential administrations and joint doctrine have declared...and Iraqi Freedom as historical case studies to demonstrate that the armed forces possess unique advantages, to include physical presence and

  11. Evaluating carbon storage, timber harvest, and habitat possibilities for a Western Cascades (USA) forest landscape.

    PubMed

    Kline, Jeffrey D; Harmon, Mark E; Spies, Thomas A; Morzillo, Anita T; Pabst, Robert J; McComb, Brenda C; Schnekenburger, Frank; Olsen, Keith A; Csuti, Blair; Vogeler, Jody C

    2016-10-01

    Forest policymakers and managers have long sought ways to evaluate the capability of forest landscapes to jointly produce timber, habitat, and other ecosystem services in response to forest management. Currently, carbon is of particular interest as policies for increasing carbon storage on federal lands are being proposed. However, a challenge in joint production analysis of forest management is adequately representing ecological conditions and processes that influence joint production relationships. We used simulation models of vegetation structure, forest sector carbon, and potential wildlife habitat to characterize landscape-level joint production possibilities for carbon storage, timber harvest, and habitat for seven wildlife species across a range of forest management regimes. We sought to (1) characterize the general relationships of production possibilities for combinations of carbon storage, timber, and habitat, and (2) identify management variables that most influence joint production relationships. Our 160 000-ha study landscape featured environmental conditions typical of forests in the Western Cascade Mountains of Oregon (USA). Our results indicate that managing forests for carbon storage involves trade-offs among timber harvest and habitat for focal wildlife species, depending on the disturbance interval and utilization intensity followed. Joint production possibilities for wildlife species varied in shape, ranging from competitive to complementary to compound, reflecting niche breadth and habitat component needs of species examined. Managing Pacific Northwest forests to store forest sector carbon can be roughly complementary with habitat for Northern Spotted Owl, Olive-sided Flycatcher, and red tree vole. However, managing forests to increase carbon storage potentially can be competitive with timber production and habitat for Pacific marten, Pileated Woodpecker, and Western Bluebird, depending on the disturbance interval and harvest intensity chosen. Our analysis suggests that joint production possibilities under forest management regimes currently typical on industrial forest lands (e.g., 40- to 80-yr rotations with some tree retention for wildlife) represent but a small fraction of joint production outcomes possible in the region. Although the theoretical boundaries of the production possibilities sets we developed are probably unachievable in the current management environment, they arguably define the long-term potential of managing forests to produce multiple ecosystem services within and across multiple forest ownerships. © 2016 by the Ecological Society of America.

  12. Joint detection and tracking of size-varying infrared targets based on block-wise sparse decomposition

    NASA Astrophysics Data System (ADS)

    Li, Miao; Lin, Zaiping; Long, Yunli; An, Wei; Zhou, Yiyu

    2016-05-01

    The high variability of target size makes small target detection in Infrared Search and Track (IRST) a challenging task. A joint detection and tracking method based on block-wise sparse decomposition is proposed to address this problem. For detection, the infrared image is divided into overlapped blocks, and each block is weighted based on the local image complexity and target existence probabilities. Target-background decomposition is solved by block-wise inexact augmented Lagrange multipliers. For tracking, a labeled multi-Bernoulli (LMB) tracker tracks multiple targets, taking the result of single-frame detection as input, and provides the corresponding target existence probabilities for detection. Unlike fixed-size methods, the proposed method can accommodate size-varying targets, since no special assumptions are made about the size and shape of small targets. Because the decomposition is exact, classical target measurements are extended and additional direction information is provided to improve tracking performance. The experimental results show that the proposed method can effectively suppress background clutter and detect and track size-varying targets in infrared images.
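
    A plain low-rank-plus-sparse decomposition via inexact ALM conveys the core idea; this sketch omits the paper's block-wise weighting and uses a synthetic patch rather than real infrared imagery:

```python
import numpy as np

def rpca_ialm(D, lam=None, tol=1e-6, max_iter=300):
    """Low-rank + sparse decomposition D = L + S via inexact ALM.
    A plain (non-block-wise, unweighted) stand-in for the paper's method."""
    m, n = D.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    norm_two = np.linalg.norm(D, 2)
    Y = D / max(norm_two, np.abs(D).max() / lam)   # dual variable init
    mu, rho = 1.25 / norm_two, 1.5
    S = np.zeros_like(D)
    for _ in range(max_iter):
        # singular-value thresholding -> smooth low-rank background L
        U, sig, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # entry-wise soft threshold -> sparse component S (small targets)
        T = D - L + Y / mu
        S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0.0)
        Z = D - L - S
        Y, mu = Y + mu * Z, mu * rho
        if np.linalg.norm(Z) <= tol * np.linalg.norm(D):
            break
    return L, S

# Synthetic 'infrared patch': rank-1 background plus two point targets.
rng = np.random.default_rng(2)
bg = np.outer(rng.uniform(size=64), rng.uniform(size=64))
img = bg.copy()
img[10, 12] += 3.0
img[40, 50] += 3.0
L, S = rpca_ialm(img)
print("recovered target locations:", np.argwhere(S > 1.0).tolist())
```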

  13. The emergence of different tail exponents in the distributions of firm size variables

    NASA Astrophysics Data System (ADS)

    Ishikawa, Atushi; Fujimoto, Shouji; Watanabe, Tsutomu; Mizuno, Takayuki

    2013-05-01

    We discuss a mechanism through which inversion symmetry (i.e., invariance of a joint probability density function under the exchange of variables) and Gibrat’s law generate power-law distributions with different tail exponents. Using a dataset of firm size variables, that is, tangible fixed assets K, the number of workers L, and sales Y, we confirm that these variables have power-law tails with different exponents, and that inversion symmetry and Gibrat’s law hold. Based on these findings, we argue that there exists a plane in the three-dimensional space (log K, log L, log Y) with respect to which the joint probability density function for the three variables is invariant under the exchange of variables. We provide empirical evidence suggesting that this plane fits the data well, and argue that the plane can be interpreted as the Cobb-Douglas production function, which has been extensively used in various areas of economics since it was first introduced almost a century ago.
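
    Tail exponents of this kind are commonly estimated with the Hill estimator; the sketch below (synthetic Pareto samples with invented exponents, not the firm dataset) illustrates the procedure:

```python
import numpy as np

rng = np.random.default_rng(7)

def hill_tail_exponent(x, k=200):
    """Hill estimator of the power-law tail index from the k largest values."""
    top = np.sort(x)[-k:]
    return 1.0 / np.mean(np.log(top / top[0]))

# Synthetic 'firms': Pareto tails with different exponents for assets K,
# employment L, and sales Y, mimicking the stylized fact in the abstract.
# Inverse-transform sampling: U^(-1/alpha) is Pareto with tail exponent alpha.
K = (1.0 / rng.uniform(size=50_000)) ** (1 / 0.8)   # tail exponent ~0.8
L = (1.0 / rng.uniform(size=50_000)) ** (1 / 1.0)   # ~1.0 (Zipf)
Y = (1.0 / rng.uniform(size=50_000)) ** (1 / 1.2)   # ~1.2

for name, v in [("K", K), ("L", L), ("Y", Y)]:
    print(name, round(hill_tail_exponent(v), 2))
```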

  14. Breakdown of the classical description of a local system.

    PubMed

    Kot, Eran; Grønbech-Jensen, Niels; Nielsen, Bo M; Neergaard-Nielsen, Jonas S; Polzik, Eugene S; Sørensen, Anders S

    2012-06-08

    We provide a straightforward demonstration of a fundamental difference between classical and quantum mechanics for a single local system: namely, the absence of a joint probability distribution of the position x and momentum p. Elaborating on a recently reported criterion by Bednorz and Belzig [Phys. Rev. A 83, 052113 (2011)], we derive a simple criterion that must be fulfilled by any joint probability distribution in classical physics. We demonstrate the violation of this criterion using the homodyne measurement of a single-photon state, thus providing a straightforward signature of the breakdown of a classical description of the underlying state. Most importantly, the criterion used does not rely on quantum mechanics and can thus be used to demonstrate nonclassicality of systems not immediately apparent to exhibit quantum behavior. The criterion is directly applicable to any system described by the continuous canonical variables x and p, such as a mechanical or an electrical oscillator and a collective spin of a large ensemble.

  15. Outcome and predicting factors of single and multiple intra-articular corticosteroid injections in children with juvenile idiopathic arthritis.

    PubMed

    Lanni, Stefano; Bertamino, Marta; Consolaro, Alessandro; Pistorio, Angela; Magni-Manzoni, Silvia; Galasso, Roberta; Lattanzi, Bianca; Calvo-Aranda, Enrique; Martini, Alberto; Ravelli, Angelo

    2011-09-01

    To investigate the efficacy of intra-articular corticosteroid (IAC) therapy in single and multiple joints in children with JIA, and to identify predictors of synovitis flare. The clinical charts of patients who received their first IAC injection between January 2002 and December 2008 were reviewed. The corticosteroid used was triamcinolone hexacetonide for large joints and methylprednisolone acetate for small or difficult-to-access joints. Patients were stratified as follows: one joint injected; two joints injected; and three or more joints injected. Candidate predictors included sex, age at disease onset, JIA category, age and disease duration, ANA status, iridocyclitis, general anaesthesia, number and type of injected joints, acute-phase reactants and concomitant MTX therapy. The cumulative probability of survival without synovitis flare for patients injected in one, two, or three or more joints was 70, 45 and 44%, respectively, at 1 year; 61, 32 and 30%, respectively, at 2 years; and 37, 22 and 19%, respectively, at 3 years. On Cox regression analysis, positive CRP, negative ANA and injection in the ankle were the strongest predictors of synovitis flare. The only significant side effect was skin hypopigmentation or s.c. atrophy, which occurred in <2% of patients. IAC therapy induced sustained remission of synovitis in a substantial proportion of patients injected either in single or in multiple joints, with a good safety profile. The risk of synovitis flare was higher in patients who had positive CRP, negative ANA and were injected in the ankle.

  16. Investigating compound flooding in an estuary using hydrodynamic modelling: a case study from the Shoalhaven River, Australia

    NASA Astrophysics Data System (ADS)

    Kumbier, Kristian; Carvalho, Rafael C.; Vafeidis, Athanasios T.; Woodroffe, Colin D.

    2018-02-01

    Many previous modelling studies have considered storm-tide and riverine flooding independently, even though joint-probability analysis has highlighted significant dependence between extreme rainfall and extreme storm surges in estuarine environments. This study investigates compound flooding by quantifying horizontal and vertical differences in coastal flood risk estimates resulting from a separation of storm-tide and riverine flooding processes. We used an open-source version of the Delft3D model to simulate flood extent and inundation depth due to a storm event that occurred in June 2016 in the Shoalhaven Estuary, south-eastern Australia. Time series of observed water levels and discharge measurements are used to force the model boundaries, whereas observational data such as satellite imagery, aerial photographs, tide gauges and water level logger measurements are used to validate the modelling results. The comparison of simulation results including and excluding riverine discharge demonstrated large differences in modelled flood extents and inundation depths. A flood risk assessment accounting only for storm-tide flooding would have underestimated the flood extent of the June 2016 storm event by 30% (20.5 km²). Furthermore, inundation depths would have been underestimated on average by 0.34 m and by up to 1.5 m locally. We recommend considering storm-tide and riverine flooding processes jointly in estuaries with large catchment areas, which are known to have a quick response time to extreme rainfall. In addition, comparison of different boundary set-ups at the intermittent entrance in Shoalhaven Heads indicated that a permanent opening, intended to reduce exposure to riverine flooding, would increase the tidal range and the exposure to both storm-tide flooding and wave action.

  17. A New Multivariate Approach in Generating Ensemble Meteorological Forcings for Hydrological Forecasting

    NASA Astrophysics Data System (ADS)

    Khajehei, Sepideh; Moradkhani, Hamid

    2015-04-01

    Producing reliable and accurate hydrologic ensemble forecasts is subject to various sources of uncertainty, including meteorological forcing, initial conditions, model structure, and model parameters. Producing reliable and skillful precipitation ensemble forecasts is one approach to reducing the total uncertainty in hydrological applications. Currently, numerical weather prediction (NWP) models produce ensemble forecasts for various temporal ranges, but raw products from NWP models are known to be biased in both mean and spread. Given this, there is a need for methods that can generate reliable ensemble forecasts for hydrological applications. One common technique is to apply statistical procedures that generate an ensemble forecast from NWP-generated single-value forecasts, based on the bivariate probability distribution between the observation and the single-value precipitation forecast. However, one of the assumptions of the current method is that a Gaussian distribution adequately fits the marginal distributions of the observed and modeled climate variable. Here, we describe and evaluate a Bayesian approach based on copula functions to develop an ensemble precipitation forecast from the conditional distribution of single-value precipitation forecasts. Copula functions express a multivariate joint distribution in terms of its univariate marginal distributions, and are presented as an alternative procedure for capturing the uncertainties related to meteorological forcing. Copulas are capable of modeling the joint distribution of two variables with any level of correlation and dependency. This study is conducted over a sub-basin in the Columbia River Basin in the USA, using monthly precipitation forecasts from the Climate Forecast System (CFS) at 0.5° × 0.5° spatial resolution to reproduce the observations. The verification is conducted on a different period, and the performance of the procedure is compared with the Ensemble Pre-Processor approach currently used by the National Weather Service River Forecast Centers in the USA.
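
    A minimal sketch of copula-based conditional ensemble generation, using a Gaussian copula and synthetic data as stand-ins for the paper's copula family and the CFS/observation record:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Gaussian-copula sketch of conditional ensemble generation: given paired
# (observation, single-value forecast) history, sample obs | forecast.
# The Gaussian copula and synthetic data are assumptions for illustration.
obs = stats.gamma(a=2.0, scale=30.0).rvs(2000, random_state=rng)
fcst = 0.8 * obs + rng.normal(0, 15, size=obs.size)  # imperfect forecast

# Transform both margins to normal scores via their empirical CDFs.
def normal_scores(x):
    ranks = stats.rankdata(x) / (len(x) + 1.0)
    return stats.norm.ppf(ranks)

z_obs, z_fcst = normal_scores(obs), normal_scores(fcst)
rho = np.corrcoef(z_obs, z_fcst)[0, 1]   # copula correlation

# Condition on a new single-value forecast and draw an ensemble.
new_fcst = 90.0
z_new = stats.norm.ppf((fcst < new_fcst).mean())        # approximate score
z_draws = rng.normal(rho * z_new, np.sqrt(1 - rho**2), size=50)

# Back-transform through the observed climatology (empirical quantiles).
ensemble = np.quantile(obs, stats.norm.cdf(z_draws))
print(f"ensemble mean {ensemble.mean():.1f}, spread {ensemble.std():.1f}")
```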

  18. Multiple Damage Progression Paths in Model-Based Prognostics

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew; Goebel, Kai Frank

    2011-01-01

    Model-based prognostics approaches employ domain knowledge about a system, its components, and how they fail through the use of physics-based models. Component wear is driven by several different degradation phenomena, each resulting in its own damage progression path, overlapping to contribute to the overall degradation of the component. We develop a model-based prognostics methodology using particle filters, in which the problem of characterizing multiple damage progression paths is cast as a joint state-parameter estimation problem. The estimate is represented as a probability distribution, allowing the prediction of end of life and remaining useful life within a probabilistic framework that supports uncertainty management. We also develop a novel variance control mechanism that maintains an uncertainty bound around the hidden parameters to limit the amount of estimation uncertainty and, consequently, reduce prediction uncertainty. We construct a detailed physics-based model of a centrifugal pump, to which we apply our model-based prognostics algorithms. We illustrate the operation of the prognostic solution with a number of simulation-based experiments and demonstrate the performance of the chosen approach when multiple damage mechanisms are active.
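
    A minimal joint state-parameter particle filter sketch: the damage state is augmented with an unknown wear-rate parameter given random-walk evolution. The scalar model, noise levels, and numbers are invented, not the pump model of the paper:

```python
import numpy as np

rng = np.random.default_rng(5)

# Minimal joint state-parameter particle filter: a scalar damage state x
# grows at an unknown wear rate w, and both are estimated jointly by
# augmenting the particle state. All numbers are invented for illustration.
n_particles, steps = 2000, 60
true_w = 0.05
x_true, measurements = 0.0, []
for _ in range(steps):
    x_true += true_w + rng.normal(0, 0.01)
    measurements.append(x_true + rng.normal(0, 0.05))

particles = np.zeros((n_particles, 2))          # columns: [x, w]
particles[:, 1] = rng.uniform(0.0, 0.2, n_particles)

for z in measurements:
    # propagate: random-walk evolution on the parameter (the paper's
    # variance control adapts this jitter; a fixed value is used here)
    particles[:, 1] += rng.normal(0, 0.002, n_particles)
    particles[:, 0] += particles[:, 1] + rng.normal(0, 0.01, n_particles)
    # weight by measurement likelihood and resample (bootstrap filter)
    w = np.exp(-0.5 * ((z - particles[:, 0]) / 0.05) ** 2)
    w /= w.sum()
    idx = rng.choice(n_particles, n_particles, p=w)
    particles = particles[idx]

print(f"estimated wear rate: {particles[:, 1].mean():.3f} (true {true_w})")
```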

  19. A bidimensional finite mixture model for longitudinal data subject to dropout.

    PubMed

    Spagnoli, Alessandra; Marino, Maria Francesca; Alfò, Marco

    2018-06-05

    In longitudinal studies, subjects may be lost to follow-up and thus present incomplete response sequences. When the mechanism underlying the dropout is nonignorable, we need to account for dependence between the longitudinal and the dropout processes. We propose to model such dependence through discrete latent effects, which are outcome-specific and account for heterogeneity in the univariate profiles. Dependence between profiles is introduced by using a probability matrix to describe the corresponding joint distribution. In this way, we separately model dependence within each outcome and dependence between outcomes. The major feature of this proposal, when compared with standard finite mixture models, is that it allows the nonignorable dropout model to properly nest its ignorable counterpart. We also discuss the use of an index of (local) sensitivity to nonignorability to investigate the effects that assumptions about the dropout process may have on model parameter estimates. The proposal is illustrated via the analysis of data from a longitudinal study on the dynamics of cognitive functioning in the elderly. Copyright © 2018 John Wiley & Sons, Ltd.

  20. Histopathology of chondronecrosis development in knee articular cartilage in a rat model of Kashin-Beck disease using T-2 toxin and selenium deficiency conditions.

    PubMed

    Guan, Fang; Li, Siyuan; Wang, Zhi-Lun; Yang, Haojie; Xue, Senghai; Wang, Wei; Song, Daiqing; Zhou, Xiaorong; Zhou, Wang; Chen, Jing-Hong; Caterson, Bruce; Hughes, Clare

    2013-01-01

    The objective of this study was to observe pathogenic lesions of joint cartilage in rats fed T-2 toxin under selenium-deficient nutritional conditions, in order to identify possible etiological factors causing Kashin-Beck disease (KBD). Sprague-Dawley rats were fed selenium-deficient or control diets for 4 weeks prior to their being exposed to T-2 toxin. Six dietary groups were formed and studied 4 weeks later: controls, selenium-deficient, low T-2 toxin, high T-2 toxin, selenium-deficient diet plus low T-2 toxin, and selenium-deficient diet plus high T-2 toxin. Selenium deficiency was confirmed by determination of glutathione peroxidase activity and selenium levels in serum. The morphology and pathology (chondronecrosis) of the knee joint cartilage of the experimental rats were observed using light microscopy, and the expression of proteoglycans was determined by histochemical staining. Chondronecrosis in the deep zone of the articular cartilage of the knee joints was seen in both the low and high T-2 toxin plus selenium-deficient diet groups, and these chondronecrotic lesions were very similar to the chondronecrosis observed in human KBD. However, the chondronecrosis observed in the rat epiphyseal growth plates of animals treated with T-2 toxin alone or with T-2 toxin plus selenium-deficient diets was not similar to that found in human KBD. Our results indicate that the rat can be used as a suitable animal model for studying etiological factors contributing to the pathogenesis (chondronecrosis) observed in human KBD. However, the changes seen in the epiphyseal growth plate differ from those seen in human KBD, probably because of the absence of growth plate closure in the rat.
