Sample records for joint probability model

  1. Joint probabilities and quantum cognition

    NASA Astrophysics Data System (ADS)

    de Barros, J. Acacio

    2012-12-01

    In this paper we discuss the existence of joint probability distributions for quantumlike response computations in the brain. We do so by focusing on a contextual neural-oscillator model shown to reproduce the main features of behavioral stimulus-response theory. We then exhibit a simple example of contextual random variables not having a joint probability distribution, and describe how such variables can be obtained from neural oscillators, but not from a quantum observable algebra.
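The existence question the abstract raises can be made concrete with the Suppes-Zanotti criterion: three ±1 random variables with given pairwise product expectations admit a joint probability distribution only under a simple inequality. The sketch below is illustrative and not taken from the paper; the example correlations are hypothetical.

```python
def sz_joint_exists(e12, e13, e23):
    """Suppes-Zanotti criterion: three +/-1 random variables with pairwise
    product expectations e12, e13, e23 admit a joint probability
    distribution iff -1 <= e12 + e13 + e23 <= 1 + 2*min(e12, e13, e23)."""
    s = e12 + e13 + e23
    return -1 <= s <= 1 + 2 * min(e12, e13, e23)

# Perfectly anti-correlated pairs are contextual: no joint distribution.
print(sz_joint_exists(-1, -1, -1))  # False
# Perfectly correlated pairs are trivially realizable (X1 = X2 = X3).
print(sz_joint_exists(1, 1, 1))     # True
```

Variables violating the inequality are exactly the "contextual random variables not having a joint probability distribution" the abstract refers to.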

  2. Excluding joint probabilities from quantum theory

    NASA Astrophysics Data System (ADS)

    Allahverdyan, Armen E.; Danageozian, Arshag

    2018-03-01

    Quantum theory does not provide a unique definition for the joint probability of two noncommuting observables, which is the next important question after Born's probability rule for a single observable. Instead, various definitions have been suggested, e.g., via quasiprobabilities or via hidden-variable theories. After reviewing open issues of the joint probability, we relate it to quantum imprecise probabilities, which are noncontextual and are consistent with all constraints expected from a quantum probability. We study two noncommuting observables in a two-dimensional Hilbert space and show that there is no precise joint probability that applies for any quantum state and is consistent with imprecise probabilities. This contrasts with theorems by Bell and Kochen-Specker that exclude joint probabilities for more than two noncommuting observables in Hilbert spaces of dimension larger than two. If measurement contexts are included in the definition, joint probabilities are no longer excluded, but they are still constrained by imprecise probabilities.
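One of the quasiprobability definitions alluded to above, the Margenau-Hill construction Re⟨ψ|Π_a Π_b|ψ⟩, illustrates why no unique joint probability exists: for noncommuting projectors it can go negative. The 2×2 example below (σ_z and σ_x projectors, a real state vector) is a standard illustration, not the paper's construction.

```python
import math

def matmul(A, B):
    # product of two 2x2 real matrices
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expect(psi, M):
    # <psi| M |psi> for a real 2-component state vector psi
    v = [sum(M[i][j] * psi[j] for j in range(2)) for i in range(2)]
    return sum(psi[i] * v[i] for i in range(2))

P_z_up = [[1.0, 0.0], [0.0, 0.0]]    # projector onto sigma_z = +1
P_x_dn = [[0.5, -0.5], [-0.5, 0.5]]  # projector onto sigma_x = -1

theta = math.pi / 3
psi = [math.cos(theta), math.sin(theta)]

# Margenau-Hill quasiprobability: Re <psi| P_a P_b |psi>
q = expect(psi, matmul(P_z_up, P_x_dn))
print(round(q, 4))  # negative, so not a genuine joint probability
```

A negative value for a "probability" is exactly the kind of inconsistency that motivates the imprecise-probability treatment in the paper.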

  3. Idealized models of the joint probability distribution of wind speeds

    NASA Astrophysics Data System (ADS)

    Monahan, Adam H.

    2018-05-01

    The joint probability distribution of wind speeds at two separate locations in space or points in time completely characterizes the statistical dependence of these two quantities, providing more information than linear measures such as correlation. In this study, we consider two models of the joint distribution of wind speeds obtained from idealized models of the dependence structure of the horizontal wind velocity components. The bivariate Rice distribution follows from assuming that the wind components have Gaussian and isotropic fluctuations. The bivariate Weibull distribution arises from power law transformations of wind speeds corresponding to vector components with Gaussian, isotropic, mean-zero variability. Maximum likelihood estimates of these distributions are compared using wind speed data from the mid-troposphere, from different altitudes at the Cabauw tower in the Netherlands, and from scatterometer observations over the sea surface. While the bivariate Rice distribution is more flexible and can represent a broader class of dependence structures, the bivariate Weibull distribution is mathematically simpler and may be more convenient in many applications. The complexity of the mathematical expressions obtained for the joint distributions suggests that the development of explicit functional forms for multivariate speed distributions from distributions of the components will not be practical for more complicated dependence structure or more than two speed variables.
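The construction behind the bivariate Weibull case can be checked numerically: speeds built from mean-zero, isotropic Gaussian components are Rayleigh distributed (Weibull with shape 2), for which the mean/standard-deviation ratio is sqrt(pi/2)/sqrt(2 - pi/2) ≈ 1.91. The simulation below is a sketch with assumed parameters, not the paper's estimation procedure.

```python
import math
import random

random.seed(0)

def corr_pair(rho):
    """One Gaussian component at two sites with inter-site correlation rho."""
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    return z1, rho * z1 + math.sqrt(1 - rho**2) * z2

def correlated_speeds(n, rho, sigma=1.0):
    """Wind speeds at two sites whose (u, v) components are mean-zero,
    isotropic Gaussians with inter-site component correlation rho."""
    out = []
    for _ in range(n):
        u1, u2 = corr_pair(rho)
        v1, v2 = corr_pair(rho)
        out.append((sigma * math.hypot(u1, v1), sigma * math.hypot(u2, v2)))
    return out

speeds = correlated_speeds(20000, rho=0.6)
w1 = [a for a, _ in speeds]
mean = sum(w1) / len(w1)
var = sum((x - mean) ** 2 for x in w1) / len(w1)
# Rayleigh check: mean/std should be close to 1.91
ratio = mean / math.sqrt(var)
print(round(ratio, 2))
```

Raising such speeds to a power then rescales the Rayleigh marginals into general Weibull marginals, which is the transformation the abstract describes.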

  4. High-resolution urban flood modelling - a joint probability approach

    NASA Astrophysics Data System (ADS)

    Hartnett, Michael; Olbert, Agnieszka; Nash, Stephen

    2017-04-01

    (Divoky et al., 2005). Nevertheless, such events occur, and in Ireland alone there are several cases of serious damage due to flooding resulting from a combination of high sea water levels and river flows driven by the same meteorological conditions (e.g. Olbert et al. 2015). The November 2009 fluvial-coastal flooding of Cork City, which brought €100m in losses, was one such incident. This event was used by Olbert et al. (2015) to determine the processes controlling urban flooding and is further explored in this study to elaborate on coastal and fluvial flood mechanisms and their roles in controlling water levels. The objective of this research is to develop a methodology to assess the combined effect of multiple-source flooding on flood probability and severity in urban areas, and to establish a set of conditions that dictate urban flooding due to extreme climatic events. These conditions broadly combine physical flood drivers (such as coastal and fluvial processes), their mechanisms, and the thresholds defining flood severity. The two main physical processes controlling urban flooding, high sea water levels (coastal flooding) and high river flows (fluvial flooding), and the threshold values at which a flood is likely to occur, are considered in this study. The contributions of coastal and fluvial drivers to flooding, and their impacts, are assessed in a two-step process. The first step involves frequency analysis and extreme value statistical modelling of storm surges, tides and river flows, and ultimately the application of the joint probability method to estimate joint exceedance return periods for combinations of surge, tide and river flow. In the second step, a numerical model of Cork Harbour, MSN_Flood, comprising a cascade of four nested high-resolution models, is used to simulate flood inundation under numerous hypothetical coastal and fluvial flood scenarios. The risk of flooding is quantified based on a range of physical aspects such as the extent and depth of inundation (Apel et al
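The idea of a joint exceedance return period can be sketched with a common first-order approximation that interpolates between independent and fully dependent hazards. The dependence parameter `chi` and the 50-year example below are illustrative assumptions, not values from the study.

```python
def joint_return_period(T1, T2, chi=0.0):
    """Joint exceedance return period (years) for two hazards with marginal
    return periods T1 and T2. chi in [0, 1] interpolates between full
    independence (chi = 0) and complete dependence (chi = 1), a common
    first-order approximation in joint-probability flood studies."""
    p1, p2 = 1.0 / T1, 1.0 / T2
    p_joint = (1 - chi) * p1 * p2 + chi * min(p1, p2)
    return 1.0 / p_joint

# A 50-year surge combined with a 50-year river flow:
print(round(joint_return_period(50, 50, chi=0.0)))  # 2500 (independent)
print(round(joint_return_period(50, 50, chi=1.0)))  # 50   (fully dependent)
```

The gap between the two extremes shows why estimating the actual dependence between surge, tide and river flow matters so much for design return periods.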

  5. A Bayesian joint probability modeling approach for seasonal forecasting of streamflows at multiple sites

    NASA Astrophysics Data System (ADS)

    Wang, Q. J.; Robertson, D. E.; Chiew, F. H. S.

    2009-05-01

    Seasonal forecasting of streamflows can be highly valuable for water resources management. In this paper, a Bayesian joint probability (BJP) modeling approach for seasonal forecasting of streamflows at multiple sites is presented. A Box-Cox transformed multivariate normal distribution is proposed to model the joint distribution of future streamflows and their predictors such as antecedent streamflows and El Niño-Southern Oscillation indices and other climate indicators. Bayesian inference of model parameters and uncertainties is implemented using Markov chain Monte Carlo sampling, leading to joint probabilistic forecasts of streamflows at multiple sites. The model provides a parametric structure for quantifying relationships between variables, including intersite correlations. The Box-Cox transformed multivariate normal distribution has considerable flexibility for modeling a wide range of predictors and predictands. The Bayesian inference formulated allows the use of data that contain nonconcurrent and missing records. The model flexibility and data-handling ability means that the BJP modeling approach is potentially of wide practical application. The paper also presents a number of statistical measures and graphical methods for verification of probabilistic forecasts of continuous variables. Results for streamflows at three river gauges in the Murrumbidgee River catchment in southeast Australia show that the BJP modeling approach has good forecast quality and that the fitted model is consistent with observed data.
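The Box-Cox transform at the heart of the BJP approach maps skewed streamflow data toward normality before the multivariate normal is fitted. A minimal sketch of the forward and inverse transforms, with hypothetical flow values:

```python
import math

def box_cox(y, lam):
    """Box-Cox transform: (y^lam - 1)/lam, or log(y) when lam = 0,
    used to make skewed streamflow data approximately Gaussian."""
    if lam == 0:
        return math.log(y)
    return (y ** lam - 1.0) / lam

def box_cox_inverse(z, lam):
    """Back-transform a forecast from Gaussian space to flow space."""
    if lam == 0:
        return math.exp(z)
    return (lam * z + 1.0) ** (1.0 / lam)

flows = [12.0, 35.0, 90.0, 310.0]        # hypothetical seasonal flows
z = [box_cox(q, 0.2) for q in flows]
back = [box_cox_inverse(v, 0.2) for v in z]
print([round(q, 6) for q in back])       # round-trips to the original flows
```

In the BJP model the transformed predictors and predictands are modeled jointly as multivariate normal, and forecasts are back-transformed through the inverse.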

  6. Comment on "constructing quantum games from nonfactorizable joint probabilities".

    PubMed

    Frąckiewicz, Piotr

    2013-09-01

    In the paper [Phys. Rev. E 76, 061122 (2007)], the authors presented a way of playing 2 × 2 games so that players were able to exploit nonfactorizable joint probabilities respecting the nonsignaling principle (i.e., relativistic causality). We are going to prove, however, that the scheme does not generalize the games studied in the commented paper. Moreover, it allows the players to obtain nonclassical results even if the factorizable joint probabilities are used.

  7. The return period analysis of natural disasters with statistical modeling of bivariate joint probability distribution.

    PubMed

    Li, Ning; Liu, Xueqin; Xie, Wei; Wu, Jidong; Zhang, Peng

    2013-01-01

    New features of natural disasters have been observed over the last several years. The factors that influence the disasters' formation mechanisms, regularity of occurrence and main characteristics have been revealed to be more complicated and diverse in nature than previously thought. As the uncertainty involved increases, the variables need to be examined further. This article discusses the importance, and current scarcity, of multivariate analysis of natural disasters and presents a method to estimate the joint probability of the return periods and perform a risk analysis. Severe dust storms from 1990 to 2008 in Inner Mongolia were used as a case study to test this new methodology, as they are normal and recurring climatic phenomena on Earth. Based on the 79 investigated events and a bivariate definition of dust storms, the joint probability distribution of severe dust storms was established using the observed data of maximum wind speed and duration. The joint return periods of severe dust storms were calculated, and the relevant risk was analyzed according to the joint probability. The copula function is able to simulate severe dust storm disasters accurately. The joint return periods generated are closer to those observed in reality than the univariate return periods and thus have more value in severe dust storm disaster mitigation, strategy making, program design, and improvement of risk management. This research may prove useful in risk-based decision making. The exploration of multivariate analysis methods can also lay the foundation for further applications in natural disaster risk analysis. © 2012 Society for Risk Analysis.
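A copula-based joint "AND" return period of the kind described can be sketched with a Gumbel copula, which captures upper-tail dependence between wind speed and duration. The copula parameter, quantiles and event frequency below are assumed for illustration; the paper's fitted values are not reproduced here.

```python
import math

def gumbel_copula(u, v, theta):
    """Gumbel copula C(u, v); theta >= 1 controls upper-tail dependence."""
    a = (-math.log(u)) ** theta + (-math.log(v)) ** theta
    return math.exp(-a ** (1.0 / theta))

def joint_and_return_period(u, v, theta, events_per_year=4.2):
    """Return period (years) of 'wind speed AND duration both exceeded',
    where u, v are the marginal non-exceedance probabilities and
    events_per_year is an assumed average storm frequency.
    P(X > x and Y > y) = 1 - u - v + C(u, v)."""
    p_exceed_both = 1 - u - v + gumbel_copula(u, v, theta)
    return 1.0 / (events_per_year * p_exceed_both)

# Both marginals at their 0.95 quantile, moderate dependence:
rp = joint_and_return_period(0.95, 0.95, theta=2.0)
print(round(rp, 1))
```

Because the copula concentrates mass in the joint upper tail, this "AND" return period is shorter than the independence assumption would suggest, which is why the joint analysis is more realistic for compound events.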

  8. Bivariate categorical data analysis using normal linear conditional multinomial probability model.

    PubMed

    Sun, Bingrui; Sutradhar, Brajendra

    2015-02-10

    Bivariate multinomial data, such as left- and right-eye retinopathy status data, are analyzed either by using a joint bivariate probability model or by exploiting certain odds ratio-based association models. However, the joint bivariate probability model yields marginal probabilities, which are complicated functions of marginal and association parameters for both variables, and the odds ratio-based association model treats the odds ratios involved in the joint probabilities as 'working' parameters, which are consequently estimated through certain arbitrary 'working' regression models. Also, this latter odds ratio-based model does not provide any easy interpretation of the correlations between the two categorical variables. On the basis of pre-specified marginal probabilities, in this paper, we develop a bivariate normal type linear conditional multinomial probability model to understand the correlations between two categorical variables. The parameters involved in the model are consistently estimated using the optimal likelihood and generalized quasi-likelihood approaches. The proposed model and the inferences are illustrated through an intensive simulation study as well as an analysis of the well-known Wisconsin Diabetic Retinopathy status data. Copyright © 2014 John Wiley & Sons, Ltd.

  9. Lamb wave-based damage quantification and probability of detection modeling for fatigue life assessment of riveted lap joint

    NASA Astrophysics Data System (ADS)

    He, Jingjing; Wang, Dengjiang; Zhang, Weifang

    2015-03-01

    This study presents an experimental and modeling investigation of damage detection and quantification in riveted lap joints. Embedded lead zirconate titanate piezoelectric (PZT) ceramic wafer-type sensors are employed to perform in-situ non-destructive testing during fatigue cyclical loading. A multi-feature integration method is developed to quantify the crack size using signal features of correlation coefficient, amplitude change, and phase change. In addition, a probability of detection (POD) model is constructed to quantify the reliability of the developed sizing method. Using the developed crack size quantification method and the resulting POD curve, probabilistic fatigue life prediction can be performed to provide comprehensive information for decision-making. The effectiveness of the overall methodology is demonstrated and validated using several aircraft lap joint specimens from different manufacturers and under different loading conditions.
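The POD idea can be illustrated with the simplest hit/miss estimator: the fraction of cracks detected within each size bin. The inspection outcomes below are hypothetical; the paper builds a parametric POD model rather than this binned estimate.

```python
def pod_curve(crack_sizes, detections, bins):
    """Empirical probability of detection: the fraction of cracks
    detected (1 = hit, 0 = miss) within each crack-size bin."""
    pod = []
    for lo, hi in bins:
        hits = [d for a, d in zip(crack_sizes, detections) if lo <= a < hi]
        pod.append(sum(hits) / len(hits) if hits else float("nan"))
    return pod

# Hypothetical inspection outcomes (crack size in mm, 1 = detected)
sizes = [0.5, 0.7, 0.9, 1.1, 1.4, 1.6, 1.9, 2.3, 2.6, 3.0]
found = [0,   0,   1,   0,   1,   1,   1,   1,   1,   1]
curve = pod_curve(sizes, found, bins=[(0, 1), (1, 2), (2, 4)])
print(curve)  # POD rises with crack size
```

A fitted POD curve of this kind feeds directly into probabilistic fatigue life prediction: the chance of missing a crack of a given size determines the risk carried forward between inspections.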

  10. Joint Probability Analysis of Extreme Precipitation and Storm Tide in a Coastal City under Changing Environment

    PubMed Central

    Xu, Kui; Ma, Chao; Lian, Jijian; Bin, Lingling

    2014-01-01

    Catastrophic flooding resulting from extreme meteorological events has occurred more frequently and drawn great attention in recent years in China. In coastal areas, extreme precipitation and storm tide are both inducing factors of flooding and therefore their joint probability would be critical to determine the flooding risk. The impact of storm tide or changing environment on flooding is ignored or underestimated in the design of drainage systems of today in coastal areas in China. This paper investigates the joint probability of extreme precipitation and storm tide and its change using copula-based models in Fuzhou City. The change point at the year of 1984 detected by Mann-Kendall and Pettitt’s tests divides the extreme precipitation series into two subsequences. For each subsequence the probability of the joint behavior of extreme precipitation and storm tide is estimated by the optimal copula. Results show that the joint probability has increased by more than 300% on average after 1984 (α = 0.05). The design joint return period (RP) of extreme precipitation and storm tide is estimated to propose a design standard for future flooding preparedness. For a combination of extreme precipitation and storm tide, the design joint RP has become smaller than before. It implies that flooding would happen more often after 1984, which corresponds with the observation. The study would facilitate understanding the change of flood risk and proposing the adaption measures for coastal areas under a changing environment. PMID:25310006

  12. Evaluation of joint probability density function models for turbulent nonpremixed combustion with complex chemistry

    NASA Technical Reports Server (NTRS)

    Smith, N. S. A.; Frolov, S. M.; Bowman, C. T.

    1996-01-01

    Two types of mixing sub-models are evaluated in connection with a joint-scalar probability density function method for turbulent nonpremixed combustion. Model calculations are made and compared to simulation results for homogeneously distributed methane-air reaction zones mixing and reacting in decaying turbulence within a two-dimensional enclosed domain. The comparison is arranged to ensure that both the simulation and model calculations a) make use of exactly the same chemical mechanism, b) do not involve non-unity Lewis number transport of species, and c) are free from radiation loss. The modified Curl mixing sub-model was found to provide superior predictive accuracy over the simple relaxation-to-mean sub-model in the case studied. Accuracy to within 10-20% was found for global means of major species and temperature; however, nitric oxide prediction accuracy was lower and highly dependent on the choice of mixing sub-model. Both mixing sub-models were found to produce non-physical mixing behavior for mixture fractions far removed from the immediate reaction zone. A suggestion for a further-modified Curl mixing sub-model is made in connection with earlier work in the field.
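The two sub-model families compared above operate on a notional particle ensemble: relaxation-to-mean (IEM) pulls every particle toward the ensemble mean, while modified Curl partially mixes random particle pairs by a random extent. The toy sketch below shows the mechanics only (a scalar field with hypothetical parameters), not the paper's reacting-flow calculations.

```python
import random

random.seed(1)

def iem_step(phi, c=0.1):
    """Relaxation-to-mean (IEM): every particle relaxes toward the mean."""
    m = sum(phi) / len(phi)
    return [p + c * (m - p) for p in phi]

def modified_curl_step(phi, n_pairs=50):
    """Modified Curl: random particle pairs partially mix by a random
    extent, preserving the pair mean (and hence the ensemble mean)."""
    phi = phi[:]
    for _ in range(n_pairs):
        i, j = random.sample(range(len(phi)), 2)
        a = random.random()              # random mixing extent in [0, 1)
        m = 0.5 * (phi[i] + phi[j])
        phi[i] += a * (m - phi[i])
        phi[j] += a * (m - phi[j])
    return phi

particles = [0.0] * 500 + [1.0] * 500   # bimodal scalar field
for _ in range(40):
    particles = modified_curl_step(particles)
mean = sum(particles) / len(particles)
print(round(mean, 3))  # mixing conserves the mean scalar, ~0.5
```

The key difference for PDF methods is the scalar distribution each rule produces: IEM narrows the distribution without changing its shape, whereas pairwise Curl-type mixing generates intermediate compositions, which is why the sub-model choice affects minor species like nitric oxide.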

  13. Copula Models for Sociology: Measures of Dependence and Probabilities for Joint Distributions

    ERIC Educational Resources Information Center

    Vuolo, Mike

    2017-01-01

    Often in sociology, researchers are confronted with nonnormal variables whose joint distribution they wish to explore. Yet, assumptions of common measures of dependence can fail or estimating such dependence is computationally intensive. This article presents the copula method for modeling the joint distribution of two random variables, including…

  14. Joint coverage probability in a simulation study on Continuous-Time Markov Chain parameter estimation.

    PubMed

    Benoit, Julia S; Chan, Wenyaw; Doody, Rachelle S

    2015-01-01

    Parameter dependency within data sets in simulation studies is common, especially in models such as Continuous-Time Markov Chains (CTMC). Additionally, the literature lacks a comprehensive examination of estimation performance for the likelihood-based general multi-state CTMC. Among studies attempting to assess the estimation, none have accounted for dependency among parameter estimates. The purpose of this research is twofold: (1) to develop a multivariate approach for assessing accuracy and precision in simulation studies, and (2) to add to the literature a comprehensive examination of the estimation of a general 3-state CTMC model. Simulation studies are conducted to analyze longitudinal data with a trinomial outcome using a CTMC with and without covariates. Measures of performance, including bias, component-wise coverage probabilities, and joint coverage probabilities, are calculated. An application is presented using Alzheimer's disease caregiver stress levels. Comparisons of joint and component-wise parameter estimates yield conflicting inferential results in simulations from models with and without covariates. In conclusion, caution should be taken when conducting simulation studies aiming to assess performance, and the choice of inference should properly reflect the purpose of the simulation.
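The distinction between component-wise and joint coverage can be demonstrated with a stylized simulation: two correlated parameter estimates each get a nominal 95% interval, and joint coverage (both intervals cover simultaneously) is necessarily lower. This stand-in uses Gaussian errors, not actual CTMC estimates.

```python
import math
import random

random.seed(2)

def simulate_coverage(n_sims=4000, rho=0.8):
    """Component-wise vs joint coverage of nominal 95% intervals for
    two correlated parameter estimates."""
    z = 1.96
    cover1 = cover2 = cover_joint = 0
    for _ in range(n_sims):
        # correlated standardized estimation errors for the two parameters
        e1 = random.gauss(0, 1)
        e2 = rho * e1 + math.sqrt(1 - rho**2) * random.gauss(0, 1)
        ok1 = abs(e1) < z
        ok2 = abs(e2) < z
        cover1 += ok1
        cover2 += ok2
        cover_joint += ok1 and ok2
    return cover1 / n_sims, cover2 / n_sims, cover_joint / n_sims

c1, c2, cj = simulate_coverage()
print(round(c1, 2), round(c2, 2), round(cj, 2))
# joint coverage falls below each component-wise coverage
```

Reporting only the component-wise rates here would overstate how often the full parameter vector is captured, which is the kind of conflicting inference the abstract warns about.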

  15. Scalable Joint Models for Reliable Uncertainty-Aware Event Prediction.

    PubMed

    Soleimani, Hossein; Hensman, James; Saria, Suchi

    2017-08-21

    Missing data and noisy observations pose significant challenges for reliably predicting events from irregularly sampled multivariate time series (longitudinal) data. Imputation methods, which are typically used for completing the data prior to event prediction, lack a principled mechanism to account for the uncertainty due to missingness. Alternatively, state-of-the-art joint modeling techniques can be used for jointly modeling the longitudinal and event data and computing event probabilities conditioned on the longitudinal observations. These approaches, however, make strong parametric assumptions and do not easily scale to multivariate signals with many observations. Our proposed approach consists of several key innovations. First, we develop a flexible and scalable joint model based upon sparse multiple-output Gaussian processes. Unlike state-of-the-art joint models, the proposed model can explain highly challenging structure, including non-Gaussian noise, while scaling to large data. Second, we derive an optimal policy for predicting events using the distribution of the event occurrence estimated by the joint model. The derived policy trades off the cost of a delayed detection against incorrect assessments and abstains from making decisions when the estimated event probability does not satisfy the derived confidence criteria. Experiments on a large dataset show that the proposed framework significantly outperforms state-of-the-art techniques in event prediction.
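The abstain-or-decide structure of such a policy can be sketched with a cost-sensitive threshold plus an uncertainty band. The costs and margin below are hypothetical placeholders, not the paper's derived criteria.

```python
def decide(p_event, cost_fp=1.0, cost_fn=5.0, margin=0.15):
    """Cost-sensitive event decision with an abstain region: alarm when
    the estimated event probability clearly exceeds the cost-ratio
    threshold, stay silent when clearly below it, abstain in between."""
    threshold = cost_fp / (cost_fp + cost_fn)
    if p_event >= threshold + margin:
        return "alarm"
    if p_event <= threshold - margin:
        return "no alarm"
    return "abstain"

print(decide(0.70), decide(0.01), decide(0.20))  # alarm no alarm abstain
```

Abstaining when the estimate sits near the threshold is what lets the model defer on cases where missingness leaves the event probability too uncertain to act on.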

  16. Joint modeling of longitudinal data and discrete-time survival outcome.

    PubMed

    Qiu, Feiyou; Stein, Catherine M; Elston, Robert C

    2016-08-01

    A predictive joint shared parameter model is proposed for discrete time-to-event and longitudinal data. A discrete survival model with frailty and a generalized linear mixed model for the longitudinal data are joined to predict the probability of events. This joint model focuses on predicting discrete time-to-event outcome, taking advantage of repeated measurements. We show that the probability of an event in a time window can be more precisely predicted by incorporating the longitudinal measurements. The model was investigated by comparison with a two-step model and a discrete-time survival model. Results from both a study on the occurrence of tuberculosis and simulated data show that the joint model is superior to the other models in discrimination ability, especially as the latent variables related to both survival times and the longitudinal measurements depart from 0. © The Author(s) 2013.
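The prediction target in a discrete-time survival model is the probability of an event within a window of intervals, built from per-interval hazards: P = 1 - ∏(1 - h_t). The hazards and biomarker effect below are hypothetical, sketching only how longitudinal information can shift the prediction.

```python
def event_probability(hazards):
    """Probability of the event occurring within a window of discrete
    time intervals, given per-interval hazards h_t:
    P = 1 - prod(1 - h_t)."""
    p_survive = 1.0
    for h in hazards:
        p_survive *= 1.0 - h
    return 1.0 - p_survive

# Hypothetical per-visit hazards, raised by a longitudinal biomarker:
baseline = [0.02, 0.03, 0.03, 0.04]
elevated = [2 * h for h in baseline]    # biomarker doubles each hazard
print(round(event_probability(baseline), 3),
      round(event_probability(elevated), 3))
```

In the joint model the hazards are functions of the shared random effects linking the longitudinal trajectory to survival, which is how repeated measurements sharpen the window probability.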

  17. Modeling Finite-Time Failure Probabilities in Risk Analysis Applications.

    PubMed

    Dimitrova, Dimitrina S; Kaishev, Vladimir K; Zhao, Shouqi

    2015-10-01

    In this article, we introduce a framework for analyzing the risk of systems failure based on estimating the failure probability. The latter is defined as the probability that a certain risk process, characterizing the operations of a system, reaches a possibly time-dependent critical risk level within a finite-time interval. Under general assumptions, we define two dually connected models for the risk process and derive explicit expressions for the failure probability and also the joint probability of the time of the occurrence of failure and the excess of the risk process over the risk level. We illustrate how these probabilistic models and results can be successfully applied in several important areas of risk analysis, among which are systems reliability, inventory management, flood control via dam management, infectious disease spread, and financial insolvency. Numerical illustrations are also presented. © 2015 Society for Risk Analysis.
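A finite-time failure probability of the kind defined above can always be approximated by Monte Carlo: simulate the risk process and count the paths that cross the critical level within the horizon. The insurance-style process and all parameters below are assumed for illustration; the article derives explicit expressions rather than simulating.

```python
import random

random.seed(3)

def failure_probability(n_paths=20000, horizon=20, start=10.0,
                        income=1.0, shock_rate=0.3, shock_mean=4.0):
    """Monte Carlo estimate of the finite-time failure probability:
    the chance that a risk process (steady income minus random
    exponential shocks) falls below the critical level 0 within the
    horizon."""
    failures = 0
    for _ in range(n_paths):
        level = start
        for _ in range(horizon):
            level += income
            if random.random() < shock_rate:
                level -= random.expovariate(1.0 / shock_mean)
            if level < 0:
                failures += 1
                break
    return failures / n_paths

p = failure_probability()
print(round(p, 3))
```

Recording the crossing time and the overshoot below the level on each failed path would likewise estimate the joint distribution of failure time and excess that the article characterizes analytically.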

  18. Factors related to the joint probability of flooding on paired streams

    USGS Publications Warehouse

    Koltun, G.F.; Sherwood, J.M.

    1998-01-01

    The factors related to the joint probability of flooding on paired streams were investigated and quantified to provide information to aid in the design of hydraulic structures where the joint probability of flooding is an element of the design criteria. Stream pairs were considered to have flooded jointly at the design-year flood threshold (corresponding to the 2-, 10-, 25-, or 50-year instantaneous peak streamflow) if peak streamflows at both streams in the pair were observed or predicted to have equaled or exceeded the threshold on a given calendar day. Daily mean streamflow data were used as a substitute for instantaneous peak streamflow data to determine which flood thresholds were equaled or exceeded on any given day. Instantaneous peak streamflow data, when available, were used preferentially to assess flood-threshold exceedance. Daily mean streamflow data for each stream were paired with concurrent daily mean streamflow data at the other streams. Observed probabilities of joint flooding, determined for the 2-, 10-, 25-, and 50-year flood thresholds, were computed as the ratios of the total number of days when streamflows at both streams concurrently equaled or exceeded their flood thresholds (events) to the total number of days when streamflow at either stream equaled or exceeded its flood threshold (trials). A combination of correlation analyses, graphical analyses, and logistic-regression analyses was used to identify and quantify factors associated with the observed probabilities of joint flooding (event-trial ratios). The analyses indicated that the distance between drainage area centroids, the ratio of the smaller to larger drainage area, the mean drainage area, and the centroid angle adjusted 30 degrees were the basin characteristics most closely associated with the joint probability of flooding on paired streams in Ohio. In general, the analyses indicated that the joint probability of flooding decreases with an increase in centroid distance and
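The event-trial ratio defined above is straightforward to compute from concurrent daily records. The flow series and thresholds below are hypothetical:

```python
def joint_flood_probability(flows_a, flows_b, thr_a, thr_b):
    """Observed joint flooding probability as an event-trial ratio:
    days on which both sites meet or exceed their flood thresholds,
    divided by days on which either site does."""
    events = trials = 0
    for qa, qb in zip(flows_a, flows_b):
        a, b = qa >= thr_a, qb >= thr_b
        if a or b:
            trials += 1
            if a and b:
                events += 1
    return events / trials if trials else float("nan")

# Hypothetical concurrent daily mean flows for a stream pair:
site1 = [120, 300, 560, 90, 410, 700, 150]
site2 = [80, 260, 500, 70, 200, 650, 140]
p_joint = joint_flood_probability(site1, site2, thr_a=400, thr_b=450)
print(p_joint)  # 2 joint events out of 3 trials
```

Regressing such ratios on basin characteristics (centroid distance, area ratio, and so on), as the study does, then quantifies what drives joint flooding.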

  19. Height probabilities in the Abelian sandpile model on the generalized finite Bethe lattice

    NASA Astrophysics Data System (ADS)

    Chen, Haiyan; Zhang, Fuji

    2013-08-01

    In this paper, we study the sandpile model on the generalized finite Bethe lattice with a particular boundary condition. Using a combinatorial method, we give the exact expressions for all single-site probabilities and some two-site joint probabilities. As a by-product, we prove that the height probabilities of bulk vertices are all the same for the Bethe lattice with a certain given boundary condition, a result found from numerical evidence by Grassberger and Manna ["Some more sandpiles," J. Phys. (France) 51, 1077-1098 (1990)], 10.1051/jphys:0199000510110107700, but not previously proved.

  20. Color object detection using spatial-color joint probability functions.

    PubMed

    Luo, Jiebo; Crandall, David

    2006-06-01

    Object detection in unconstrained images is an important image understanding problem with many potential applications. There has been little success in creating a single algorithm that can detect arbitrary objects in unconstrained images; instead, algorithms typically must be customized for each specific object. Consequently, it typically requires a large number of exemplars (for rigid objects) or a large amount of human intuition (for nonrigid objects) to develop a robust algorithm. We present a robust algorithm designed to detect a class of compound color objects given a single model image. A compound color object is defined as having a set of multiple, particular colors arranged spatially in a particular way, including flags, logos, cartoon characters, people in uniforms, etc. Our approach is based on a particular type of spatial-color joint probability function called the color edge co-occurrence histogram. In addition, our algorithm employs perceptual color naming to handle color variation, and prescreening to limit the search scope (i.e., size and location) for the object. Experimental results demonstrated that the proposed algorithm is insensitive to object rotation, scaling, partial occlusion, and folding, outperforming a closely related algorithm based on color co-occurrence histograms by a decisive margin.
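The underlying data structure, a spatial-color joint probability function, can be sketched as a color co-occurrence histogram: counts of quantized color pairs at a fixed spatial offset, normalized to probabilities. The paper's variant restricts counts to edge pixels; the tiny quantized "flag" below is a hypothetical example.

```python
def color_cooccurrence_histogram(image, offset=(0, 1)):
    """Normalized counts of quantized-color pairs occurring at a fixed
    spatial offset -- the basic structure behind spatial-color joint
    probability functions."""
    dy, dx = offset
    hist = {}
    rows, cols = len(image), len(image[0])
    for y in range(rows):
        for x in range(cols):
            yy, xx = y + dy, x + dx
            if 0 <= yy < rows and 0 <= xx < cols:
                pair = (image[y][x], image[yy][xx])
                hist[pair] = hist.get(pair, 0) + 1
    total = sum(hist.values())
    return {pair: c / total for pair, c in hist.items()}

# Tiny quantized "flag": two red rows over two white rows (R=0, W=1)
flag = [[0, 0, 0], [0, 0, 0], [1, 1, 1], [1, 1, 1]]
hist = color_cooccurrence_histogram(flag, offset=(1, 0))  # vertical pairs
print(hist)
```

Because the histogram records which colors border which, in what direction, it stays discriminative under rotation, scaling and partial occlusion, which is the property the detection algorithm exploits.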

  1. Naive Probability: Model-Based Estimates of Unique Events.

    PubMed

    Khemlani, Sangeet S; Lotstein, Max; Johnson-Laird, Philip N

    2015-08-01

    We describe a dual-process theory of how individuals estimate the probabilities of unique events, such as Hillary Clinton becoming U.S. President. It postulates that uncertainty is a guide to improbability. In its computer implementation, an intuitive system 1 simulates evidence in mental models and forms analog non-numerical representations of the magnitude of degrees of belief. This system has minimal computational power and combines evidence using a small repertoire of primitive operations. It resolves the uncertainty of divergent evidence for single events, for conjunctions of events, and for inclusive disjunctions of events, by taking a primitive average of non-numerical probabilities. It computes conditional probabilities in a tractable way, treating the given event as evidence that may be relevant to the probability of the dependent event. A deliberative system 2 maps the resulting representations into numerical probabilities. With access to working memory, it carries out arithmetical operations in combining numerical estimates. Experiments corroborated the theory's predictions. Participants concurred in estimates of real possibilities. They violated the complete joint probability distribution in the predicted ways, when they made estimates about conjunctions: P(A), P(B), P(A and B), disjunctions: P(A), P(B), P(A or B or both), and conditional probabilities P(A), P(B), P(B|A). They were faster to estimate the probabilities of compound propositions when they had already estimated the probabilities of each of their components. We discuss the implications of these results for theories of probabilistic reasoning. © 2014 Cognitive Science Society, Inc.
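The "violations of the complete joint probability distribution" reported above can be checked mechanically: any coherent joint distribution forces a conjunction estimate to lie inside the Fréchet bounds. The sketch below, with hypothetical component estimates, shows how a primitive average of P(A) and P(B) can violate them.

```python
def conjunction_consistent(p_a, p_b, p_ab):
    """A conjunction estimate P(A and B) is consistent with some complete
    joint probability distribution only inside the Frechet bounds:
    max(0, P(A) + P(B) - 1) <= P(A and B) <= min(P(A), P(B))."""
    return max(0.0, p_a + p_b - 1.0) <= p_ab <= min(p_a, p_b)

# A primitive average, P(A and B) = (P(A) + P(B)) / 2, violates the
# bounds whenever P(A) != P(B):
p_a, p_b = 0.8, 0.2
print(conjunction_consistent(p_a, p_b, 0.15))             # True
print(conjunction_consistent(p_a, p_b, (p_a + p_b) / 2))  # False
```

Participants' averaged conjunction estimates fail this test in just the way system 1's primitive-averaging account predicts.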

  2. Serial Spike Time Correlations Affect Probability Distribution of Joint Spike Events.

    PubMed

    Shahi, Mina; van Vreeswijk, Carl; Pipa, Gordon

    2016-01-01

    Detecting the existence of temporally coordinated spiking activity, and its role in information processing in the cortex, has remained a major challenge for neuroscience research. Different methods and approaches have been suggested to test whether the observed synchronized events are significantly different from those expected by chance. To analyze simultaneous spike trains for precise spike correlation, these methods typically model the spike trains as a Poisson process, implying that the generation of each spike is independent of all the other spikes. However, studies have shown that neural spike trains exhibit dependence among spike sequences, such as the absolute and relative refractory periods, which govern the spike probability of the oncoming action potential based on the time of the last spike, or bursting behavior, which is characterized by short epochs of rapid action potentials followed by longer episodes of silence. Here we investigate non-renewal processes with an inter-spike interval distribution model that incorporates spike-history dependence of individual neurons. For that, we use the Monte Carlo method to estimate the full shape of the coincidence count distribution and to generate false positives for coincidence detection. The results show that, compared to the distributions based on homogeneous Poisson processes, and also non-Poisson processes, the width of the distribution of joint spike events changes. Non-renewal processes can lead to either heavy-tailed or narrow coincidence distributions. We conclude that small differences in the exact autostructure of the point process can cause large differences in the width of a coincidence distribution. Therefore, manipulations of the autostructure for the estimation of significance of joint spike events seem to be inadequate.
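The Monte Carlo strategy described, estimating the coincidence count distribution under different spike-train models, can be sketched with binned trains. The toy below contrasts an independent-bin (Poisson-like) train with one that has a fixed refractory period; rates, bin counts and the refractory length are assumed for illustration.

```python
import random

random.seed(5)

def spike_train(n_bins, rate, refractory=0):
    """Binary spike train; a refractory period suppresses spiking for a
    fixed number of bins after each spike (non-Poisson history
    dependence)."""
    train, cooldown = [], 0
    for _ in range(n_bins):
        if cooldown == 0 and random.random() < rate:
            train.append(1)
            cooldown = refractory
        else:
            train.append(0)
            cooldown = max(0, cooldown - 1)
    return train

def coincidences(a, b):
    """Joint spike events: bins in which both trains fire."""
    return sum(x & y for x, y in zip(a, b))

def coincidence_distribution(refractory, n_trials=2000, n_bins=200, rate=0.1):
    counts = [coincidences(spike_train(n_bins, rate, refractory),
                           spike_train(n_bins, rate, refractory))
              for _ in range(n_trials)]
    mean = sum(counts) / n_trials
    var = sum((c - mean) ** 2 for c in counts) / n_trials
    return mean, var

m_poisson, v_poisson = coincidence_distribution(refractory=0)
m_refrac, v_refrac = coincidence_distribution(refractory=3)
print(round(v_poisson, 2), round(v_refrac, 2))
# the autostructure changes the width of the coincidence distribution
```

Even this crude history dependence shifts the coincidence distribution's width, which is why significance tests calibrated against a Poisson null can mis-state false-positive rates.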

  4. Technology-enhanced Interactive Teaching of Marginal, Joint and Conditional Probabilities: The Special Case of Bivariate Normal Distribution

    PubMed Central

    Dinov, Ivo D.; Kamino, Scott; Bhakhrani, Bilal; Christou, Nicolas

    2014-01-01

Data analysis requires subtle probability reasoning to answer questions like "What is the chance of event A occurring, given that event B was observed?" This generic question arises in discussions of many intriguing scientific questions such as "What is the probability that an adolescent weighs between 120 and 140 pounds given that they are of average height?" and "What is the probability of (monetary) inflation exceeding 4% and housing price index below 110?" To address such problems, learning some applied, theoretical or cross-disciplinary probability concepts is necessary. Teaching such courses can be improved by utilizing modern information technology resources. Students’ understanding of multivariate distributions, conditional probabilities, correlation and causation can be significantly strengthened by employing interactive web-based science educational resources. Independent of the type of a probability course (e.g. majors, minors or service probability course, rigorous measure-theoretic, applied or statistics course) student motivation, learning experiences and knowledge retention may be enhanced by blending modern technological tools within the classical conceptual pedagogical models. We have designed, implemented and disseminated a portable open-source web-application for teaching multivariate distributions, marginal, joint and conditional probabilities using the special case of bivariate Normal distribution. A real adolescent height and weight dataset is used to demonstrate the classroom utilization of the new web-application to address problems of parameter estimation, univariate and multivariate inference. PMID:25419016
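
    The conditional-probability question quoted in the abstract has a closed form under a bivariate normal model. The sketch below uses invented population parameters (the means, standard deviations and correlation are illustrative, not taken from the paper's dataset):

```python
import math

# Hypothetical adolescent population parameters (illustration only).
mu_h, sd_h = 65.0, 3.5      # height, inches
mu_w, sd_w = 130.0, 20.0    # weight, pounds
rho = 0.6                   # height-weight correlation

def norm_cdf(x, mu, sd):
    """Standard-library normal CDF via the error function."""
    return 0.5 * (1 + math.erf((x - mu) / (sd * math.sqrt(2))))

def p_weight_given_height(lo, hi, h):
    """P(lo < W < hi | H = h) under a bivariate normal model:
    the conditional law of W is normal with shifted mean and shrunken SD."""
    cond_mu = mu_w + rho * (sd_w / sd_h) * (h - mu_h)
    cond_sd = sd_w * math.sqrt(1 - rho ** 2)
    return norm_cdf(hi, cond_mu, cond_sd) - norm_cdf(lo, cond_mu, cond_sd)

p = p_weight_given_height(120, 140, mu_h)  # conditioning on average height
print(round(p, 3))
```

    Conditioning on average height leaves the conditional mean at 130 lb but shrinks the SD to 16 lb, so the interval probability is noticeably larger than it would be for an unconditioned weight.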

  5. Technology-enhanced Interactive Teaching of Marginal, Joint and Conditional Probabilities: The Special Case of Bivariate Normal Distribution.

    PubMed

    Dinov, Ivo D; Kamino, Scott; Bhakhrani, Bilal; Christou, Nicolas

    2013-01-01

Data analysis requires subtle probability reasoning to answer questions like "What is the chance of event A occurring, given that event B was observed?" This generic question arises in discussions of many intriguing scientific questions such as "What is the probability that an adolescent weighs between 120 and 140 pounds given that they are of average height?" and "What is the probability of (monetary) inflation exceeding 4% and housing price index below 110?" To address such problems, learning some applied, theoretical or cross-disciplinary probability concepts is necessary. Teaching such courses can be improved by utilizing modern information technology resources. Students' understanding of multivariate distributions, conditional probabilities, correlation and causation can be significantly strengthened by employing interactive web-based science educational resources. Independent of the type of a probability course (e.g. majors, minors or service probability course, rigorous measure-theoretic, applied or statistics course) student motivation, learning experiences and knowledge retention may be enhanced by blending modern technological tools within the classical conceptual pedagogical models. We have designed, implemented and disseminated a portable open-source web-application for teaching multivariate distributions, marginal, joint and conditional probabilities using the special case of bivariate Normal distribution. A real adolescent height and weight dataset is used to demonstrate the classroom utilization of the new web-application to address problems of parameter estimation, univariate and multivariate inference.

  6. Exact joint density-current probability function for the asymmetric exclusion process.

    PubMed

    Depken, Martin; Stinchcombe, Robin

    2004-07-23

    We study the asymmetric simple exclusion process with open boundaries and derive the exact form of the joint probability function for the occupation number and the current through the system. We further consider the thermodynamic limit, showing that the resulting distribution is non-Gaussian and that the density fluctuations have a discontinuity at the continuous phase transition, while the current fluctuations are continuous. The derivations are performed by using the standard operator algebraic approach and by the introduction of new operators satisfying a modified version of the original algebra. Copyright 2004 The American Physical Society

  7. Joint modelling of longitudinal CEA tumour marker progression and survival data on breast cancer

    NASA Astrophysics Data System (ADS)

    Borges, Ana; Sousa, Inês; Castro, Luis

    2017-06-01

This work proposes the use of biostatistical methods to study breast cancer in patients of Braga's Hospital Senology Unit, located in Portugal. The primary motivation is to contribute to the understanding of the progression of breast cancer within the Portuguese population, using more complex statistical model assumptions than the traditional analysis, which take into account the possible existence of a serial correlation structure within the observations of the same subject. We aim to infer which risk factors affect the survival of Braga's Hospital patients diagnosed with a breast tumour, while also analysing risk factors that affect a tumour marker used in the surveillance of disease progression, the carcinoembryonic antigen (CEA). As the survival and longitudinal processes may be associated, it is important to model these two processes together. Hence, a joint model of these two processes was fitted to infer on their association. A data set of 540 patients, along with 50 variables, was collected from medical records of the Hospital. Two different joint models, with different parameterizations that give different interpretations to the model parameters, were applied to the same data set. These were chosen for convenience, as they are the ones implemented in the R software. Results from the two models were compared. Results from the joint models showed that the longitudinal CEA values were significantly associated with the survival probability of these patients. A comparison between the parameter estimates obtained in this analysis and previous independent survival[4] and longitudinal analyses[5][6] leads us to conclude that independent analyses produce biased parameter estimates. Hence, an assumption of association between the two processes in a joint model of breast cancer data is necessary. Results indicate that the longitudinal progression of CEA is significantly associated with the probability of survival of these patients. Hence, an assumption of

  8. The effects of the one-step replica symmetry breaking on the Sherrington-Kirkpatrick spin glass model in the presence of random field with a joint Gaussian probability density function for the exchange interactions and random fields

    NASA Astrophysics Data System (ADS)

    Hadjiagapiou, Ioannis A.; Velonakis, Ioannis N.

    2018-07-01

    The Sherrington-Kirkpatrick Ising spin glass model, in the presence of a random magnetic field, is investigated within the framework of the one-step replica symmetry breaking. The two random variables (exchange integral interaction Jij and random magnetic field hi) are drawn from a joint Gaussian probability density function characterized by a correlation coefficient ρ, assuming positive and negative values. The thermodynamic properties, the three different phase diagrams and system's parameters are computed with respect to the natural parameters of the joint Gaussian probability density function at non-zero and zero temperatures. The low temperature negative entropy controversy, a result of the replica symmetry approach, has been partly remedied in the current study, leading to a less negative result. In addition, the present system possesses two successive spin glass phase transitions with characteristic temperatures.

  9. Optimal Power Allocation Strategy in a Joint Bistatic Radar and Communication System Based on Low Probability of Intercept

    PubMed Central

    Wang, Fei; Salous, Sana; Zhou, Jianjiang

    2017-01-01

    In this paper, we investigate a low probability of intercept (LPI)-based optimal power allocation strategy for a joint bistatic radar and communication system, which is composed of a dedicated transmitter, a radar receiver, and a communication receiver. The joint system is capable of fulfilling the requirements of both radar and communications simultaneously. First, assuming that the signal-to-noise ratio (SNR) corresponding to the target surveillance path is much weaker than that corresponding to the line of sight path at radar receiver, the analytically closed-form expression for the probability of false alarm is calculated, whereas the closed-form expression for the probability of detection is not analytically tractable and is approximated due to the fact that the received signals are not zero-mean Gaussian under target presence hypothesis. Then, an LPI-based optimal power allocation strategy is presented to minimize the total transmission power for information signal and radar waveform, which is constrained by a specified information rate for the communication receiver and the desired probabilities of detection and false alarm for the radar receiver. The well-known bisection search method is employed to solve the resulting constrained optimization problem. Finally, numerical simulations are provided to reveal the effects of several system parameters on the power allocation results. It is also demonstrated that the LPI performance of the joint bistatic radar and communication system can be markedly improved by utilizing the proposed scheme. PMID:29186850
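
    The bisection search mentioned above can be illustrated on a toy version of the power-allocation problem. The detection-probability curve below is a made-up monotone stand-in, not the paper's closed-form expression; the noise power and required Pd are likewise illustrative:

```python
import math

def detection_probability(power):
    """Hypothetical monotone detection model: Pd rises with SNR (proportional
    to transmit power). A stand-in for the paper's approximated Pd expression."""
    snr = power / 0.5                 # assumed noise power 0.5
    return 1 - math.exp(-snr)

def min_power_bisection(pd_required, lo=1e-6, hi=100.0, tol=1e-9):
    """Smallest power with detection_probability(power) >= pd_required.
    Valid because Pd is monotone increasing in power."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if detection_probability(mid) >= pd_required:
            hi = mid                  # feasible: shrink from above
        else:
            lo = mid                  # infeasible: raise the lower bound
    return hi

p = min_power_bisection(0.9)
print(round(p, 4))
```

    For this toy curve the answer is analytic, p = ln(10)/2, which makes the bisection easy to check; the same monotonicity argument underlies the constrained optimization in the paper.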

  10. Optimal Power Allocation Strategy in a Joint Bistatic Radar and Communication System Based on Low Probability of Intercept.

    PubMed

    Shi, Chenguang; Wang, Fei; Salous, Sana; Zhou, Jianjiang

    2017-11-25

    In this paper, we investigate a low probability of intercept (LPI)-based optimal power allocation strategy for a joint bistatic radar and communication system, which is composed of a dedicated transmitter, a radar receiver, and a communication receiver. The joint system is capable of fulfilling the requirements of both radar and communications simultaneously. First, assuming that the signal-to-noise ratio (SNR) corresponding to the target surveillance path is much weaker than that corresponding to the line of sight path at radar receiver, the analytically closed-form expression for the probability of false alarm is calculated, whereas the closed-form expression for the probability of detection is not analytically tractable and is approximated due to the fact that the received signals are not zero-mean Gaussian under target presence hypothesis. Then, an LPI-based optimal power allocation strategy is presented to minimize the total transmission power for information signal and radar waveform, which is constrained by a specified information rate for the communication receiver and the desired probabilities of detection and false alarm for the radar receiver. The well-known bisection search method is employed to solve the resulting constrained optimization problem. Finally, numerical simulations are provided to reveal the effects of several system parameters on the power allocation results. It is also demonstrated that the LPI performance of the joint bistatic radar and communication system can be markedly improved by utilizing the proposed scheme.

  11. Joint probability of statistical success of multiple phase III trials.

    PubMed

    Zhang, Jianliang; Zhang, Jenny J

    2013-01-01

    In drug development, after completion of phase II proof-of-concept trials, the sponsor needs to make a go/no-go decision to start expensive phase III trials. The probability of statistical success (PoSS) of the phase III trials based on data from earlier studies is an important factor in that decision-making process. Instead of statistical power, the predictive power of a phase III trial, which takes into account the uncertainty in the estimation of treatment effect from earlier studies, has been proposed to evaluate the PoSS of a single trial. However, regulatory authorities generally require statistical significance in two (or more) trials for marketing licensure. We show that the predictive statistics of two future trials are statistically correlated through use of the common observed data from earlier studies. Thus, the joint predictive power should not be evaluated as a simplistic product of the predictive powers of the individual trials. We develop the relevant formulae for the appropriate evaluation of the joint predictive power and provide numerical examples. Our methodology is further extended to the more complex phase III development scenario comprising more than two (K > 2) trials, that is, the evaluation of the PoSS of at least k₀ (k₀≤ K) trials from a program of K total trials. Copyright © 2013 John Wiley & Sons, Ltd.
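
    The correlation effect described above can be checked numerically: both future trials share the same posterior draw of the true effect, so the joint predictive power exceeds the naive product of the individual predictive powers. All effect sizes and standard errors below are invented for illustration:

```python
import math, random

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

# Hypothetical phase II result: estimated effect 0.3 with standard error 0.15.
theta_hat, se2 = 0.3, 0.15
se3 = 0.08   # assumed standard error of the effect estimate in each phase III trial
z = 1.96     # two-sided 5% significance

random.seed(1)
N = 200_000
joint = single = 0.0
for _ in range(N):
    theta = random.gauss(theta_hat, se2)   # posterior draw of the true effect
    power = phi(theta / se3 - z)           # P(significant | theta), per trial
    single += power
    joint += power * power                 # trials independent given theta
single /= N
joint /= N
print(round(single, 3), round(joint, 3), round(single ** 2, 3))
```

    Because the per-trial power varies with the shared draw of theta, E[power²] > (E[power])², which is exactly why the joint PoSS should not be computed as a simple product.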

  12. Estimating Model Probabilities using Thermodynamic Markov Chain Monte Carlo Methods

    NASA Astrophysics Data System (ADS)

    Ye, M.; Liu, P.; Beerli, P.; Lu, D.; Hill, M. C.

    2014-12-01

Markov chain Monte Carlo (MCMC) methods are widely used to evaluate model probability for quantifying model uncertainty. In the general procedure, MCMC simulations are first conducted for each individual model, and the MCMC parameter samples are then used to approximate the marginal likelihood of the model by calculating the geometric mean of the joint likelihood of the model and its parameters. It has been found that this geometric-mean estimator suffers from a numerically low convergence rate. A simple test case shows that even millions of MCMC samples are insufficient to yield an accurate estimate of the marginal likelihood. To resolve this problem, a thermodynamic method is used, in which multiple MCMC runs are conducted with different values of a heating coefficient between zero and one. When the heating coefficient is zero, the MCMC run is equivalent to a random walk MC in the prior parameter space; when the heating coefficient is one, the MCMC run is the conventional one. For a simple case with an analytical form of the marginal likelihood, the thermodynamic method yields a more accurate estimate than the geometric-mean method. This is also demonstrated for a case of groundwater modeling in which four alternative models are postulated based on different conceptualizations of a confining layer. This groundwater example shows that model probabilities estimated using the thermodynamic method are more reasonable than those obtained using the geometric-mean method. The thermodynamic method is general and can be used for a wide range of environmental problems for model uncertainty quantification.
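
    A minimal sketch of thermodynamic integration over a heating coefficient, on a conjugate Gaussian toy model where the tempered expectations and the exact marginal likelihood are available in closed form (so direct computation replaces the MCMC runs used in the abstract; the data-generating values are invented):

```python
import math, random

random.seed(2)
n = 20
data = [random.gauss(0.5, 1.0) for _ in range(n)]   # y_i ~ N(theta, 1), prior theta ~ N(0, 1)
s1, s2 = sum(data), sum(y * y for y in data)

def log_lik_expectation(beta):
    """E[log L(theta)] under p_beta(theta) proportional to N(theta; 0, 1) * L(theta)^beta.
    For this conjugate model p_beta is Gaussian, so the expectation is closed-form."""
    prec = 1.0 + beta * n
    m, v = beta * s1 / prec, 1.0 / prec
    return -0.5 * n * math.log(2 * math.pi) - 0.5 * (s2 - 2 * m * s1 + n * (m * m + v))

# Thermodynamic integration: log Z = integral over beta in [0, 1] of E_beta[log L],
# approximated by the trapezoid rule over a ladder of heating coefficients.
K = 200
vals = [log_lik_expectation(k / K) for k in range(K + 1)]
log_z_ti = sum((vals[i] + vals[i + 1]) / (2 * K) for i in range(K))

# Exact log marginal likelihood for this conjugate model, for comparison.
log_z_exact = (-0.5 * n * math.log(2 * math.pi)
               - 0.5 * math.log(1 + n)
               - 0.5 * (s2 - s1 * s1 / (1 + n)))
print(round(log_z_ti, 4), round(log_z_exact, 4))
```

    In a real application each `log_lik_expectation(beta)` would be an MCMC average from a tempered chain; the ladder-and-integrate structure is the same.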

  13. Modeling Progressive Failure of Bonded Joints Using a Single Joint Finite Element

    NASA Technical Reports Server (NTRS)

    Stapleton, Scott E.; Waas, Anthony M.; Bednarcyk, Brett A.

    2010-01-01

Enhanced finite elements are elements with an embedded analytical solution which can capture detailed local fields, enabling more efficient, mesh-independent finite element analysis. In the present study, an enhanced finite element is applied to generate a general framework capable of modeling an array of joint types. The joint field equations are derived using the principle of minimum potential energy, and the resulting solutions for the displacement fields are used to generate shape functions and a stiffness matrix for a single joint finite element. This single finite element thus captures the detailed stress and strain fields within the bonded joint, but it can function within a broader structural finite element model. The costs associated with a fine mesh of the joint can thus be avoided while still obtaining a detailed solution for the joint. Additionally, the capability to model non-linear adhesive constitutive behavior has been included within the method, and progressive failure of the adhesive can be modeled by using a strain-based failure criterion and re-sizing the joint as the adhesive fails. Results of the model compare favorably with experimental and finite element results.

  14. Experimental investigation of the intensity fluctuation joint probability and conditional distributions of the twin-beam quantum state.

    PubMed

    Zhang, Yun; Kasai, Katsuyuki; Watanabe, Masayoshi

    2003-01-13

    We give the intensity fluctuation joint probability of the twin-beam quantum state, which was generated with an optical parametric oscillator operating above threshold. Then we present what to our knowledge is the first measurement of the intensity fluctuation conditional probability distributions of twin beams. The measured inference variance of twin beams 0.62+/-0.02, which is less than the standard quantum limit of unity, indicates inference with a precision better than that of separable states. The measured photocurrent variance exhibits a quantum correlation of as much as -4.9+/-0.2 dB between the signal and the idler.

  15. A Modelling Method of Bolt Joints Based on Basic Characteristic Parameters of Joint Surfaces

    NASA Astrophysics Data System (ADS)

    Yuansheng, Li; Guangpeng, Zhang; Zhen, Zhang; Ping, Wang

    2018-02-01

Bolt joints are common in machine tools and have a direct impact on the overall performance of the tools. Therefore, an understanding of bolt joint characteristics is essential for improving machine design and assembly. First, a stiffness curve formula was fitted to the experimental data. Second, a finite element model of unit bolt joints, such as bolt flange joints, bolt head joints and thread joints, was constructed; lastly, the stiffness parameters of the joint surfaces were implemented in the model through the secondary development of ABAQUS. The finite element model of the bolt joint established by this method can simulate the contact state very well.

  16. Context-invariant quasi hidden variable (qHV) modelling of all joint von Neumann measurements for an arbitrary Hilbert space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loubenets, Elena R.

We prove the existence for each Hilbert space of the two new quasi hidden variable (qHV) models, statistically noncontextual and context-invariant, reproducing all the von Neumann joint probabilities via non-negative values of real-valued measures and all the quantum product expectations—via the qHV (classical-like) average of the product of the corresponding random variables. In a context-invariant model, a quantum observable X can be represented by a variety of random variables satisfying the functional condition required in quantum foundations but each of these random variables equivalently models X under all joint von Neumann measurements, regardless of their contexts. The proved existence of this model negates the general opinion that, in terms of random variables, the Hilbert space description of all the joint von Neumann measurements for dimH≥3 can be reproduced only contextually. The existence of a statistically noncontextual qHV model, in particular, implies that every N-partite quantum state admits a local quasi hidden variable model introduced in Loubenets [J. Math. Phys. 53, 022201 (2012)]. The new results of the present paper point also to the generality of the quasi-classical probability model proposed in Loubenets [J. Phys. A: Math. Theor. 45, 185306 (2012)].

  17. Site occupancy models with heterogeneous detection probabilities

    USGS Publications Warehouse

    Royle, J. Andrew

    2006-01-01

Models for estimating the probability of occurrence of a species in the presence of imperfect detection are important in many ecological disciplines. In these "site occupancy" models, the possibility of heterogeneity in detection probabilities among sites must be considered because variation in abundance (and other factors) among sampled sites induces variation in detection probability (p). In this article, I develop occurrence probability models that allow for heterogeneous detection probabilities by considering several common classes of mixture distributions for p. For any mixing distribution, the likelihood has the general form of a zero-inflated binomial mixture for which inference based upon integrated likelihood is straightforward. A recent paper by Link (2003, Biometrics 59, 1123-1130) demonstrates that in closed population models used for estimating population size, different classes of mixture distributions are indistinguishable from data, yet can produce very different inferences about population size. I demonstrate that this problem can also arise in models for estimating site occupancy in the presence of heterogeneous detection probabilities. The implications of this are discussed in the context of an application to avian survey data and the development of animal monitoring programs.
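
    The zero-inflated binomial mixture likelihood can be sketched for one common choice of mixing distribution, a Beta on p, which integrates to a beta-binomial in closed form. The occupancy probability psi, the Beta parameters and the visit count below are all illustrative:

```python
import math

def log_beta(a, b):
    """log of the Beta function, via lgamma."""
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def beta_binom_pmf(y, J, a, b):
    """P(y detections in J visits) with detection probability p ~ Beta(a, b)
    integrated out: the beta-binomial pmf."""
    return math.comb(J, y) * math.exp(log_beta(y + a, J - y + b) - log_beta(a, b))

def site_likelihood(y, J, psi, a, b):
    """Zero-inflated beta-binomial: a site is occupied with probability psi;
    an unoccupied site can only yield y = 0 detections."""
    lik = psi * beta_binom_pmf(y, J, a, b)
    if y == 0:
        lik += 1.0 - psi
    return lik

# Hypothetical survey: 5 visits per site, psi = 0.7, heterogeneous p ~ Beta(2, 3).
J, psi, a, b = 5, 0.7, 2.0, 3.0
counts = list(range(J + 1))
probs = [site_likelihood(y, J, psi, a, b) for y in counts]
print([round(p, 4) for p in probs])
```

    The probabilities sum to one over y = 0..J, and the zero cell mixes true absence with non-detection, which is exactly the confounding the article analyzes.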

  18. Probability based models for estimation of wildfire risk

    Treesearch

    Haiganoush Preisler; D. R. Brillinger; R. E. Burgan; John Benoit

    2004-01-01

    We present a probability-based model for estimating fire risk. Risk is defined using three probabilities: the probability of fire occurrence; the conditional probability of a large fire given ignition; and the unconditional probability of a large fire. The model is based on grouped data at the 1 km²-day cell level. We fit a spatially and temporally explicit non-...

  19. Multilevel joint competing risk models

    NASA Astrophysics Data System (ADS)

    Karunarathna, G. H. S.; Sooriyarachchi, M. R.

    2017-09-01

Joint modeling approaches are often encountered for different outcomes, such as competing-risk time-to-event and count outcomes, in many biomedical and epidemiological studies in the presence of a cluster effect. Hospital length of stay (LOS) is a widely used outcome measure of hospital utilization, serving as a benchmark for measuring multiple terminal events during hospitalization, such as discharge, transfer, death, and patients who have not completed the event of interest by the end of the follow-up period (censored). Competing risk models provide a method of addressing such multiple destinations, since classical time-to-event models yield biased results when there are multiple events. In this study, the concept of joint modeling has been applied to dengue epidemiology in Sri Lanka, 2006-2008, to assess the relationship between different outcomes of LOS and the platelet count of dengue patients, with district as the cluster effect. Two key approaches have been applied to build the joint scenario. In the first approach, each competing risk is modeled separately using a binary logistic model, treating all other events as censored under the multilevel discrete time-to-event model, while the platelet counts are assumed to follow a lognormal regression model. The second approach is based on the endogeneity effect in the multilevel competing risks and count model. Model parameters were estimated using maximum likelihood based on the Laplace approximation. Moreover, the study reveals that the joint modeling approach yields more precise results than fitting two separate univariate models, in terms of AIC (Akaike Information Criterion).

  20. A Bayesian Joint Model of Menstrual Cycle Length and Fecundity

    PubMed Central

    Lum, Kirsten J.; Sundaram, Rajeshwari; Louis, Germaine M. Buck; Louis, Thomas A.

    2015-01-01

Menstrual cycle length (MCL) has been shown to play an important role in couple fecundity, which is the biologic capacity for reproduction irrespective of pregnancy intentions. However, a comprehensive assessment of its role requires a fecundity model that accounts for male and female attributes and the couple’s intercourse pattern relative to the ovulation day. To this end, we employ a Bayesian joint model for MCL and pregnancy. MCLs follow a scale multiplied (accelerated) mixture model with Gaussian and Gumbel components; the pregnancy model includes MCL as a covariate and computes the cycle-specific probability of pregnancy in a menstrual cycle conditional on the pattern of intercourse and no previous fertilization. Day-specific fertilization probability is modeled using natural, cubic splines. We analyze data from the Longitudinal Investigation of Fertility and the Environment Study (the LIFE Study), a couple based prospective pregnancy study, and find a statistically significant quadratic relation between fecundity and menstrual cycle length, after adjustment for intercourse pattern and other attributes, including male semen quality, both partner’s age, and active smoking status (determined by baseline cotinine level 100 ng/mL). We compare results to those produced by a more basic model and show the advantages of a more comprehensive approach. PMID:26295923

  1. The perception of probability.

    PubMed

    Gallistel, C R; Krishan, Monika; Liu, Ye; Miller, Reilly; Latham, Peter E

    2014-01-01

    We present a computational model to explain the results from experiments in which subjects estimate the hidden probability parameter of a stepwise nonstationary Bernoulli process outcome by outcome. The model captures the following results qualitatively and quantitatively, with only 2 free parameters: (a) Subjects do not update their estimate after each outcome; they step from one estimate to another at irregular intervals. (b) The joint distribution of step widths and heights cannot be explained on the assumption that a threshold amount of change must be exceeded in order for them to indicate a change in their perception. (c) The mapping of observed probability to the median perceived probability is the identity function over the full range of probabilities. (d) Precision (how close estimates are to the best possible estimate) is good and constant over the full range. (e) Subjects quickly detect substantial changes in the hidden probability parameter. (f) The perceived probability sometimes changes dramatically from one observation to the next. (g) Subjects sometimes have second thoughts about a previous change perception, after observing further outcomes. (h) The frequency with which they perceive changes moves in the direction of the true frequency over sessions. (Explaining this finding requires 2 additional parametric assumptions.) The model treats the perception of the current probability as a by-product of the construction of a compact encoding of the experienced sequence in terms of its change points. It illustrates the why and the how of intermittent Bayesian belief updating and retrospective revision in simple perception. It suggests a reinterpretation of findings in the recent literature on the neurobiology of decision making. (PsycINFO Database Record (c) 2014 APA, all rights reserved).

  2. A probability space for quantum models

    NASA Astrophysics Data System (ADS)

    Lemmens, L. F.

    2017-06-01

A probability space contains a set of outcomes, a collection of events formed by subsets of the set of outcomes, and probabilities defined for all events. A reformulation in terms of propositions allows one to use the maximum entropy method to assign the probabilities, taking some constraints into account. The construction of a probability space for quantum models is determined by the choice of propositions, the choice of constraints, and the probability assignment by the maximum entropy method. This approach shows how typical quantum distributions such as Maxwell-Boltzmann, Fermi-Dirac and Bose-Einstein are partly related to well-known classical distributions. The relation between the conditional probability density, given some averages as constraints, and the appropriate ensemble is elucidated.
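
    A small sketch of the maximum entropy assignment under a mean-energy constraint, which yields Boltzmann-type weights of the kind alluded to above. The three-level energies and the target mean are invented; the Lagrange multiplier beta is found by bisection on the (monotone) constraint:

```python
import math

def maxent_probs(energies, mean_target):
    """Maximum-entropy distribution with a fixed mean energy:
    p_i proportional to exp(-beta * E_i), with beta chosen to match the constraint."""
    def mean_energy(beta):
        w = [math.exp(-beta * e) for e in energies]
        z = sum(w)
        return sum(e * wi for e, wi in zip(energies, w)) / z
    lo, hi = -50.0, 50.0              # mean_energy is decreasing in beta
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mean_energy(mid) > mean_target:
            lo = mid                  # mean too high: increase beta
        else:
            hi = mid
    beta = 0.5 * (lo + hi)
    w = [math.exp(-beta * e) for e in energies]
    z = sum(w)
    return [wi / z for wi in w]

# Three-level system: constrain the average energy to 0.8 (illustrative values).
energies = [0.0, 1.0, 2.0]
p = maxent_probs(energies, 0.8)
mean = sum(e * pi for e, pi in zip(energies, p))
print([round(x, 4) for x in p], round(mean, 4))
```

    Because the target mean lies below the uniform-distribution mean, the solver returns a positive beta and monotonically decreasing probabilities, the Boltzmann pattern.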

  3. Quantifying Hydroperiod, Fire and Nutrient Effects on the Composition of Plant Communities in Marl Prairie of the Everglades: a Joint Probability Method Based Model

    NASA Astrophysics Data System (ADS)

    Zhai, L.

    2017-12-01

A plant community can be simultaneously affected by human activities and climate change, and quantifying and predicting this combined effect with an appropriate, field-validated model framework is complex but very useful to conservation management. Plant communities in the Everglades provide a unique set of conditions for developing and validating such a model framework, because they are experiencing both intensive human activities (such as hydroperiod changes from drainage and restoration projects, nutrients from upstream agriculture, prescribed fire, etc.) and climate changes (such as warming, changing precipitation patterns, sea level rise, etc.). More importantly, previous research attention has focused on plant communities in the slough ecosystem (including ridge, slough and their tree islands); very few studies consider the marl prairie ecosystem. Compared with the slough ecosystem, which remains consistently flooded almost year-round, the marl prairie has a relatively shorter hydroperiod (flooded only in the wet season of a year). Therefore, plant communities of the marl prairie may be more strongly impacted by hydroperiod change. In addition to hydroperiod, fire and nutrients also affect the plant communities in the marl prairie. Therefore, to quantify the combined effects of water level, fire, and nutrients on the composition of the plant communities, we are developing a joint probability method based vegetation dynamic model. Further, the model is being validated with field data on changes of vegetation assemblages along environmental gradients in the marl prairie. Our poster presents preliminary data from this ongoing project.

  4. Probability density function modeling of scalar mixing from concentrated sources in turbulent channel flow

    NASA Astrophysics Data System (ADS)

    Bakosi, J.; Franzese, P.; Boybeyi, Z.

    2007-11-01

    Dispersion of a passive scalar from concentrated sources in fully developed turbulent channel flow is studied with the probability density function (PDF) method. The joint PDF of velocity, turbulent frequency and scalar concentration is represented by a large number of Lagrangian particles. A stochastic near-wall PDF model combines the generalized Langevin model of Haworth and Pope [Phys. Fluids 29, 387 (1986)] with Durbin's [J. Fluid Mech. 249, 465 (1993)] method of elliptic relaxation to provide a mathematically exact treatment of convective and viscous transport with a nonlocal representation of the near-wall Reynolds stress anisotropy. The presence of walls is incorporated through the imposition of no-slip and impermeability conditions on particles without the use of damping or wall-functions. Information on the turbulent time scale is supplied by the gamma-distribution model of van Slooten et al. [Phys. Fluids 10, 246 (1998)]. Two different micromixing models are compared that incorporate the effect of small scale mixing on the transported scalar: the widely used interaction by exchange with the mean and the interaction by exchange with the conditional mean model. Single-point velocity and concentration statistics are compared to direct numerical simulation and experimental data at Reτ=1080 based on the friction velocity and the channel half width. The joint model accurately reproduces a wide variety of conditional and unconditional statistics in both physical and composition space.
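
    The interaction-by-exchange-with-the-mean (IEM) micromixing model mentioned above can be sketched for an ensemble of notional Lagrangian particles: each particle's scalar relaxes toward the ensemble mean, so the mean is conserved while the scalar variance decays. The values of C_phi, omega, dt and the initial field are illustrative:

```python
import random

def iem_step(phi, omega, c_phi, dt):
    """One explicit-Euler IEM step: d(phi)/dt = -0.5 * C_phi * omega * (phi - <phi>).
    Each particle relaxes toward the ensemble mean; the mean itself is conserved."""
    mean = sum(phi) / len(phi)
    decay = 0.5 * c_phi * omega * dt
    return [p - decay * (p - mean) for p in phi]

random.seed(3)
phi = [random.gauss(1.0, 0.5) for _ in range(10_000)]   # initial particle scalars
mean0 = sum(phi) / len(phi)
for _ in range(100):
    phi = iem_step(phi, omega=2.0, c_phi=2.0, dt=0.01)  # hypothetical parameters
mean1 = sum(phi) / len(phi)
var1 = sum((p - mean1) ** 2 for p in phi) / len(phi)
print(round(mean1 - mean0, 9), round(var1, 4))
```

    The exponential variance decay with a conserved mean is the defining behavior of IEM; the conditional-mean variant compared in the paper replaces the global mean with a mean conditioned on other particle properties.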

  5. Improving the Rank Precision of Population Health Measures for Small Areas with Longitudinal and Joint Outcome Models

    PubMed Central

    Athens, Jessica K.; Remington, Patrick L.; Gangnon, Ronald E.

    2015-01-01

    Objectives The University of Wisconsin Population Health Institute has published the County Health Rankings since 2010. These rankings use population-based data to highlight health outcomes and the multiple determinants of these outcomes and to encourage in-depth health assessment for all United States counties. A significant methodological limitation, however, is the uncertainty of rank estimates, particularly for small counties. To address this challenge, we explore the use of longitudinal and pooled outcome data in hierarchical Bayesian models to generate county ranks with greater precision. Methods In our models we used pooled outcome data for three measure groups: (1) Poor physical and poor mental health days; (2) percent of births with low birth weight and fair or poor health prevalence; and (3) age-specific mortality rates for nine age groups. We used the fixed and random effects components of these models to generate posterior samples of rates for each measure. We also used time-series data in longitudinal random effects models for age-specific mortality. Based on the posterior samples from these models, we estimate ranks and rank quartiles for each measure, as well as the probability of a county ranking in its assigned quartile. Rank quartile probabilities for univariate, joint outcome, and/or longitudinal models were compared to assess improvements in rank precision. Results The joint outcome model for poor physical and poor mental health days resulted in improved rank precision, as did the longitudinal model for age-specific mortality rates. Rank precision for low birth weight births and fair/poor health prevalence based on the univariate and joint outcome models were equivalent. Conclusion Incorporating longitudinal or pooled outcome data may improve rank certainty, depending on characteristics of the measures selected. For measures with different determinants, joint modeling neither improved nor degraded rank precision. 
This approach suggests a simple

  6. Finite element model updating of riveted joints of simplified model aircraft structure

    NASA Astrophysics Data System (ADS)

    Yunus, M. A.; Rani, M. N. Abdul; Sani, M. S. M.; Shah, M. A. S. Aziz

    2018-04-01

Thin metal sheets are widely used to fabricate various types of aerospace structures because of their flexibility and the ease with which they can be formed into structures of almost any shape. The riveted joint has become one of the most popular joint types for assembling aerospace structures because it can easily be disassembled, maintained and inspected. In this paper, thin metal sheet components are assembled via riveted joints to form a simplified model of an aerospace structure. However, modelling jointed structures that are attached via mechanical joints such as riveted joints is very difficult because of local effects. Understandably, the dynamic characteristics of the joined structure can be significantly affected by these joints owing to local effects at the mating areas of the riveted joints, such as surface contact, clamping force and slip. A few types of element connectors available in MSC NASTRAN/PATRAN were investigated as representations of the rivet joints. The results obtained in terms of natural frequencies and mode shapes were then compared with their experimental counterparts in order to assess the level of accuracy of the element connectors used in modelling the rivet joints of the riveted structure. A reconciliation method via finite element model updating is used to bring the initial finite element model of the riveted structure as close as possible to the experimental data, and the results are discussed.

  7. Nonlinear Modeling of Joint Dominated Structures

    NASA Technical Reports Server (NTRS)

    Chapman, J. M.

    1990-01-01

    The development and verification of an accurate structural model of the nonlinear joint-dominated NASA Langley Mini-Mast truss are described. The approach is to characterize the structural behavior of the Mini-Mast joints and struts using a test configuration that can directly measure the struts' overall stiffness and damping properties, incorporate this data into the structural model using the residual force technique, and then compare the predicted response with empirical data taken by NASA/LaRC during the modal survey tests of the Mini-Mast. A new testing technique, referred to as 'link' testing, was developed and used to test prototype struts of the Mini-Masts. Appreciable nonlinearities including the free-play and hysteresis were demonstrated. Since static and dynamic tests performed on the Mini-Mast also exhibited behavior consistent with joints having free-play and hysteresis, nonlinear models of the Mini-Mast were constructed and analyzed. The Residual Force Technique was used to analyze the nonlinear model of the Mini-Mast having joint free-play and hysteresis.

  8. A joint probability approach for coincidental flood frequency analysis at ungauged basin confluences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Cheng

    2016-03-12

A reliable and accurate flood frequency analysis at the confluence of streams is of importance. Given that long-term peak flow observations are often unavailable at tributary confluences, at a practical level, this paper presents a joint probability approach (JPA) to address the coincidental flood frequency analysis at the ungauged confluence of two streams based on the flow rate data from the upstream tributaries. One case study is performed for comparison against several traditional approaches, including the position-plotting formula, the univariate flood frequency analysis, and the National Flood Frequency Program developed by the US Geological Survey. It shows that the results generated by the JPA approach agree well with the floods estimated by the plotting-position and univariate flood frequency analyses based on the observation data.
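    The coincidental-peak idea behind a JPA can be illustrated with a minimal Monte Carlo sketch (not the paper's actual method): assuming hypothetical, independent log-normal annual peak-flow distributions for the two tributaries (all parameter values below are illustrative), the distribution of the combined peak at the ungauged confluence is approximated by sampling.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical log-normal annual peak-flow models for two gauged tributaries
# (parameters are illustrative, not taken from the paper).
n = 100_000
q1 = rng.lognormal(mean=4.0, sigma=0.5, size=n)  # tributary 1 peaks (m^3/s)
q2 = rng.lognormal(mean=3.5, sigma=0.7, size=n)  # tributary 2 peaks (m^3/s)

# Simplest coincidence assumption: peaks add at the ungauged confluence.
q_conf = q1 + q2

# Design flood at the confluence for a return period T (years):
# exceedance probability p = 1/T, i.e. the (1 - p) empirical quantile.
def design_flood(q, T):
    return np.quantile(q, 1.0 - 1.0 / T)

q100 = design_flood(q_conf, 100.0)  # 100-year coincidental flood estimate
print(f"100-year confluence peak ~ {q100:.1f} m^3/s")
```

    A full joint probability analysis would also model the dependence between the tributary floods (for example via a copula) and the time lag between their peaks; independence and simple addition are simplifying assumptions made only for this sketch.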

  9. Convergence of Transition Probability Matrix in CLV-Markov Models

    NASA Astrophysics Data System (ADS)

    Permana, D.; Pasaribu, U. S.; Indratno, S. W.; Suprayogi, S.

    2018-04-01

A transition probability matrix is an arrangement of the transition probabilities from one state to another in a Markov chain model (MCM). One interesting aspect of an MCM is its long-run behavior. This behavior is derived from a property of the n-step transition probability matrix, namely the convergence of the n-step transition matrix as n tends to infinity. Mathematically, the convergence of the transition probability matrix means finding the limit of the n-th power of the transition matrix as n tends to infinity. The convergence of the transition probability matrix is of particular interest because it brings the matrix to its stationary form, which is useful for predicting the probabilities of transitions between states in the future. The method usually used to find the convergence of a transition probability matrix is the limiting-distribution approach. In this paper, the convergence of the transition probability matrix is instead obtained using a simple concept from linear algebra: diagonalizing the matrix. This method has a higher level of complexity because it requires performing the diagonalization of the matrix, but it has the advantage of yielding a general closed form for the n-th power of the transition probability matrix. This form is useful for examining the transition matrix before it becomes stationary. Example cases are taken from a CLV model using an MCM, called the CLV-Markov model. The transition probability matrices of several such models are analyzed to find their convergence forms. The result is that the convergence of the transition probability matrix obtained through diagonalization agrees with the convergence obtained by the commonly used limiting-distribution method.
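    The diagonalization route described above can be sketched for a toy two-state chain (the matrix entries are illustrative, not from the paper): writing P = V diag(w) V^-1 gives a closed form for P^n, whose limit as n grows has every row equal to the stationary distribution.

```python
import numpy as np

# Toy two-state transition probability matrix (illustrative values only).
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# Diagonalize: P = V diag(w) V^{-1}, hence P^n = V diag(w**n) V^{-1}.
w, V = np.linalg.eig(P)
Vinv = np.linalg.inv(V)

def P_power(n):
    """Closed form for the n-step transition matrix via diagonalization."""
    return (V * w**n) @ Vinv  # same as V @ np.diag(w**n) @ Vinv

# As n grows, every row converges to the stationary distribution pi,
# which for this chain is pi = (2/3, 1/3) since pi P = pi.
P_inf = P_power(200).real
print(P_inf)
```

    The eigenvalues here are 1 and 0.7, so the transient term decays geometrically and P^n approaches the rank-one stationary matrix, exactly the "stationary form" the abstract refers to.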

  10. Joint surface modeling with thin-plate splines.

    PubMed

    Boyd, S K; Ronsky, J L; Lichti, D D; Salkauskas, K; Chapman, M A; Salkauskas, D

    1999-10-01

    Mathematical joint surface models based on experimentally determined data points can be used to investigate joint characteristics such as curvature, congruency, cartilage thickness, joint contact areas, as well as to provide geometric information well suited for finite element analysis. Commonly, surface modeling methods are based on B-splines, which involve tensor products. These methods have had success; however, they are limited due to the complex organizational aspect of working with surface patches, and modeling unordered, scattered experimental data points. An alternative method for mathematical joint surface modeling is presented based on the thin-plate spline (TPS). It has the advantage that it does not involve surface patches, and can model scattered data points without experimental data preparation. An analytical surface was developed and modeled with the TPS to quantify its interpolating and smoothing characteristics. Some limitations of the TPS include discontinuity of curvature at exactly the experimental surface data points, and numerical problems dealing with data sets in excess of 2000 points. However, suggestions for overcoming these limitations are presented. Testing the TPS with real experimental data, the patellofemoral joint of a cat was measured with multistation digital photogrammetry and modeled using the TPS to determine cartilage thicknesses and surface curvature. The cartilage thickness distribution ranged between 100 to 550 microns on the patella, and 100 to 300 microns on the femur. It was found that the TPS was an effective tool for modeling joint surfaces because no preparation of the experimental data points was necessary, and the resulting unique function representing the entire surface does not involve surface patches. A detailed algorithm is presented for implementation of the TPS.
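    The patch-free, scattered-data property of the TPS that the abstract emphasizes can be sketched with SciPy's RBFInterpolator using the thin-plate-spline kernel, on a synthetic analytic surface rather than the cat patellofemoral data; setting smoothing=0 gives the interpolating behavior, smoothing>0 the smoothing behavior.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

# Scattered, unordered sample points on a synthetic analytic surface.
xy = rng.uniform(-1.0, 1.0, size=(200, 2))
z = np.sin(np.pi * xy[:, 0]) * np.cos(np.pi * xy[:, 1])

# Thin-plate spline fit: smoothing=0 interpolates the data exactly,
# smoothing>0 trades fidelity for smoothness (the abstract's
# interpolating vs. smoothing characterization).
tps = RBFInterpolator(xy, z, kernel="thin_plate_spline", smoothing=0.0)

# One global surface function, evaluable anywhere: no surface patches
# and no ordering or preparation of the data points required.
grid = np.stack(np.meshgrid(np.linspace(-0.9, 0.9, 50),
                            np.linspace(-0.9, 0.9, 50)), axis=-1).reshape(-1, 2)
z_hat = tps(grid)
print(z_hat.shape)
```

    Curvature and thickness maps would then be obtained by differentiating or differencing this single global function, which is exactly what the patch-based B-spline tensor products make awkward.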

  11. Application of multivariate Gaussian detection theory to known non-Gaussian probability density functions

    NASA Astrophysics Data System (ADS)

    Schwartz, Craig R.; Thelen, Brian J.; Kenton, Arthur C.

    1995-06-01

    A statistical parametric multispectral sensor performance model was developed by ERIM to support mine field detection studies, multispectral sensor design/performance trade-off studies, and target detection algorithm development. The model assumes target detection algorithms and their performance models which are based on data assumed to obey multivariate Gaussian probability distribution functions (PDFs). The applicability of these algorithms and performance models can be generalized to data having non-Gaussian PDFs through the use of transforms which convert non-Gaussian data to Gaussian (or near-Gaussian) data. An example of one such transform is the Box-Cox power law transform. In practice, such a transform can be applied to non-Gaussian data prior to the introduction of a detection algorithm that is formally based on the assumption of multivariate Gaussian data. This paper presents an extension of these techniques to the case where the joint multivariate probability density function of the non-Gaussian input data is known, and where the joint estimate of the multivariate Gaussian statistics, under the Box-Cox transform, is desired. The jointly estimated multivariate Gaussian statistics can then be used to predict the performance of a target detection algorithm which has an associated Gaussian performance model.
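    The Gaussianizing step described above can be sketched with scipy.stats.boxcox on synthetic right-skewed data (the data and parameters are illustrative, not the ERIM sensor data); the maximum-likelihood lambda is estimated and the transformed sample becomes much closer to Gaussian.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic positive, right-skewed (non-Gaussian) sensor-band data.
x = rng.lognormal(mean=0.0, sigma=0.8, size=5000)

# Box-Cox power-law transform; lambda is chosen by maximum likelihood.
y, lam = stats.boxcox(x)

print(f"lambda = {lam:.3f}")
print(f"skewness before: {stats.skew(x):.3f}, after: {stats.skew(y):.3f}")
```

    A detection algorithm derived under multivariate Gaussian assumptions can then be applied to the transformed data; the paper's extension goes further by jointly estimating the transformed Gaussian statistics when the joint non-Gaussian PDF of the input is known analytically.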

  12. Joint Control for Dummies: An Elaboration of Lowenkron's Model of Joint (Stimulus) Control

    ERIC Educational Resources Information Center

    Sidener, David W.

    2006-01-01

    The following paper describes Lowenkron's model of joint (stimulus) control. Joint control is described as a means of accounting for performances, especially generalized performances, for which a history of contingency control does not provide an adequate account. Examples are provided to illustrate instances in which joint control may facilitate…

  13. Lithostratigraphic interpretation from joint analysis of seismic tomography and magnetotelluric resistivity models using self-organizing map techniques

    NASA Astrophysics Data System (ADS)

    Bauer, K.; Muñoz, G.; Moeck, I.

    2012-12-01

    The combined interpretation of different models as derived from seismic tomography and magnetotelluric (MT) inversion represents a more efficient approach to determine the lithology of the subsurface compared with the separate treatment of each discipline. Such models can be developed independently or by application of joint inversion strategies. After the step of model generation using different geophysical methodologies, a joint interpretation work flow includes the following steps: (1) adjustment of a joint earth model based on the adapted, identical model geometry for the different methods, (2) classification of the model components (e.g. model blocks described by a set of geophysical parameters), and (3) re-mapping of the classified rock types to visualise their distribution within the earth model, and petrophysical characterization and interpretation. One possible approach for the classification of multi-parameter models is based on statistical pattern recognition, where different models are combined and translated into probability density functions. Classes of rock types are identified in these methods as isolated clusters with high probability density function values. Such techniques are well-established for the analysis of two-parameter models. Alternatively we apply self-organizing map (SOM) techniques, which have no limitations in the number of parameters to be analysed in the joint interpretation. Our SOM work flow includes (1) generation of a joint earth model described by so-called data vectors, (2) unsupervised learning or training, (3) analysis of the feature map by adopting image processing techniques, and (4) application of the knowledge to derive a lithological model which is based on the different geophysical parameters. We show the usage of the SOM work flow for a synthetic and a real data case study. Both tests rely on three geophysical properties: P velocity and vertical velocity gradient from seismic tomography, and electrical resistivity
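    The unsupervised-learning step of such a workflow can be sketched with a minimal pure-numpy 1-D self-organizing map trained on a synthetic two-parameter joint earth model (the two "rock types", their parameter values, and the map size below are all hypothetical, and this is far simpler than the authors' feature-map analysis).

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic joint earth model: each data vector = (P velocity, log resistivity).
# Two hypothetical rock types form two clusters (values are illustrative).
rock_a = rng.normal([4.0, 1.0], 0.1, size=(300, 2))
rock_b = rng.normal([6.0, 3.0], 0.1, size=(300, 2))
data = np.vstack([rock_a, rock_b])

# 1-D self-organizing map with 10 nodes.
n_nodes = 10
weights = rng.uniform(data.min(0), data.max(0), size=(n_nodes, 2))
positions = np.arange(n_nodes)

for epoch in range(50):
    lr = 0.5 * (1 - epoch / 50)               # decaying learning rate
    sigma = max(3.0 * (1 - epoch / 50), 0.5)  # decaying neighborhood width
    for x in rng.permutation(data):
        bmu = np.argmin(np.linalg.norm(weights - x, axis=1))  # best-matching unit
        h = np.exp(-((positions - bmu) ** 2) / (2 * sigma**2))
        weights += lr * h[:, None] * (x - weights)

# After training, best-matching units separate the two rock types.
bmu_a = np.argmin(np.linalg.norm(weights - rock_a[:, None], axis=2), axis=1)
bmu_b = np.argmin(np.linalg.norm(weights - rock_b[:, None], axis=2), axis=1)
print(sorted(set(bmu_a)), sorted(set(bmu_b)))
```

    In the real workflow the trained map is then segmented with image-processing techniques and the clusters re-mapped into the earth model as lithological classes; the point of the SOM is that nothing here is limited to two parameters.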

  14. Model-assisted probability of detection of flaws in aluminum blocks using polynomial chaos expansions

    NASA Astrophysics Data System (ADS)

    Du, Xiaosong; Leifsson, Leifur; Grandin, Robert; Meeker, William; Roberts, Ronald; Song, Jiming

    2018-04-01

Probability of detection (POD) is widely used for measuring the reliability of nondestructive testing (NDT) systems. Typically, POD is determined experimentally, while it can be enhanced by utilizing physics-based computational models in combination with model-assisted POD (MAPOD) methods. With the development of advanced physics-based methods, such as ultrasonic NDT, the empirical information needed for POD methods can be reduced. However, performing accurate numerical simulations can be prohibitively time-consuming, especially as part of a stochastic analysis. In this work, stochastic surrogate models for computational physics-based measurement simulations are developed for cost savings in MAPOD methods while simultaneously ensuring sufficient accuracy. The stochastic surrogate is used to propagate the random input variables through the physics-based simulation model to obtain the joint probability distribution of the output. The POD curves are then generated based on those results. Here, the stochastic surrogates are constructed using non-intrusive polynomial chaos (NIPC) expansions. In particular, the NIPC methods used are the quadrature, ordinary least-squares (OLS), and least-angle regression sparse (LARS) techniques. The proposed approach is demonstrated on the ultrasonic testing simulation of a flat-bottom-hole flaw in an aluminum block. The results show that the stochastic surrogates converge on the statistics at least two orders of magnitude faster than direct Monte Carlo sampling (MCS). Moreover, the evaluation of the stochastic surrogate models is over three orders of magnitude faster than the underlying simulation model for this case, which is the UTSim2 model.
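    The OLS flavor of NIPC can be sketched on a toy stand-in for the simulation model, y = x^2 with a standard normal input (the model and degree are hypothetical, not UTSim2): regress the sampled outputs on probabilists' Hermite polynomials, then read the output mean and variance directly off the chaos coefficients.

```python
import numpy as np
from math import factorial
from numpy.polynomial import hermite_e as He

rng = np.random.default_rng(3)

def model(x):
    """Toy stand-in for the physics-based simulation (y = x^2)."""
    return x**2

# Random input: standard normal, which pairs with probabilists' Hermite
# polynomials He_k as the orthogonal chaos basis.
x = rng.standard_normal(200)
y = model(x)

# Ordinary least-squares NIPC: regress y on He_0(x)..He_p(x).
p = 4
A = He.hermevander(x, p)             # design matrix, columns He_0..He_p
c, *_ = np.linalg.lstsq(A, y, rcond=None)

# Output statistics follow directly from the chaos coefficients:
# mean = c_0,  variance = sum_{k>=1} c_k^2 * k!   (||He_k||^2 = k!).
mean = c[0]
var = sum(c[k] ** 2 * factorial(k) for k in range(1, p + 1))
print(f"mean ~ {mean:.3f}, variance ~ {var:.3f}")
```

    For this toy model x^2 = He_2(x) + 1 exactly, so the recovered mean and variance are 1 and 2; the same coefficient-to-statistics shortcut is what lets a trained surrogate replace Monte Carlo sampling of the expensive simulation.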

  15. Progressive Damage Modeling of Durable Bonded Joint Technology

    NASA Technical Reports Server (NTRS)

    Leone, Frank A.; Davila, Carlos G.; Lin, Shih-Yung; Smeltzer, Stan; Girolamo, Donato; Ghose, Sayata; Guzman, Juan C.; McCarville, Duglas A.

    2013-01-01

    The development of durable bonded joint technology for assembling composite structures for launch vehicles is being pursued for the U.S. Space Launch System. The present work is related to the development and application of progressive damage modeling techniques to bonded joint technology applicable to a wide range of sandwich structures for a Heavy Lift Launch Vehicle. The joint designs studied in this work include a conventional composite splice joint and a NASA-patented Durable Redundant Joint. Both designs involve a honeycomb sandwich with carbon/epoxy facesheets joined with adhesively bonded doublers. Progressive damage modeling allows for the prediction of the initiation and evolution of damage. For structures that include multiple materials, the number of potential failure mechanisms that must be considered increases the complexity of the analyses. Potential failure mechanisms include fiber fracture, matrix cracking, delamination, core crushing, adhesive failure, and their interactions. The joints were modeled using Abaqus parametric finite element models, in which damage was modeled with user-written subroutines. Each ply was meshed discretely, and layers of cohesive elements were used to account for delaminations and to model the adhesive layers. Good correlation with experimental results was achieved both in terms of load-displacement history and predicted failure mechanisms.

  16. Impact of communities, health, and emotional-related factors on smoking use: comparison of joint modeling of mean and dispersion and Bayes' hierarchical models on add health survey.

    PubMed

    Pu, Jie; Fang, Di; Wilson, Jeffrey R

    2017-02-03

The analysis of correlated binary data is commonly addressed through the use of conditional models with random effects included in the systematic component, as opposed to generalized estimating equations (GEE) models that address the random component. Since the joint distribution of the observations is usually unknown, the conditional distribution is a natural approach. Our objective was to compare the fit of different binary models for correlated data on tobacco use. We advocate that the joint modeling of the mean and dispersion may at times be just as adequate. We assessed the ability of these models to account for the intraclass correlation. In so doing, we concentrated on fitting logistic regression models to address smoking behaviors. Frequentist and Bayes' hierarchical models were used to predict conditional probabilities, and the joint modeling (GLM and GAM) models were used to predict marginal probabilities. These models were fitted to National Longitudinal Study of Adolescent to Adult Health (Add Health) data on tobacco use. We found that people were less likely to smoke if they had higher income, had a high school or higher education, and were religious. Individuals were more likely to smoke if they had abused drugs or alcohol, spent more time on TV and video games, or had been arrested. Moreover, individuals who drank alcohol early in life were more likely to be regular smokers. Children who experienced mistreatment from their parents were more likely to use tobacco regularly. The joint modeling of the mean and dispersion offered a flexible and meaningful method of addressing the intraclass correlation. These models do not require one to identify random effects nor to distinguish one level of the hierarchy from another. Moreover, once the significant random effects are identified, one can obtain results similar to those of the random coefficient models.
We found that the set of marginal models accounting for extravariation through the additional dispersion submodel produced

  17. Bayesian informative dropout model for longitudinal binary data with random effects using conditional and joint modeling approaches.

    PubMed

    Chan, Jennifer S K

    2016-05-01

Dropouts are common in longitudinal studies. If the dropout probability depends on the missing observations at or after dropout, this type of dropout is called informative (or nonignorable) dropout (ID). Failure to accommodate such a dropout mechanism in the model will bias the parameter estimates. We propose a conditional autoregressive model for longitudinal binary data with an ID model such that the probabilities of positive outcomes as well as the dropout indicator at each occasion are logit-linear in some covariates and outcomes. This model, adopting a marginal model for outcomes and a conditional model for dropouts, is called a selection model. To allow for heterogeneity and clustering effects, the outcome model is extended to incorporate mixture and random effects. Lastly, the model is further extended to a novel model that models the outcome and dropout jointly such that their dependency is formulated through an odds ratio function. Parameters are estimated by a Bayesian approach implemented using the user-friendly Bayesian software WinBUGS. A methadone clinic dataset is analyzed to illustrate the proposed models. Results show that the treatment time effect is still significant but weaker after allowing for an ID process in the data. Finally, the effect of dropout on parameter estimates is evaluated through simulation studies. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. A stochastic Iwan-type model for joint behavior variability modeling

    NASA Astrophysics Data System (ADS)

    Mignolet, Marc P.; Song, Pengchao; Wang, X. Q.

    2015-08-01

This paper focuses on the development and validation of a stochastic model to describe the dissipation and stiffness properties of a bolted joint for which experimental data are available and exhibit a large scatter. An extension of the deterministic parallel-series Iwan model for the characterization of the force-displacement behavior of joints is first carried out. This new model involves dynamic and static coefficients of friction that differ from each other and a broadly defined distribution of Jenkins elements. Its applicability is next investigated using the experimental data, i.e. stiffness and dissipation measurements obtained in harmonic testing of 9 nominally identical bolted joints. The model is found to provide a very good fit of the experimental data for each bolted joint notwithstanding the significant variability of their behavior. This finding suggests that this variability can be simulated through the randomization of only the parameters of the proposed Iwan-type model. The distribution of these parameters is next selected based on maximum entropy concepts, and the corresponding parameters, i.e. the hyperparameters of the model, are identified using a maximum likelihood strategy. A Monte Carlo simulation of this stochastic Iwan model demonstrates that the experimental data fit well within the uncertainty band corresponding to the 5th and 95th percentiles of the model predictions, which supports the adequacy of the modeling effort.
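    The deterministic backbone being extended here can be sketched as a discrete parallel-series Iwan model: a population of Jenkins elements (spring in series with a Coulomb slider), all in parallel, loaded monotonically. All numbers below are illustrative, and the sketch omits the paper's separate static/dynamic friction coefficients and the randomization of parameters.

```python
import numpy as np

# Discrete parallel-series Iwan model: N Jenkins elements (spring of
# stiffness k in series with a Coulomb slider of strength phi_i),
# assembled in parallel with weights w_i.
N = 50
k = 1.0                          # common element stiffness (illustrative)
phi = np.linspace(0.1, 5.0, N)   # slip strengths (illustrative spread)
w = np.full(N, 1.0 / N)          # equal weights

def iwan_force(u):
    """Joint force under monotonic displacement u: stuck elements
    contribute k*u, slipped elements saturate at their strength phi_i."""
    return float(np.sum(w * np.minimum(k * u, phi)))

# Small u: nearly linear response (nothing slips). Large u: the force
# saturates as elements slip, which under cyclic loading produces the
# hysteretic dissipation the experiments measure.
print(iwan_force(0.05), iwan_force(100.0))
```

    The stochastic version in the paper effectively randomizes the quantities playing the role of k, phi and w across nominally identical joints, with distributions chosen by maximum entropy and hyperparameters fitted by maximum likelihood.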

  19. The non-Gaussian joint probability density function of slope and elevation for a nonlinear gravity wave field. [in ocean surface

    NASA Technical Reports Server (NTRS)

    Huang, N. E.; Long, S. R.; Bliven, L. F.; Tung, C.-C.

    1984-01-01

    On the basis of the mapping method developed by Huang et al. (1983), an analytic expression for the non-Gaussian joint probability density function of slope and elevation for nonlinear gravity waves is derived. Various conditional and marginal density functions are also obtained through the joint density function. The analytic results are compared with a series of carefully controlled laboratory observations, and good agreement is noted. Furthermore, the laboratory wind wave field observations indicate that the capillary or capillary-gravity waves may not be the dominant components in determining the total roughness of the wave field. Thus, the analytic results, though derived specifically for the gravity waves, may have more general applications.

  20. Generalized Models for Rock Joint Surface Shapes

    PubMed Central

    Du, Shigui; Hu, Yunjin; Hu, Xiaofei

    2014-01-01

    Generalized models of joint surface shapes are the foundation for mechanism studies on the mechanical effects of rock joint surface shapes. Based on extensive field investigations of rock joint surface shapes, generalized models for three level shapes named macroscopic outline, surface undulating shape, and microcosmic roughness were established through statistical analyses of 20,078 rock joint surface profiles. The relative amplitude of profile curves was used as a borderline for the division of different level shapes. The study results show that the macroscopic outline has three basic features such as planar, arc-shaped, and stepped; the surface undulating shape has three basic features such as planar, undulating, and stepped; and the microcosmic roughness has two basic features such as smooth and rough. PMID:25152901

  1. Joint Segmentation and Deformable Registration of Brain Scans Guided by a Tumor Growth Model

    PubMed Central

    Gooya, Ali; Pohl, Kilian M.; Bilello, Michel; Biros, George; Davatzikos, Christos

    2011-01-01

This paper presents an approach for joint segmentation and deformable registration of brain scans of glioma patients to a normal atlas. The proposed method is based on the Expectation Maximization (EM) algorithm that incorporates a glioma growth model for atlas seeding, a process which modifies the normal atlas into one with a tumor and edema. The modified atlas is registered into the patient space and utilized for the posterior probability estimation of various tissue labels. EM iteratively refines the estimates of the registration parameters, the posterior probabilities of tissue labels and the tumor growth model parameters. We have applied this approach to 10 glioma scans acquired with four Magnetic Resonance (MR) modalities (T1, T1-CE, T2 and FLAIR) and validated the result by comparing them to manual segmentations by clinical experts. The resulting segmentations look promising and quantitatively match well with the expert provided ground truth. PMID:21995070
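    The E-step tissue-posterior idea can be illustrated, far from the full registration and growth model, with EM for a 1-D two-class Gaussian mixture on synthetic "intensities" (all numbers are hypothetical); in the paper the registered atlas supplies spatial priors where this sketch uses global mixing weights.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic 1-D intensities from two tissue classes (illustrative).
x = np.concatenate([rng.normal(2.0, 0.5, 400), rng.normal(6.0, 0.8, 600)])

# Initial guesses for the two-component Gaussian mixture.
mu, sd, pi = np.array([1.0, 7.0]), np.array([1.0, 1.0]), np.array([0.5, 0.5])

def normal_pdf(x, mu, sd):
    return np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

for _ in range(100):
    # E-step: posterior probability of each tissue label per sample.
    r = pi * normal_pdf(x, mu, sd)
    r /= r.sum(axis=1, keepdims=True)
    # M-step: refine the class parameters given the posteriors.
    nk = r.sum(axis=0)
    mu = (r * x[:, None]).sum(axis=0) / nk
    sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    pi = nk / len(x)

print(mu, sd, pi)
```

    The paper's EM loop has the same alternating structure but additionally re-estimates the deformable registration and tumor growth parameters at each iteration.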

  3. Dynamic Uncertain Causality Graph for Knowledge Representation and Probabilistic Reasoning: Directed Cyclic Graph and Joint Probability Distribution.

    PubMed

    Zhang, Qin

    2015-07-01

    Probabilistic graphical models (PGMs) such as Bayesian network (BN) have been widely applied in uncertain causality representation and probabilistic reasoning. Dynamic uncertain causality graph (DUCG) is a newly presented model of PGMs, which can be applied to fault diagnosis of large and complex industrial systems, disease diagnosis, and so on. The basic methodology of DUCG has been previously presented, in which only the directed acyclic graph (DAG) was addressed. However, the mathematical meaning of DUCG was not discussed. In this paper, the DUCG with directed cyclic graphs (DCGs) is addressed. In contrast, BN does not allow DCGs, as otherwise the conditional independence will not be satisfied. The inference algorithm for the DUCG with DCGs is presented, which not only extends the capabilities of DUCG from DAGs to DCGs but also enables users to decompose a large and complex DUCG into a set of small, simple sub-DUCGs, so that a large and complex knowledge base can be easily constructed, understood, and maintained. The basic mathematical definition of a complete DUCG with or without DCGs is proved to be a joint probability distribution (JPD) over a set of random variables. The incomplete DUCG as a part of a complete DUCG may represent a part of JPD. Examples are provided to illustrate the methodology.

  4. A Quantum Probability Model of Causal Reasoning

    PubMed Central

    Trueblood, Jennifer S.; Busemeyer, Jerome R.

    2012-01-01

    People can often outperform statistical methods and machine learning algorithms in situations that involve making inferences about the relationship between causes and effects. While people are remarkably good at causal reasoning in many situations, there are several instances where they deviate from expected responses. This paper examines three situations where judgments related to causal inference problems produce unexpected results and describes a quantum inference model based on the axiomatic principles of quantum probability theory that can explain these effects. Two of the three phenomena arise from the comparison of predictive judgments (i.e., the conditional probability of an effect given a cause) with diagnostic judgments (i.e., the conditional probability of a cause given an effect). The third phenomenon is a new finding examining order effects in predictive causal judgments. The quantum inference model uses the notion of incompatibility among different causes to account for all three phenomena. Psychologically, the model assumes that individuals adopt different points of view when thinking about different causes. The model provides good fits to the data and offers a coherent account for all three causal reasoning effects thus proving to be a viable new candidate for modeling human judgment. PMID:22593747
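    The incompatibility mechanism the model relies on can be shown numerically in a minimal way: two noncommuting projective measurements in a 2-D Hilbert space yield order-dependent sequential probabilities. The state and measurement directions below are arbitrary choices for illustration, not fitted to the judgment data.

```python
import numpy as np

# State and two incompatible "yes" outcomes as unit vectors in 2-D.
psi = np.array([1.0, 0.0])
a = np.array([np.cos(np.pi / 8), np.sin(np.pi / 8)])          # "cause A present"
b = np.array([np.cos(3 * np.pi / 8), np.sin(3 * np.pi / 8)])  # "cause B present"

P_a = np.outer(a, a)  # projector onto a
P_b = np.outer(b, b)  # projector onto b

def seq_prob(first, second, psi):
    """Probability of 'yes' then 'yes' for two sequential measurements
    (Lueders rule: project with the first, then with the second)."""
    return float(np.linalg.norm(second @ first @ psi) ** 2)

p_ab = seq_prob(P_a, P_b, psi)  # judge A first, then B
p_ba = seq_prob(P_b, P_a, psi)  # judge B first, then A
print(p_ab, p_ba)  # the two probabilities differ
```

    Because P_a and P_b do not commute, the order of the two judgments changes the resulting probability; classical (commuting) events could never produce such an order effect, which is exactly what the model exploits for the predictive/diagnostic asymmetries.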

  5. Farm Management, Environment, and Weather Factors Jointly Affect the Probability of Spinach Contamination by Generic Escherichia coli at the Preharvest Stage

    PubMed Central

    Navratil, Sarah; Gregory, Ashley; Bauer, Arin; Srinath, Indumathi; Szonyi, Barbara; Nightingale, Kendra; Anciso, Juan; Jun, Mikyoung; Han, Daikwon; Lawhon, Sara; Ivanek, Renata

    2014-01-01

    The National Resources Information (NRI) databases provide underutilized information on the local farm conditions that may predict microbial contamination of leafy greens at preharvest. Our objective was to identify NRI weather and landscape factors affecting spinach contamination with generic Escherichia coli individually and jointly with farm management and environmental factors. For each of the 955 georeferenced spinach samples (including 63 positive samples) collected between 2010 and 2012 on 12 farms in Colorado and Texas, we extracted variables describing the local weather (ambient temperature, precipitation, and wind speed) and landscape (soil characteristics and proximity to roads and water bodies) from NRI databases. Variables describing farm management and environment were obtained from a survey of the enrolled farms. The variables were evaluated using a mixed-effect logistic regression model with random effects for farm and date. The model identified precipitation as a single NRI predictor of spinach contamination with generic E. coli, indicating that the contamination probability increases with an increasing mean amount of rain (mm) in the past 29 days (odds ratio [OR] = 3.5). The model also identified the farm's hygiene practices as a protective factor (OR = 0.06) and manure application (OR = 52.2) and state (OR = 108.1) as risk factors. In cross-validation, the model showed a solid predictive performance, with an area under the receiver operating characteristic (ROC) curve of 81%. Overall, the findings highlighted the utility of NRI precipitation data in predicting contamination and demonstrated that farm management, environment, and weather factors should be considered jointly in development of good agricultural practices and measures to reduce produce contamination. PMID:24509926

  6. Joint probabilities of extreme precipitation and wind gusts in Germany

    NASA Astrophysics Data System (ADS)

    von Waldow, H.; Martius, O.

    2012-04-01

    Extreme meteorological events such as storms, heavy rain, floods, droughts and heat waves can have devastating consequences for human health, infrastructure and ecosystems. Concomitantly occurring extreme events might interact synergistically to produce a particularly hazardous impact. The joint occurrence of droughts and heat waves, for example, can have an impact on human health and ecosystems that differs in both magnitude and kind from the impact of either extreme event alone. The co-occurrence of certain types of extreme events is plausible from physical and dynamical considerations, for example heavy precipitation and high wind speeds in the pathway of strong extratropical cyclones. The winter storm Kyrill not only caused wind gust speeds well in excess of 30 m/s across Europe, but also brought 24 h precipitation sums greater than the mean January accumulations in some regions. However, such compound risks are currently not accounted for by insurance companies, which assume independence of extreme weather events when calculating their premiums. While there are established statistical methods to model the extremes of univariate meteorological variables, the modelling of multidimensional extremes calls for an approach that is tailored to the specific problem at hand. A first step involves defining extreme bivariate wind/precipitation events. Because precipitation and wind gusts caused by the same cyclone or convective cell do not occur at exactly the same location and time, it is necessary to find a sound definition of "extreme compound event" for this case. We present a data-driven method to choose appropriate time and space intervals that define "concomitance" for wind and precipitation extremes. Based on station data of wind speed and gridded precipitation data, we arrive at time and space intervals that compare well with the typical time and space scales of extratropical cyclones, i.e. a maximum time lag of 1 day and a maximum distance
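    The data-driven notion of "concomitance" described above can be illustrated with a toy calculation: given daily series of wind and precipitation, count how often an extreme of one variable falls within a chosen time lag of an extreme of the other. The synthetic gamma-distributed data and the 98th-percentile threshold below are illustrative assumptions, not values from the study:

```python
import numpy as np

# Synthetic daily series standing in for station wind gusts and precipitation
rng = np.random.default_rng(0)
n_days = 3650
wind = rng.gamma(shape=2.0, scale=4.0, size=n_days)
precip = rng.gamma(shape=1.5, scale=3.0, size=n_days)

def joint_exceedances(x, y, qx, qy, max_lag):
    """Count extremes of x (above its qx-quantile) that have an extreme of y
    (above its qy-quantile) within +/- max_lag days -- a simple notion of
    temporal concomitance for compound events."""
    tx = np.flatnonzero(x > np.quantile(x, qx))
    ty = np.flatnonzero(y > np.quantile(y, qy))
    count = 0
    for t in tx:
        if ty.size and np.abs(ty - t).min() <= max_lag:
            count += 1
    return count

n_joint = joint_exceedances(wind, precip, 0.98, 0.98, max_lag=1)
```

    Varying `max_lag` and re-counting is exactly the kind of sensitivity check from which a data-driven lag can be chosen.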

  7. Real-time individual predictions of prostate cancer recurrence using joint models

    PubMed Central

    Taylor, Jeremy M. G.; Park, Yongseok; Ankerst, Donna P.; Proust-Lima, Cecile; Williams, Scott; Kestin, Larry; Bae, Kyoungwha; Pickles, Tom; Sandler, Howard

    2012-01-01

    Patients who were previously treated for prostate cancer with radiation therapy are monitored at regular intervals using a laboratory test called Prostate Specific Antigen (PSA). If the value of the PSA test starts to rise, this is an indication that the prostate cancer is more likely to recur, and the patient may wish to initiate new treatments. Such patients could be helped in making medical decisions by an accurate estimate of the probability of recurrence of the cancer in the next few years. In this paper, we describe the methodology for giving the probability of recurrence for a new patient, as implemented on a web-based calculator. The methods use a joint longitudinal-survival model. The model is developed on a training dataset of 2,386 patients and tested on a dataset of 846 patients. Bayesian estimation methods are used, with one Markov chain Monte Carlo (MCMC) algorithm developed for estimating the parameters from the training dataset and a second, quicker MCMC algorithm developed for predicting the risk of recurrence using the longitudinal PSA measures from a new patient. PMID:23379600

  8. Maximum parsimony, substitution model, and probability phylogenetic trees.

    PubMed

    Weng, J F; Thomas, D A; Mareels, I

    2011-01-01

    The problem of inferring phylogenies (phylogenetic trees) is one of the main problems in computational biology. There are three main methods for inferring phylogenies: Maximum Parsimony (MP), Distance Matrix (DM) and Maximum Likelihood (ML), of which the MP method is the best-studied and most popular. In the MP method the optimization criterion is the number of nucleotide substitutions, computed from the differences between the investigated nucleotide sequences. However, the MP method is often criticized because it counts only the substitutions observable at the current time, omitting the unobservable substitutions that actually occurred in the evolutionary history. To take the unobservable substitutions into account, substitution models have been established and are now widely used in the DM and ML methods, but these substitution models cannot be used within the classical MP method. Recently the authors proposed a probability representation model for phylogenetic trees; the trees reconstructed in this model are called probability phylogenetic trees. One advantage of the probability representation model is that it can include a substitution model to infer phylogenetic trees based on the MP principle. In this paper we explain how to use a substitution model in the reconstruction of probability phylogenetic trees and show the advantage of this approach with examples.
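    For context, the classical MP criterion that the probability representation model generalizes can be computed with Fitch's algorithm, which counts the minimum number of substitutions needed to explain the observed bases on a fixed tree. A minimal sketch for one site on a toy four-leaf tree (the tree shape and bases are hypothetical):

```python
def fitch(tree, site):
    """Minimum substitution count for one site under Fitch parsimony.
    `tree` is a nested tuple of leaf names; `site` maps leaf -> nucleotide."""
    def post(node):
        if isinstance(node, str):              # leaf: its observed base
            return {site[node]}, 0
        l_set, l_cost = post(node[0])
        r_set, r_cost = post(node[1])
        inter = l_set & r_set
        if inter:                              # states agree: no new change
            return inter, l_cost + r_cost
        return l_set | r_set, l_cost + r_cost + 1   # forced substitution

    return post(tree)[1]

tree = (("A", "B"), ("C", "D"))
site = {"A": "G", "B": "G", "C": "T", "D": "T"}
cost = fitch(tree, site)   # one substitution separates {G,G} from {T,T}
```

    Such counts only see present-day differences, which is exactly the criticism that motivates adding a substitution model.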

  9. The Efficacy of Using Diagrams When Solving Probability Word Problems in College

    ERIC Educational Resources Information Center

    Beitzel, Brian D.; Staley, Richard K.

    2015-01-01

    Previous experiments have shown a deleterious effect of visual representations on college students' ability to solve total- and joint-probability word problems. The present experiments used conditional-probability problems, known to be more difficult than total- and joint-probability problems. The diagram group was instructed in how to use tree…

  10. Joint genome-wide prediction in several populations accounting for randomness of genotypes: A hierarchical Bayes approach. I: Multivariate Gaussian priors for marker effects and derivation of the joint probability mass function of genotypes.

    PubMed

    Martínez, Carlos Alberto; Khare, Kshitij; Banerjee, Arunava; Elzo, Mauricio A

    2017-03-21

    It is important to consider heterogeneity of marker effects and allelic frequencies in across-population genome-wide prediction studies. Moreover, all regression models used in genome-wide prediction overlook randomness of genotypes. In this study, a family of hierarchical Bayesian models was developed to perform across-population genome-wide prediction, modeling genotypes as random variables and allowing population-specific effects for each marker. Models shared a common structure and differed in the priors used and the assumption about residual variances (homogeneous or heterogeneous). Randomness of genotypes was accounted for by deriving the joint probability mass function of marker genotypes conditional on allelic frequencies and pedigree information. As a consequence, these models incorporated kinship and genotypic information that permitted them not only to account for heterogeneity of allelic frequencies, but also to include individuals with missing genotypes at some or all loci without the need for prior imputation. This was possible because the non-observed fraction of the design matrix was treated as an unknown model parameter. For each model, a simpler version ignoring population structure, but still accounting for randomness of genotypes, was proposed. Implementation of these models and computation of some criteria for model comparison were illustrated using two simulated datasets. Theoretical and computational issues along with possible applications, extensions and refinements were discussed. Some features of the models developed in this study make them promising for genome-wide prediction; the use of the information contained in the probability distribution of genotypes is perhaps the most appealing. Further studies are needed to assess the performance of the models proposed here and to compare them with conventional models used in genome-wide prediction. Copyright © 2017 Elsevier Ltd. All rights reserved.
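    The elementary building block of such a derivation, the probability of a single genotype given an allele frequency, reduces under Hardy-Weinberg equilibrium to a binomial form. A minimal sketch (the paper's joint pmf additionally conditions on pedigree information, which is omitted here):

```python
from math import comb

def genotype_pmf(p, ploidy=2):
    """P(number of reference alleles = k) at a biallelic locus with
    reference-allele frequency p, assuming Hardy-Weinberg equilibrium and
    independence across founders (a simplification of the paper's joint pmf,
    which also conditions on pedigree)."""
    return {k: comb(ploidy, k) * p**k * (1 - p)**(ploidy - k)
            for k in range(ploidy + 1)}

pmf = genotype_pmf(0.3)    # e.g. P(heterozygote) = 2 * 0.3 * 0.7 = 0.42
```

    Treating missing genotypes as unknowns then amounts to summing or sampling over this distribution instead of imputing a fixed value.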

  11. Modeling joint restoration strategies for interdependent infrastructure systems

    PubMed Central

    Simonovic, Slobodan P.

    2018-01-01

    Life in the modern world depends on multiple critical services provided by infrastructure systems which are interdependent at multiple levels. To effectively respond to infrastructure failures, this paper proposes a model for developing an optimal joint restoration strategy for interdependent infrastructure systems following a disruptive event. First, models for (i) describing the structure of interdependent infrastructure systems and (ii) their interaction process are presented. Both models consider the failure types, infrastructure operating rules and interdependencies among systems. Second, an optimization model is proposed for determining an optimal joint restoration strategy at the infrastructure component level by minimizing the economic loss from the infrastructure failures. The utility of the model is illustrated using a case study of electric-water systems. Results show that a small number of failed infrastructure components can trigger high-level failures in interdependent systems, and that the optimal joint restoration strategy varies with failure occurrence time. The proposed models can help decision makers understand the mechanisms of infrastructure interactions and search for an optimal joint restoration strategy, which can significantly enhance the safety of infrastructure systems. PMID:29649300

  12. Modeling joint restoration strategies for interdependent infrastructure systems.

    PubMed

    Zhang, Chao; Kong, Jingjing; Simonovic, Slobodan P

    2018-01-01

    Life in the modern world depends on multiple critical services provided by infrastructure systems which are interdependent at multiple levels. To effectively respond to infrastructure failures, this paper proposes a model for developing an optimal joint restoration strategy for interdependent infrastructure systems following a disruptive event. First, models for (i) describing the structure of interdependent infrastructure systems and (ii) their interaction process are presented. Both models consider the failure types, infrastructure operating rules and interdependencies among systems. Second, an optimization model is proposed for determining an optimal joint restoration strategy at the infrastructure component level by minimizing the economic loss from the infrastructure failures. The utility of the model is illustrated using a case study of electric-water systems. Results show that a small number of failed infrastructure components can trigger high-level failures in interdependent systems, and that the optimal joint restoration strategy varies with failure occurrence time. The proposed models can help decision makers understand the mechanisms of infrastructure interactions and search for an optimal joint restoration strategy, which can significantly enhance the safety of infrastructure systems.

  13. An equivalent viscoelastic model for rock mass with parallel joints

    NASA Astrophysics Data System (ADS)

    Li, Jianchun; Ma, Guowei; Zhao, Jian

    2010-03-01

    An equivalent viscoelastic medium model is proposed for rock mass with parallel joints. A concept of "virtual wave source (VWS)" is proposed to take into account the wave reflections between the joints. The equivalent model can be effectively applied to analyze longitudinal wave propagation through discontinuous media with parallel joints. Parameters in the equivalent viscoelastic model are derived analytically based on longitudinal wave propagation across a single rock joint. The proposed model is then verified by applying identical incident waves to the discontinuous and equivalent viscoelastic media at one end to compare the output waves at the other end. When the wavelength of the incident wave is sufficiently long compared to the joint spacing, the effect of the VWS on wave propagation in rock mass is prominent. The results from the equivalent viscoelastic medium model are very similar to those determined from the displacement discontinuity method. Frequency dependence and joint spacing effect on the equivalent viscoelastic model and the VWS method are discussed.
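    For reference, the single-joint transmission result on which the equivalent-model parameters build can be sketched numerically. The magnitude below follows the standard displacement discontinuity model for a normally incident longitudinal wave across one linear joint of normal stiffness k_n; the stiffness, density and wave-speed values are illustrative assumptions, not values from the paper:

```python
import numpy as np

def transmission_magnitude(freq_hz, k_n, rho=2650.0, c_p=5000.0):
    """|T(omega)| = 1 / sqrt(1 + (omega * z / (2 * k_n))**2) for a normally
    incident P-wave crossing a single linear joint (displacement
    discontinuity model), with seismic impedance z = rho * c_p.
    rho [kg/m^3] and c_p [m/s] are illustrative rock properties."""
    z = rho * c_p
    omega = 2.0 * np.pi * np.asarray(freq_hz)
    return 1.0 / np.sqrt(1.0 + (omega * z / (2.0 * k_n)) ** 2)

t_low = transmission_magnitude(1.0, k_n=3.5e9)     # long wavelength
t_high = transmission_magnitude(100.0, k_n=3.5e9)  # short wavelength
```

    The monotone decay of |T| with frequency is the frequency dependence that the equivalent viscoelastic medium has to reproduce.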

  14. Experimental measurement and modeling analysis on mechanical properties of incudostapedial joint

    PubMed Central

    Zhang, Xiangming

    2011-01-01

    The incudostapedial (IS) joint between the incus and stapes is a synovial joint consisting of joint capsule, cartilage, and synovial fluid. The mechanical properties of the IS joint directly affect the middle ear transfer function for sound transmission. However, due to the complexity and small size of the joint, the mechanical properties of the IS joint have not been reported in the literature. In this paper, we report our current study on mechanical properties of human IS joint using both experimental measurement and finite element (FE) modeling analysis. Eight IS joint samples with the incus and stapes attached were harvested from human cadaver temporal bones. Tension, compression, stress relaxation and failure tests were performed on those samples in a micro-material testing system. An analytical approach with the hyperelastic Ogden model and a 3D FE model of the IS joint including the cartilage, joint capsule, and synovial fluid were employed to derive mechanical parameters of the IS joint. The comparison of measurements and modeling results reveals the relationship between the mechanical properties and structure of the IS joint. PMID:21061141

  15. Experimental measurement and modeling analysis on mechanical properties of incudostapedial joint.

    PubMed

    Zhang, Xiangming; Gan, Rong Z

    2011-10-01

    The incudostapedial (IS) joint between the incus and stapes is a synovial joint consisting of joint capsule, cartilage, and synovial fluid. The mechanical properties of the IS joint directly affect the middle ear transfer function for sound transmission. However, due to the complexity and small size of the joint, the mechanical properties of the IS joint have not been reported in the literature. In this paper, we report our current study on mechanical properties of human IS joint using both experimental measurement and finite element (FE) modeling analysis. Eight IS joint samples with the incus and stapes attached were harvested from human cadaver temporal bones. Tension, compression, stress relaxation and failure tests were performed on those samples in a micro-material testing system. An analytical approach with the hyperelastic Ogden model and a 3D FE model of the IS joint including the cartilage, joint capsule, and synovial fluid were employed to derive mechanical parameters of the IS joint. The comparison of measurements and modeling results reveals the relationship between the mechanical properties and structure of the IS joint.

  16. Computational Modelling and Movement Analysis of Hip Joint with Muscles

    NASA Astrophysics Data System (ADS)

    Siswanto, W. A.; Yoon, C. C.; Salleh, S. Md.; Ngali, M. Z.; Yusup, Eliza M.

    2017-01-01

    In this study, the hip joint and the main muscles are modelled by finite elements. The parts included in the model are the hip joint, hemipelvis, gluteus maximus, quadratus femoris and gemellus inferior. The materials used in this model are isotropic elastic, Mooney-Rivlin and neo-Hookean. The hip resultant forces of normal gait and stair climbing are applied to the hip joint model, and the displacement, stress and strain responses of the muscles are recorded. The FEBio nonlinear solver for biomechanics is employed to run the simulation of the hip joint model with muscles. The contact interfaces used in this model are sliding contact and tied contact. The analysis results show that the gluteus maximus has its maximum displacement, stress and strain in stair climbing. The quadratus femoris and gemellus inferior have their maximum displacement and strain in normal gait but their maximum stress in stair climbing. In addition, the computational model of the hip joint with muscles provides a platform for research and investigation, and can be used as a visualization platform for the hip joint.

  17. Model-Based Estimation of Ankle Joint Stiffness

    PubMed Central

    Misgeld, Berno J. E.; Zhang, Tony; Lüken, Markus J.; Leonhardt, Steffen

    2017-01-01

    We address the estimation of biomechanical parameters with wearable measurement technologies. In particular, we focus on the estimation of sagittal plane ankle joint stiffness in dorsiflexion/plantar flexion. For this estimation, a novel nonlinear biomechanical model of the lower leg was formulated that is driven by electromyographic signals. The model incorporates a two-dimensional kinematic description in the sagittal plane for the calculation of muscle lever arms and torques. To reduce estimation errors due to model uncertainties, a filtering algorithm is necessary that employs segmental orientation sensor measurements. Because of the model's inherent nonlinearities and nonsmooth dynamics, a square-root cubature Kalman filter was developed. The performance of the novel estimation approach was evaluated in silico and in an experimental procedure. The experimental study was conducted with body-worn sensors and a test bench that was specifically designed to obtain reference angle and torque measurements for a single joint. Results show that the filter is able to reconstruct joint angle positions, velocities and torque, as well as joint stiffness, during experimental test bench movements. PMID:28353683

  18. Probability bounds analysis for nonlinear population ecology models.

    PubMed

    Enszer, Joshua A; Andrei Măceș, D; Stadtherr, Mark A

    2015-09-01

    Mathematical models in population ecology often involve parameters that are empirically determined and inherently uncertain, with probability distributions for the uncertainties not known precisely. Propagating such imprecise uncertainties rigorously through a model to determine their effect on model outputs can be a challenging problem. We illustrate here a method for the direct propagation of uncertainties represented by probability bounds through nonlinear, continuous-time, dynamic models in population ecology. This makes it possible to determine rigorous bounds on the probability that some specified outcome for a population is achieved, which can be a core problem in ecosystem modeling for risk assessment and management. Results can be obtained at a computational cost that is considerably less than that required by statistical sampling methods such as Monte Carlo analysis. The method is demonstrated using three example systems, with focus on a model of an experimental aquatic food web subject to the effects of contamination by ionic liquids, a new class of potentially important industrial chemicals. Copyright © 2015. Published by Elsevier Inc.
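    The idea of probability bounds can be illustrated in a much simpler setting than the paper's dynamic models: when only an interval is known for a distribution parameter, the CDF envelope over that interval bounds any outcome probability. A toy sketch with a normal model and an interval-valued mean (all numbers hypothetical):

```python
from math import erf, sqrt

def normal_cdf(x, mu, sigma):
    """Standard closed-form normal CDF via the error function."""
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

def pbox_bounds(x, mu_lo, mu_hi, sigma):
    """Pointwise bounds on P(X <= x) when only mu in [mu_lo, mu_hi] is known.
    The normal CDF is decreasing in mu, so the envelope comes from the
    interval endpoints. (A toy illustration of probability bounds; the paper
    propagates p-boxes through nonlinear dynamic models, which is much
    harder.)"""
    return normal_cdf(x, mu_hi, sigma), normal_cdf(x, mu_lo, sigma)

lo, hi = pbox_bounds(x=10.0, mu_lo=8.0, mu_hi=12.0, sigma=2.0)
```

    Any precise distribution consistent with the interval yields a probability between `lo` and `hi`, which is the kind of rigorous statement the method delivers.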

  19. The probability heuristics model of syllogistic reasoning.

    PubMed

    Chater, N; Oaksford, M

    1999-03-01

    A probability heuristic model (PHM) for syllogistic reasoning is proposed. An informational ordering over quantified statements suggests simple probability based heuristics for syllogistic reasoning. The most important is the "min-heuristic": choose the type of the least informative premise as the type of the conclusion. The rationality of this heuristic is confirmed by an analysis of the probabilistic validity of syllogistic reasoning which treats logical inference as a limiting case of probabilistic inference. A meta-analysis of past experiments reveals close fits with PHM. PHM also compares favorably with alternative accounts, including mental logics, mental models, and deduction as verbal reasoning. Crucially, PHM extends naturally to generalized quantifiers, such as Most and Few, which have not been characterized logically and are, consequently, beyond the scope of current mental logic and mental model theories. Two experiments confirm the novel predictions of PHM when generalized quantifiers are used in syllogistic arguments. PHM suggests that syllogistic reasoning performance may be determined by simple but rational informational strategies justified by probability theory rather than by logic. Copyright 1999 Academic Press.
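    The min-heuristic itself is easy to state in code. The sketch below assumes the informativeness ordering All > Most > Few > Some > None > Some-not commonly attributed to PHM; treat the ordering as an assumption rather than a quotation from the paper:

```python
# Assumed informativeness ordering over quantifier types, most to least
# informative: A=All, M=Most, F=Few, I=Some, E=None, O=Some...not.
INFORMATIVENESS = ["A", "M", "F", "I", "E", "O"]

def min_heuristic(premise1, premise2):
    """Predicted conclusion type: the type of the least informative premise."""
    rank = {q: i for i, q in enumerate(INFORMATIVENESS)}
    return premise1 if rank[premise1] > rank[premise2] else premise2

concl = min_heuristic("A", "I")   # "Some" is less informative than "All"
```

    Because the heuristic operates only on the ordering, it extends immediately to the generalized quantifiers Most and Few, which is the extension the paper emphasizes.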

  20. A model to explain joint patterns found in ignimbrite deposits

    NASA Astrophysics Data System (ADS)

    Tibaldi, A.; Bonali, F. L.

    2018-03-01

    The study of fracture systems is of paramount importance for economic applications, such as CO2 storage in rock successions, geothermal and hydrocarbon exploration and exploitation, and also for a better knowledge of seismogenic fault formation. Understanding the origin of joints can be useful for tectonic studies and for a geotechnical characterisation of rock masses. Here, we illustrate a joint pattern discovered in ignimbrite deposits of South America, which can be confused with conjugate tectonic joint sets but which have another origin. The pattern is probably common, but recognisable only in plan view and before tectonic deformation obscures and overprints it. Key sites have been mostly studied by field surveys in Bolivia and Chile. The pattern is represented by hundreds-of-meters up to kilometre-long swarms of master joints, which show circular to semi-circular geometries and intersections that have "X" and "Y" patterns. Inside each swarm, joints are systematic, rectilinear or curvilinear in plan view, and as much as 900 m long. In section view, they are from sub-vertical to vertical and do not affect the underlying deposits. Joints with different orientation mostly interrupt each other, suggesting they have the same age. This joint architecture is here interpreted as resulting from differential contraction after emplacement of the ignimbrite deposit above a complex topography. The set of the joint pattern that has suitable orientation with respect to tectonic stresses may act to nucleate faults.

  1. Multiple model cardinalized probability hypothesis density filter

    NASA Astrophysics Data System (ADS)

    Georgescu, Ramona; Willett, Peter

    2011-09-01

    The Probability Hypothesis Density (PHD) filter propagates the first-moment approximation to the multi-target Bayesian posterior distribution while the Cardinalized PHD (CPHD) filter propagates both the posterior likelihood of (an unlabeled) target state and the posterior probability mass function of the number of targets. Extensions of the PHD filter to the multiple model (MM) framework have been published and were implemented either with a Sequential Monte Carlo or a Gaussian Mixture approach. In this work, we introduce the multiple model version of the more elaborate CPHD filter. We present the derivation of the prediction and update steps of the MMCPHD particularized for the case of two target motion models and proceed to show that in the case of a single model, the new MMCPHD equations reduce to the original CPHD equations.

  2. Multinomial mixture model with heterogeneous classification probabilities

    USGS Publications Warehouse

    Holland, M.D.; Gray, B.R.

    2011-01-01

    Royle and Link (Ecology 86(9):2505-2512, 2005) proposed an analytical method that allowed estimation of multinomial distribution parameters and classification probabilities from categorical data measured with error. While useful, we demonstrate algebraically and by simulations that this method yields biased multinomial parameter estimates when the probabilities of correct category classifications vary among sampling units. We address this shortcoming by treating these probabilities as logit-normal random variables within a Bayesian framework. We use Markov chain Monte Carlo to compute Bayes estimates from a simulated sample from the posterior distribution. Based on simulations, this elaborated Royle-Link model yields nearly unbiased multinomial and correct-classification probability estimates when classification probabilities vary according to the normal distribution on the logit scale or according to the Beta distribution. The method is illustrated using categorical submersed aquatic vegetation data. © 2010 Springer Science+Business Media, LLC.
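    The heterogeneity at issue can be simulated directly: draw each sampling unit's correct-classification probability from a logit-normal distribution and generate misclassified data. A hedged sketch with made-up parameter values, using binary rather than general multinomial categories:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_misclassified_counts(n_units, n_per_unit, true_prob,
                                  logit_mu, logit_sd):
    """Simulate binary classifications where the probability of a *correct*
    classification varies across sampling units on the logit scale
    (logit-normal), the situation that biases the original Royle-Link
    estimator. All parameter values are illustrative."""
    # Unit-specific correct-classification probabilities (logit-normal draws)
    correct = 1.0 / (1.0 + np.exp(-rng.normal(logit_mu, logit_sd, n_units)))
    truth = rng.random((n_units, n_per_unit)) < true_prob
    keep = rng.random((n_units, n_per_unit)) < correct[:, None]
    observed = np.where(keep, truth, ~truth)   # flip the label when misclassified
    return observed.sum(axis=1)                # observed positives per unit

counts = simulate_misclassified_counts(50, 100, 0.6, logit_mu=2.0, logit_sd=0.5)
```

    Fitting a model that ignores the unit-to-unit variation in `correct` to such data reproduces the bias the paper demonstrates.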

  3. On the quantification and efficient propagation of imprecise probabilities resulting from small datasets

    NASA Astrophysics Data System (ADS)

    Zhang, Jiaxin; Shields, Michael D.

    2018-01-01

    This paper addresses the problem of uncertainty quantification and propagation when data for characterizing probability distributions are scarce. We propose a methodology wherein the full uncertainty associated with probability model form and parameter estimation is retained and efficiently propagated. This is achieved by applying the information-theoretic multimodel inference method to identify plausible candidate probability densities and the associated probabilities that each model is the best in the Kullback-Leibler sense. The joint parameter densities for each plausible model are then estimated using Bayes' rule. We then propagate this full set of probability models by estimating an optimal importance sampling density that is representative of all plausible models, propagating this density, and reweighting the samples according to each of the candidate probability models. This is in contrast with conventional methods that try to identify a single probability model that encapsulates the full uncertainty caused by lack of data and consequently underestimate uncertainty. The result is a complete probabilistic description of both aleatory and epistemic uncertainty achieved with several orders of magnitude reduction in computational cost. It is shown how the model can be updated to adaptively accommodate added data and added candidate probability models. The method is applied to uncertainty analysis of plate buckling strength, where it is demonstrated how dataset size affects the confidence (or lack thereof) we can place in statistical estimates of response when data are lacking.
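    The first stage, turning candidate model fits into model probabilities, can be sketched with Akaike weights, a common information-theoretic multimodel inference device. The two candidate families and the data here are illustrative; the paper's full method also propagates parameter uncertainty and reweights importance samples:

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(10.0, 2.0, size=20)     # a small (scarce) dataset

def aic(loglik, k):
    """Akaike information criterion for a model with k parameters."""
    return 2 * k - 2 * loglik

# Log-likelihoods at the MLE for two candidate model forms
n = data.size
mu, sd = data.mean(), data.std()
ll_norm = -0.5 * n * (np.log(2 * np.pi * sd**2) + 1)     # normal, 2 params
rate = 1.0 / data.mean()
ll_expo = n * np.log(rate) - rate * data.sum()            # exponential, 1 param

aics = np.array([aic(ll_norm, 2), aic(ll_expo, 1)])
delta = aics - aics.min()
weights = np.exp(-0.5 * delta)
weights /= weights.sum()          # model probabilities (Akaike weights)
```

    With more data the weights concentrate on one family; with scarce data several families keep non-negligible weight, which is precisely the model-form uncertainty the paper retains.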

  4. Independent events in elementary probability theory

    NASA Astrophysics Data System (ADS)

    Csenki, Attila

    2011-07-01

    In Probability and Statistics taught to mathematicians as a first introduction or to a non-mathematical audience, joint independence of events is introduced by requiring that the multiplication rule be satisfied. The following statement is usually tacitly assumed to hold (and, at best, intuitively motivated): if the n events E1, E2, …, En are jointly independent, then any two events A and B built in finitely many steps from two disjoint subsets of E1, E2, …, En are also independent. The operations 'union', 'intersection' and 'complementation' are permitted only when forming the events A and B. Here we examine this statement from the point of view of elementary probability theory. The approach described here is accessible also to users of probability theory and is believed to be novel.
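    The statement can be checked numerically for any particular construction by exact enumeration over the joint outcomes. A small sketch with three independent events and A, B built from disjoint subsets (the event definitions and probabilities are arbitrary examples):

```python
from itertools import product

# Three jointly independent events realized as independent {0,1} coordinates
probs = [0.3, 0.5, 0.7]          # P(E1), P(E2), P(E3)

def p_of(event):
    """Exact probability of `event` (a predicate on the outcome bits),
    computed by summing product weights over all 2**3 joint outcomes."""
    total = 0.0
    for bits in product([0, 1], repeat=3):
        w = 1.0
        for b, p in zip(bits, probs):
            w *= p if b else (1 - p)
        if event(bits):
            total += w
    return total

A = lambda e: e[0] == 1 or e[1] == 0     # built from {E1, E2} (union, complement)
B = lambda e: e[2] == 1                  # built from {E3}
pA, pB, pAB = p_of(A), p_of(B), p_of(lambda e: A(e) and B(e))
# pAB equals pA * pB, as the tacitly assumed statement predicts
```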

  5. Modality, probability, and mental models.

    PubMed

    Hinterecker, Thomas; Knauff, Markus; Johnson-Laird, P N

    2016-10-01

    We report 3 experiments investigating novel sorts of inference, such as: A or B or both. Therefore, possibly (A and B). The contents were sensible assertions, for example, "Space tourism will achieve widespread popularity in the next 50 years or advances in material science will lead to the development of antigravity materials in the next 50 years, or both." Most participants accepted the inferences as valid, though they are invalid in modal logic and in probabilistic logic too. But the theory of mental models predicts that individuals should accept them. In contrast, inferences of the sort "A or B but not both. Therefore, A or B or both" are both logically valid and probabilistically valid. Yet, as the model theory also predicts, most reasoners rejected them. The participants' estimates of probabilities showed that their inferences tended not to be based on probabilistic validity, but that they did rate acceptable conclusions as more probable than unacceptable conclusions. We discuss the implications of the results for current theories of reasoning. PsycINFO Database Record (c) 2016 APA, all rights reserved.

  6. The research of laryngeal joints to reconstruction and modeling.

    PubMed

    Zhang, Yi; Shi, Tingchun

    2014-01-01

    The larynx has a complex structure with joints and multiple functions. To study artificial larynx and artificial auricle scaffolds, a three-dimensional digital model of the laryngeal joint is established in this paper using MIMICS, with its biomechanical properties analyzed and calculated by the finite element method. The model is based on 281 layers of CT-scanned images with an interlamellar spacing of 1.25 mm. The obtained data are denoised, segmented and smoothed before being loaded into MIMICS. With further optimization, an accurate and complete 3D model is obtained. Subsequently, a 3D FEM analysis of the normal laryngeal joint is performed, which allows observation from any dimension and angle. Compared with the natural laryngeal joint, this model has good geometric similarity and mechanically reproduces the voicing functions of the throat.

  7. Joint model-based clustering of nonlinear longitudinal trajectories and associated time-to-event data analysis, linked by latent class membership: with application to AIDS clinical studies.

    PubMed

    Huang, Yangxin; Lu, Xiaosun; Chen, Jiaqing; Liang, Juan; Zangmeister, Miriam

    2017-10-27

    Longitudinal and time-to-event data are often observed together. Finite mixture models are currently used to analyze nonlinear heterogeneous longitudinal data, which, by releasing the homogeneity restriction of nonlinear mixed-effects (NLME) models, can cluster individuals into one of the pre-specified classes with class membership probabilities. This clustering may have clinical significance, and be associated with clinically important time-to-event data. This article develops a joint modeling approach to a finite mixture of NLME models for longitudinal data and proportional hazard Cox model for time-to-event data, linked by individual latent class indicators, under a Bayesian framework. The proposed joint models and method are applied to a real AIDS clinical trial data set, followed by simulation studies to assess the performance of the proposed joint model and a naive two-step model, in which finite mixture model and Cox model are fitted separately.

  8. Modeling the probability distribution of peak discharge for infiltrating hillslopes

    NASA Astrophysics Data System (ADS)

    Baiamonte, Giorgio; Singh, Vijay P.

    2017-07-01

    Hillslope response plays a fundamental role in the prediction of peak discharge at the basin outlet. The peak discharge for the critical duration of rainfall and its probability distribution are needed for designing urban infrastructure facilities. This study derives the probability distribution, denoted as the GABS model, by coupling three models: (1) the Green-Ampt model for computing infiltration, (2) the kinematic wave model for computing the discharge hydrograph from the hillslope, and (3) the intensity-duration-frequency (IDF) model for computing design rainfall intensity. The Hortonian mechanism for runoff generation is employed for computing the surface runoff hydrograph. Since the antecedent soil moisture condition (ASMC) significantly affects the rate of infiltration, its effect on the probability distribution of peak discharge is investigated. Application to a watershed in Sicily, Italy, shows that with the increase of probability, the expected effect of ASMC to increase the maximum discharge diminishes. Only for low values of probability is the critical duration of rainfall influenced by ASMC, whereas its effect on the peak discharge seems to be small for any probability. For a set of parameters, the derived probability distribution of peak discharge seems to be fitted well by the gamma distribution. Finally, an application to a small watershed was carried out to test whether rational runoff coefficient tables for use with the rational method can be prepared in advance, together with a comparison between peak discharges obtained by the GABS model and those measured in an experimental flume for a loamy-sand soil.
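    The first ingredient of the GABS model, Green-Ampt infiltration, can be sketched as a fixed-point solution of the implicit cumulative-infiltration equation (the soil parameter values below are illustrative, not taken from the Sicilian case study):

```python
from math import log

def green_ampt_F(t, K, psi, dtheta, tol=1e-10):
    """Cumulative infiltration F(t) from the Green-Ampt equation
        K * t = F - psi * dtheta * ln(1 + F / (psi * dtheta)),
    solved by fixed-point iteration. K is saturated hydraulic conductivity
    [cm/h], psi the wetting-front suction head [cm], dtheta the moisture
    deficit [-]; the iteration converges since the map has slope < 1."""
    s = psi * dtheta
    F = max(K * t, tol)
    while True:
        F_new = K * t + s * log(1.0 + F / s)
        if abs(F_new - F) < tol:
            return F_new
        F = F_new

# Loam-like illustrative values after one hour of ponded infiltration
F1 = green_ampt_F(t=1.0, K=0.65, psi=16.7, dtheta=0.34)
```

    The moisture deficit `dtheta` is where ASMC enters: a wetter antecedent state shrinks `dtheta`, reducing infiltration capacity and raising the surface runoff that the kinematic wave step routes to the outlet.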

  9. PARTS: Probabilistic Alignment for RNA joinT Secondary structure prediction

    PubMed Central

    Harmanci, Arif Ozgun; Sharma, Gaurav; Mathews, David H.

    2008-01-01

    A novel method is presented for joint prediction of alignment and common secondary structures of two RNA sequences. The joint consideration of common secondary structures and alignment is accomplished by structural alignment over a search space defined by the newly introduced motif called matched helical regions. The matched helical region formulation generalizes previously employed constraints for structural alignment and thereby better accommodates the structural variability within RNA families. A probabilistic model based on pseudo free energies obtained from precomputed base pairing and alignment probabilities is utilized for scoring structural alignments. Maximum a posteriori (MAP) common secondary structures, sequence alignment and joint posterior probabilities of base pairing are obtained from the model via a dynamic programming algorithm called PARTS. The advantage of the more general structural alignment of PARTS is seen in secondary structure predictions for the RNase P family. For this family, the PARTS MAP predictions of secondary structures and alignment perform significantly better than prior methods that utilize a more restrictive structural alignment model. For the tRNA and 5S rRNA families, the richer structural alignment model of PARTS does not offer a benefit and the method therefore performs comparably with existing alternatives. For all RNA families studied, the posterior probability estimates obtained from PARTS offer an improvement over posterior probability estimates from a single sequence prediction. When considering the base pairings predicted over a threshold value of confidence, the combination of sensitivity and positive predictive value is better for PARTS than for single sequence prediction. PARTS source code is available for download under the GNU public license at http://rna.urmc.rochester.edu. PMID:18304945

  10. Ceramic joints

    DOEpatents

    Miller, Bradley J.; Patten, Jr., Donald O.

    1991-01-01

Butt joints between materials having different coefficients of thermal expansion are prepared with a reduced probability of failure by stress fracture. This is accomplished by narrowing/tapering the material having the lower coefficient of thermal expansion in a direction away from the joint interface and not joining the narrowed/tapered surface to the material having the higher coefficient of thermal expansion.

  11. On probability-possibility transformations

    NASA Technical Reports Server (NTRS)

    Klir, George J.; Parviz, Behzad

    1992-01-01

    Several probability-possibility transformations are compared in terms of the closeness of preserving second-order properties. The comparison is based on experimental results obtained by computer simulation. Two second-order properties are involved in this study: noninteraction of two distributions and projections of a joint distribution.
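
The abstract does not name the specific transformations compared, but two classical probability-to-possibility transformations (a ratio-scale normalization and the summation form associated with Dubois and Prade) can be sketched as follows; both are illustrative rather than the paper's exact procedures:

```python
import numpy as np

def ratio_transform(p):
    """Ratio-scale transformation: pi_i = p_i / max_j p_j."""
    p = np.asarray(p, dtype=float)
    return p / p.max()

def dubois_prade_transform(p):
    """Summation form: with p sorted descending, pi_i = sum_{j >= i} p_j,
    mapped back to the original order (ties would need the standard
    sum_{p_j <= p_i} p_j handling)."""
    p = np.asarray(p, dtype=float)
    order = np.argsort(-p)                   # indices by descending probability
    tail = np.cumsum(p[order][::-1])[::-1]   # tail sums in sorted order
    pi = np.empty_like(p)
    pi[order] = tail
    return pi

p = [0.5, 0.3, 0.2]
print(ratio_transform(p))         # values: 1.0, 0.6, 0.4
print(dubois_prade_transform(p))  # values: 1.0, 0.5, 0.2
```

The ratio form preserves relative magnitudes, while the summation form is the tightest possibility distribution consistent with the given probabilities.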

  12. The Gaussian copula model for the joint deficit index for droughts

    NASA Astrophysics Data System (ADS)

    Van de Vyver, H.; Van den Bergh, J.

    2018-06-01

The characterization of droughts and their impacts depends strongly on the time scale involved. To obtain an overall drought assessment, the cumulative effects of water deficits over different time scales need to be examined together. For example, the recently developed joint deficit index (JDI) is based on multivariate probabilities of precipitation over time scales from 1 to 12 months, and was constructed from empirical copulas. In this paper, we examine the Gaussian copula model for the JDI. We model the covariance across the temporal scales with a two-parameter function commonly used in spatial statistics or geostatistics. The validity of the covariance models is demonstrated with long-term precipitation series. Bootstrap experiments indicate that the Gaussian copula model has advantages over the empirical copula method in the context of drought severity assessment: (i) it can quantify droughts outside the range of the empirical copula, (ii) it provides adequate drought quantification, and (iii) it gives a better understanding of the uncertainty in the estimation.
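
A minimal sketch of the idea, assuming a powered-exponential correlation function across time-scale lags with illustrative parameter values (the paper's exact two-parameter covariance function and its fitted parameters are not given in the abstract):

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

def gaussian_copula_cdf(u, corr):
    """Joint CDF C(u) of a Gaussian copula with correlation matrix corr."""
    z = norm.ppf(u)
    return multivariate_normal(mean=np.zeros(len(u)), cov=corr).cdf(z)

def powered_exp_corr(scales, r, s):
    """Powered-exponential correlation across time-scale lags (assumed form)."""
    h = np.abs(np.subtract.outer(scales, scales))
    return np.exp(-(h / r) ** s)

scales = np.array([1, 3, 6, 12])        # a few of the 1- to 12-month scales
corr = powered_exp_corr(scales, r=6.0, s=1.0)
u = np.full(len(scales), 0.2)           # marginal precipitation percentiles
joint_p = gaussian_copula_cdf(u, corr)  # joint deficit probability
jdi = norm.ppf(joint_p)                 # standardize like an SPI-type index (illustrative)
```

With an identity correlation matrix the copula reduces to the product of the marginal percentiles, which gives a quick sanity check on the implementation.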

  13. Probability Modeling and Thinking: What Can We Learn from Practice?

    ERIC Educational Resources Information Center

    Pfannkuch, Maxine; Budgett, Stephanie; Fewster, Rachel; Fitch, Marie; Pattenwise, Simeon; Wild, Chris; Ziedins, Ilze

    2016-01-01

    Because new learning technologies are enabling students to build and explore probability models, we believe that there is a need to determine the big enduring ideas that underpin probabilistic thinking and modeling. By uncovering the elements of the thinking modes of expert users of probability models we aim to provide a base for the setting of…

  14. An experimentally based nonlinear viscoelastic model of joint passive moment.

    PubMed

    Esteki, A; Mansour, J M

    1996-04-01

    Previous investigations have not converged on a generally accepted model of the dissipative part of joint passive moment. To provide a basis for developing a model, a series of measurements were performed to characterize the passive moment at the metacarpophalangeal joint of the index finger. Two measurement procedures were used, one in moment relaxation over a range of fixed joint angles and the other at a series of constant joint velocities. Fung's quasi-linear viscoelastic theory motivated the development of the passive moment model. Using this approach, it was not necessary to make restrictive assumptions regarding the viscoelastic behavior of the passive moment. The generality of the formulation allowed specific functions to be chosen based on experimental data rather than finding coefficients which attempted to fit a preselected model of the data. It was shown that a nonlinear viscoelastic model described the passive stiffness. No significant frictional effects were found. Of particular importance was the nonlinear behavior of the dissipative part of the passive moment which was modeled by joint speed raised to a power less than one. This result could explain the differing findings among previous investigations, and may have important implications for control of limb movement.

  15. The relationship between species detection probability and local extinction probability

    USGS Publications Warehouse

    Alpizar-Jara, R.; Nichols, J.D.; Hines, J.E.; Sauer, J.R.; Pollock, K.H.; Rosenberry, C.S.

    2004-01-01

    In community-level ecological studies, generally not all species present in sampled areas are detected. Many authors have proposed the use of estimation methods that allow detection probabilities that are < 1 and that are heterogeneous among species. These methods can also be used to estimate community-dynamic parameters such as species local extinction probability and turnover rates (Nichols et al. Ecol Appl 8:1213-1225; Conserv Biol 12:1390-1398). Here, we present an ad hoc approach to estimating community-level vital rates in the presence of joint heterogeneity of detection probabilities and vital rates. The method consists of partitioning the number of species into two groups using the detection frequencies and then estimating vital rates (e.g., local extinction probabilities) for each group. Estimators from each group are combined in a weighted estimator of vital rates that accounts for the effect of heterogeneity. Using data from the North American Breeding Bird Survey, we computed such estimates and tested the hypothesis that detection probabilities and local extinction probabilities were negatively related. Our analyses support the hypothesis that species detection probability covaries negatively with local probability of extinction and turnover rates. A simulation study was conducted to assess the performance of vital parameter estimators as well as other estimators relevant to questions about heterogeneity, such as coefficient of variation of detection probabilities and proportion of species in each group. Both the weighted estimator suggested in this paper and the original unweighted estimator for local extinction probability performed fairly well and provided no basis for preferring one to the other.
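
The weighted combination described above can be sketched as a group-size-weighted average of per-group estimates; the exact weights used by the authors are an assumption here:

```python
import numpy as np

def weighted_vital_rate(rates, group_sizes):
    """Combine per-group vital-rate estimates (e.g., local extinction
    probabilities), weighted by the estimated number of species in each
    group. The weighting scheme is assumed, not taken from the paper."""
    rates = np.asarray(rates, dtype=float)
    w = np.asarray(group_sizes, dtype=float)
    return np.sum(w * rates) / np.sum(w)

# two groups partitioned by detection frequency:
# low-detectability species (20) vs high-detectability species (80)
eps = weighted_vital_rate([0.30, 0.10], [20, 80])   # -> 0.14
```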

  16. The influence of patellofemoral joint contact geometry on the modeling of three dimensional patellofemoral joint forces.

    PubMed

    Powers, Christopher M; Chen, Yu-Jen; Scher, Irving; Lee, Thay Q

    2006-01-01

The purpose of this study was to determine the influence of patellofemoral joint contact geometry on the modeling of three-dimensional patellofemoral joint forces. To achieve this goal, patellofemoral joint reaction forces (PFJRFs) measured from an in vitro cadaveric set-up were compared to PFJRFs estimated from a computer model that did not consider patellofemoral joint contact geometry. Ten cadaver knees were used in this study. Each was mounted on a custom jig fixed to an Instron frame. Quadriceps muscle loads were applied using a pulley system and weights. The force in the patellar ligament was obtained using a buckle transducer. To quantify the magnitude and direction of the PFJRF, a six-axis load cell was incorporated into the femoral fixation system so that a rigid body assumption could be made. PFJRF data were obtained at 0, 20, 40, and 60 degrees of knee flexion. Following in vitro testing, SIMM modeling software was used to develop computational models based on the three-dimensional coordinates (Microscribe digitizer) of individual muscle and patellar ligament force vectors obtained from the cadaver knees. The overall magnitude of the PFJRF estimated from the computer-generated models closely matched the direct measurements from the in vitro set-up (Pearson's correlation coefficient, R^2 = 0.91, p < 0.001). Although the computational model accurately estimated the posteriorly directed forces acting on the joint, some discrepancies were noted in the forces acting in the superior and lateral directions. These differences, however, were relatively small when expressed as a fraction of the overall PFJRF magnitude.

  17. Relative efficiency of joint-model and full-conditional-specification multiple imputation when conditional models are compatible: The general location model.

    PubMed

    Seaman, Shaun R; Hughes, Rachael A

    2018-06-01

Estimating the parameters of a regression model of interest is complicated by missing data on the variables in that model. Multiple imputation is commonly used to handle these missing data. Joint model multiple imputation and full-conditional specification multiple imputation are known to yield imputed data with the same asymptotic distribution when the conditional models of full-conditional specification are compatible with that joint model. We show that this asymptotic equivalence of imputation distributions does not imply that joint model multiple imputation and full-conditional specification multiple imputation will also yield asymptotically equally efficient inference about the parameters of the model of interest, nor that they will be equally robust to misspecification of the joint model. When the conditional models used by full-conditional specification multiple imputation are linear, logistic and multinomial regressions, these are compatible with a restricted general location joint model. We show that multiple imputation using the restricted general location joint model can be substantially more asymptotically efficient than full-conditional specification multiple imputation, but this typically requires very strong associations between variables. When associations are weaker, the efficiency gain is small. Moreover, full-conditional specification multiple imputation is shown to be potentially much more robust than joint model multiple imputation using the restricted general location model to misspecification of that model when there is substantial missingness in the outcome variable.

  18. A musculoskeletal model of the elbow joint complex

    NASA Technical Reports Server (NTRS)

    Gonzalez, Roger V.; Barr, Ronald E.; Abraham, Lawrence D.

    1993-01-01

    This paper describes a musculoskeletal model that represents human elbow flexion-extension and forearm pronation-supination. Musculotendon parameters and the skeletal geometry were determined for the musculoskeletal model in the analysis of ballistic elbow joint complex movements. The key objective was to develop a computational model, guided by optimal control, to investigate the relationship among patterns of muscle excitation, individual muscle forces, and movement kinematics. The model was verified using experimental kinematic, torque, and electromyographic data from volunteer subjects performing both isometric and ballistic elbow joint complex movements. In general, the model predicted kinematic and muscle excitation patterns similar to what was experimentally measured.

  19. Joint search and sensor management for geosynchronous satellites

    NASA Astrophysics Data System (ADS)

    Zatezalo, A.; El-Fallah, A.; Mahler, R.; Mehra, R. K.; Pham, K.

    2008-04-01

Joint search and sensor management for space situational awareness presents daunting scientific and practical challenges, as it requires simultaneously searching for new space objects and updating the catalog of current ones. We demonstrate a new approach to joint search and sensor management by utilizing the Posterior Expected Number of Targets (PENT) as the objective function, an observation model for a space-based EO/IR sensor, and a Probability Hypothesis Density Particle Filter (PHD-PF) tracker. Simulation results using actual geosynchronous satellites are presented.

  20. Numerical simulation of artificial hip joint motion based on human age factor

    NASA Astrophysics Data System (ADS)

    Ramdhani, Safarudin; Saputra, Eko; Jamari, J.

    2018-05-01

An artificial hip joint is a prosthesis (synthetic body part) that usually consists of two or more components. Hip joint replacement is typically performed because of arthritis, ordinarily in older patients. Numerical simulation models are used to observe the range of motion of the artificial hip joint, with the range of motion of the joint based on human age. Finite-element analysis (FEA) is used to calculate the von Mises stress during motion and to observe the probability of prosthetic impingement. The FEA uses a three-dimensional nonlinear model and considers variations in the position of the acetabular liner cup. The results of the numerical simulation show that the FEA method can be used to analyze the performance of the artificial hip joint more accurately than the conventional method.

  1. Joint modelling rationale for chained equations

    PubMed Central

    2014-01-01

    Background Chained equations imputation is widely used in medical research. It uses a set of conditional models, so is more flexible than joint modelling imputation for the imputation of different types of variables (e.g. binary, ordinal or unordered categorical). However, chained equations imputation does not correspond to drawing from a joint distribution when the conditional models are incompatible. Concurrently with our work, other authors have shown the equivalence of the two imputation methods in finite samples. Methods Taking a different approach, we prove, in finite samples, sufficient conditions for chained equations and joint modelling to yield imputations from the same predictive distribution. Further, we apply this proof in four specific cases and conduct a simulation study which explores the consequences when the conditional models are compatible but the conditions otherwise are not satisfied. Results We provide an additional “non-informative margins” condition which, together with compatibility, is sufficient. We show that the non-informative margins condition is not satisfied, despite compatible conditional models, in a situation as simple as two continuous variables and one binary variable. Our simulation study demonstrates that as a consequence of this violation order effects can occur; that is, systematic differences depending upon the ordering of the variables in the chained equations algorithm. However, the order effects appear to be small, especially when associations between variables are weak. Conclusions Since chained equations is typically used in medical research for datasets with different types of variables, researchers must be aware that order effects are likely to be ubiquitous, but our results suggest they may be small enough to be negligible. PMID:24559129

  2. Camera-Model Identification Using Markovian Transition Probability Matrix

    NASA Astrophysics Data System (ADS)

    Xu, Guanshuo; Gao, Shang; Shi, Yun Qing; Hu, Ruimin; Su, Wei

Detecting the brands and models of digital cameras from given digital images has become a popular research topic in the field of digital forensics. As most images are JPEG compressed before being output from cameras, we propose to use an effective image statistical model to characterize the difference JPEG 2-D arrays of the Y and Cb components of JPEG images taken by various camera models. Specifically, the transition probability matrices derived from four different directional Markov processes applied to the image difference JPEG 2-D arrays are used to identify statistical differences caused by the image formation pipelines inside different camera models. All elements of the transition probability matrices, after a thresholding technique, are directly used as features for classification purposes. Multi-class support vector machines (SVMs) are used as the classification tool. The effectiveness of our proposed statistical model is demonstrated by large-scale experimental results.
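
A sketch of one of the four directional transition-probability features, assuming a simple horizontal Markov chain on the thresholded difference array (the threshold T = 4 and the preprocessing details are illustrative, as the abstract does not specify them):

```python
import numpy as np

def transition_features(arr, T=4):
    """Horizontal Markov transition-probability matrix of a thresholded
    difference 2-D array (one of the four directions in the paper).
    Returns a (2T+1) x (2T+1) row-stochastic matrix used as features."""
    diff = arr[:, 1:].astype(int) - arr[:, :-1].astype(int)   # difference 2-D array
    diff = np.clip(diff, -T, T)                               # threshold to [-T, T]
    cur = diff[:, :-1].ravel() + T                            # shift to 0..2T indices
    nxt = diff[:, 1:].ravel() + T
    counts = np.zeros((2 * T + 1, 2 * T + 1))
    np.add.at(counts, (cur, nxt), 1)                          # count transitions
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

rng = np.random.default_rng(0)
M = transition_features(rng.integers(0, 255, size=(16, 16)))  # 81 features
```

In practice one such matrix per direction and per component (Y, Cb) would be flattened and concatenated as the SVM feature vector.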

  3. Accumulative Probability Model for Automated Network Traffic Analyses

    DOT National Transportation Integrated Search

    1972-10-01

The report presents an illustration of the accumulative probability model, which is applicable to ground transportation systems where high speed and close headways are a performance requirement. The paper describes the model, illustrates it with a hyp...

  4. Probability distribution of haplotype frequencies under the two-locus Wright-Fisher model by diffusion approximation.

    PubMed

    Boitard, Simon; Loisel, Patrice

    2007-05-01

The probability distribution of haplotype frequencies in a population, and the way it is influenced by genetic forces such as recombination, selection, and random drift, is a question of fundamental interest in population genetics. For large populations, the distribution of haplotype frequencies for two linked loci under the classical Wright-Fisher model is almost impossible to compute for numerical reasons. However, the Wright-Fisher process can in such cases be approximated by a diffusion process, and the transition density can then be deduced from the Kolmogorov equations. As no exact solution has been found for these equations, we developed a numerical method based on finite differences to solve them. It applies to transient states and to models including selection or mutation. We show by several tests that this method is accurate for computing the conditional joint density of haplotype frequencies given that no haplotype has been lost. We also show that it is far less time-consuming than other methods such as Monte Carlo simulations.

  5. Preclinical Animal Models for Temporomandibular Joint Tissue Engineering.

    PubMed

    Almarza, Alejandro J; Brown, Bryan N; Arzi, Boaz; Ângelo, David Faustino; Chung, William; Badylak, Stephen F; Detamore, Michael

    2018-06-01

There is a paucity of in vivo studies that investigate the safety and efficacy of temporomandibular joint (TMJ) tissue regeneration approaches, in part due to the lack of established animal models. A review of disease models for the study of the TMJ is presented herein in an attempt to identify relevant preclinical animal models for TMJ tissue engineering, with emphasis on the disc and condyle. Although degenerative joint disease models have mainly used mice, rats, and rabbits, preclinical regeneration approaches must employ larger animal species. There remains controversy regarding the preferred choice of larger animal model among the farm pig, minipig, goat, sheep, and dog. The advantages of the pig and minipig include their well characterized anatomy, physiology, and tissue properties. The advantages of the sheep and goat are their easier surgical access, low cost per animal, and high tissue availability. The advantage of the dog is that the joint space is confined, so migration of interpositional devices should be less likely. However, each species has limitations as well. For example, the farm pig grows continuously until about 18 months of age and has difficult surgical access because the zygomatic arch covers the lateral aspect of the joint. The minipig is not widely available and is somewhat costly. The sheep and the goat are herbivores, and their TMJs function mainly in translation. The dog is a carnivore, and its TMJ is a hinge joint that can only rotate. Although no species provides the gold standard for all preclinical TMJ tissue engineering approaches, the goat and sheep have emerged as the leading options, with the minipig as the choice when cost is less of a limitation, and with the dog and farm pig serving as acceptable alternatives. Finally, naturally occurring TMJ disorders in domestic species may be harnessed on a preclinical trial basis as a clinically relevant platform for translation.

  6. Probability theory for 3-layer remote sensing radiative transfer model: univariate case.

    PubMed

    Ben-David, Avishai; Davidson, Charles E

    2012-04-23

    A probability model for a 3-layer radiative transfer model (foreground layer, cloud layer, background layer, and an external source at the end of line of sight) has been developed. The 3-layer model is fundamentally important as the primary physical model in passive infrared remote sensing. The probability model is described by the Johnson family of distributions that are used as a fit for theoretically computed moments of the radiative transfer model. From the Johnson family we use the SU distribution that can address a wide range of skewness and kurtosis values (in addition to addressing the first two moments, mean and variance). In the limit, SU can also describe lognormal and normal distributions. With the probability model one can evaluate the potential for detecting a target (vapor cloud layer), the probability of observing thermal contrast, and evaluate performance (receiver operating characteristics curves) in clutter-noise limited scenarios. This is (to our knowledge) the first probability model for the 3-layer remote sensing geometry that treats all parameters as random variables and includes higher-order statistics. © 2012 Optical Society of America
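
The Johnson SU family is available in SciPy, so the moment-matching idea can be illustrated by inspecting how its first four moments vary with the two shape parameters (the parameter values below are illustrative, not fitted to any radiative transfer computation):

```python
from scipy.stats import johnsonsu

# SU shape parameters: a controls skewness, b controls tail weight.
# In the limit of large b the SU distribution approaches the normal.
for a, b in [(0.0, 1.0), (-1.0, 1.0), (0.0, 3.0)]:
    m, v, s, k = johnsonsu.stats(a, b, moments='mvsk')
    print(f"a={a}, b={b}: mean={float(m):.3f} var={float(v):.3f} "
          f"skew={float(s):.3f} exkurt={float(k):.3f}")
```

Fitting a,b (plus location and scale) to the four theoretically computed moments of the radiance would reproduce the kind of distributional fit described in the abstract.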

  7. Maximum voluntary joint torque as a function of joint angle and angular velocity: model development and application to the lower limb.

    PubMed

    Anderson, Dennis E; Madigan, Michael L; Nussbaum, Maury A

    2007-01-01

    Measurements of human strength can be important during analyses of physical activities. Such measurements have often taken the form of the maximum voluntary torque at a single joint angle and angular velocity. However, the available strength varies substantially with joint position and velocity. When examining dynamic activities, strength measurements should account for these variations. A model is presented of maximum voluntary joint torque as a function of joint angle and angular velocity. The model is based on well-known physiological relationships between muscle force and length and between muscle force and velocity and was tested by fitting it to maximum voluntary joint torque data from six different exertions in the lower limb. Isometric, concentric and eccentric maximum voluntary contractions were collected during hip extension, hip flexion, knee extension, knee flexion, ankle plantar flexion and dorsiflexion. Model parameters are reported for each of these exertion directions by gender and age group. This model provides an efficient method by which strength variations with joint angle and angular velocity may be incorporated into comparisons between joint torques calculated by inverse dynamics and the maximum available joint torques.
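
A hedged sketch of a torque-angle-velocity surface of the kind described: a Gaussian torque-angle factor multiplied by a Hill-type torque-velocity factor with an eccentric plateau. All parameter values are illustrative placeholders, not the fitted coefficients reported by the authors:

```python
import numpy as np

def max_voluntary_torque(theta, omega, T0=150.0, theta0=1.2, w=0.9,
                         omega_max=10.0, ecc_gain=1.3):
    """Maximum voluntary torque (N*m) at joint angle theta (rad) and
    angular velocity omega (rad/s). Illustrative functional forms only."""
    f_l = np.exp(-((theta - theta0) / w) ** 2)        # torque-angle factor
    if omega >= 0:                                    # concentric: Hill-type decay
        f_v = max((omega_max - omega) / (omega_max + 3.0 * omega), 0.0)
    else:                                             # eccentric: plateau above isometric
        f_v = ecc_gain - (ecc_gain - 1.0) * np.exp(omega)
    return T0 * f_l * f_v

iso = max_voluntary_torque(1.2, 0.0)    # isometric at optimal angle -> T0
con = max_voluntary_torque(1.2, 3.0)    # concentric, lower than isometric
ecc = max_voluntary_torque(1.2, -3.0)   # eccentric, higher than isometric
```

Evaluated along an inverse-dynamics trajectory, such a surface gives the varying strength ceiling against which computed joint torques can be compared.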

  8. Models based on value and probability in health improve shared decision making.

    PubMed

    Ortendahl, Monica

    2008-10-01

Diagnostic reasoning and treatment decisions are a key competence of doctors. A model based on value and probability provides a conceptual framework for clinical judgments and decisions, and also facilitates the integration of clinical and biomedical knowledge into a diagnostic decision. Both value and probability are usually estimated quantities in clinical decision making. Therefore, model assumptions and parameter estimates should be continually assessed against data, and models should be revised accordingly. Introducing parameter estimates for both value and probability, as usually pertains in clinical work, yields the model known as subjective expected utility. Estimated values and probabilities are involved sequentially at every step of the decision-making process. Introducing decision-analytic modelling gives a more complete picture of the variables that influence the decisions carried out by the doctor and the patient. A model revised for the values and probabilities perceived by both the doctor and the patient could be used as a tool for engaging in a mutual and shared decision-making process in clinical work.
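
The subjective-expected-utility rule mentioned above reduces to choosing the option with the largest probability-weighted value; a minimal sketch with made-up elicited numbers:

```python
def best_by_expected_utility(options):
    """Subjective expected utility: for each option, sum probability x value
    over its outcomes; choose the option with the highest expectation."""
    return max(options, key=lambda o: sum(p * v for p, v in options[o]))

# (probability, value) pairs elicited from doctor and patient (illustrative)
options = {
    "treat": [(0.7, 0.9), (0.3, 0.2)],   # EU = 0.69
    "watch": [(0.5, 1.0), (0.5, 0.3)],   # EU = 0.65
}
best = best_by_expected_utility(options)   # -> "treat"
```

Re-eliciting the probabilities and values from both parties and re-running the comparison is the mechanical core of the shared decision-making loop the abstract describes.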

  9. A removal model for estimating detection probabilities from point-count surveys

    USGS Publications Warehouse

    Farnsworth, G.L.; Pollock, K.H.; Nichols, J.D.; Simons, T.R.; Hines, J.E.; Sauer, J.R.

    2000-01-01

We adapted a removal model to estimate detection probability during point count surveys. The model assumes that one factor influencing detection during point counts is the singing frequency of birds. This may be true for surveys recording forest songbirds, when most detections are by sound. The model requires counts to be divided into several time intervals. We used time intervals of 2, 5, and 10 min to develop a maximum-likelihood estimator for the detectability of birds during such surveys. We applied this technique to data from bird surveys conducted in Great Smoky Mountains National Park. We used model selection criteria to identify whether detection probabilities varied among species, throughout the morning, throughout the season, and among different observers. The overall detection probability for all birds was 75%. We found differences in detection probability among species. Species that sing frequently, such as Winter Wren and Acadian Flycatcher, had high detection probabilities (about 90%), whereas species that call infrequently, such as Pileated Woodpecker, had low detection probability (36%). We also found that detection probabilities varied with the time of day for some species (e.g. thrushes) and between observers for other species. This method of estimating detectability during point count surveys offers a promising new approach to using count data to address questions of bird abundance, density, and population trends.
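
A minimal sketch of the removal-model likelihood, assuming equal-length intervals and a constant per-interval detection probability p (the paper's intervals of 2, 5, and 10 min would require interval-specific detection terms):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def removal_mle(counts):
    """MLE of per-interval detection probability p from a removal model:
    a bird is first detected in interval k with probability p(1-p)^(k-1),
    conditioned on being detected at all during the count."""
    counts = np.asarray(counts, dtype=float)
    k = np.arange(1, len(counts) + 1)

    def nll(p):
        cell = p * (1 - p) ** (k - 1)
        cell /= cell.sum()                  # condition on detection in the count
        return -np.sum(counts * np.log(cell))

    res = minimize_scalar(nll, bounds=(1e-6, 1 - 1e-6), method='bounded')
    return res.x

# first-detection counts in three consecutive intervals (illustrative data)
p_hat = removal_mle([400, 200, 100])        # geometric decay consistent with p = 0.5
overall = 1 - (1 - p_hat) ** 3              # detectability over the whole count
```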

  10. The Probable Explanation for the Low Friction of Natural Joints.

    PubMed

    Pawlak, Zenon; Urbaniak, Wieslaw; Hagner-Derengowska, Magdalena; Hagner, Wojciech

    2015-04-01

The surface of an articular cartilage, coated with phospholipid (PL) bilayers, plays an important role in its lubrication and movement. Intact (normal) and depleted joint surfaces were modelled, and the influence of pH on surface interfacial energy, wettability, and friction was investigated. In the experiments, the deterioration of the PL bilayer was monitored via its wettability and the applied friction. The fluid surrounding an undamaged articular cartilage, the synovial fluid, has a pH value of approximately 7.4. Buffer solutions were formulated to represent synovial fluid at various pH values. It was found that the surface interfacial energy was stabilised at its lowest values when the pH varied between 6.5 and 9.5. These results suggest that as the PL bilayers deteriorated, the hydration repulsion mechanism became less effective and friction increased. The decreased number of bilayers changed the wettability and degraded the lubricating properties of the PLs.

  11. On the joint spectral density of bivariate random sequences. Thesis Technical Report No. 21

    NASA Technical Reports Server (NTRS)

    Aalfs, David D.

    1995-01-01

    For univariate random sequences, the power spectral density acts like a probability density function of the frequencies present in the sequence. This dissertation extends that concept to bivariate random sequences. For this purpose, a function called the joint spectral density is defined that represents a joint probability weighing of the frequency content of pairs of random sequences. Given a pair of random sequences, the joint spectral density is not uniquely determined in the absence of any constraints. Two approaches to constraining the sequences are suggested: (1) assume the sequences are the margins of some stationary random field, (2) assume the sequences conform to a particular model that is linked to the joint spectral density. For both approaches, the properties of the resulting sequences are investigated in some detail, and simulation is used to corroborate theoretical results. It is concluded that under either of these two constraints, the joint spectral density can be computed from the non-stationary cross-correlation.

  12. Surrogate modeling of joint flood risk across coastal watersheds

    NASA Astrophysics Data System (ADS)

    Bass, Benjamin; Bedient, Philip

    2018-03-01

    This study discusses the development and performance of a rapid prediction system capable of representing the joint rainfall-runoff and storm surge flood response of tropical cyclones (TCs) for probabilistic risk analysis. Due to the computational demand required for accurately representing storm surge with the high-fidelity ADvanced CIRCulation (ADCIRC) hydrodynamic model and its coupling with additional numerical models to represent rainfall-runoff, a surrogate or statistical model was trained to represent the relationship between hurricane wind- and pressure-field characteristics and their peak joint flood response typically determined from physics based numerical models. This builds upon past studies that have only evaluated surrogate models for predicting peak surge, and provides the first system capable of probabilistically representing joint flood levels from TCs. The utility of this joint flood prediction system is then demonstrated by improving upon probabilistic TC flood risk products, which currently account for storm surge but do not take into account TC associated rainfall-runoff. Results demonstrate the source apportionment of rainfall-runoff versus storm surge and highlight that slight increases in flood risk levels may occur due to the interaction between rainfall-runoff and storm surge as compared to the Federal Emergency Management Association's (FEMAs) current practices.
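
The surrogate idea (train a cheap interpolator on a limited set of expensive model runs, then evaluate it for thousands of probabilistic scenarios) can be sketched with a radial-basis-function surrogate; the storm parameterization and the toy response function below are stand-ins, not ADCIRC output:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# storm descriptors: [central pressure deficit (mb), forward speed (m/s),
# landfall offset (km)] -- an illustrative parameterization
rng = np.random.default_rng(1)
lo, hi = [20, 2, -50], [90, 10, 50]
X = rng.uniform(lo, hi, size=(40, 3))          # 40 "expensive" training runs

def toy_peak_flood(x):
    """Stand-in for the coupled surge + rainfall-runoff flood response."""
    return 0.05 * x[:, 0] + 0.2 * x[:, 1] + 2.0 * np.exp(-(x[:, 2] / 30) ** 2)

surrogate = RBFInterpolator(X, toy_peak_flood(X))   # train on the model runs
X_new = rng.uniform(lo, hi, size=(1000, 3))         # probabilistic TC sample
flood_levels = surrogate(X_new)                     # near-instant predictions
```

The same pattern applies with any regression family; RBF interpolation is chosen here only because it reproduces the training runs exactly.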

  13. Gap probability - Measurements and models of a pecan orchard

    NASA Technical Reports Server (NTRS)

    Strahler, Alan H.; Li, Xiaowen; Moody, Aaron; Liu, YI

    1992-01-01

Measurements and models are compared for gap probability in a pecan orchard. Measurements are based on panoramic photographs with a 50° by 135° view angle made under the canopy looking upwards at regular positions along transects between orchard trees. The gap probability model is driven by geometric parameters at two levels: crown and leaf. Crown-level parameters include the shape of the crown envelope and spacing of crowns; leaf-level parameters include leaf size and shape, leaf area index, and leaf angle, all as functions of canopy position.

  14. A Probability Model for Belady's Anomaly

    ERIC Educational Resources Information Center

    McMaster, Kirby; Sambasivam, Samuel E.; Anderson, Nicole

    2010-01-01

    In demand paging virtual memory systems, the page fault rate of a process varies with the number of memory frames allocated to the process. When an increase in the number of allocated frames leads to an increase in the number of page faults, Belady's anomaly is said to occur. In this paper, we present a probability model for Belady's anomaly. We…
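
Belady's anomaly is easy to reproduce with the classic FIFO example: a reference string for which four frames produce more page faults than three:

```python
from collections import deque

def fifo_faults(refs, frames):
    """Count page faults under a FIFO page-replacement policy."""
    memory, faults = deque(), 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.popleft()      # evict the oldest resident page
            memory.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))   # 9 faults
print(fifo_faults(refs, 4))   # 10 faults -- more frames, more faults
```

A probability model like the one in the paper asks how likely such reference strings are, rather than exhibiting a single instance.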

  15. Developing a probability-based model of aquifer vulnerability in an agricultural region

    NASA Astrophysics Data System (ADS)

    Chen, Shih-Kai; Jang, Cheng-Shin; Peng, Yi-Huei

    2013-04-01

Hydrogeological settings of aquifers strongly influence regional groundwater movement and pollution processes. Establishing a map of aquifer vulnerability is critical for planning a scheme of groundwater quality protection. This study developed a novel probability-based DRASTIC model of aquifer vulnerability in the Choushui River alluvial fan, Taiwan, using indicator kriging to determine various risk categories of contamination potential based on estimated vulnerability indexes. Categories and ratings of the six parameters in the probability-based DRASTIC model were probabilistically characterized according to two parameter classification methods: selecting the maximum estimation probability and calculating an expected value. Moreover, the probability-based estimation and assessment give insight into propagating the uncertainty of parameters due to limited observation data. To examine the developed probability-based DRASTIC model's capacity to predict pollution, the medium, high, and very high risk categories of contamination potential were compared with observed nitrate-N concentrations exceeding 0.5 mg/L, indicating anthropogenic groundwater pollution. The results reveal that the developed probability-based DRASTIC model is capable of predicting high nitrate-N groundwater pollution and characterizing parameter uncertainty via the probability estimation process.

  16. Coupled Monte Carlo Probability Density Function/SPRAY/CFD Code Developed for Modeling Gas-Turbine Combustor Flows

    NASA Technical Reports Server (NTRS)

    1995-01-01

The success of any solution methodology for studying gas-turbine combustor flows depends a great deal on how well it can model the various complex, rate-controlling processes associated with turbulent transport, mixing, chemical kinetics, evaporation and spreading rates of the spray, convective and radiative heat transfer, and other phenomena. These phenomena often interact strongly with each other at disparate time and length scales. In particular, turbulence plays an important role in determining the rates of mass and heat transfer, chemical reactions, and evaporation in many practical combustion devices. Turbulence manifests its influence in a diffusion flame in several forms depending on how it interacts with the various flame scales. These forms range from the so-called wrinkled, or stretched, flamelets regime to the distributed combustion regime. Conventional turbulence closure models have difficulty treating highly nonlinear reaction rates. A solution procedure based on the joint composition probability density function (PDF) approach holds the promise of modeling various important combustion phenomena relevant to practical combustion devices, such as extinction, blowoff limits, and emissions predictions, because it can handle the nonlinear chemical reaction rates without any approximation. In this approach, the mean and turbulence gas-phase velocity fields are determined from a standard turbulence model; the joint composition field of species and enthalpy is determined from the solution of a modeled PDF transport equation; and a Lagrangian-based dilute spray model is used for the liquid-phase representation, with appropriate consideration of the exchanges of mass, momentum, and energy between the two phases. The PDF transport equation is solved by a Monte Carlo method, and existing state-of-the-art numerical representations are used to solve the mean gas-phase velocity and turbulence fields together with the liquid-phase equations. The joint composition PDF…

  17. A removal model for estimating detection probabilities from point-count surveys

    USGS Publications Warehouse

    Farnsworth, G.L.; Pollock, K.H.; Nichols, J.D.; Simons, T.R.; Hines, J.E.; Sauer, J.R.

    2002-01-01

Use of point-count surveys is a popular method for collecting data on the abundance and distribution of birds. However, analyses of such data often ignore potential differences in detection probability. We adapted a removal model to directly estimate detection probability during point-count surveys. The model assumes that singing frequency is a major factor influencing the probability of detection when birds are surveyed using point counts. This may be appropriate for surveys in which most detections are by sound. The model requires counts to be divided into several time intervals. Point counts are often conducted for 10 min, with the number of birds recorded divided into those first observed in the first 3 min, the subsequent 2 min, and the last 5 min. We developed a maximum-likelihood estimator for the detectability of birds recorded during counts divided into those intervals. This technique can easily be adapted to point counts divided into intervals of any length. We applied this method to unlimited-radius counts conducted in Great Smoky Mountains National Park. We used model selection criteria to identify whether detection probabilities varied among species, throughout the morning, throughout the season, and among different observers. We found differences in detection probability among species. Species that sing frequently, such as Winter Wren (Troglodytes troglodytes) and Acadian Flycatcher (Empidonax virescens), had high detection probabilities (∼90%), while species that call infrequently, such as Pileated Woodpecker (Dryocopus pileatus), had low detection probability (36%). We also found that detection probabilities varied with the time of day for some species (e.g. thrushes) and between observers for other species. We used the same approach to estimate detection probability and density for a subset of the observations with limited-radius point counts.
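The interval-based estimator described above can be sketched in a few lines of Python. This is an independent reconstruction, not the authors' code: it assumes a single constant per-minute detection rate, so the probability of first detection in an interval (a, b] is proportional to exp(-lam·a) − exp(-lam·b), normalised by the overall detectability over the whole count; the counts are hypothetical.

```python
import math

def removal_mle(counts, bounds):
    """Conditional MLE of a constant detection rate from interval counts.

    counts[i] = birds first detected in interval i; bounds[i] = (start, end)
    in minutes.  First-detection cell probabilities are proportional to
    exp(-lam * start) - exp(-lam * end), normalised by overall detectability.
    """
    total_time = bounds[-1][1]

    def nll(lam):
        p_detect = 1.0 - math.exp(-lam * total_time)
        return -sum(
            n * math.log((math.exp(-lam * a) - math.exp(-lam * b)) / p_detect)
            for n, (a, b) in zip(counts, bounds)
        )

    # a crude grid search is enough for a one-parameter likelihood sketch
    lam = min((0.001 * k for k in range(1, 3000)), key=nll)
    return lam, 1.0 - math.exp(-lam * total_time)

# hypothetical 10-min count split into 3-, 2- and 5-min intervals
lam, p_detect = removal_mle([60, 15, 25], [(0, 3), (3, 5), (5, 10)])
```

For these illustrative counts the fitted overall detectability comes out a little above 0.9, in the range the abstract reports for frequently singing species.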

  18. Equivalent dynamic model of DEMES rotary joint

    NASA Astrophysics Data System (ADS)

    Zhao, Jianwen; Wang, Shu; Xing, Zhiguang; McCoul, David; Niu, Junyang; Huang, Bo; Liu, Liwu; Leng, Jinsong

    2016-07-01

The dielectric elastomer minimum energy structure (DEMES) can realize large angular deformations from a small voltage-induced strain of the dielectric elastomer (DE), so it is a suitable candidate for a rotary joint in a soft robot. Dynamic analysis is necessary for some applications, but the dynamic response of DEMESs is difficult to model because of the complicated morphology and viscoelasticity of the DE film. In this paper, a method combining theoretical analysis and experimental measurement is presented to model the dynamic response of a DEMES rotary joint under an alternating voltage. The model can be derived from measurements of the equivalent driving force and damping of the DEMES. Experiments were carried out to validate the equivalent dynamic model. The maximum angle error between model and experiment is no greater than ten degrees, which is acceptable for predicting the angular velocity of the DEMES; the model can therefore be applied in feedforward-feedback compound control.

  19. Time‐dependent renewal‐model probabilities when date of last earthquake is unknown

    USGS Publications Warehouse

    Field, Edward H.; Jordan, Thomas H.

    2015-01-01

We derive time-dependent, renewal-model earthquake probabilities for the case in which the date of the last event is completely unknown, and compare these with the time-independent Poisson probabilities that are customarily used as an approximation in this situation. For typical parameter values, the renewal-model probabilities exceed Poisson results by more than 10% when the forecast duration exceeds ~20% of the mean recurrence interval. We also derive probabilities for the case in which the last event is further constrained to have occurred before historical record keeping began (the historic open interval), which can only serve to increase earthquake probabilities for typically applied renewal models. We conclude that accounting for the historic open interval can improve long-term earthquake rupture forecasts for California and elsewhere.
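The unknown-last-event case admits a short numerical sketch. Assuming a lognormal renewal model (chosen here purely for convenience; the paper treats renewal models generally), a completely unknown date of last event means the elapsed time follows the equilibrium density S(t)/μ, which collapses the forecast probability to P = (1/μ)∫₀^ΔT S(u) du; the parameter values below are illustrative, not taken from the paper.

```python
import math

def lognormal_sf(t, mean, cov):
    # survivor function S(t) of a lognormal recurrence model parameterised by
    # its mean recurrence interval and coefficient of variation (aperiodicity)
    s2 = math.log(1.0 + cov ** 2)
    mu = math.log(mean) - 0.5 * s2
    return 0.5 * math.erfc((math.log(t) - mu) / math.sqrt(2.0 * s2))

def renewal_prob_unknown_last(mean, cov, horizon, n=20000):
    # With the date of the last event completely unknown, the elapsed time
    # follows the equilibrium density S(t)/mean, so the probability of an
    # event within `horizon` reduces to (1/mean) * integral_0^horizon S(u) du
    du = horizon / n
    return sum(lognormal_sf((k + 0.5) * du, mean, cov) * du for k in range(n)) / mean

mean, horizon = 100.0, 30.0                  # horizon is 30% of the mean interval
p_renewal = renewal_prob_unknown_last(mean, cov=0.5, horizon=horizon)
p_poisson = 1.0 - math.exp(-horizon / mean)  # time-independent approximation
```

With a 100-year mean recurrence, aperiodicity 0.5 and a 30-year forecast, the renewal probability (~0.30) exceeds the Poisson value (~0.26) by roughly 15%, consistent with the abstract's claim that the excess tops 10% once the duration exceeds ~20% of the mean recurrence interval.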

  20. Normal tissue complication probability modelling of tissue fibrosis following breast radiotherapy

    NASA Astrophysics Data System (ADS)

    Alexander, M. A. R.; Brooks, W. A.; Blake, S. W.

    2007-04-01

Cosmetic late effects of radiotherapy such as tissue fibrosis are increasingly regarded as important. It is generally considered that the complication probability of a radiotherapy plan depends on dose uniformity and can be reduced by using better compensation to remove dose hotspots. This work aimed to model the effects of improved dose homogeneity on complication probability. The Lyman and relative seriality NTCP models were fitted to clinical fibrosis data for the breast collated from the literature. Breast outlines were obtained from a commercially available Rando phantom using the Osiris system. Multislice breast treatment plans were produced using a variety of compensation methods. Dose-volume histograms (DVHs) obtained for each treatment plan were reduced to simple numerical parameters using the equivalent uniform dose and effective volume DVH reduction methods. These parameters were input into the models to obtain complication probability predictions. The fitted model parameters were consistent with a parallel tissue architecture. Conventional clinical plans generally showed decreasing complication probabilities with increasing compensation sophistication. Extremely homogeneous plans representing idealized IMRT treatments showed increased complication probabilities compared to conventional planning methods, as a result of increased dose to areas that receive sub-prescription doses under conventional techniques.

  1. Modeling of Human Joint Structures.

    DTIC Science & Technology

    1982-09-01

Acromioclavicular Joint ... Glenohumeral Joint ... Hip Joint ... Iliofemoral Ligament ... clavicle articulates with the manubrium of the sternum, and the acromioclavicular joint, where the clavicle articulates with the acromion process of the ... the interclavicular ligament. Acromioclavicular Joint: This articulation between the distal end of the clavicle and the acromion of the scapula is ...

  2. Joint estimation over multiple individuals improves behavioural state inference from animal movement data.

    PubMed

    Jonsen, Ian

    2016-02-08

    State-space models provide a powerful way to scale up inference of movement behaviours from individuals to populations when the inference is made across multiple individuals. Here, I show how a joint estimation approach that assumes individuals share identical movement parameters can lead to improved inference of behavioural states associated with different movement processes. I use simulated movement paths with known behavioural states to compare estimation error between nonhierarchical and joint estimation formulations of an otherwise identical state-space model. Behavioural state estimation error was strongly affected by the degree of similarity between movement patterns characterising the behavioural states, with less error when movements were strongly dissimilar between states. The joint estimation model improved behavioural state estimation relative to the nonhierarchical model for simulated data with heavy-tailed Argos location errors. When applied to Argos telemetry datasets from 10 Weddell seals, the nonhierarchical model estimated highly uncertain behavioural state switching probabilities for most individuals whereas the joint estimation model yielded substantially less uncertainty. The joint estimation model better resolved the behavioural state sequences across all seals. Hierarchical or joint estimation models should be the preferred choice for estimating behavioural states from animal movement data, especially when location data are error-prone.

  3. The human first carpometacarpal joint: osteoarthritic degeneration and 3-dimensional modeling.

    PubMed

    Kovler, Maksim; Lundon, Katie; McKee, Nancy; Agur, Anne

    2004-01-01

The purpose of this study was to gain insight into potential mechanical factors contributing to osteoarthritis of the human first carpometacarpal (CMC) joint. This was accomplished by creating three-dimensional (3-D) computer models of the articular surfaces of CMC joints of older humans and by determining their locus of cartilage degeneration. The research questions of this study were: (1) What is the articular wear pattern of cartilage degeneration in CMC osteoarthritis? (2) Are there significant topographic differences in joint area and contour between the joints of males and females? and (3) Are there measurable bony joint recesses consistently found within the joint? The articular surfaces of 25 embalmed cadaveric joints (from 13 cadavers) were graded for degree of osteoarthritis, and the location of degeneration was mapped using a dissection microscope. The surfaces of 14 mildly degenerated joints were digitized and reconstructed as 3-D computer models using the Microscribe 3D-X Digitizer and the Rhinoceros 2.0 NURBS Modeling Software. This technology provided accurate and reproducible information on joint area and topography. The dorsoradial trapezial region was found to be significantly more degenerated than other quadrants in both males and females. Mean trapezial articular surface area was 197 mm(2) in males and 160 mm(2) in females; the respective mean areas for the metacarpal were 239 mm(2) in males and 184 mm(2) in females. Joints of females were found to be significantly more concave in radioulnar profile than those of males. Three bony joint recesses were consistently found: two in the radial and ulnar aspects of the trapezium and the third in the palmar surface of the metacarpal.

  4. Knock probability estimation through an in-cylinder temperature model with exogenous noise

    NASA Astrophysics Data System (ADS)

    Bares, P.; Selmanaj, D.; Guardiola, C.; Onder, C.

    2018-01-01

This paper presents a new knock model which combines a deterministic knock model based on the in-cylinder temperature with an exogenous noise disturbing this temperature. The autoignition of the end-gas is modelled by an Arrhenius-like function, and the knock probability is estimated by propagating a virtual error probability distribution. Results show that the random nature of knock can be explained by uncertainties in the in-cylinder temperature estimation. The model has only one parameter for calibration and thus can be easily adapted online. In order to reduce the measurement uncertainties associated with the air mass flow sensor, the trapped mass is derived from the in-cylinder pressure resonance, which improves the knock probability estimation and reduces the number of sensors needed for the model. A four-stroke SI engine was used for model validation. By varying the intake temperature, the engine speed, the injected fuel mass, and the spark advance, specific tests were conducted which furnished data with various knock intensities and probabilities. The new model is able to predict the knock probability within a sufficient range at various operating conditions. The trapped mass obtained by the acoustical model was compared under steady conditions with a fuel balance and a lambda sensor, and differences below 1% were found.
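The core idea, that a deterministic temperature criterion plus exogenous noise yields a knock probability, reduces to a one-line sketch. This deliberately skips the paper's Arrhenius integration and resonance-based trapped-mass estimation; the critical temperature and noise level below are hypothetical numbers, not values from the paper.

```python
import statistics

def knock_probability(t_est, t_crit, sigma):
    # Deterministic criterion: knock occurs when the end-gas temperature
    # exceeds the critical autoignition temperature t_crit.  Zero-mean
    # Gaussian noise (std sigma) on the temperature estimate turns this
    # binary criterion into a knock probability.
    return 1.0 - statistics.NormalDist(t_est, sigma).cdf(t_crit)

# hypothetical values: estimated end-gas temperature 950 K, threshold 980 K
p_knock = knock_probability(950.0, 980.0, sigma=20.0)
```

Here the threshold sits 1.5 noise standard deviations above the estimate, so the knock probability is about 0.067; shifting the estimate or the noise level moves the probability accordingly, which is the behaviour a single calibration parameter can track online.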

  5. Development and modeling of a more efficient frangible separation joint

    NASA Astrophysics Data System (ADS)

    Renfro, Steven L.; Harris, Gary N.; Olson, Steven L.

    1993-06-01

    A low-cost, robust, and contamination-free separation system for spacecraft or launch vehicle stage and fairing separation was developed, which includes a frangible joint to sever an aluminum extrusion and to control contamination. The installed joint uses a sealing manifold to provide redundant initiation transfer between Flexible Confined Detonating Cord assemblies and HNS-IA loaded cups on the ends of the HNS-IIA Mild Detonating Fuse. A shock matching model of the system was developed, and the margin of joint severance, contamination control of the system, and correlation of the model are demonstrated.

  6. Understanding the joint behavior of temperature and precipitation for climate change impact studies

    NASA Astrophysics Data System (ADS)

    Rana, Arun; Moradkhani, Hamid; Qin, Yueyue

    2017-07-01

The multiple downscaled scenario products allow us to assess the uncertainty of variations in precipitation and temperature in the current and future periods. Probabilistic assessment of both climatic variables helps to better understand their interdependence and thus to assess the future with confidence. In the present study, we use an ensemble of statistically downscaled precipitation and temperature from various models. The dataset is a multi-model ensemble of 10 global climate models (GCMs) downscaled from the CMIP5 daily dataset using the Bias Correction and Spatial Downscaling (BCSD) technique, generated at Portland State University. The multi-model ensemble of both precipitation and temperature is evaluated for dry and wet periods for 10 sub-basins across the Columbia River Basin (CRB). Thereafter, a copula is applied to establish the joint distribution of the two variables on the multi-model ensemble data. The joint distribution is then used to estimate the change in trends of these variables in the future, along with the probabilities of the given change. The joint distribution trends vary, but are consistently positive, for dry and wet periods in the sub-basins of the CRB. The dry season generally shows a higher positive change in precipitation than in temperature (compared to the historical period) across sub-basins, whereas the wet season shows the opposite. The probabilities of future changes, as estimated from the joint distribution, vary in degree and form during the dry season, whereas the wet season is rather constant across all the sub-basins.
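The abstract does not say which copula family was fitted, so the sketch below assumes a Gaussian copula purely for illustration; the correlation and percentile thresholds are hypothetical. It estimates a typical joint-exceedance quantity such an analysis yields: the probability that both variables exceed their 90th percentiles.

```python
import math
import random
import statistics

def joint_exceedance_gaussian_copula(rho, u, v, n=200_000, seed=1):
    # P(U > u, V > v) for uniform marginals coupled by a Gaussian copula with
    # correlation rho, estimated by Monte Carlo.  In practice the marginal
    # quantiles of precipitation and temperature would be mapped to (u, v).
    nd = statistics.NormalDist()
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho ** 2) * rng.gauss(0.0, 1.0)
        hits += nd.cdf(z1) > u and nd.cdf(z2) > v
    return hits / n

# joint probability that both variables exceed their 90th percentiles,
# assuming a (hypothetical) dependence of rho = 0.6
p_joint = joint_exceedance_gaussian_copula(0.6, 0.9, 0.9)
```

Under independence the joint exceedance would be 0.01; the positive dependence makes it several times larger, which is exactly the kind of compound-event information a marginal-only analysis misses.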

  7. Modelling of the Human Knee Joint Supported by Active Orthosis

    NASA Astrophysics Data System (ADS)

    Musalimov, V.; Monahov, Y.; Tamre, M.; Rõbak, D.; Sivitski, A.; Aryassov, G.; Penkov, I.

    2018-02-01

The article discusses the motion of a healthy knee joint in the sagittal plane and the motion of an injured knee joint supported by an active orthosis. A kinematic scheme of a mechanism for simulating knee joint motion is developed, and the motions of healthy and injured knee joints are modelled in Matlab. The angles between the links, which simulate the femur and tibia, are controlled by a Simulink Model Predictive Control (MPC) block. The simulation results have been compared with several samples of real human knee joint motion obtained from motion capture systems. On the basis of these analyses, and of the analysis of the forces created in the human lower limbs during motion, an active smart orthosis is developed. The orthosis design was optimized to achieve an energy-saving system with a good anatomical fit, the necessary reliability, ease of use and low cost. With the orthosis it is possible to unload the knee joint, and also to partially or fully compensate the muscle forces required for bending the lower limb.

  8. Person-Fit Statistics for Joint Models for Accuracy and Speed

    ERIC Educational Resources Information Center

    Fox, Jean-Paul; Marianti, Sukaesi

    2017-01-01

    Response accuracy and response time data can be analyzed with a joint model to measure ability and speed of working, while accounting for relationships between item and person characteristics. In this study, person-fit statistics are proposed for joint models to detect aberrant response accuracy and/or response time patterns. The person-fit tests…

  9. Comparison of dynamic treatment regimes via inverse probability weighting.

    PubMed

    Hernán, Miguel A; Lanoy, Emilie; Costagliola, Dominique; Robins, James M

    2006-03-01

    Appropriate analysis of observational data is our best chance to obtain answers to many questions that involve dynamic treatment regimes. This paper describes a simple method to compare dynamic treatment regimes by artificially censoring subjects and then using inverse probability weighting (IPW) to adjust for any selection bias introduced by the artificial censoring. The basic strategy can be summarized in four steps: 1) define two regimes of interest, 2) artificially censor individuals when they stop following one of the regimes of interest, 3) estimate inverse probability weights to adjust for the potential selection bias introduced by censoring in the previous step, 4) compare the survival of the uncensored individuals under each regime of interest by fitting an inverse probability weighted Cox proportional hazards model with the dichotomous regime indicator and the baseline confounders as covariates. In the absence of model misspecification, the method is valid provided data are available on all time-varying and baseline joint predictors of survival and regime discontinuation. We present an application of the method to compare the AIDS-free survival under two dynamic treatment regimes in a large prospective study of HIV-infected patients. The paper concludes by discussing the relative advantages and disadvantages of censoring/IPW versus g-estimation of nested structural models to compare dynamic regimes.
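Step 3 of the four-step recipe can be made concrete with a minimal sketch of unstabilized censoring weights. In practice the interval-specific probabilities of remaining uncensored (i.e., of continuing to follow the regime) come from a model fitted on the baseline and time-varying confounders; the numbers below are hypothetical placeholders for such fitted probabilities.

```python
from functools import reduce

def ipw_weight(p_uncensored):
    # Step 3 of the censoring/IPW recipe: an individual still uncensored after
    # k intervals receives weight 1 / prod_k P(uncensored in interval k | past),
    # up-weighting them to stand in for similar individuals who were
    # artificially censored for deviating from the regime.
    return reduce(lambda w, p: w / p, p_uncensored, 1.0)

# hypothetical fitted probabilities of continuing to follow the regime
w = ipw_weight([0.95, 0.90, 0.85])
```

The resulting weight (~1.38 here) would then multiply the individual's contribution in the weighted Cox model of step 4.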

  10. Bivariate normal, conditional and rectangular probabilities: A computer program with applications

    NASA Technical Reports Server (NTRS)

    Swaroop, R.; Brownlow, J. D.; Ashworth, G. R.; Winter, W. R.

    1980-01-01

Some results for the bivariate normal distribution analysis are presented. Computer programs for conditional normal probabilities, marginal probabilities, as well as joint probabilities for rectangular regions are given; routines for computing fractile points and distribution functions are also presented. Some examples from a closed circuit television experiment are included.
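A modern stand-in for such a rectangular-probability routine is short: for a standard bivariate normal, P(a < X < b, c < Y < d) follows from integrating the conditional distribution of Y given X = x over the x-interval. The sketch below is an independent reconstruction, not the original program.

```python
import math
import statistics

def bvn_rect_prob(a, b, c, d, rho, n=4000):
    # P(a < X < b, c < Y < d) for a standard bivariate normal with correlation
    # rho, via midpoint integration over x of phi(x) * P(c < Y < d | X = x),
    # where Y | X = x is normal with mean rho*x and std sqrt(1 - rho^2).
    nd = statistics.NormalDist()
    s = math.sqrt(1.0 - rho ** 2)
    dx = (b - a) / n
    total = 0.0
    for k in range(n):
        x = a + (k + 0.5) * dx
        pdf_x = math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
        total += pdf_x * (nd.cdf((d - rho * x) / s) - nd.cdf((c - rho * x) / s)) * dx
    return total

# with rho = 0 the joint probability factorises into the two marginals
p_rect = bvn_rect_prob(-1.0, 1.0, -1.0, 1.0, rho=0.0)
```

The rho = 0 case is a convenient correctness check: the result matches (2Φ(1) − 1)² ≈ 0.4661, the product of the two marginal probabilities.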

  11. The Probability Heuristics Model of Syllogistic Reasoning.

    ERIC Educational Resources Information Center

    Chater, Nick; Oaksford, Mike

    1999-01-01

    Proposes a probability heuristic model for syllogistic reasoning and confirms the rationality of this heuristic by an analysis of the probabilistic validity of syllogistic reasoning that treats logical inference as a limiting case of probabilistic inference. Meta-analysis and two experiments involving 40 adult participants and using generalized…

  12. Mixture Rasch Models with Joint Maximum Likelihood Estimation

    ERIC Educational Resources Information Center

    Willse, John T.

    2011-01-01

    This research provides a demonstration of the utility of mixture Rasch models. Specifically, a model capable of estimating a mixture partial credit model using joint maximum likelihood is presented. Like the partial credit model, the mixture partial credit model has the beneficial feature of being appropriate for analysis of assessment data…

  13. A simplified model for the assessment of the impact probability of fragments.

    PubMed

    Gubinelli, Gianfilippo; Zanelli, Severino; Cozzani, Valerio

    2004-12-31

    A model was developed for the assessment of fragment impact probability on a target vessel, following the collapse and fragmentation of a primary vessel due to internal pressure. The model provides the probability of impact of a fragment with defined shape, mass and initial velocity on a target of a known shape and at a given position with respect to the source point. The model is based on the ballistic analysis of the fragment trajectory and on the determination of impact probabilities by the analysis of initial direction of fragment flight. The model was validated using available literature data.
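The model itself is not given in the abstract, so the sketch below is only a schematic stand-in for the ballistic-plus-direction analysis it describes: a drag-free Monte Carlo over uniformly random initial flight directions, counting landings inside a circular target. The initial speed, target distance and radius are all hypothetical.

```python
import math
import random

def impact_probability(v0, target_dist, target_radius, n=100_000, seed=7):
    # Drag-free ballistic sketch: each fragment leaves the source at speed v0
    # with uniformly random elevation and azimuth, lands at range
    # R = v0**2 * sin(2 * elev) / g, and a hit is a landing point inside a
    # circular target of radius target_radius centred target_dist away.
    g = 9.81
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        elev = rng.uniform(0.0, math.pi / 2)
        azim = rng.uniform(-math.pi, math.pi)
        r = v0 ** 2 * math.sin(2.0 * elev) / g
        x, y = r * math.cos(azim), r * math.sin(azim)
        hits += (x - target_dist) ** 2 + y ** 2 <= target_radius ** 2
    return hits / n

# hypothetical geometry: fragments at 40 m/s, target 50 m away, 5 m radius
p_hit = impact_probability(40.0, 50.0, 5.0)
```

A full model would add fragment shape and mass, drag, a non-uniform direction distribution, and the target vessel's 3-D profile; the Monte Carlo structure stays the same.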

  14. Joint kinematic calculation based on clinical direct kinematic versus inverse kinematic gait models.

    PubMed

    Kainz, H; Modenese, L; Lloyd, D G; Maine, S; Walsh, H P J; Carty, C P

    2016-06-14

Most clinical gait laboratories use the conventional gait analysis model. This model uses a computational method called Direct Kinematics (DK) to calculate joint kinematics. In contrast, musculoskeletal modelling approaches use Inverse Kinematics (IK) to obtain joint angles. IK allows additional analyses (e.g. muscle-tendon length estimates), which may provide valuable information for clinical decision-making in people with movement disorders. The two aims of the current study were: (1) to compare joint kinematics obtained by a clinical DK model (Vicon Plug-in-Gait) with those produced by a widely used IK model (available with the OpenSim distribution), and (2) to evaluate the difference in joint kinematics that can be attributed solely to the different computational methods (DK versus IK), anatomical models and marker sets by using MRI-based models. Eight children with cerebral palsy were recruited and presented for gait and MRI data collection sessions. Differences in joint kinematics of up to 13° were found between the Plug-in-Gait and the gait2392 OpenSim model. The majority of these differences (94.4%) were attributed to differences in the anatomical models, which included different anatomical segment frames and joint constraints. Different computational methods (DK versus IK) were responsible for only 2.7% of the differences. We recommend using the same anatomical model for kinematic and musculoskeletal analysis to ensure consistency between the obtained joint angles and musculoskeletal estimates. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. An algorithm for continuum modeling of rocks with multiple embedded nonlinearly-compliant joints [Continuum modeling of elasto-plastic media with multiple embedded nonlinearly-compliant joints

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hurley, R. C.; Vorobiev, O. Y.; Ezzedine, S. M.

Here, we present a numerical method for modeling the mechanical effects of nonlinearly-compliant joints in elasto-plastic media. The method uses a series of strain-rate and stress update algorithms to determine joint closure, slip, and solid stress within computational cells containing multiple “embedded” joints. This work facilitates efficient modeling of nonlinear wave propagation in large spatial domains containing a large number of joints that affect bulk mechanical properties. We implement the method within the massively parallel Lagrangian code GEODYN-L and provide verification and examples. We highlight the ability of our algorithms to capture joint interactions and multiple weakness planes within individual computational cells, as well as its computational efficiency. We also discuss the motivation for developing the proposed technique: to simulate large-scale wave propagation during the Source Physics Experiments (SPE), a series of underground explosions conducted at the Nevada National Security Site (NNSS).

  16. An algorithm for continuum modeling of rocks with multiple embedded nonlinearly-compliant joints [Continuum modeling of elasto-plastic media with multiple embedded nonlinearly-compliant joints

    DOE PAGES

    Hurley, R. C.; Vorobiev, O. Y.; Ezzedine, S. M.

    2017-04-06

Here, we present a numerical method for modeling the mechanical effects of nonlinearly-compliant joints in elasto-plastic media. The method uses a series of strain-rate and stress update algorithms to determine joint closure, slip, and solid stress within computational cells containing multiple “embedded” joints. This work facilitates efficient modeling of nonlinear wave propagation in large spatial domains containing a large number of joints that affect bulk mechanical properties. We implement the method within the massively parallel Lagrangian code GEODYN-L and provide verification and examples. We highlight the ability of our algorithms to capture joint interactions and multiple weakness planes within individual computational cells, as well as its computational efficiency. We also discuss the motivation for developing the proposed technique: to simulate large-scale wave propagation during the Source Physics Experiments (SPE), a series of underground explosions conducted at the Nevada National Security Site (NNSS).

  17. The impact of joint responses of devices in an airport security system.

    PubMed

    Nie, Xiaofeng; Batta, Rajan; Drury, Colin G; Lin, Li

    2009-02-01

In this article, we consider a model for an airport security system in which the declaration of a threat is based on the joint responses of inspection devices. This is in contrast to the typical system in which each check station independently declares a passenger as having a threat or not having a threat. In our framework the declaration of threat/no-threat is based upon the passenger scores at the check stations he/she goes through. To do this we use concepts from classification theory in the field of multivariate statistical analysis and focus on the main objective of minimizing the expected cost of misclassification. The corresponding correct classification and misclassification probabilities can be obtained by using a simulation-based method. After computing the overall false alarm and false clear probabilities, we compare our joint response system with two other independently operated systems. A model that groups passengers in a manner that minimizes the false alarm probability while maintaining the false clear probability within specifications set by a security authority is considered. We also analyze the staffing needs at each check station for such an inspection scheme. An illustrative example is provided along with sensitivity analysis on key model parameters. A discussion is provided on some implementation issues, on the various assumptions made in the analysis, and on potential drawbacks of the approach.

  18. A joint latent class model for classifying severely hemorrhaging trauma patients.

    PubMed

    Rahbar, Mohammad H; Ning, Jing; Choi, Sangbum; Piao, Jin; Hong, Chuan; Huang, Hanwen; Del Junco, Deborah J; Fox, Erin E; Rahbar, Elaheh; Holcomb, John B

    2015-10-24

    In trauma research, "massive transfusion" (MT), historically defined as receiving ≥10 units of red blood cells (RBCs) within 24 h of admission, has been routinely used as a "gold standard" for quantifying bleeding severity. Due to early in-hospital mortality, however, MT is subject to survivor bias and thus a poorly defined criterion to classify bleeding trauma patients. Using the data from a retrospective trauma transfusion study, we applied a latent-class (LC) mixture model to identify severely hemorrhaging (SH) patients. Based on the joint distribution of cumulative units of RBCs and binary survival outcome at 24 h of admission, we applied an expectation-maximization (EM) algorithm to obtain model parameters. Estimated posterior probabilities were used for patients' classification and compared with the MT rule. To evaluate predictive performance of the LC-based classification, we examined the role of six clinical variables as predictors using two separate logistic regression models. Out of 471 trauma patients, 211 (45 %) were MT, while our latent SH classifier identified only 127 (27 %) of patients as SH. The agreement between the two classification methods was 73 %. A non-ignorable portion of patients (17 out of 68, 25 %) who died within 24 h were not classified as MT but the SH group included 62 patients (91 %) who died during the same period. Our comparison of the predictive models based on MT and SH revealed significant differences between the coefficients of potential predictors of patients who may be in need of activation of the massive transfusion protocol. The traditional MT classification does not adequately reflect transfusion practices and outcomes during the trauma reception and initial resuscitation phase. Although we have demonstrated that joint latent class modeling could be used to correct for potential bias caused by misclassification of severely bleeding patients, improvement in this approach could be made in the presence of time to event

  19. Principal Component Analysis in Construction of 3D Human Knee Joint Models Using a Statistical Shape Model Method

    PubMed Central

    Tsai, Tsung-Yuan; Li, Jing-Sheng; Wang, Shaobai; Li, Pingyue; Kwon, Young-Min; Li, Guoan

    2013-01-01

    The statistical shape model (SSM) method that uses 2D images of the knee joint to predict the 3D joint surface model has been reported in literature. In this study, we constructed a SSM database using 152 human CT knee joint models, including the femur, tibia and patella and analyzed the characteristics of each principal component of the SSM. The surface models of two in vivo knees were predicted using the SSM and their 2D bi-plane fluoroscopic images. The predicted models were compared to their CT joint models. The differences between the predicted 3D knee joint surfaces and the CT image-based surfaces were 0.30 ± 0.81 mm, 0.34 ± 0.79 mm and 0.36 ± 0.59 mm for the femur, tibia and patella, respectively (average ± standard deviation). The computational time for each bone of the knee joint was within 30 seconds using a personal computer. The analysis of this study indicated that the SSM method could be a useful tool to construct 3D surface models of the knee with sub-millimeter accuracy in real time. Thus it may have a broad application in computer assisted knee surgeries that require 3D surface models of the knee. PMID:24156375

  20. Probability model for analyzing fire management alternatives: theory and structure

    Treesearch

    Frederick W. Bratten

    1982-01-01

    A theoretical probability model has been developed for analyzing program alternatives in fire management. It includes submodels or modules for predicting probabilities of fire behavior, fire occurrence, fire suppression, effects of fire on land resources, and financial effects of fire. Generalized "fire management situations" are used to represent actual fire...

  1. Numerical built-in method for the nonlinear JRC/JCS model in rock joint.

    PubMed

    Liu, Qunyi; Xing, Wanli; Li, Ying

    2014-01-01

    Joint surfaces are widely distributed in rock, which gives rock mass strength its nonlinear characteristics and limits the ability of linear models to reflect them. The JRC/JCS model is a nonlinear failure criterion that is generally believed to describe the characteristics of joints better than other models. In order to develop a numerical program for the JRC/JCS model, this paper established the relationship between the parameters of the JRC/JCS and Mohr-Coulomb models. Thereafter, the numerical implementation method and process for the JRC/JCS model were discussed, and the reliability of the numerical method was verified by shear tests of jointed rock mass. Finally, the effect of the JRC/JCS model parameters on the shear strength of the joint was analyzed.

  2. Fixation probability in a two-locus intersexual selection model.

    PubMed

    Durand, Guillermo; Lessard, Sabin

    2016-06-01

    We study a two-locus model of intersexual selection in a finite haploid population reproducing according to a discrete-time Moran model with a trait locus expressed in males and a preference locus expressed in females. We show that the probability of ultimate fixation of a single mutant allele for a male ornament introduced at random at the trait locus given any initial frequency state at the preference locus is increased by weak intersexual selection and recombination, weak or strong. Moreover, this probability exceeds the initial frequency of the mutant allele even in the case of a costly male ornament if intersexual selection is not too weak. On the other hand, the probability of ultimate fixation of a single mutant allele for a female preference towards a male ornament introduced at random at the preference locus is increased by weak intersexual selection and weak recombination if the female preference is not costly, and is strong enough in the case of a costly male ornament. The analysis relies on an extension of the ancestral recombination-selection graph for samples of haplotypes to take into account events of intersexual selection, while the symbolic calculation of the fixation probabilities is made possible in a reasonable time by an optimizing algorithm. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. Modeling the rejection probability in plant imports.

    PubMed

    Surkov, I V; van der Werf, W; van Kooten, O; Lansink, A G J M Oude

    2008-06-01

    Phytosanitary inspection of imported plants and flowers is a major means for preventing pest invasions through international trade, but in a majority of countries availability of resources prevents inspection of all imports. Prediction of the likelihood of pest infestation in imported shipments could help maximize the efficiency of inspection by targeting inspection on shipments with the highest likelihood of infestation. This paper applies a multinomial logistic (MNL) regression model to data on import inspections of ornamental plant commodities in the Netherlands from 1998 to 2001 to investigate whether it is possible to predict the probability that a shipment will be (i) accepted for import, (ii) rejected for import because of detected pests, or (iii) rejected due to other reasons. Four models were estimated: (i) an all-species model, including all plant imports (136,251 shipments) in the data set, (ii) a four-species model, including records on the four ornamental commodities that accounted for 28.9% of inspected and 49.5% of rejected shipments, and two models for single commodities with large import volumes and percentages of rejections, (iii) Dianthus (16.9% of inspected and 46.3% of rejected shipments), and (iv) Chrysanthemum (6.9 and 8.6%, respectively). All models were highly significant (P < 0.001). The models for Dianthus and Chrysanthemum and for the set of four ornamental commodities showed a better fit to data than the model for all ornamental commodities. Variables that characterized the imported shipment's region of origin, the shipment's size, the company that imported the shipment, and season and year of import, were significant in most of the estimated models. The combined results of this study suggest that the MNL model can be a useful tool for modeling the probability of rejecting imported commodities even with a small set of explanatory variables. The MNL model can be helpful in better targeting of resources for import inspection. The
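    As a rough sketch of how a multinomial logit model assigns probabilities to the three inspection outcomes (accepted, rejected for pests, rejected for other reasons): the features and coefficients below are invented for illustration and are not the paper's estimates.

```python
import math

# Multinomial logit (MNL) sketch: the first class is the baseline with
# score 0; each other class gets a linear score, and probabilities come
# from a softmax over the scores. All numbers are illustrative.
def mnl_probs(x, betas):
    """x: feature vector; betas: one coefficient vector per non-baseline class."""
    scores = [0.0] + [sum(b * xi for b, xi in zip(beta, x)) for beta in betas]
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

x = [1.0, 0.3]                      # e.g. intercept term plus a scaled shipment size
betas = [[-1.2, 0.8], [-2.0, 0.5]]  # hypothetical pest-rejection / other-rejection coefficients
p = mnl_probs(x, betas)             # [P(accept), P(reject: pests), P(reject: other)]
print([round(v, 3) for v in p])
```

    The three probabilities always sum to one, which is what makes the model usable for targeting inspection toward the highest-risk category.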

  4. Quantum probability assignment limited by relativistic causality.

    PubMed

    Han, Yeong Deok; Choi, Taeseung

    2016-03-14

    Quantum theory exhibits nonlocal correlations, which bothered Einstein but were found to satisfy relativistic causality. In the standard quantum framework, the correlation for a shared quantum state manifests itself through joint probability distributions obtained by applying state reduction and the probability assignment rule called the Born rule. Quantum correlations, which show nonlocality when the shared state is entangled, can be changed if a different probability assignment rule is applied; as a result, the amount of nonlocality in the quantum correlation will change. The issue is whether such a change in the rule of quantum probability assignment breaks relativistic causality. We have shown that the Born rule for quantum measurement is derived by requiring the relativistic causality condition. This shows how relativistic causality limits the upper bound of quantum nonlocality through quantum probability assignment.

  5. Joint Analysis of Binomial and Continuous Traits with a Recursive Model: A Case Study Using Mortality and Litter Size of Pigs

    PubMed Central

    Varona, Luis; Sorensen, Daniel

    2014-01-01

    This work presents a model for the joint analysis of a binomial and a Gaussian trait using a recursive parametrization that leads to a computationally efficient implementation. The model is illustrated in an analysis of mortality and litter size in two breeds of Danish pigs, Landrace and Yorkshire. Available evidence suggests that mortality of piglets increased partly as a result of successful selection for total number of piglets born. In recent years there has been a need to decrease the incidence of mortality in pig-breeding programs. We report estimates of genetic variation at the level of the logit of the probability of mortality and quantify how it is affected by the size of the litter. Several models for mortality are considered and the best fits are obtained by postulating linear and cubic relationships between the logit of the probability of mortality and litter size, for Landrace and Yorkshire, respectively. An interpretation of how the presence of genetic variation affects the probability of mortality in the population is provided and we discuss and quantify the prospects of selecting for reduced mortality, without affecting litter size. PMID:24414548
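    The reported logit-scale relationship between mortality and litter size can be sketched as follows; the polynomial coefficients are invented for illustration and are not the estimates from the Danish pig data.

```python
import math

# Mortality on the logit scale as a polynomial in litter size
# (linear for Landrace, cubic for Yorkshire, per the abstract).
def mortality_prob(litter_size, coeffs):
    """coeffs: polynomial coefficients on the logit scale, intercept first."""
    logit = sum(c * litter_size ** i for i, c in enumerate(coeffs))
    return 1.0 / (1.0 + math.exp(-logit))

linear_coeffs = [-3.0, 0.12]   # hypothetical: logit = -3 + 0.12 * litter size
probs = {size: round(mortality_prob(size, linear_coeffs), 3) for size in (8, 12, 16)}
print(probs)   # mortality probability rises with litter size
```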

  6. Partially linear mixed-effects joint models for skewed and missing longitudinal competing risks outcomes.

    PubMed

    Lu, Tao; Lu, Minggen; Wang, Min; Zhang, Jun; Dong, Guang-Hui; Xu, Yong

    2017-12-18

    Longitudinal competing risks data frequently arise in clinical studies. Skewness and missingness are commonly observed for these data in practice. However, most joint models do not account for these data features. In this article, we propose partially linear mixed-effects joint models to analyze skew longitudinal competing risks data with missingness. In particular, to account for skewness, we replace the commonly assumed symmetric distributions by asymmetric distribution for model errors. To deal with missingness, we employ an informative missing data model. The joint models that couple the partially linear mixed-effects model for the longitudinal process, the cause-specific proportional hazard model for competing risks process and missing data process are developed. To estimate the parameters in the joint models, we propose a fully Bayesian approach based on the joint likelihood. To illustrate the proposed model and method, we implement them to an AIDS clinical study. Some interesting findings are reported. We also conduct simulation studies to validate the proposed method.

  7. Principal component analysis in construction of 3D human knee joint models using a statistical shape model method.

    PubMed

    Tsai, Tsung-Yuan; Li, Jing-Sheng; Wang, Shaobai; Li, Pingyue; Kwon, Young-Min; Li, Guoan

    2015-01-01

    The statistical shape model (SSM) method that uses 2D images of the knee joint to predict the three-dimensional (3D) joint surface model has been reported in the literature. In this study, we constructed a SSM database using 152 human computed tomography (CT) knee joint models, including the femur, tibia and patella and analysed the characteristics of each principal component of the SSM. The surface models of two in vivo knees were predicted using the SSM and their 2D bi-plane fluoroscopic images. The predicted models were compared to their CT joint models. The differences between the predicted 3D knee joint surfaces and the CT image-based surfaces were 0.30 ± 0.81 mm, 0.34 ± 0.79 mm and 0.36 ± 0.59 mm for the femur, tibia and patella, respectively (average ± standard deviation). The computational time for each bone of the knee joint was within 30 s using a personal computer. The analysis of this study indicated that the SSM method could be a useful tool to construct 3D surface models of the knee with sub-millimeter accuracy in real time. Thus, it may have a broad application in computer-assisted knee surgeries that require 3D surface models of the knee.
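    The core SSM construction (a mean shape plus a weighted sum of principal components) can be sketched on toy data; the array sizes and weights below are illustrative stand-ins, not the paper's 152-knee CT database.

```python
import numpy as np

# Statistical shape model sketch: rows of X are flattened surface-point
# coordinates, one row per training shape (toy data, not real knees).
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 30))            # 20 training shapes x 30 coordinates

mean_shape = X.mean(axis=0)
# Principal components of the centered training set via SVD.
U, s, Vt = np.linalg.svd(X - mean_shape, full_matrices=False)

# A new shape instance = mean shape + weighted sum of the leading
# principal components (weights scaled by the singular values).
k = 5
weights = rng.normal(size=k) * s[:k] / np.sqrt(len(X) - 1)
new_shape = mean_shape + weights @ Vt[:k]
print(new_shape.shape)   # (30,)
```

    In the paper's setting, the weights are not drawn at random but fitted so that the projected 3D shape matches the 2D bi-plane fluoroscopic images.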

  8. Joint modelling of annual maximum drought severity and corresponding duration

    NASA Astrophysics Data System (ADS)

    Tosunoglu, Fatih; Kisi, Ozgur

    2016-12-01

    In recent years, the joint distribution properties of drought characteristics (e.g. severity, duration and intensity) have been widely evaluated using copulas. However, history of copulas in modelling drought characteristics obtained from streamflow data is still short, especially in semi-arid regions, such as Turkey. In this study, unlike previous studies, drought events are characterized by annual maximum severity (AMS) and corresponding duration (CD) which are extracted from daily streamflow of the seven gauge stations located in Çoruh Basin, Turkey. On evaluation of the various univariate distributions, the Exponential, Weibull and Logistic distributions are identified as marginal distributions for the AMS and CD series. Archimedean copulas, namely Ali-Mikhail-Haq, Clayton, Frank and Gumbel-Hougaard, are then employed to model joint distribution of the AMS and CD series. With respect to the Anderson Darling and Cramér-von Mises statistical tests and the tail dependence assessment, Gumbel-Hougaard copula is identified as the most suitable model for joint modelling of the AMS and CD series at each station. Furthermore, the developed Gumbel-Hougaard copulas are used to derive the conditional and joint return periods of the AMS and CD series which can be useful for designing and management of reservoirs in the basin.
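    A minimal sketch of the Gumbel-Hougaard copula and a joint return period, assuming annual-maximum series (one event per year) and an illustrative dependence parameter theta; the "both variables exceed their quantiles" return-period formula is the standard copula-based one, not a result quoted from this paper.

```python
import math

def gumbel_hougaard(u, v, theta):
    """Gumbel-Hougaard copula C(u, v); theta >= 1 (theta = 1 is independence)."""
    return math.exp(-(((-math.log(u)) ** theta +
                       (-math.log(v)) ** theta) ** (1.0 / theta)))

def joint_return_period_and(u, v, theta):
    """Return period (in years) of severity AND duration both exceeding
    their marginal quantiles u and v, for annual-maximum data."""
    return 1.0 / (1.0 - u - v + gumbel_hougaard(u, v, theta))

u = v = 0.9          # 10-year marginal quantiles
theta = 2.0          # illustrative dependence strength
print(round(gumbel_hougaard(u, v, theta), 4))
print(round(joint_return_period_and(u, v, theta), 1))
```

    Stronger dependence (larger theta) raises C(u, v) and thus shortens the joint return period relative to the independent case.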

  9. Shoulder model validation and joint contact forces during wheelchair activities.

    PubMed

    Morrow, Melissa M B; Kaufman, Kenton R; An, Kai-Nan

    2010-09-17

    Chronic shoulder impingement is a common problem for manual wheelchair users. The loading associated with performing manual wheelchair activities of daily living is substantial and often at a high frequency. Musculoskeletal modeling and optimization techniques can be used to estimate the joint contact forces occurring at the shoulder to assess the soft tissue loading during an activity and to possibly identify activities and strategies that place manual wheelchair users at risk for shoulder injuries. The purpose of this study was to validate an upper extremity musculoskeletal model and apply the model to wheelchair activities for analysis of the estimated joint contact forces. Upper extremity kinematics and handrim wheelchair kinetics were measured over three conditions: level propulsion, ramp propulsion, and a weight relief lift. The experimental data were used as input to a subject-specific musculoskeletal model utilizing optimization to predict joint contact forces of the shoulder during all conditions. The model was validated using a mean absolute error calculation. Model results confirmed that ramp propulsion and weight relief lifts place the shoulder under significantly higher joint contact loading than level propulsion. In addition, they exhibit large superior contact forces that could contribute to impingement. This study highlights the potential impingement risk associated with both the ramp and weight relief lift activities. Level propulsion was shown to have a low relative risk of causing injury, but with consideration of the frequency with which propulsion is performed, this observation is not conclusive.

  10. Ordinal probability effect measures for group comparisons in multinomial cumulative link models.

    PubMed

    Agresti, Alan; Kateri, Maria

    2017-03-01

    We consider simple ordinal model-based probability effect measures for comparing distributions of two groups, adjusted for explanatory variables. An "ordinal superiority" measure summarizes the probability that an observation from one distribution falls above an independent observation from the other distribution, adjusted for explanatory variables in a model. The measure applies directly to normal linear models and to a normal latent variable model for ordinal response variables. It equals Φ(β/2) for the corresponding ordinal model that applies a probit link function to cumulative multinomial probabilities, for standard normal cdf Φ and effect β that is the coefficient of the group indicator variable. For the more general latent variable model for ordinal responses that corresponds to a linear model with other possible error distributions and corresponding link functions for cumulative multinomial probabilities, the ordinal superiority measure equals exp(β)/[1+exp(β)] with the log-log link and equals approximately exp(β/2)/[1+exp(β/2)] with the logit link, where β is the group effect. Another ordinal superiority measure generalizes the difference of proportions from binary to ordinal responses. We also present related measures directly for ordinal models for the observed response that need not assume corresponding latent response models. We present confidence intervals for the measures and illustrate with an example. © 2016, The International Biometric Society.
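    The closed-form ordinal superiority values quoted above can be checked numerically; beta = 0.8 is an arbitrary illustrative group effect, not an estimate from the paper.

```python
from math import erf, exp, sqrt

def Phi(x):
    """Standard normal cdf."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

beta = 0.8   # illustrative group effect

probit_measure = Phi(beta / 2)                       # probit link
loglog_measure = exp(beta) / (1 + exp(beta))         # log-log link
logit_measure = exp(beta / 2) / (1 + exp(beta / 2))  # logit link (approximation)
print(round(probit_measure, 3), round(loglog_measure, 3), round(logit_measure, 3))
```

    All three measures equal 0.5 at beta = 0 (no group difference) and approach 1 as the group effect grows.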

  11. Probability Weighting Functions Derived from Hyperbolic Time Discounting: Psychophysical Models and Their Individual Level Testing.

    PubMed

    Takemura, Kazuhisa; Murakami, Hajime

    2016-01-01

    A probability weighting function (w(p)) is considered to be a nonlinear function of probability (p) in behavioral decision theory. This study proposes a psychophysical model of probability weighting functions derived from a hyperbolic time discounting model and a geometric distribution. The aim of the study is to show probability weighting functions from the point of view of waiting time for a decision maker. Since the expected value of a geometrically distributed random variable X is 1/p, we formulated the probability weighting function of the expected value model for hyperbolic time discounting as w(p) = (1 - k log p)^(-1). Moreover, the probability weighting function is derived from Loewenstein and Prelec's (1992) generalized hyperbolic time discounting model. The latter model is proved to be equivalent to the hyperbolic-logarithmic weighting function considered by Prelec (1998) and Luce (2001). In this study, we derive a model from the generalized hyperbolic time discounting model assuming Fechner's (1860) psychophysical law of time and a geometric distribution of trials. In addition, we develop median models of hyperbolic time discounting and generalized hyperbolic time discounting. To illustrate the fit of each model, a psychological experiment was conducted to assess the probability weighting and value functions at the level of the individual participant. The participants were 50 university students. The results of individual analysis indicated that the expected value model of generalized hyperbolic discounting fitted better than previous probability weighting decision-making models. The theoretical implications of this finding are discussed.
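    The expected-value weighting function above, w(p) = (1 - k log p)^(-1), can be sketched directly; k = 0.5 is an arbitrary illustrative discounting parameter.

```python
import math

def w(p, k):
    """Probability weighting from hyperbolic time discounting:
    w(p) = (1 - k * ln p)^(-1), with k > 0."""
    return 1.0 / (1.0 - k * math.log(p))

k = 0.5
print(w(1.0, k))             # certainty is not distorted: w(1) = 1
print(round(w(0.1, k), 3))   # w(0.1) ~ 0.465 > 0.1: small probabilities are overweighted
```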

  12. An Open-Source Toolbox for Surrogate Modeling of Joint Contact Mechanics

    PubMed Central

    Eskinazi, Ilan

    2016-01-01

    Goal: Incorporation of elastic joint contact models into simulations of human movement could facilitate studying the interactions between muscles, ligaments, and bones. Unfortunately, elastic joint contact models are often too expensive computationally to be used within iterative simulation frameworks. This limitation can be overcome by using fast and accurate surrogate contact models that fit or interpolate input-output data sampled from existing elastic contact models. However, construction of surrogate contact models remains an arduous task. The aim of this paper is to introduce an open-source program called Surrogate Contact Modeling Toolbox (SCMT) that facilitates surrogate contact model creation, evaluation, and use. Methods: SCMT interacts with the third-party software FEBio to perform elastic contact analyses of finite element models and uses Matlab to train neural networks that fit the input-output contact data. SCMT features sample point generation for multiple domains, automated sampling, sample point filtering, and surrogate model training and testing. Results: An overview of the software is presented along with two example applications. The first example demonstrates creation of surrogate contact models of artificial tibiofemoral and patellofemoral joints and evaluates their computational speed and accuracy, while the second demonstrates the use of surrogate contact models in a forward dynamic simulation of an open-chain leg extension-flexion motion. Conclusion: SCMT facilitates the creation of computationally fast and accurate surrogate contact models. Additionally, it serves as a bridge between FEBio and OpenSim musculoskeletal modeling software. Significance: Researchers may now create and deploy surrogate models of elastic joint contact with minimal effort. PMID:26186761

  13. Joint Typhoon Warning Center (JTWC92) Model.

    DTIC Science & Technology

    1992-05-01

    Final Contractor Report, May 1992. References cited include: National Hurricane Center, Coral Gables, FL, 44 pp.; and Neumann, C.J. and C.J. McAdie, 1991: A Revised National Hurricane Center NHC83 Model (NHC90), NOAA. From the section "Statistical-Dynamical Models: Historical Perspective": the earliest known attempt at statistical-dynamical modeling is credited to Veigas (1966) for

  14. Independent Events in Elementary Probability Theory

    ERIC Educational Resources Information Center

    Csenki, Attila

    2011-01-01

    In Probability and Statistics taught to mathematicians as a first introduction or to a non-mathematical audience, joint independence of events is introduced by requiring that the multiplication rule is satisfied. The following statement is usually tacitly assumed to hold (and, at best, intuitively motivated): If the n events E_1,…

  15. Birth/birth-death processes and their computable transition probabilities with biological applications.

    PubMed

    Ho, Lam Si Tung; Xu, Jason; Crawford, Forrest W; Minin, Vladimir N; Suchard, Marc A

    2018-03-01

    Birth-death processes track the size of a univariate population, but many biological systems involve interaction between populations, necessitating models for two or more populations simultaneously. A lack of efficient methods for evaluating finite-time transition probabilities of bivariate processes, however, has restricted statistical inference in these models. Researchers rely on computationally expensive methods such as matrix exponentiation or Monte Carlo approximation, restricting likelihood-based inference to small systems, or indirect methods such as approximate Bayesian computation. In this paper, we introduce the birth/birth-death process, a tractable bivariate extension of the birth-death process, where rates are allowed to be nonlinear. We develop an efficient algorithm to calculate its transition probabilities using a continued fraction representation of their Laplace transforms. Next, we identify several exemplary models arising in molecular epidemiology, macro-parasite evolution, and infectious disease modeling that fall within this class, and demonstrate advantages of our proposed method over existing approaches to inference in these models. Notably, the ubiquitous stochastic susceptible-infectious-removed (SIR) model falls within this class, and we emphasize that computable transition probabilities newly enable direct inference of parameters in the SIR model. We also propose a very fast method for approximating the transition probabilities under the SIR model via a novel branching process simplification, and compare it to the continued fraction representation method with application to the 17th century plague in Eyam. Although the two methods produce similar maximum a posteriori estimates, the branching process approximation fails to capture the correlation structure in the joint posterior distribution.

  16. Anand constitutive model of lead-free solder joints in 3D IC device

    NASA Astrophysics Data System (ADS)

    Zhang, Liang; Liu, Zhi-quan; Ji, Yu-tong

    2016-08-01

    The Anand constitutive relations of SnAgCu and SnAgCu-nano Al solders were studied under uniaxial tension, and the constitutive model was used in finite element simulation to analyze the stress-strain response of lead-free solder joints in 3D IC devices. The results showed that the nine parameters of the Anand model can be determined from the separated constitutive relations and experimental results. Based on the Anand model, the finite element method was used to calculate the stress-strain response of lead-free solder joints; it was found that in the 3D IC device the maximum stress-strain is concentrated in the corner solder joints, and that the stress-strain of SnAgCu-nano Al solder joints is lower than that of SnAgCu solder joints, indicating that the addition of nano Al particles can enhance the reliability of lead-free solder joints in 3D IC devices.

  17. How might Model-based Probabilities Extracted from Imperfect Models Guide Rational Decisions: The Case for non-probabilistic odds

    NASA Astrophysics Data System (ADS)

    Smith, Leonard A.

    2010-05-01

    This contribution concerns "deep" or "second-order" uncertainty, such as the uncertainty in our probability forecasts themselves. It asks the question: "Is it rational to take (or offer) bets using model-based probabilities as if they were objective probabilities?" If not, what alternative approaches for determining odds, perhaps non-probabilistic odds, might prove useful in practice, given that we know our models are imperfect? We consider the case where the aim is to provide sustainable odds: not to produce a profit but merely to rationally expect to break even in the long run. In other words, to run a quantified risk of ruin that is relatively small. Thus the cooperative insurance schemes of coastal villages provide a more appropriate parallel than a casino. A "better" probability forecast would lead to lower premiums charged and less volatile fluctuations in the cash reserves of the village. Note that the Bayesian paradigm does not constrain one to interpret model distributions as subjective probabilities, unless one believes the model to be empirically adequate for the task at hand. In geophysics, this is rarely the case. When a probability forecast is interpreted as the objective probability of an event, the odds on that event can be easily computed as one divided by the probability of the event, and one need not favour taking either side of the wager. (Here we are using "odds-for" not "odds-to", the difference being whether or not the stake is returned; odds of one to one are equivalent to odds of two for one.) The critical question is how to compute sustainable odds based on information from imperfect models. We suggest that this breaks the symmetry between the odds-on an event and the odds-against it. While a probability distribution can always be translated into odds, interpreting the odds on a set of events might result in "implied-probabilities" that sum to more than one. And/or the set of odds may be incomplete, not covering all events. We ask
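    The odds arithmetic from the abstract (odds-for an event computed as one divided by its probability, and "implied probabilities" that can sum to more than one) can be illustrated with a toy two-outcome market; the quoted odds are invented.

```python
# "Odds-for" include the stake: odds-for of 2 correspond to probability 1/2
# (odds of one to one are equivalent to odds of two for one).
def odds_for(p):
    return 1.0 / p

def implied_probability(odds):
    return 1.0 / odds

print(odds_for(0.5))   # 2.0

# Quoting odds on every outcome of a two-outcome market may imply
# "probabilities" that sum to more than one (a margin for the odds-setter).
quoted_odds = [1.8, 1.8]
implied = [implied_probability(o) for o in quoted_odds]
print(round(sum(implied), 3))   # 1.111 > 1: not a proper probability distribution
```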

  18. Improving Conceptual Models Using AEM Data and Probability Distributions

    NASA Astrophysics Data System (ADS)

    Davis, A. C.; Munday, T. J.; Christensen, N. B.

    2012-12-01

    With emphasis being placed on uncertainty in groundwater modelling and prediction, coupled with questions concerning the value of geophysical methods in hydrogeology, it is important to ask meaningful questions of hydrogeophysical data and inversion results. For example, to characterise aquifers using electromagnetic (EM) data, we ask questions such as "Given that the electrical conductivity of aquifer 'A' is less than x, where is that aquifer elsewhere in the survey area?" The answer may be given by examining inversion models, selecting locations and layers that satisfy the condition 'conductivity <= x', and labelling them as aquifer 'A'. One difficulty with this approach is that the inversion model result is often considered to be the only model for the data. In reality it is just one image of the subsurface that, given the method and the regularisation imposed in the inversion, agrees with the measured data within a given error bound. We have no idea whether the final model realised by the inversion satisfies the global minimum error, or whether it is simply in a local minimum. There is a distribution of inversion models that satisfy the error tolerance condition: the final model is not the only one, nor is it necessarily the correct one. AEM inversions are often linearised in the calculation of the parameter sensitivity: we rely on the second derivatives in the Taylor expansion, thus the minimum model has all layer parameters distributed about their mean parameter value with well-defined variance. We investigate the validity of the minimum model, and its uncertainty, by examining the full posterior covariance matrix. We ask questions of the minimum model, and answer them probabilistically. The simplest question we can pose is "What is the probability that all layer resistivity values are <= a cut-off value?" We can calculate this through use of the erf or erfc functions. The covariance values of the inversion become marginalised in the integration: only the
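    The layer-probability question in the abstract can be sketched with the erf-based normal cdf; the per-layer means and standard deviations are invented, and (as a simplifying assumption of this sketch) correlations between layers are ignored so the joint probability is a product of marginals.

```python
from math import erf, sqrt

def prob_below(cutoff, mean, std):
    """P(X <= cutoff) for a Gaussian marginal, via the erf function."""
    return 0.5 * (1.0 + erf((cutoff - mean) / (std * sqrt(2.0))))

# Hypothetical per-layer posterior means and standard deviations
# (e.g. log10 resistivity), and a resistivity cut-off.
layers = [(1.8, 0.2), (2.3, 0.4), (1.5, 0.3)]
cutoff = 2.0

p_each = [prob_below(cutoff, m, s) for m, s in layers]
# Treating layers as independent (a simplification), the probability
# that ALL layer values are <= the cut-off is the product of marginals.
p_all = 1.0
for p in p_each:
    p_all *= p
print([round(p, 3) for p in p_each], round(p_all, 3))
```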

  19. Thrust Force Analysis of Tripod Constant Velocity Joint Using Multibody Model

    NASA Astrophysics Data System (ADS)

    Sugiura, Hideki; Matsunaga, Tsugiharu; Mizutani, Yoshiteru; Ando, Yosei; Kashiwagi, Isashi

    A tripod constant velocity joint is used in the driveshaft of front wheel drive vehicles. Thrust force generated by this joint causes lateral vibration in these vehicles. To analyze the thrust force, a detailed model is constructed based on a multibody dynamics approach. This model includes all principal parts of the joint defined as rigid bodies and all force elements of contact and friction acting among these parts. This model utilizes a new contact modeling method of needle roller bearings for more precise and faster computation. By comparing computational and experimental results, the appropriateness of this model is verified and the principal factors inducing the second and third rotating order components of the thrust force are clarified. This paper also describes the influence of skewed needle rollers on the thrust force and evaluates the contribution of friction forces at each contact region to the thrust force.

  20. Neuromuscular interfacing: establishing an EMG-driven model for the human elbow joint.

    PubMed

    Pau, James W L; Xie, Shane S Q; Pullan, Andrew J

    2012-09-01

    Assistive devices aim to mitigate the effects of physical disability by aiding users to move their limbs or by rehabilitating through therapy. These devices are commonly embodied by robotic or exoskeletal systems that are still in development and use the electromyographic (EMG) signal to determine user intent. Not much focus has been placed on developing a neuromuscular interface (NI) that solely relies on the EMG signal, and does not require modifications to the end user's state to enhance the signal (such as adding weights). This paper presents the development of a flexible, physiological model for the elbow joint that is leading toward the implementation of an NI, which predicts joint motion from EMG signals for both able-bodied and less-abled users. The approach uses musculotendon models to determine muscle contraction forces, a proposed musculoskeletal model to determine total joint torque, and a kinematic model to determine joint rotational kinematics. After a sensitivity analysis and tuning using genetic algorithms, subject trials yielded an average root-mean-square error of 6.53° and 22.4° for a single cycle and random cycles of movement of the elbow joint, respectively. This helps us to validate the elbow model and paves the way toward the development of an NI.

  1. Medicare Program; Comprehensive Care for Joint Replacement Payment Model for Acute Care Hospitals Furnishing Lower Extremity Joint Replacement Services. Final rule.

    PubMed

    2015-11-24

    This final rule implements a new Medicare Part A and B payment model under section 1115A of the Social Security Act, called the Comprehensive Care for Joint Replacement (CJR) model, in which acute care hospitals in certain selected geographic areas will receive retrospective bundled payments for episodes of care for lower extremity joint replacement (LEJR) or reattachment of a lower extremity. All related care within 90 days of hospital discharge from the joint replacement procedure will be included in the episode of care. We believe this model will further our goals in improving the efficiency and quality of care for Medicare beneficiaries with these common medical procedures.

  2. Performance of two predictive uncertainty estimation approaches for conceptual Rainfall-Runoff Model: Bayesian Joint Inference and Hydrologic Uncertainty Post-processing

    NASA Astrophysics Data System (ADS)

    Hernández-López, Mario R.; Romero-Cuéllar, Jonathan; Camilo Múnera-Estrada, Juan; Coccia, Gabriele; Francés, Félix

    2017-04-01

    It is important to emphasize the role of uncertainty, particularly when model forecasts are used to support decision-making and water management. This research compares two approaches for evaluating predictive uncertainty in hydrological modeling. The first approach is Bayesian Joint Inference of the hydrological and error models. The second approach is carried out through the Model Conditional Processor using the Truncated Normal Distribution in the transformed space. The comparison focuses on the reliability of the predictive distribution. The case study is applied to two basins included in the Model Parameter Estimation Experiment (MOPEX). These two basins, which differ in hydrological complexity, are the French Broad River (North Carolina) and the Guadalupe River (Texas). The results indicate that, in general, both approaches provide similar predictive performance. However, differences between them can arise in basins with complex hydrology (e.g. ephemeral basins), because the results obtained with Bayesian Joint Inference depend strongly on the suitability of the hypothesized error model. Similarly, the results of the Model Conditional Processor are mainly influenced by the selected model of the tails, or even by the selected full probability distribution model of the data in the real space, and by the definition of the Truncated Normal Distribution in the transformed space. In summary, the different hypotheses that the modeler chooses in each of the two approaches are the main cause of the different results. This research also explores a combination of both methodologies that could be useful for achieving less biased hydrological parameter estimation. In this combined approach, the predictive distribution is first obtained through the Model Conditional Processor and is then used to derive the corresponding additive error model, which is employed for the hydrological parameter

  3. Probability judgments under ambiguity and conflict

    PubMed Central

    Smithson, Michael

    2015-01-01

    Whether conflict and ambiguity are distinct kinds of uncertainty remains an open question, as does their joint impact on judgments of overall uncertainty. This paper reviews recent advances in our understanding of human judgment and decision making when both ambiguity and conflict are present, and presents two types of testable models of judgments under conflict and ambiguity. The first type concerns estimate-pooling to arrive at “best” probability estimates. The second type is models of subjective assessments of conflict and ambiguity. These models are developed for dealing with both described and experienced information. A framework for testing these models in the described-information setting is presented, including a reanalysis of a multi-nation data-set to test best-estimate models, and a study of participants' assessments of conflict, ambiguity, and overall uncertainty reported by Smithson (2013). A framework for research in the experienced-information setting is then developed, that differs substantially from extant paradigms in the literature. This framework yields new models of “best” estimates and perceived conflict. The paper concludes with specific suggestions for future research on judgment and decision making under conflict and ambiguity. PMID:26042081

  5. Reduced-order modeling approach for frictional stick-slip behaviors of joint interface

    NASA Astrophysics Data System (ADS)

    Wang, Dong; Xu, Chao; Fan, Xuanhua; Wan, Qiang

    2018-03-01

    The complex frictional stick-slip behavior of mechanical joint interfaces has a great effect on the dynamic properties of assembled structures. In this paper, a reduced-order modeling approach based on the constitutive Iwan model is proposed to describe the stick-slip behavior of a joint interface. An improved Iwan model is developed to capture the non-zero residual stiffness in the macro-slip regime, the smooth transition of joint stiffness from the micro-slip to the macro-slip regime, and the power-law relationship of energy dissipation during micro-slip. To characterize these nonlinear behaviors, the finite element method is used to calculate the restoring force under monotonic loading and the energy dissipation per cycle under oscillatory loading. The proposed model is then used to predict the nonlinear stick-slip behavior of the joint interface by curve-fitting to the finite element results, with which it shows good agreement. A comparison with experimental results from the literature is also made; the proposed model agrees very well with the experiments.

  6. Nonstationary envelope process and first excursion probability.

    NASA Technical Reports Server (NTRS)

    Yang, J.-N.

    1972-01-01

    The definition of a stationary random envelope proposed by Cramer and Leadbetter is extended to the envelope of nonstationary random processes possessing evolutionary power spectral densities. The density function, the joint density function, the moment function, and the level-crossing rate of the nonstationary envelope process are derived. Based on these envelope statistics, approximate solutions to the first excursion probability of nonstationary random processes are obtained. In particular, applications of the first excursion probability to earthquake engineering problems are demonstrated in detail.
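
    For the stationary Gaussian special case, the classical Poisson (independent-crossings) approximation to the first excursion probability follows from Rice's mean up-crossing rate; the nonstationary, evolutionary-spectrum setting of the paper replaces the constant rate with a time integral. A sketch under those stationary assumptions (function names are illustrative):

```python
import math

def upcrossing_rate(b, sigma_x, sigma_xdot):
    """Rice's mean rate of up-crossings of level b for a zero-mean
    stationary Gaussian process with std sigma_x and derivative std
    sigma_xdot: nu(b) = (sigma_xdot / (2*pi*sigma_x)) * exp(-b^2/(2*sigma_x^2))."""
    return (sigma_xdot / (2.0 * math.pi * sigma_x)) * math.exp(-b * b / (2.0 * sigma_x ** 2))

def first_excursion_probability(b, sigma_x, sigma_xdot, duration):
    """Poisson approximation: P(at least one up-crossing of b in [0, T]),
    assuming crossings arrive independently at rate nu(b)."""
    nu = upcrossing_rate(b, sigma_x, sigma_xdot)
    return 1.0 - math.exp(-nu * duration)
```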

  7. Probabilistic Inference: Task Dependency and Individual Differences of Probability Weighting Revealed by Hierarchical Bayesian Modeling

    PubMed Central

    Boos, Moritz; Seer, Caroline; Lange, Florian; Kopp, Bruno

    2016-01-01

    Cognitive determinants of probabilistic inference were examined using hierarchical Bayesian modeling techniques. A classic urn-ball paradigm served as the experimental strategy, involving a factorial two (prior probabilities) by two (likelihoods) design. Five computational models of cognitive processes were compared with the observed behavior. Parameter-free Bayesian posterior probabilities and parameter-free base rate neglect provided inadequate models of probabilistic inference. The introduction of distorted subjective probabilities yielded more robust and generalizable results. A general class of (inverted) S-shaped probability weighting functions had been proposed previously; however, allowing for large differences in probability distortions, not only across experimental conditions but also across individuals, seems critical for the model's success. It also seems advantageous to consider individual differences in the parameters of probability weighting as being sampled from weakly informative prior distributions of individual parameter values. Thus, the results from hierarchical Bayesian modeling converge with previous results in revealing that probability weighting parameters show considerable task dependency and individual differences. Methodologically, this work exemplifies the usefulness of hierarchical Bayesian modeling techniques for cognitive psychology. Theoretically, human probabilistic inference might be best described as the application of individualized strategic policies for Bayesian belief revision. PMID:27303323
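
    A common concrete choice for the (inverted) S-shaped weighting function mentioned above is the Tversky-Kahneman form w(p) = p^g / (p^g + (1-p)^g)^(1/g). The sketch below is illustrative only: the function names are hypothetical, and whether distortion applies to priors, likelihoods, or both is an assumption, not the paper's specification.

```python
def tk_weight(p, gamma):
    """Tversky-Kahneman inverse-S probability weighting function.
    gamma = 1 recovers the identity; gamma < 1 overweights small p
    and underweights large p."""
    num = p ** gamma
    return num / (num + (1.0 - p) ** gamma) ** (1.0 / gamma)

def distorted_posterior(priors, likelihoods, gamma):
    """Bayesian belief revision with subjectively distorted probabilities:
    here both priors and likelihoods pass through the weighting function
    before being combined and renormalised (an illustrative choice)."""
    weighted = [tk_weight(pr, gamma) * tk_weight(lk, gamma)
                for pr, lk in zip(priors, likelihoods)]
    total = sum(weighted)
    return [w / total for w in weighted]
```

    With gamma = 1 the update reduces to exact Bayes; gamma below 1 produces the inverse-S distortion typical of human probability judgments.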

  8. Quantitative metal magnetic memory reliability modeling for welded joints

    NASA Astrophysics Data System (ADS)

    Xing, Haiyan; Dang, Yongbin; Wang, Ben; Leng, Jiancheng

    2016-03-01

    Metal magnetic memory (MMM) testing has been widely used to inspect welded joints. However, load levels, the environmental magnetic field, and measurement noise make MMM data dispersive and complicate quantitative evaluation. To promote the development of quantitative MMM reliability assessment, a new MMM model is presented for welded joints. Welded specimens of steel Q235 were tested along longitudinal and horizontal lines with a TSC-2M-8 instrument during tensile fatigue experiments, with X-ray testing carried out synchronously to verify the MMM results. It is found that MMM testing can detect hidden cracks earlier than X-ray testing. Moreover, the MMM gradient vector sum K_vs is sensitive to the damage degree, especially at the early and hidden damage stages. Considering the dispersion of MMM data, the statistical law of K_vs is investigated; K_vs obeys a Gaussian distribution, making it a suitable MMM parameter for establishing a reliability model of welded joints. Finally, a quantitative MMM reliability model is presented, for the first time, based on improved stress-strength interference theory. The reliability degree R gradually decreases as the residual life ratio T decreases, and the maximal error between the predicted reliability degree R_1 and the verification reliability degree R_2 is 9.15%. The method provides a novel tool for reliability testing and evaluation of welded joints in engineering practice.
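
    The reliability model above builds on (improved) stress-strength interference theory. The textbook Gaussian version, R = Phi((mu_strength - mu_stress) / sqrt(sigma_strength^2 + sigma_stress^2)), can be sketched as follows; the paper's improved formulation differs in detail, so this is background, not the authors' model:

```python
from statistics import NormalDist

def interference_reliability(mu_strength, sd_strength, mu_stress, sd_stress):
    """Classical stress-strength interference: R = P(strength > stress)
    for independent Gaussian strength and stress. The safety margin
    (strength - stress) is Gaussian, so R is a single Phi evaluation."""
    margin_mean = mu_strength - mu_stress
    margin_sd = (sd_strength ** 2 + sd_stress ** 2) ** 0.5
    return NormalDist().cdf(margin_mean / margin_sd)
```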

  9. [Estimating survival of thrushes: modeling capture-recapture probabilities].

    PubMed

    Burskiî, O V

    2011-01-01

    The stochastic modeling technique serves as a way to correctly separate the "return rate" of marked animals into a survival rate (phi) and a capture probability (p). The method can readily be used with the program MARK, freely distributed through the Internet (Cooch, White, 2009). Input data for the program consist of "capture histories" of marked animals: strings of ones and zeros indicating presence or absence of the individual among captures (or sightings) over the set of consecutive recapture occasions (e.g., years). The probability of any history is a product of the binomial probabilities phi and p or their complements (1 - phi) and (1 - p) for each year of observation of the individual. Assigning values to the parameters phi and p, one can predict the composition of all individual histories in the sample and assess the likelihood of the prediction. The survival parameters for different occasions and cohorts of individuals can be set either equal or different, and likewise for the recapture parameters. The parameters can also be constrained, according to the hypothesis being tested, in the form of a specific model. Within the specified constraints, the program searches for the parameter values that describe the observed composition of histories with maximum likelihood, and computes the parameter estimates along with confidence limits and the overall model likelihood. A set of tools is available for testing model goodness-of-fit under the assumptions of equal survival rates among individuals and independence of their fates. Other tools support selection among candidate models, balancing detail against precision in describing reality. The method was applied to a 20-yr series of recapture and resighting data on 4 thrush species (genera Turdus, Zoothera) breeding in the Yenisei River floodplain within the middle taiga subzone. The capture probabilities were largely independent of fluctuations in observational effort
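
    The likelihood construction described above (a history's probability as a product of phi, p and their complements) can be sketched directly. This is a minimal constant-parameter Cormack-Jolly-Seber likelihood for a single capture history, conditioning on first capture; the function name is illustrative:

```python
def history_likelihood(history, phi, p):
    """Likelihood of one capture history (e.g. '1011') under a
    Cormack-Jolly-Seber model with constant survival phi and capture
    probability p, conditioning on the first capture occasion."""
    first = history.index('1')
    last = history.rindex('1')
    lik = 1.0
    # Between first and last capture the animal is known to be alive:
    # each occasion contributes phi times p (seen) or (1 - p) (missed).
    for t in range(first + 1, last + 1):
        lik *= phi * (p if history[t] == '1' else 1.0 - p)
    # chi: probability of never being seen again after occasion `last`,
    # built by backward recursion chi_t = (1-phi) + phi*(1-p)*chi_{t+1}.
    chi = 1.0
    for _ in range(len(history) - 1 - last):
        chi = (1.0 - phi) + phi * (1.0 - p) * chi
    return lik * chi
```

    Summing over all histories that share the same first-capture occasion yields 1, which is a convenient correctness check.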

  10. A Causal Model for Joint Evaluation of Placebo and Treatment-Specific Effects in Clinical Trials

    PubMed Central

    Zhang, Zhiwei; Kotz, Richard M.; Wang, Chenguang; Ruan, Shiling; Ho, Martin

    2014-01-01

    Evaluation of medical treatments is frequently complicated by the presence of substantial placebo effects, especially on relatively subjective endpoints, and the standard solution to this problem is a randomized, double-blinded, placebo-controlled clinical trial. However, effective blinding does not guarantee that all patients have the same belief or mentality about which treatment they have received (or "treatmentality", for brevity), making it difficult to interpret the usual intent-to-treat effect as a causal effect. We discuss the causal relationships among treatment, treatmentality and the clinical outcome of interest, and propose a causal model for joint evaluation of placebo and treatment-specific effects. The model highlights the importance of measuring and incorporating patient treatmentality and suggests that each treatment group should be considered a separate observational study with a patient's treatmentality playing the role of an uncontrolled exposure. This perspective allows us to adapt existing methods for dealing with confounding to joint estimation of placebo and treatment-specific effects using measured treatmentality data, commonly known as blinding assessment data. We first apply this approach to the most common type of blinding assessment data, which is categorical, and illustrate the methods using an example from asthma. We then propose that blinding assessment data can be collected as a continuous variable, specifically when a patient's treatmentality is measured as a subjective probability, and describe analytic methods for that case. PMID:23432119

  11. Nonlinear structural joint model updating based on instantaneous characteristics of dynamic responses

    NASA Astrophysics Data System (ADS)

    Wang, Zuo-Cai; Xin, Yu; Ren, Wei-Xin

    2016-08-01

    This paper proposes a new nonlinear joint model updating method for shear-type structures based on the instantaneous characteristics of the decomposed structural dynamic responses. To obtain an accurate representation of the nonlinear system's dynamics, the nonlinear joint is modeled as a nonlinear spring element with bilinear stiffness. The instantaneous frequencies and amplitudes of the decomposed mono-components are first extracted by the analytical mode decomposition (AMD) method. Then, an objective function based on the residuals of the instantaneous frequencies and amplitudes between the experimental structure and the nonlinear model is created for the nonlinear joint model updating. The optimal values of the nonlinear joint model parameters are obtained by minimizing the objective function using the simulated annealing global optimization method. To validate the effectiveness of the proposed method, a single-story shear-type structure subjected to earthquake and harmonic excitations is simulated as a numerical example, followed by a beam structure with multiple local nonlinear elements subjected to earthquake excitation. The nonlinear beam structure is updated based on both global and local models using the proposed method; the results show that the proposed local nonlinear model updating method is more effective for structures with multiple local nonlinear elements. Finally, the proposed method is verified by a shake-table test of a real high-voltage switch structure. The accuracy of the proposed method is quantified in both numerical and experimental applications using the defined error indices. Both the numerical and experimental results show that the proposed method can effectively update the nonlinear joint model.

  12. On joint subtree distributions under two evolutionary models.

    PubMed

    Wu, Taoyang; Choi, Kwok Pui

    2016-04-01

    In population and evolutionary biology, hypotheses about micro-evolutionary and macro-evolutionary processes are commonly tested by comparing the shape indices of empirical evolutionary trees with those predicted by neutral models. A key ingredient in this approach is the ability to compute and quantify distributions of various tree shape indices under random models of interest. As a step to meet this challenge, in this paper we investigate the joint distribution of cherries and pitchforks (that is, subtrees with two and three leaves) under two widely used null models: the Yule-Harding-Kingman (YHK) model and the proportional to distinguishable arrangements (PDA) model. Based on two novel recursive formulae, we propose a dynamic approach to numerically compute the exact joint distribution (and hence the marginal distributions) for trees of any size. We also obtained insights into the statistical properties of trees generated under these two models, including a constant correlation between the cherry and the pitchfork distributions under the YHK model, and the log-concavity and unimodality of the cherry distributions under both models. In addition, we show that there exists a unique change point for the cherry distributions between these two models. Copyright © 2015 Elsevier Inc. All rights reserved.
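
    The quantities involved are easy to make concrete. The sketch below is a simulation, not the paper's exact recursive computation: it grows trees under the YHK model by uniform leaf splitting and counts cherries and pitchforks on the resulting shapes.

```python
import random

def yule_tree(n_leaves, rng):
    """Grow a random binary tree under the Yule-Harding-Kingman model
    by repeatedly splitting a uniformly chosen leaf.
    A leaf is [], an internal node is [left, right]."""
    root = []
    leaves = [root]
    while len(leaves) < n_leaves:
        node = leaves.pop(rng.randrange(len(leaves)))
        node.append([])   # mutate the chosen leaf in place into an internal node
        node.append([])
        leaves.extend(node)
    return root

def is_cherry(node):
    """A cherry is an internal node whose two children are both leaves."""
    return bool(node) and not node[0] and not node[1]

def count_cherries(node):
    if not node:
        return 0
    if is_cherry(node):
        return 1
    return count_cherries(node[0]) + count_cherries(node[1])

def count_pitchforks(node):
    """A pitchfork is a subtree with exactly three leaves: one child is a
    leaf and the other is a cherry."""
    if not node:
        return 0
    left, right = node
    here = int((not left and is_cherry(right)) or (not right and is_cherry(left)))
    return here + count_pitchforks(left) + count_pitchforks(right)
```

    Averaging the counts over many simulated trees gives Monte Carlo estimates of the joint distribution that the paper computes exactly via its recursions.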

  13. Joint modelling of repeated measurement and time-to-event data: an introductory tutorial.

    PubMed

    Asar, Özgür; Ritchie, James; Kalra, Philip A; Diggle, Peter J

    2015-02-01

    The term 'joint modelling' is used in the statistical literature to refer to methods for simultaneously analysing longitudinal measurement outcomes, also called repeated measurement data, and time-to-event outcomes, also called survival data. A typical example from nephrology is a study in which the data from each participant consist of repeated estimated glomerular filtration rate (eGFR) measurements and time to initiation of renal replacement therapy (RRT). Joint models typically combine linear mixed effects models for repeated measurements and Cox models for censored survival outcomes. Our aim in this paper is to present an introductory tutorial on joint modelling methods, with a case study in nephrology. We describe the development of the joint modelling framework and compare the results with those obtained by the more widely used approaches of conducting separate analyses of the repeated measurements and survival times based on a linear mixed effects model and a Cox model, respectively. Our case study concerns a data set from the Chronic Renal Insufficiency Standards Implementation Study (CRISIS). We also provide details of our open-source software implementation to allow others to replicate and/or modify our analysis. The results for the conventional linear mixed effects model and the longitudinal component of the joint models were found to be similar. However, there were considerable differences between the results for the Cox model with time-varying covariate and the time-to-event component of the joint model. For example, the relationship between kidney function as measured by eGFR and the hazard for initiation of RRT was significantly underestimated by the Cox model that treats eGFR as a time-varying covariate, because the Cox model does not take measurement error in eGFR into account. 
Joint models should be preferred for simultaneous analyses of repeated measurement and survival data, especially when the former is measured with error and the association

  14. A reduced Iwan model that includes pinning for bolted joint mechanics

    DOE PAGES

    Brake, M. R. W.

    2016-10-28

    Bolted joints are prevalent in most assembled structures; however, predictive models for their behavior do not exist. Calibrated models, such as the Iwan model, are able to predict the response of a jointed structure over a range of excitations once calibrated at a nominal load. The Iwan model, though, is not widely adopted due to the high computational expense of implementation. To address this, an analytical solution of the Iwan model is derived under the hypothesis that for an arbitrary load reversal, there is a new distribution of dry friction elements, now stuck, that approximately resembles a scaled version of the original distribution. The dry friction elements internal to the Iwan model do not share a single set of parameters; a distribution of parameters determines which elements are stuck or slipping at a given load, and this ultimately governs the behavior of the joint as it transitions from microslip to macroslip. This hypothesis allows the model to require no information from previous loading cycles. Additionally, the model is extended to include the pinning behavior inherent in a bolted joint. Modifications of the resulting framework are discussed to highlight how the constitutive model for friction can be changed (as in an Iwan–Stribeck formulation) or how the distribution of dry friction elements can be changed (as in the Iwan plasticity model). Finally, the reduced Iwan plus pinning model is applied to the Brake–Reuß beam in order to discuss methods for deducing model parameters from experimental data.

  15. Efficient computation of the joint probability of multiple inherited risk alleles from pedigree data.

    PubMed

    Madsen, Thomas; Braun, Danielle; Peng, Gang; Parmigiani, Giovanni; Trippa, Lorenzo

    2018-06-25

    The Elston-Stewart peeling algorithm enables estimation of an individual's probability of harboring germline risk alleles based on pedigree data, and serves as the computational backbone of important genetic counseling tools. However, it remains limited to the analysis of risk alleles at a small number of genetic loci because its computing time grows exponentially with the number of loci considered. We propose a novel, approximate version of this algorithm, dubbed the peeling and paring algorithm, which scales polynomially in the number of loci. This allows peeling-based models to be extended to many genetic loci. The algorithm creates a trade-off between accuracy and speed, and allows the user to control this trade-off. We provide exact bounds on the approximation error and evaluate it in realistic simulations. Results show that the loss of accuracy due to the approximation is negligible in important applications. This algorithm will improve genetic counseling tools by increasing the number of pathogenic risk alleles that can be addressed. To illustrate, we create an extended five-gene version of BRCAPRO, a widely used model for estimating the carrier probabilities of BRCA1 and BRCA2 risk alleles, and assess its computational properties. © 2018 WILEY PERIODICALS, INC.

  16. Estimating the Probability of Rare Events Occurring Using a Local Model Averaging.

    PubMed

    Chen, Jin-Hua; Chen, Chun-Shu; Huang, Meng-Fan; Lin, Hung-Chih

    2016-10-01

    In statistical applications, logistic regression is a popular method for analyzing binary data accompanied by explanatory variables. When one of the two outcomes is rare, however, the estimation of model parameters has been shown to be severely biased, and hence estimates of the probability of rare events based on a logistic regression model can be inaccurate. In this article, we focus on estimating the probability of rare events based on logistic regression models. Instead of selecting a single best model, we propose a local model averaging procedure based on a data perturbation technique applied to different information criteria to obtain different probability estimates of rare events. An approximately unbiased estimator of Kullback-Leibler loss is then used to choose the best one among them. We design complete simulations to show the effectiveness of our approach. For illustration, a necrotizing enterocolitis (NEC) data set is analyzed. © 2016 Society for Risk Analysis.

  17. Models for joint ophthalmology-optometry patient management.

    PubMed

    Kim, John J; Kim, Christine M

    2011-07-01

    The American Academy of Ophthalmology (AAO) and the American Society of Cataract and Refractive Surgery (ASCRS) presented a joint position paper in February 2000 declaring that they do not support routine comanagement of patients with optometrists. The American Optometric Association and the American Academy of Optometry quickly responded in support of the AAO and ASCRS. None of the four entities, however, precluded legitimate and proper comanagement arrangements. Since that time the pattern of practice has changed, requiring these positions to be rethought. This paper provides a possible model for ophthalmology-optometry practice management in ophthalmic surgery, including refractive surgery. Since the publication of the joint position paper, the concept of comanagement has faded and a new model of integrated management has evolved, as changes in the employment pattern of ophthalmic practices have incorporated optometrists into their fold. This evolution has allowed the ophthalmic and optometric communities to co-exist and thrive in providing better patient care.

  18. Mechanical model of suture joints with fibrous connective layer

    NASA Astrophysics Data System (ADS)

    Miroshnichenko, Kateryna; Liu, Lei; Tsukrov, Igor; Li, Yaning

    2018-02-01

    A composite model for suture joints with a connective layer of aligned fibers embedded in a soft matrix is proposed. Based on the principle of complementary virtual work, the composite cylinder assemblage (CCA) approach, and generalized self-consistent micro-mechanical models, a hierarchical homogenization methodology is developed to systematically quantify the synergistic effects of suture morphology and fiber orientation on the overall mechanical properties of sutures. Suture joints with a regular triangular waveform serve as an example material system for applying this methodology. Both theoretical and finite element mechanical models are developed and compared to evaluate the overall normal stiffness of sutures as a function of the wavy morphology of the suture, the fiber orientation, the fiber volume fraction, and the mechanical properties of the fibers and matrix in the interfacial layer. It is found that, due to anisotropy-induced coupling between tensile and shear deformation, the effective normal stiffness of sutures is generally highly dependent on the fiber orientation in the connective layer. The effective shear modulus of the connective layer and the stiffness ratio between fiber and matrix also significantly influence the effects of fiber orientation. In addition, optimal fiber orientations are found that maximize the stiffness of suture joints.

  19. Review of Literature for Model Assisted Probability of Detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meyer, Ryan M.; Crawford, Susan L.; Lareau, John P.

    This is a draft technical letter report for an NRC client documenting a literature review of model-assisted probability of detection (MAPOD) for potential application to nuclear power plant components, with the aim of improving field NDE performance estimation.

  20. Modeling spatial variation in avian survival and residency probabilities

    USGS Publications Warehouse

    Saracco, James F.; Royle, J. Andrew; DeSante, David F.; Gardner, Beth

    2010-01-01

    The importance of understanding spatial variation in processes driving animal population dynamics is widely recognized. Yet little attention has been paid to spatial modeling of vital rates. Here we describe a hierarchical spatial autoregressive model to provide spatially explicit year-specific estimates of apparent survival (phi) and residency (pi) probabilities from capture-recapture data. We apply the model to data collected on a declining bird species, Wood Thrush (Hylocichla mustelina), as part of a broad-scale bird-banding network, the Monitoring Avian Productivity and Survivorship (MAPS) program. The Wood Thrush analysis showed variability in both phi and pi among years and across space. Spatial heterogeneity in residency probability was particularly striking, suggesting the importance of understanding the role of transients in local populations. We found broad-scale spatial patterning in Wood Thrush phi and pi that lend insight into population trends and can direct conservation and research. The spatial model developed here represents a significant advance over approaches to investigating spatial pattern in vital rates that aggregate data at coarse spatial scales and do not explicitly incorporate spatial information in the model. Further development and application of hierarchical capture-recapture models offers the opportunity to more fully investigate spatiotemporal variation in the processes that drive population changes.

  1. Interpretation of the results of statistical measurements. [search for basic probability model

    NASA Technical Reports Server (NTRS)

    Olshevskiy, V. V.

    1973-01-01

    For random processes, the calculated probability characteristic and the measured statistical estimate are used in a quality functional that defines the difference between the two functions. Under the assumption that the statistical measurement procedure is organized so that the parameters of a selected model are optimized, it is shown that the interpretation of experimental research amounts to a search for a basic probability model.

  2. The Analysis of Adhesively Bonded Advanced Composite Joints Using Joint Finite Elements

    NASA Technical Reports Server (NTRS)

    Stapleton, Scott E.; Waas, Anthony M.

    2012-01-01

    The design and sizing of adhesively bonded joints has always been a major bottleneck in the design of composite vehicles. Dense finite element (FE) meshes are required to capture the full behavior of a joint numerically, but these dense meshes are impractical in vehicle-scale models, where a coarse mesh is more desirable for making quick assessments and comparisons of different joint geometries. Analytical models are often helpful in sizing, but difficulties arise in coupling these models with full-vehicle FE models. Therefore, a joint FE was created which can be used within structural FE models to make quick assessments of bonded composite joints. The shape functions of the joint FE were found by solving the governing equations of a structural model for a joint. By analytically determining the shape functions, the complex joint behavior can be captured with very few elements. This joint FE was modified and used to consider adhesives with functionally graded material properties to reduce the peel stress concentrations located near adherend discontinuities. Several practical concerns impede the actual use of such adhesives, including increased manufacturing complications, alterations to the grading due to adhesive flow during manufacturing, and whether changing the loading conditions significantly impacts the effectiveness of the grading. An analytical study is conducted to address these three concerns, and proof-of-concept testing is conducted to show the potential advantages of functionally graded adhesives. In this study, grading is achieved by strategically placing glass beads within the adhesive layer at different densities along the joint. Furthermore, the capability to model nonlinear adhesive constitutive behavior with large rotations was developed, and progressive failure of the adhesive was modeled by re-meshing the joint as the adhesive fails. Results predicted using the joint FE were compared with experimental results for various

  3. A Joint Replenishment Inventory Model with Lost Sales

    NASA Astrophysics Data System (ADS)

    Devy, N. L.; Ai, T. J.; Astanti, R. D.

    2018-04-01

    This paper deals with a two-item joint replenishment inventory problem in which the demand for each item is constant and deterministic. Inventory is replenished periodically every T time units, and joint replenishment of both items is possible: item i is replenished every Z_i * T time units. Replenishment is instantaneous, and all shortages are treated as lost sales, with a maximum allowable lost-sales quantity S_i for item i. A mathematical model is formulated to determine the basic time cycle T, the replenishment multipliers Z_i, and the maximum lost sales S_i that minimize the total cost per unit time. A solution methodology is proposed for the model, and a numerical example demonstrates its effectiveness.
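
    For intuition, the classic joint-replenishment cost structure (a major ordering cost shared by any joint order, minor per-item ordering costs, and linear holding costs) admits a closed-form basic cycle T once the integer multipliers Z_i are fixed. The sketch below omits the paper's lost-sales terms S_i and uses hypothetical cost parameters; it is background on the standard problem, not the paper's model:

```python
import itertools
import math

def best_joint_policy(major_cost, minor_costs, holding_rates, demands, z_max=6):
    """Classic joint-replenishment sketch: item i is ordered every Z_i * T.
    For fixed integer multipliers Z the cost per unit time is
    fixed/T + holding*T, minimized at T* = sqrt(fixed/holding);
    search a small integer grid for the cheapest combination."""
    best = None
    for Z in itertools.product(range(1, z_max + 1), repeat=len(demands)):
        fixed = major_cost + sum(k / z for k, z in zip(minor_costs, Z))
        holding = sum(h * d * z for h, d, z in zip(holding_rates, demands, Z)) / 2.0
        T = math.sqrt(fixed / holding)
        cost = fixed / T + holding * T   # equals 2*sqrt(fixed*holding)
        if best is None or cost < best[0]:
            best = (cost, T, Z)
    return best
```

    With symmetric items the search returns Z = (1, 1); giving one item a much smaller demand makes a multiplier above 1 optimal for it, which is the basic trade-off the paper's richer model also balances.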

  4. Multistate modeling of habitat dynamics: Factors affecting Florida scrub transition probabilities

    USGS Publications Warehouse

    Breininger, D.R.; Nichols, J.D.; Duncan, B.W.; Stolen, Eric D.; Carter, G.M.; Hunt, D.K.; Drese, J.H.

    2010-01-01

    Many ecosystems are influenced by disturbances that create specific successional states and habitat structures that species need to persist. Estimating transition probabilities between habitat states and modeling the factors that influence such transitions have many applications for investigating and managing disturbance-prone ecosystems. We identify the correspondence between multistate capture-recapture models and Markov models of habitat dynamics. We exploit this correspondence by fitting and comparing competing models of different ecological covariates affecting habitat transition probabilities in Florida scrub and flatwoods, a habitat important to many unique plants and animals. We subdivided a large scrub and flatwoods ecosystem along central Florida's Atlantic coast into 10-ha grid cells, which approximated average territory size of the threatened Florida Scrub-Jay (Aphelocoma coerulescens), a management indicator species. We used 1.0-m resolution aerial imagery for 1994, 1999, and 2004 to classify grid cells into four habitat quality states that were directly related to Florida Scrub-Jay source-sink dynamics and management decision making. Results showed that static site features related to fire propagation (vegetation type, edges) and temporally varying disturbances (fires, mechanical cutting) best explained transition probabilities. Results indicated that much of the scrub and flatwoods ecosystem was resistant to moving from a degraded state to a desired state without mechanical cutting, an expensive restoration tool. We used habitat models parameterized with the estimated transition probabilities to investigate the consequences of alternative management scenarios on future habitat dynamics. We recommend this multistate modeling approach as being broadly applicable for studying ecosystem, land cover, or habitat dynamics. The approach provides maximum-likelihood estimates of transition parameters, including precision measures, and can be used to assess
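
    At its core, estimating habitat transition probabilities from repeated classifications of grid cells reduces to the maximum-likelihood estimator for a Markov transition matrix: row-normalized transition counts. A minimal sketch that ignores the paper's multistate capture-recapture machinery and covariates (function name illustrative):

```python
def estimate_transition_matrix(state_sequences, n_states):
    """Maximum-likelihood estimate of a first-order Markov transition
    matrix from observed state sequences: count transitions a -> b,
    then normalize each row. Rows with no observations fall back to
    a uniform distribution."""
    counts = [[0] * n_states for _ in range(n_states)]
    for seq in state_sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    matrix = []
    for row in counts:
        total = sum(row)
        matrix.append([c / total if total else 1.0 / n_states for c in row])
    return matrix
```

    In the paper's setting each sequence would be one 10-ha grid cell classified into one of four habitat-quality states at each imagery date.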

  5. Statistical validation of normal tissue complication probability models.

    PubMed

    Xu, Cheng-Jian; van der Schaaf, Arjen; Van't Veld, Aart A; Langendijk, Johannes A; Schilstra, Cornelis

    2012-09-01

    To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. A penalized regression method, LASSO (least absolute shrinkage and selection operator), was used to build NTCP models for xerostomia after radiation therapy treatment of head-and-neck cancer. Model assessment was based on the likelihood function and the area under the receiver operating characteristic curve. Repeated double cross-validation showed the uncertainty and instability of the NTCP models and indicated that the statistical significance of model performance can be obtained by permutation testing. Repeated double cross-validation and permutation tests are recommended to validate NTCP models before clinical use. Copyright © 2012 Elsevier Inc. All rights reserved.
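The permutation-testing idea can be sketched in a minimal numpy-only form. This is not the paper's LASSO/double-cross-validation pipeline; it only shows how permuting outcome labels yields a null distribution for a performance metric (here AUC), with the toy dose-response data entirely invented:

```python
import numpy as np

rng = np.random.default_rng(0)

def auc(y_true, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity."""
    order = np.argsort(scores)
    ranks = np.empty_like(order, dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = y_true == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Toy NTCP-like data: a dose metric loosely predicts complication status.
dose = rng.normal(50, 10, 200)
y = (rng.random(200) < 1 / (1 + np.exp(-(dose - 55) / 5))).astype(int)

observed = auc(y, dose)
# Null distribution: performance when outcome labels carry no information.
perm = np.array([auc(rng.permutation(y), dose) for _ in range(1000)])
p_value = (np.sum(perm >= observed) + 1) / (1000 + 1)
print(f"AUC = {observed:.3f}, permutation p = {p_value:.3f}")
```

A small permutation p-value indicates the model performance exceeds what chance association would produce, which is the statistical-significance check the abstract recommends before clinical use.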

  6. Aggregate and individual replication probability within an explicit model of the research process.

    PubMed

    Miller, Jeff; Schwarz, Wolf

    2011-09-01

    We study a model of the research process in which the true effect size, the replication jitter due to changes in experimental procedure, and the statistical error of effect size measurement are all normally distributed random variables. Within this model, we analyze the probability of successfully replicating an initial experimental result by obtaining either a statistically significant result in the same direction or any effect in that direction. We analyze both the probability of successfully replicating a particular experimental effect (i.e., the individual replication probability) and the average probability of successful replication across different studies within some research context (i.e., the aggregate replication probability), and we identify the conditions under which the latter can be approximated using the formulas of Killeen (2005a, 2007). We show how both of these probabilities depend on parameters of the research context that would rarely be known in practice. In addition, we show that the statistical uncertainty associated with the size of an initial observed effect would often prevent accurate estimation of the desired individual replication probability even if these research context parameters were known exactly. We conclude that accurate estimates of replication probability are generally unattainable.
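The three normally distributed components of the model (true effect, replication jitter, measurement error) make the aggregate replication probability easy to simulate. The parameter values below are arbitrary placeholders, since, as the abstract notes, they would rarely be known in practice:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical research-context parameters (standardized effect-size units).
mu, sigma_delta = 0.3, 0.2   # distribution of true effects across studies
sigma_jitter = 0.1           # procedural jitter between original and replication
se = 0.15                    # standard error of each effect-size measurement

n_sim = 100_000
true_effect = rng.normal(mu, sigma_delta, n_sim)
replication_effect = true_effect + rng.normal(0, sigma_jitter, n_sim)
observed_rep = replication_effect + rng.normal(0, se, n_sim)

# Aggregate probability of replicating in the same direction (here: positive),
# and of a significant same-direction result at two-sided alpha = .05.
p_same_direction = float(np.mean(observed_rep > 0))
p_significant = float(np.mean(observed_rep > 1.96 * se))
print(p_same_direction, p_significant)
```

The individual replication probability for one particular effect would instead condition on a single draw of `true_effect`, which is exactly the quantity the abstract argues is hard to estimate from one initial observed effect.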

  7. Multi-Scale Computational Modeling of Ni-Base Superalloy Brazed Joints for Gas Turbine Applications

    NASA Astrophysics Data System (ADS)

    Riggs, Bryan

    Brazed joints are commonly used in the manufacture and repair of aerospace components including high temperature gas turbine components made of Ni-base superalloys. For such critical applications, it is becoming increasingly important to account for the mechanical strength and reliability of the brazed joint. However, material properties of brazed joints are not readily available, and methods for evaluating joint strength such as those listed in AWS C3.2 have inherent challenges compared with testing bulk materials. In addition, joint strength can be strongly influenced by the degree of interaction between the filler metal (FM) and the base metal (BM), the joint design, and the presence of flaws or defects. As a result, there is interest in the development of a multi-scale computational model to predict the overall mechanical behavior and fitness-for-service of brazed joints. Therefore, the aim of this investigation was to generate data and methodology to support such a model for Ni-base superalloy brazed joints with conventional Ni-Cr-B based FMs. Based on a review of the technical literature, a multi-scale modeling approach was proposed to predict the overall performance of brazed joints by relating mechanical properties to the brazed joint microstructure. This approach incorporates metallurgical characterization, thermodynamic/kinetic simulations, mechanical testing, fracture mechanics and finite element analysis (FEA) modeling to estimate joint properties based on the initial BM/FM composition and brazing process parameters. Experimental work was carried out in each of these areas to validate the multi-scale approach and develop improved techniques for quantifying brazed joint properties. Two Ni-base superalloys often used in gas turbine applications, Inconel 718 and CMSX-4, were selected for study and vacuum furnace brazed using two common FMs, BNi-2 and BNi-9. Metallurgical characterization of these brazed joints showed two primary microstructural regions: a soft

  8. Influence of Joint Angle on EMG-Torque Model During Constant-Posture, Torque-Varying Contractions.

    PubMed

    Liu, Pu; Liu, Lukai; Clancy, Edward A

    2015-11-01

    Relating the electromyogram (EMG) to joint torque is useful in various application areas, including prosthesis control, ergonomics and clinical biomechanics. Few studies have related EMG to torque across varied joint angles, particularly when subjects performed force-varying contractions or when optimized modeling methods were utilized. We related the biceps-triceps surface EMG of 22 subjects to elbow torque at six joint angles (spanning 60° to 135°) during constant-posture, torque-varying contractions. Three nonlinear EMGσ-torque models, advanced EMG amplitude (EMGσ) estimation processors (i.e., whitened, multiple-channel) and the duration of data used to train models were investigated. When EMG-torque models were formed separately for each of the six distinct joint angles, a minimum "gold standard" error of 4.01±1.2% MVC(F90) resulted (i.e., error relative to maximum voluntary contraction at 90° flexion). This model structure, however, did not directly facilitate interpolation across angles. The best model that did so achieved a statistically equivalent error of 4.06±1.2% MVC(F90). Results demonstrated that advanced EMGσ processors lead to improved joint torque estimation, as do longer model training durations.

  9. Probable flood predictions in ungauged coastal basins of El Salvador

    USGS Publications Warehouse

    Friedel, M.J.; Smith, M.E.; Chica, A.M.E.; Litke, D.

    2008-01-01

    A regionalization procedure is presented and used to predict probable flooding in four ungauged coastal river basins of El Salvador: Paz, Jiboa, Grande de San Miguel, and Goascoran. The flood-prediction problem is sequentially solved for two regions: upstream mountains and downstream alluvial plains. In the upstream mountains, a set of rainfall-runoff parameter values and recurrent peak-flow discharge hydrographs are simultaneously estimated for 20 tributary-basin models. Application of dissimilarity equations among tributary basins (soft prior information) permitted development of a parsimonious parameter structure subject to information content in the recurrent peak-flow discharge values derived using regression equations based on measurements recorded outside the ungauged study basins. The estimated joint set of parameter values formed the basis from which probable minimum and maximum peak-flow discharge limits were then estimated revealing that prediction uncertainty increases with basin size. In the downstream alluvial plain, model application of the estimated minimum and maximum peak-flow hydrographs facilitated simulation of probable 100-year flood-flow depths in confined canyons and across unconfined coastal alluvial plains. The regionalization procedure provides a tool for hydrologic risk assessment and flood protection planning that is not restricted to the case presented herein. ?? 2008 ASCE.

  10. Development and evaluation of a musculoskeletal model of the elbow joint complex

    NASA Technical Reports Server (NTRS)

    Gonzalez, Roger V.; Hutchins, E. L.; Barr, Ronald E.; Abraham, Lawrence D.

    1993-01-01

    This paper describes the development and evaluation of a musculoskeletal model that represents human elbow flexion-extension and forearm pronation-supination. The length, velocity, and moment arm for each of the eight musculotendon actuators were based on skeletal anatomy and position. Musculotendon parameters were determined for each actuator and verified by comparing analytical torque-angle curves with experimental joint torque data. The parameters and skeletal geometry were also utilized in the musculoskeletal model for the analysis of ballistic elbow joint complex movements. The key objective was to develop a computational model, guided by parameterized optimal control, to investigate the relationship among patterns of muscle excitation, individual muscle forces, and movement kinematics. The model was verified using experimental kinematic, torque, and electromyographic data from volunteer subjects performing ballistic elbow joint complex movements.

  11. Mixed models approaches for joint modeling of different types of responses.

    PubMed

    Ivanova, Anna; Molenberghs, Geert; Verbeke, Geert

    2016-01-01

    In many biomedical studies, one jointly collects longitudinal continuous, binary, and survival outcomes, possibly with some observations missing. Random-effects models, sometimes called shared-parameter models or frailty models, have received considerable attention. In such models, the corresponding variance components can be employed to capture the association between the various sequences. In some cases, random effects are considered common to various sequences, perhaps up to a scaling factor; in others, there are different but correlated random effects. Even though a variety of data types has been considered in the literature, less attention has been devoted to ordinal data. For univariate longitudinal or hierarchical data, the proportional odds mixed model (POMM) is an instance of the generalized linear mixed model (GLMM; Breslow and Clayton, 1993). Ordinal data are conveniently replaced by a parsimonious set of dummies, which in the longitudinal setting leads to a repeated set of dummies. When ordinal longitudinal data are part of a joint model, the complexity increases further. This is the setting considered in this paper. We formulate a random-effects based model that, in addition, allows for overdispersion. Using two case studies, it is shown that the combination of random effects to capture association with further correction for overdispersion can improve the model's fit considerably, and that the resulting models make it possible to answer research questions that could not be addressed otherwise. Parameters can be estimated in a fairly straightforward way using the SAS procedure NLMIXED.

  12. Joint scale-change models for recurrent events and failure time.

    PubMed

    Xu, Gongjun; Chiou, Sy Han; Huang, Chiung-Yu; Wang, Mei-Cheng; Yan, Jun

    2017-01-01

    Recurrent event data arise frequently in various fields such as biomedical sciences, public health, engineering, and social sciences. In many instances, the observation of the recurrent event process can be stopped by the occurrence of a correlated failure event, such as treatment failure and death. In this article, we propose a joint scale-change model for the recurrent event process and the failure time, where a shared frailty variable is used to model the association between the two types of outcomes. In contrast to the popular Cox-type joint modeling approaches, the regression parameters in the proposed joint scale-change model have marginal interpretations. The proposed approach is robust in the sense that no parametric assumption is imposed on the distribution of the unobserved frailty and that we do not need the strong Poisson-type assumption for the recurrent event process. We establish consistency and asymptotic normality of the proposed semiparametric estimators under suitable regularity conditions. To estimate the corresponding variances of the estimators, we develop a computationally efficient resampling-based procedure. Simulation studies and an analysis of hospitalization data from the Danish Psychiatric Central Register illustrate the performance of the proposed method.

  13. Transitional probability-based model for HPV clearance in HIV-1-positive adolescent females.

    PubMed

    Kravchenko, Julia; Akushevich, Igor; Sudenga, Staci L; Wilson, Craig M; Levitan, Emily B; Shrestha, Sadeep

    2012-01-01

    HIV-1-positive patients clear human papillomavirus (HPV) infection less frequently than HIV-1-negative patients. Datasets for estimating HPV clearance probability often have irregular measurements of HPV status and risk factors. A new transitional probability-based model for estimating the probability of HPV clearance was developed to fully incorporate information on HIV-1-related clinical data, such as CD4 counts, HIV-1 viral load (VL), highly active antiretroviral therapy (HAART), and risk factors (measured quarterly), and HPV infection status (measured at 6-month intervals). Data from 266 HIV-1-positive and 134 at-risk HIV-1-negative adolescent females from the Reaching for Excellence in Adolescent Care and Health (REACH) cohort were used in this study. First, the associations were evaluated using the Cox proportional hazard model, and the variables that demonstrated significant effects on HPV clearance were included in transitional probability models. The new model established the efficacy of CD4 cell counts as a main clearance predictor for all type-specific HPV phylogenetic groups. The 3-month probability of HPV clearance in HIV-1-infected patients significantly increased with increasing CD4 counts for HPV16/16-like (p<0.001), HPV18/18-like (p<0.001), HPV56/56-like (p = 0.05), and low-risk HPV (p<0.001) phylogenetic groups, with the lowest probability found for HPV16/16-like infections (21.60±1.81% at a CD4 level of 200 cells/mm³, p<0.05; and 28.03±1.47% at a CD4 level of 500 cells/mm³). HIV-1 VL was a significant predictor for clearance of low-risk HPV infections (p<0.05). HAART (with protease inhibitor) was a significant predictor of the probability of HPV16 clearance (p<0.05). The HPV16/16-like and HPV18/18-like groups showed heterogeneity (p<0.05) in how CD4 counts, HIV VL, and HAART affected the probability of clearance of each HPV infection. This new model predicts the 3-month probability of HPV infection clearance based on CD4 cell counts and other HIV-1-related

  14. Surrogate modeling of deformable joint contact using artificial neural networks.

    PubMed

    Eskinazi, Ilan; Fregly, Benjamin J

    2015-09-01

    Deformable joint contact models can be used to estimate loading conditions for cartilage-cartilage, implant-implant, human-orthotic, and foot-ground interactions. However, contact evaluations are often so expensive computationally that they can be prohibitive for simulations or optimizations requiring thousands or even millions of contact evaluations. To overcome this limitation, we developed a novel surrogate contact modeling method based on artificial neural networks (ANNs). The method uses special sampling techniques to gather input-output data points from an original (slow) contact model in multiple domains of input space, where each domain represents a different physical situation likely to be encountered. For each contact force and torque output by the original contact model, a multi-layer feed-forward ANN is defined, trained, and incorporated into a surrogate contact model. As an evaluation problem, we created an ANN-based surrogate contact model of an artificial tibiofemoral joint using over 75,000 evaluations of a fine-grid elastic foundation (EF) contact model. The surrogate contact model computed contact forces and torques about 1000 times faster than a less accurate coarse grid EF contact model. Furthermore, the surrogate contact model was seven times more accurate than the coarse grid EF contact model within the input domain of a walking motion. For larger input domains, the surrogate contact model showed the expected trend of increasing error with increasing domain size. In addition, the surrogate contact model was able to identify out-of-contact situations with high accuracy. Computational contact models created using our proposed ANN approach may remove an important computational bottleneck from musculoskeletal simulations or optimizations incorporating deformable joint contact models. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.
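The surrogate idea (sample a slow contact model, then train a fast feed-forward network) can be sketched in a few lines. The "slow" Hertz-like force law and all parameter values are invented stand-ins, and for brevity only the output layer is trained by least squares over fixed random tanh features, unlike the fully trained multi-layer networks in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def slow_contact_model(d):
    """Stand-in for an expensive contact evaluation: a Hertz-like normal
    force from penetration depth d (metres), zero when out of contact."""
    return 1e3 * np.maximum(d, 0.0) ** 1.5

# Sample the slow model over the input domain of interest.
d_train = rng.uniform(-1e-3, 3e-3, 2000)
f_train = slow_contact_model(d_train)
f_scale = f_train.max()

# One hidden layer of fixed random tanh units; least-squares output weights.
n_hidden = 100
W1 = rng.normal(0.0, 4.0, (1, n_hidden))
b1 = rng.normal(0.0, 2.0, n_hidden)

def hidden(d):
    return np.tanh((np.atleast_1d(d) / 3e-3)[:, None] @ W1 + b1)

w2, *_ = np.linalg.lstsq(hidden(d_train), f_train / f_scale, rcond=None)

def surrogate(d):
    """Fast ANN approximation of the slow contact model."""
    return hidden(d) @ w2 * f_scale

d_test = np.array([-5e-4, 0.0, 1e-3, 2.5e-3])
print(surrogate(d_test))
print(slow_contact_model(d_test))
```

Once trained, each surrogate evaluation is a single small matrix product, which is the source of the orders-of-magnitude speedup reported in the abstract.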

  15. Surrogate Modeling of Deformable Joint Contact using Artificial Neural Networks

    PubMed Central

    Eskinazi, Ilan; Fregly, Benjamin J.

    2016-01-01

    Deformable joint contact models can be used to estimate loading conditions for cartilage-cartilage, implant-implant, human-orthotic, and foot-ground interactions. However, contact evaluations are often so expensive computationally that they can be prohibitive for simulations or optimizations requiring thousands or even millions of contact evaluations. To overcome this limitation, we developed a novel surrogate contact modeling method based on artificial neural networks (ANNs). The method uses special sampling techniques to gather input-output data points from an original (slow) contact model in multiple domains of input space, where each domain represents a different physical situation likely to be encountered. For each contact force and torque output by the original contact model, a multi-layer feed-forward ANN is defined, trained, and incorporated into a surrogate contact model. As an evaluation problem, we created an ANN-based surrogate contact model of an artificial tibiofemoral joint using over 75,000 evaluations of a fine-grid elastic foundation (EF) contact model. The surrogate contact model computed contact forces and torques about 1000 times faster than a less accurate coarse grid EF contact model. Furthermore, the surrogate contact model was seven times more accurate than the coarse grid EF contact model within the input domain of a walking motion. For larger input domains, the surrogate contact model showed the expected trend of increasing error with increasing domain size. In addition, the surrogate contact model was able to identify out-of-contact situations with high accuracy. Computational contact models created using our proposed ANN approach may remove an important computational bottleneck from musculoskeletal simulations or optimizations incorporating deformable joint contact models. PMID:26220591

  16. An empirical model for earthquake probabilities in the San Francisco Bay region, California, 2002-2031

    USGS Publications Warehouse

    Reasenberg, P.A.; Hanks, T.C.; Bakun, W.H.

    2003-01-01

    The moment magnitude M 7.8 earthquake in 1906 profoundly changed the rate of seismic activity over much of northern California. The low rate of seismic activity in the San Francisco Bay region (SFBR) since 1906, relative to that of the preceding 55 yr, is often explained as a stress-shadow effect of the 1906 earthquake. However, existing elastic and visco-elastic models of stress change fail to fully account for the duration of the lowered rate of earthquake activity. We use variations in the rate of earthquakes as a basis for a simple empirical model for estimating the probability of M ≥6.7 earthquakes in the SFBR. The model preserves the relative magnitude distribution of sources predicted by the Working Group on California Earthquake Probabilities' (WGCEP, 1999; WGCEP, 2002) model of characterized ruptures on SFBR faults and is consistent with the occurrence of the four M ≥6.7 earthquakes in the region since 1838. When the empirical model is extrapolated 30 yr forward from 2002, it gives a probability of 0.42 for one or more M ≥6.7 in the SFBR. This result is lower than the probability of 0.5 estimated by WGCEP (1988), lower than the 30-yr Poisson probability of 0.60 obtained by WGCEP (1999) and WGCEP (2002), and lower than the 30-yr time-dependent probabilities of 0.67, 0.70, and 0.63 obtained by WGCEP (1990), WGCEP (1999), and WGCEP (2002), respectively, for the occurrence of one or more large earthquakes. This lower probability is consistent with the lack of adequate accounting for the 1906 stress-shadow in these earlier reports. The empirical model represents one possible approach toward accounting for the stress-shadow effect of the 1906 earthquake. However, the discrepancy between our result and those obtained with other modeling methods underscores the fact that the physics controlling the timing of earthquakes is not well understood. Hence, we advise against using the empirical model alone (or any other single probability model) for estimating the
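The 30-yr Poisson probabilities quoted above follow from P = 1 − exp(−λT) for a mean regional rate λ of M ≥ 6.7 events. A quick check of that relationship, using the 0.60 figure from the abstract:

```python
import math

# A 30-yr Poisson probability p of at least one event implies a mean rate
# lam = -ln(1 - p) / T; invert the quoted figure as a consistency check.
def rate_from_prob(p, T=30.0):
    return -math.log(1.0 - p) / T

def prob_from_rate(lam, T=30.0):
    return 1.0 - math.exp(-lam * T)

lam = rate_from_prob(0.60)          # implied regional rate, events/yr (~0.0305)
print(lam)
print(prob_from_rate(lam, T=30.0))  # recovers ~0.60
```

The empirical model's lower value of 0.42 would correspondingly imply a lower effective rate, consistent with the stress-shadow argument in the abstract.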

  17. Utilizing Adjoint-Based Error Estimates for Surrogate Models to Accurately Predict Probabilities of Events

    DOE PAGES

    Butler, Troy; Wildey, Timothy

    2018-01-01

    In this study, we develop a procedure to utilize error estimates for samples of a surrogate model to compute robust upper and lower bounds on estimates of probabilities of events. We show that these error estimates can also be used in an adaptive algorithm to simultaneously reduce the computational cost and increase the accuracy in estimating probabilities of events using computationally expensive high-fidelity models. Specifically, we introduce the notion of reliability of a sample of a surrogate model, and we prove that utilizing the surrogate model for the reliable samples and the high-fidelity model for the unreliable samples gives precisely the same estimate of the probability of the output event as would be obtained by evaluation of the original model for each sample. The adaptive algorithm uses the additional evaluations of the high-fidelity model for the unreliable samples to locally improve the surrogate model near the limit state, which significantly reduces the number of high-fidelity model evaluations as the limit state is resolved. Numerical results based on a recently developed adjoint-based approach for estimating the error in samples of a surrogate are provided to demonstrate (1) the robustness of the bounds on the probability of an event, and (2) that the adaptive enhancement algorithm provides a more accurate estimate of the probability of the QoI event than standard response surface approximation methods at a lower computational cost.
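The reliable/unreliable sample dichotomy can be demonstrated directly: a sample is reliable when the surrogate's error bound cannot flip its event membership, so falling back to the high-fidelity model only for unreliable samples reproduces the full high-fidelity probability estimate exactly. The models and error bound below are synthetic illustrations:

```python
import numpy as np

rng = np.random.default_rng(3)

def high_fidelity(x):
    """Stand-in for an expensive model; the event is {f(x) > 0.8}."""
    return np.sin(3 * x) + 0.5 * x

def surrogate(x):
    """Cheap approximation whose error is bounded by `error_bound` (synthetic)."""
    return np.sin(3 * x) + 0.5 * x + 0.05 * np.cos(20 * x)

error_bound = 0.05   # assumed pointwise bound on |surrogate - high_fidelity|
threshold = 0.8

samples = rng.uniform(0, 2, 100_000)
s = surrogate(samples)

# Reliable: the error bound cannot move the sample across the threshold.
reliable = np.abs(s - threshold) > error_bound
in_event = s > threshold
# Unreliable samples fall back to the high-fidelity model.
in_event[~reliable] = high_fidelity(samples[~reliable]) > threshold

p_event = float(in_event.mean())
frac_hifi = float(np.mean(~reliable))
print(p_event, frac_hifi)   # event probability; fraction needing hi-fi calls
```

Only the samples near the limit state require high-fidelity evaluations, which is what makes the adaptive refinement in the paper efficient.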

  18. Utilizing Adjoint-Based Error Estimates for Surrogate Models to Accurately Predict Probabilities of Events

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Butler, Troy; Wildey, Timothy

    In this study, we develop a procedure to utilize error estimates for samples of a surrogate model to compute robust upper and lower bounds on estimates of probabilities of events. We show that these error estimates can also be used in an adaptive algorithm to simultaneously reduce the computational cost and increase the accuracy in estimating probabilities of events using computationally expensive high-fidelity models. Specifically, we introduce the notion of reliability of a sample of a surrogate model, and we prove that utilizing the surrogate model for the reliable samples and the high-fidelity model for the unreliable samples gives precisely the same estimate of the probability of the output event as would be obtained by evaluation of the original model for each sample. The adaptive algorithm uses the additional evaluations of the high-fidelity model for the unreliable samples to locally improve the surrogate model near the limit state, which significantly reduces the number of high-fidelity model evaluations as the limit state is resolved. Numerical results based on a recently developed adjoint-based approach for estimating the error in samples of a surrogate are provided to demonstrate (1) the robustness of the bounds on the probability of an event, and (2) that the adaptive enhancement algorithm provides a more accurate estimate of the probability of the QoI event than standard response surface approximation methods at a lower computational cost.

  19. Statistical Surrogate Models for Estimating Probability of High-Consequence Climate Change

    NASA Astrophysics Data System (ADS)

    Field, R.; Constantine, P.; Boslough, M.

    2011-12-01

    We have posed the climate change problem in a framework similar to that used in safety engineering, by acknowledging that probabilistic risk assessments focused on low-probability, high-consequence climate events are perhaps more appropriate than studies focused simply on best estimates. To properly explore the tails of the distribution requires extensive sampling, which is not possible with existing coupled atmospheric models due to the high computational cost of each simulation. We have developed specialized statistical surrogate models (SSMs) that can be used to make predictions about the tails of the associated probability distributions. A SSM is different than a deterministic surrogate model in that it represents each climate variable of interest as a space/time random field, that is, a random variable for every fixed location in the atmosphere at all times. The SSM can be calibrated to available spatial and temporal data from existing climate databases, or to a collection of outputs from general circulation models. Because of its reduced size and complexity, the realization of a large number of independent model outputs from a SSM becomes computationally straightforward, so that quantifying the risk associated with low-probability, high-consequence climate events becomes feasible. A Bayesian framework was also developed to provide quantitative measures of confidence, via Bayesian credible intervals, to assess these risks. To illustrate the use of the SSM, we considered two collections of NCAR CCSM 3.0 output data. The first collection corresponds to average December surface temperature for years 1990-1999 based on a collection of 8 different model runs obtained from the Program for Climate Model Diagnosis and Intercomparison (PCMDI). We calibrated the surrogate model to the available model data and made various point predictions. We also analyzed average precipitation rate in June, July, and August over a 54-year period assuming a cyclic Y2K ocean model. We

  20. MRI-based stereolithographic models of the temporomandibular joint: technical innovation.

    PubMed

    Undt, G; Wild, K; Reuther, G; Ewers, R

    2000-10-01

    A new technique of manufacturing dual-colour stereolithographic models of hard and soft tissues of the temporomandibular joint (TMJ) is presented. Sagittal T1/PD weighted magnetic resonance (MR) images of joints with and without disc displacement were obtained in the closed and open mouth positions. Individual interactive contour identification of bony structures and the articular disc followed by binary interpolation provided the data for the generation of acrylic TMJ models. Three dimensional in vivo visualization of the articular disc in relation to bony structures in the closed and open mouth positions allows a new perception of normal and pathological TMJ anatomy.

  1. Leveraging R&D Resources via the Joint LLC Model

    NASA Astrophysics Data System (ADS)

    Ganz, Matthew W.

    2008-03-01

    Industrial scientific research labs have become increasingly stressed in recent years by a variety of external forces. Both corporations and government funding agencies have shifted their priorities from long-term fundamental research toward projects that have a high probability of shorter-term payoff. Industrial funding has been further stressed by an increasing demand for quarterly results and fierce global competition. Industry leaders are now asking their R&D labs for "home runs" and not just a solid base in the physical sciences. The end of the Cold War has also left the US without a declared enemy whose overt intention was to defeat us through a mastery of large-scale weaponry based upon exploitation of fundamental physics. This, when combined with a bona fide need for technology gap fillers to respond to on-the-ground threats in the current Middle East conflicts, has led to diminished government emphasis on long-term research in the physical sciences. Simultaneously, the global sources of R&D spending are expanding. The dramatic growth of private equity in the technology development arena has both drawn talent from industry and changed the expectations on researchers. R&D spending in China, India and many other countries is growing significantly. Thus, in order to remain relevant, industry must now keep its finger on the pulse of the hundreds of billions of dollars being invested privately and publicly around the world. HRL Laboratories, LLC in Malibu, California represents a unique and successful new business model for industrial R&D. HRL was founded by Howard Hughes in 1948 as the Hughes Research Laboratory and for more than four decades was the internal R&D lab for the Hughes Aircraft Company. After a series of mergers, acquisitions and divestitures over the past 15 years, HRL is now a stand-alone LLC that is owned jointly by General Motors and the Boeing Company. HRL, with a staff of about 300, performs R&D services for GM and Boeing as well as for

  2. Joint-layer encoder optimization for HEVC scalable extensions

    NASA Astrophysics Data System (ADS)

    Tsai, Chia-Ming; He, Yuwen; Dong, Jie; Ye, Yan; Xiu, Xiaoyu; He, Yong

    2014-09-01

    Scalable video coding provides an efficient solution to support video playback on heterogeneous devices with various channel conditions in heterogeneous networks. SHVC is the latest scalable video coding standard based on the HEVC standard. To improve enhancement layer coding efficiency, inter-layer prediction including texture and motion information generated from the base layer is used for enhancement layer coding. However, the overall performance of the SHVC reference encoder is not fully optimized because rate-distortion optimization (RDO) processes in the base and enhancement layers are independently considered. It is difficult to directly extend the existing joint-layer optimization methods to SHVC due to the complicated coding tree block splitting decisions and in-loop filtering process (e.g., deblocking and sample adaptive offset (SAO) filtering) in HEVC. To solve those problems, a joint-layer optimization method is proposed by adjusting the quantization parameter (QP) to optimally allocate the bit resource between layers. Furthermore, to make more proper resource allocation, the proposed method also considers the viewing probability of base and enhancement layers according to packet loss rate. Based on the viewing probability, a novel joint-layer RD cost function is proposed for joint-layer RDO encoding. The QP values of those coding tree units (CTUs) belonging to lower layers referenced by higher layers are decreased accordingly, and the QP values of those remaining CTUs are increased to keep total bits unchanged. Finally the QP values with minimal joint-layer RD cost are selected to match the viewing probability. The proposed method was applied to the third temporal level (TL-3) pictures in the Random Access configuration. Simulation results demonstrate that the proposed joint-layer optimization method can improve coding performance by 1.3% for these TL-3 pictures compared to the SHVC reference encoder without joint-layer optimization.
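The viewing-probability-weighted cost can be illustrated with a toy calculation. The weighting below (base-layer-only viewing probability equal to the enhancement-layer packet loss rate) is an assumed simplification for illustration, not the paper's exact joint-layer RD formulation:

```python
# Sketch of a viewing-probability-weighted joint-layer RD cost:
#   J = p_base_only * D_base + p_both * D_enh + lambda * (R_base + R_enh)
# where p_base_only is the (assumed) probability that only the base layer
# is decoded, e.g. because the enhancement-layer packet was lost.
def joint_rd_cost(d_base, d_enh, r_base, r_enh, loss_rate, lam):
    p_base_only = loss_rate
    p_both = 1.0 - loss_rate
    expected_distortion = p_base_only * d_base + p_both * d_enh
    return expected_distortion + lam * (r_base + r_enh)

# Toy numbers: distortions in MSE, rates in bits, 10% enhancement loss.
print(joint_rd_cost(40.0, 25.0, 1000, 1500, loss_rate=0.1, lam=0.01))
```

Minimizing such a cost over per-layer QP choices pushes bits toward whichever layer viewers are more likely to see, which is the intuition behind the QP adjustment described in the abstract.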

  3. Computer models for predicting the probability of violating CO air quality standards : the model SIMCO.

    DOT National Transportation Integrated Search

    1982-01-01

    This report presents the user instructions and data requirements for SIMCO, a combined simulation and probability computer model developed to quantify and evaluate carbon monoxide in roadside environments. The model permits direct determinations of t...

  4. Bayesian inference for joint modelling of longitudinal continuous, binary and ordinal events.

    PubMed

    Li, Qiuju; Pan, Jianxin; Belcher, John

    2016-12-01

    In medical studies, repeated measurements of continuous, binary and ordinal outcomes are routinely collected from the same patient. Instead of modelling each outcome separately, in this study we propose to jointly model the trivariate longitudinal responses, so as to take account of the inherent association between the different outcomes and thus improve statistical inference. This work is motivated by a large cohort study in the North West of England, involving trivariate responses from each patient: Body Mass Index; Depression (Yes/No), ascertained with a cut-off score of 8 or more on the Hospital Anxiety and Depression Scale; and Pain Interference, generated from the Medical Outcomes Study 36-item short-form health survey and returned on an ordinal 1-5 scale. There are well-established methods for combined continuous and binary, or even continuous and ordinal, responses, but little work has been done on the joint analysis of continuous, binary and ordinal responses. We propose conditional joint random-effects models, which take into account the inherent association between the continuous, binary and ordinal outcomes. Bayesian methods are used for statistical inference. Simulation studies show that, by jointly modelling the trivariate outcomes, the standard deviations of the parameter estimates are smaller and much more stable, leading to more efficient estimates and reliable statistical inference. In the real data analysis, the proposed joint analysis yields a much smaller deviance information criterion value than the separate analyses and exhibits other good statistical properties as well. © The Author(s) 2014.

  5. Joint Composable Object Model and LVC Methodology

    NASA Technical Reports Server (NTRS)

    Rheinsmith, Richard; Wallace, Jeffrey; Bizub, Warren; Ceranowicz, Andy; Cutts, Dannie; Powell, Edward T.; Gustavson, Paul; Lutz, Robert; McCloud, Terrell

    2010-01-01

    Within the Department of Defense, multiple architectures are created to serve one or several specific service- or mission-related LVC training goals. Multiple object models exist across and within those architectures, and these disparate object models are a major source of interoperability problems when developing and constructing training scenarios. The two most commonly used architectures are HLA and TENA, with DIS and CTIA following close behind in terms of the number of users. Although these architectures can share and exchange data, their underlying meta-models for runtime data exchange are quite different, requiring gateways/translators to bridge between the different object model representations. While the Department of Defense's use of gateways is generally effective in performing these functions, as the LVC environment grows, so too do the cost and complexity of these gateways. Coupled with the wide range of object models across the various user communities, this increases the propensity for run-time errors, stop-gap programming measures during coordinated exercises, or failure of the system as a whole due to unknown or unforeseen incompatibilities. The Joint Composable Object Model (JCOM) project was established under an M&S Steering Committee (MSSC)-sponsored effort, with oversight and control placed under the Joint Forces Command J7 Advanced Concepts Program Directorate. The purpose of this paper is to address the initial and current progress made in the following areas: the Conceptual Model Development Format, the Common Object Model, the Architecture Neutral Data Exchange Model (ANDEM), the association methodology that allows the re-use of multiple architecture object models, and the development of the prototype persistent reusable library.

  6. Bayesian seismic inversion based on rock-physics prior modeling for the joint estimation of acoustic impedance, porosity and lithofacies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Passos de Figueiredo, Leandro, E-mail: leandrop.fgr@gmail.com; Grana, Dario; Santos, Marcio

    We propose a Bayesian approach for seismic inversion to estimate acoustic impedance, porosity and lithofacies within the reservoir conditioned to post-stack seismic and well data. The link between elastic and petrophysical properties is given by a joint prior distribution for the logarithm of impedance and porosity, based on a rock-physics model. The well conditioning is performed through a background model obtained by well log interpolation. Two different approaches are presented: in the first approach, the prior is defined by a single Gaussian distribution, whereas in the second approach it is defined by a Gaussian mixture to represent the multimodal distribution of the well data and link the Gaussian components to different geological lithofacies. The forward model is based on a linearized convolutional model. For the single Gaussian case, we obtain an analytical expression for the posterior distribution, resulting in a fast algorithm to compute the solution of the inverse problem, i.e. the posterior distribution of acoustic impedance and porosity as well as the facies probability given the observed data. For the Gaussian mixture prior, it is not possible to obtain the distributions analytically, hence we propose a Gibbs algorithm to perform the posterior sampling and obtain several reservoir model realizations, allowing an uncertainty analysis of the estimated properties and lithofacies. Both methodologies are applied to a real seismic dataset with three wells to obtain 3D models of acoustic impedance, porosity and lithofacies. The methodologies are validated through a blind well test and compared to a standard Bayesian inversion approach. Using the probability of the reservoir lithofacies, we also compute a 3D isosurface probability model of the main oil reservoir in the studied field.
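
    For the single-Gaussian prior, the analytical posterior mentioned above is the standard linear-Gaussian conjugate update; a minimal sketch, where G stands in for the linearized convolutional forward operator and the names (m0, Cm, Cd) are chosen for illustration rather than taken from the paper:

```python
import numpy as np

def gaussian_posterior(G, d, m0, Cm, Cd):
    """Posterior mean and covariance for d = G m + noise, with Gaussian
    prior m ~ N(m0, Cm) and noise ~ N(0, Cd): the closed-form
    linear-Gaussian (Kalman-style) update."""
    K = Cm @ G.T @ np.linalg.inv(G @ Cm @ G.T + Cd)  # gain matrix
    mu = m0 + K @ (d - G @ m0)                        # posterior mean
    cov = Cm - K @ G @ Cm                             # posterior covariance
    return mu, cov
```

    This closed form is what makes the single-Gaussian inversion fast; the Gaussian-mixture prior loses this property, which is why the paper resorts to Gibbs sampling.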

  7. Dynamic modeling and optimal joint torque coordination of advanced robotic systems

    NASA Astrophysics Data System (ADS)

    Kang, Hee-Jun

    The development of an efficient dynamic modeling algorithm, and the subsequent optimal joint input load coordination of advanced robotic systems for industrial application, is documented. A closed-form dynamic modeling algorithm for general closed-chain robotic linkage systems is presented. The algorithm is based on the transfer of system dependence from a set of open-chain Lagrangian coordinates to any desired generalized coordinate set of the closed chain. Three different techniques for evaluating the kinematic closed-chain constraints allow the dynamic modeling parameters to be represented in terms of system generalized coordinates, with no restriction regarding kinematic redundancy. The total computational requirement of the closed-chain system model is largely dependent on the computation required for the dynamic model of an open kinematic chain. To improve computational efficiency, an existing open-chain KIC-based dynamic formulation is modified through the introduction of the generalized augmented body concept. This algorithm yields a 44% computational saving over the current optimized O(N^4) algorithm (5995 when N = 6). As a means of resolving redundancies in advanced robotic systems, local joint torque optimization is applied to use actuator power effectively while avoiding joint torque limits. The stability problem in local joint torque optimization schemes is eliminated by using fictitious dissipating forces which act in the necessary null space. The performance index representing the global torque norm is shown to be satisfactory. In addition, the resulting joint motion trajectory becomes conservative, after a transient stage, for repetitive cyclic end-effector trajectories. The effectiveness of the null space damping method is shown. The modular robot, which is built of well defined structural modules from a finite-size inventory and is controlled by one general computer system, is another class of evolving
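
    The null-space damping idea, injecting fictitious dissipating forces that act only in the self-motion (null) space of a kinematically redundant arm, can be illustrated generically; this is a sketch of the concept, not the author's exact formulation, and the function name and gain k are illustrative:

```python
import numpy as np

def damped_nullspace_qdot(J, xdot, qdot, k=1.0):
    """Resolve redundancy: pseudoinverse task-space tracking plus a
    dissipative damping term projected into the Jacobian null space, so the
    damping never disturbs the end-effector task."""
    J_pinv = np.linalg.pinv(J)
    N = np.eye(J.shape[1]) - J_pinv @ J      # null-space projector
    return J_pinv @ xdot + N @ (-k * qdot)   # fictitious dissipating term
```

    Because the damping term lies entirely in the null space, it can stabilize the drifting self-motion that plagues purely local torque optimization without affecting the commanded end-effector trajectory.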

  8. Use of two-dimensional transmission photoelastic models to study stresses in double-lap bolted joints

    NASA Technical Reports Server (NTRS)

    Hyer, M. W.; Liu, D. H.

    1981-01-01

    The stress distribution in two hole connectors in a double lap joint configuration was studied. The following steps are described: (1) fabrication of photoelastic models of double lap double hole joints designed to determine the stresses in the inner lap; (2) assessment of the effects of joint geometry on the stresses in the inner lap; and (3) quantification of differences in the stresses near the two holes. The two holes were on the centerline of the joint and the joints were loaded in tension, parallel to the centerline. Acrylic slip fit pins through the holes served as fasteners. Two dimensional transmission photoelastic models were fabricated by using transparent acrylic outer laps and a photoelastic model material for the inner laps. It is concluded that the photoelastic fringe patterns which are visible when the models are loaded are due almost entirely to stresses in the inner lap.

  9. A Cervico-Thoraco-Lumbar Multibody Dynamic Model for the Estimation of Joint Loads and Muscle Forces.

    PubMed

    Khurelbaatar, Tsolmonbaatar; Kim, Kyungsoo; Hyuk Kim, Yoon

    2015-11-01

    Because of the difficulties involved in the direct measurement of joint loads, computational musculoskeletal models have been developed to predict mechanical joint loads on the human spine, such as the forces and moments applied to vertebral and facet joints and the forces that act on ligaments and muscles. However, many whole-spine models lack certain elements; for example, the detailed facet joints in the cervical region or in the whole spine region may not be implemented. In this study, a detailed cervico-thoraco-lumbar multibody musculoskeletal model with all major ligaments, separated structures for facet contact and intervertebral disk joints, and the rib cage was developed. The model was validated by comparing the intersegmental rotations, ligament tensile forces, facet joint contact forces, compressive and shear forces on disks, and muscle forces to those reported in previous experimental and computational studies, both by region (cervical, thoracic, or lumbar) and for the whole model. The comparisons demonstrated that our whole-spine model is consistent with in vitro and in vivo experimental studies and with computational studies. The model developed in this study can be used in further studies to better understand spine structures and the injury mechanisms of spinal disorders.

  10. A Taxonomy of Latent Structure Assumptions for Probability Matrix Decomposition Models.

    ERIC Educational Resources Information Center

    Meulders, Michel; De Boeck, Paul; Van Mechelen, Iven

    2003-01-01

    Proposed a taxonomy of latent structure assumptions for probability matrix decomposition (PMD) that includes the original PMD model and a three-way extension of the multiple classification latent class model. Simulation study results show the usefulness of the taxonomy. (SLD)

  11. Probability of spacesuit-induced fingernail trauma is associated with hand circumference.

    PubMed

    Opperman, Roedolph A; Waldie, James M A; Natapoff, Alan; Newman, Dava J; Jones, Jeffrey A

    2010-10-01

    A significant number of astronauts sustain hand injuries during extravehicular activity training and operations. These hand injuries have been known to cause fingernail delamination (onycholysis) that requires medical intervention. This study investigated correlations between the anthropometrics of the hand and susceptibility to injury. The analysis explored the hypothesis that crewmembers with a high finger-to-hand size ratio are more likely to experience injuries. A database of 232 crewmembers' injury records and anthropometrics was sourced from NASA Johnson Space Center. No significant effect of finger-to-hand size was found on the probability of injury, but circumference and width of the metacarpophalangeal (MCP) joint were found to be significantly associated with injuries by the Kruskal-Wallis test. A multivariate logistic regression showed that hand circumference is the dominant effect on the likelihood of onycholysis. Male crewmembers with a hand circumference > 22.86 cm (9") have a 19.6% probability of finger injury, but those with hand circumferences ≤ 22.86 cm (9") have only a 5.6% chance of injury. Findings were similar for female crewmembers. This increased probability may be due to constriction at large MCP joints by the current NASA Phase VI glove. Constriction may lead to occlusion of vascular flow to the fingers that may increase the chances of onycholysis. Injury rates are lower on gloves such as the superseded series 4000 and the Russian Orlan that provide more volume for the MCP joint. This suggests that onycholysis can be reduced by modifying the design of the current gloves at the MCP joint.
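
    The two injury probabilities reported above imply roughly a four-fold increase in the odds of onycholysis for large-circumference hands; a quick check on the odds scale:

```python
def odds(p):
    """Convert a probability to odds, p / (1 - p)."""
    return p / (1.0 - p)

# Probabilities reported in the study for male crewmembers
p_large = 0.196   # hand circumference > 22.86 cm (9 in)
p_small = 0.056   # hand circumference <= 22.86 cm

odds_ratio = odds(p_large) / odds(p_small)   # roughly 4.1
```

    An odds ratio around 4 is consistent with hand circumference dominating the logistic regression, as the abstract reports.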

  12. A Joint Model for Vitamin K-Dependent Clotting Factors and Anticoagulation Proteins.

    PubMed

    Ooi, Qing Xi; Wright, Daniel F B; Tait, R Campbell; Isbister, Geoffrey K; Duffull, Stephen B

    2017-12-01

    Warfarin acts by inhibiting the reduction of vitamin K (VK) to its active form, thereby decreasing the production of VK-dependent coagulation proteins. The aim of this research was to develop a joint model for the VK-dependent clotting factors II, VII, IX and X, and the anticoagulation proteins, proteins C and S, during warfarin initiation. Data from 18 patients with atrial fibrillation who had warfarin therapy initiated were available for analysis. Nine blood samples were collected from each subject at baseline, and at 1-5, 8, 15 and 29 days after warfarin initiation, and assayed for factors II, VII, IX and X, and proteins C and S. Warfarin concentration-time data were not available. The coagulation proteins data were modelled in a stepwise manner using NONMEM® Version 7.2. In the first stage, each of the coagulation proteins was modelled independently using a kinetic-pharmacodynamic model. In the subsequent step, the six kinetic-pharmacodynamic models were combined into a single joint model. One patient was administered VK and was excluded from the analysis. Each kinetic-pharmacodynamic model consisted of two parts: (1) a common one-compartment pharmacokinetic model with first-order absorption and elimination for warfarin; and (2) an inhibitory Emax model linked to a turnover model for the coagulation proteins. In the joint model, an unexpected pharmacodynamic lag was identified, and the estimated degradation half-lives of the VK-dependent coagulation proteins were in agreement with previously published values. The model provided an adequate fit to the observed data. The joint model represents the first work to quantify the influence of warfarin on all six VK-dependent coagulation proteins simultaneously. Future work will expand the model to predict the influence of exogenously administered VK on the time course of clotting factor concentrations after warfarin overdose and during perioperative warfarin reversal procedures.
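
    The inhibitory Emax model linked to a turnover model described above has the standard indirect-response form dP/dt = k_in * (1 - Emax*C/(EC50 + C)) - k_out * P, where C is warfarin concentration and P the coagulation protein level. A minimal Euler-integration sketch with purely illustrative parameter values (not those estimated in the paper):

```python
import numpy as np

def turnover(c_w, t, k_in=1.0, k_out=0.1, emax=1.0, ec50=1.0):
    """Euler integration of an indirect-response (turnover) model in which
    warfarin concentration c_w(t) inhibits protein production:
        dP/dt = k_in * (1 - Emax*C/(EC50 + C)) - k_out * P
    starting from the baseline steady state P(0) = k_in / k_out."""
    p = k_in / k_out
    out = [p]
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        c = c_w(t[i - 1])
        dp = k_in * (1 - emax * c / (ec50 + c)) - k_out * p
        p += dt * dp
        out.append(p)
    return np.array(out)
```

    With zero warfarin the protein stays at its baseline; with sustained inhibition it decays at the rate k_out, which is why the degradation half-lives are identifiable even without warfarin concentration data.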

  13. Joint modeling and registration of cell populations in cohorts of high-dimensional flow cytometric data.

    PubMed

    Pyne, Saumyadipta; Lee, Sharon X; Wang, Kui; Irish, Jonathan; Tamayo, Pablo; Nazaire, Marc-Danie; Duong, Tarn; Ng, Shu-Kay; Hafler, David; Levy, Ronald; Nolan, Garry P; Mesirov, Jill; McLachlan, Geoffrey J

    2014-01-01

    In biomedical applications, an experimenter encounters different potential sources of variation in data, such as individual samples, multiple experimental conditions, and multivariate responses of a panel of markers such as from a signaling network. In multiparametric cytometry, which is often used for analyzing patient samples, such issues are critical. While computational methods can identify cell populations in individual samples, without the ability to automatically match them across samples it is difficult to compare and characterize the populations in typical experiments, such as those responding to various stimulations or distinctive of particular patients or time-points, especially when there are many samples. Joint Clustering and Matching (JCM) is a multi-level framework for simultaneous modeling and registration of populations across a cohort. JCM models every population with a robust multivariate probability distribution. Simultaneously, JCM fits a random-effects model to construct an overall batch template, which is used for registering populations across samples and for classifying new samples. By tackling systems-level variation, JCM supports practical biomedical applications involving large cohorts. Software for fitting the JCM models has been implemented in an R package, EMMIX-JCM, available from http://www.maths.uq.edu.au/~gjm/mix_soft/EMMIX-JCM/.

  14. Numerical Model for the Study of the Strength and Failure Modes of Rock Containing Non-Persistent Joints

    NASA Astrophysics Data System (ADS)

    Vergara, Maximiliano R.; Van Sint Jan, Michel; Lorig, Loren

    2016-04-01

    The mechanical behavior of rock containing parallel non-persistent joint sets was studied using a numerical model. The numerical analysis was performed using the discrete element software UDEC. The use of fictitious joints allowed the inclusion of non-persistent joints in the model domain and the simulation of progressive failure due to propagation of existing fractures. The material and joint mechanical parameters used in the model were obtained from experimental results. The results of the numerical model showed good agreement with the strength and failure modes observed in the laboratory, and revealed a large anisotropy in strength resulting from variation of the joint orientation. Lower specimen strength was caused by the coalescence of fractures belonging to parallel joint sets. A correlation was found between the geometrical parameters of the joint sets and their contribution to the global strength of the specimen. The results suggest that, for the same dip angle with respect to the principal stresses, the uniaxial strength depends primarily on the joint spacing and the angle between joint tips, and less on the length of the rock bridges (persistency). A relation between joint geometrical parameters was found from which the resulting failure mode can be predicted.

  15. Trending in Probability of Collision Measurements via a Bayesian Zero-Inflated Beta Mixed Model

    NASA Technical Reports Server (NTRS)

    Vallejo, Jonathon; Hejduk, Matt; Stamey, James

    2015-01-01

    We investigate the performance of a generalized linear mixed model in predicting the Probabilities of Collision (Pc) for conjunction events. Specifically, we apply this model to the log10 transformation of these probabilities and argue that this transformation yields values that can be considered bounded in practice. Additionally, this bounded random variable, after scaling, is zero-inflated. Consequently, we model these values using the zero-inflated Beta distribution, and utilize the Bayesian paradigm and the mixed model framework to borrow information from past and current events. This provides a natural way to model the data and provides a basis for answering questions of interest, such as what is the likelihood of observing a probability of collision equal to the effective value of zero on a subsequent observation.
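
    A zero-inflated Beta distribution mixes a point mass at zero with a Beta density on (0, 1); a plain-Python sketch of its density, with illustrative parameter names (pi0 for the zero-inflation probability, a and b for the Beta shapes):

```python
import math

def zib_pdf(y, pi0, a, b):
    """Zero-inflated Beta density sketch: probability mass pi0 at y = 0,
    and (1 - pi0) times a Beta(a, b) density on the open interval (0, 1)."""
    if y == 0.0:
        return pi0                      # point mass at exactly zero
    beta_fn = math.gamma(a) * math.gamma(b) / math.gamma(a + b)
    return (1 - pi0) * y**(a - 1) * (1 - y)**(b - 1) / beta_fn
```

    In the paper's setting, y is the scaled log10(Pc), and pi0 captures the excess of observations at the effective value of zero.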

  16. Modelling Technique for Demonstrating Gravity Collapse Structures in Jointed Rock.

    ERIC Educational Resources Information Center

    Stimpson, B.

    1979-01-01

    Described is a base-friction modeling technique for studying the development of collapse structures in jointed rocks. A moving belt beneath weak material is designed to simulate gravity. A description is given of the model frame construction. (Author/SA)

  17. Numerical Investigation of the Dynamic Properties of Intermittent Jointed Rock Models Subjected to Cyclic Uniaxial Compression

    NASA Astrophysics Data System (ADS)

    Liu, Yi; Dai, Feng; Zhao, Tao; Xu, Nu-wen

    2017-01-01

    Intermittent jointed rocks, which exist in a myriad of engineering projects, are extraordinarily susceptible to cyclic loading. Understanding the dynamic fatigue properties of jointed rocks is necessary for evaluating the stability of rock engineering structures. This study numerically investigated the influences of cyclic loading conditions (i.e., frequency, maximum stress and amplitude) and joint geometric configurations (i.e., dip angle, persistency and interspace) on the dynamic fatigue mechanisms of jointed rock models. A reduction model of stiffness and strength was first proposed, and then sixteen cyclic uniaxial loading tests with distinct loading parameters and joint geometries were simulated. Our results indicate that the reduction model can effectively reproduce the hysteresis loops and the accumulative plastic deformation of jointed rocks in the cyclic process. Both the loading parameters and the joint geometries significantly affect the dynamic properties, including the irreversible strain, damage evolution, dynamic residual strength and fatigue life. Three failure modes of jointed rocks, principally controlled by the joint geometries, occur in the simulations: splitting failure through the entire rock sample, sliding failure along joint planes, and mixed failure. Furthermore, the progressive failure processes of the jointed rock samples are numerically observed, and the different loading stages can be distinguished by the relationship between the number of broken bonds and the axial stress.

  18. A transition-based joint model for disease named entity recognition and normalization.

    PubMed

    Lou, Yinxia; Zhang, Yue; Qian, Tao; Li, Fei; Xiong, Shufeng; Ji, Donghong

    2017-08-01

    Disease named entities play a central role in many areas of biomedical research, and automatic recognition and normalization of such entities have received increasing attention in biomedical research communities. Existing methods typically used pipeline models with two independent phases: (i) a disease named entity recognition (DER) system is used to find the boundaries of mentions in text and (ii) a disease named entity normalization (DEN) system is used to connect the mentions recognized to concepts in a controlled vocabulary. The main problems of such models are: (i) there is error propagation from DER to DEN and (ii) DEN is useful for DER, but pipeline models cannot utilize this. We propose a transition-based model to jointly perform disease named entity recognition and normalization, casting the output construction process into an incremental state transition process, learning sequences of transition actions globally, which correspond to joint structural outputs. Beam search and online structured learning are used, with learning being designed to guide search. Compared with the only existing method for joint DEN and DER, our method allows non-local features to be used, which significantly improves the accuracies. We evaluate our model on two corpora: the BioCreative V Chemical Disease Relation (CDR) corpus and the NCBI disease corpus. Experiments show that our joint framework achieves significantly higher performances compared to competitive pipeline baselines. Our method compares favourably to other state-of-the-art approaches. Data and code are available at https://github.com/louyinxia/jointRN. dhji@whu.edu.cn. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
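
    The beam-search component of a transition-based model like the one above keeps only the top-scoring partial transition sequences at each step. A generic sketch follows; the expand and score functions are placeholders standing in for the model's transition actions and learned scoring function, not the paper's implementation:

```python
def beam_search(start, expand, score, beam_width=4, steps=3):
    """Generic beam search: expand every state in the beam, then keep the
    top-scoring beam_width candidates. In a transition-based joint model,
    expand would propose actions (e.g., shift, reduce, link-to-concept)
    and score would be the learned global model score."""
    beam = [start]
    for _ in range(steps):
        candidates = [c for state in beam for c in expand(state)]
        beam = sorted(candidates, key=score, reverse=True)[:beam_width]
    return beam[0]
```

    Scoring whole action sequences globally, rather than greedily, is what lets the joint model recover from local recognition errors that a pipeline would propagate.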

  19. [Prolonged mechanical ventilation probability model].

    PubMed

    Añón, J M; Gómez-Tello, V; González-Higueras, E; Oñoro, J J; Córcoles, V; Quintana, M; López-Martínez, J; Marina, L; Choperena, G; García-Fernández, A M; Martín-Delgado, C; Gordo, F; Díaz-Alersi, R; Montejo, J C; Lorenzo, A García de; Pérez-Arriaga, M; Madero, R

    2012-10-01

    To design a probability model for prolonged mechanical ventilation (PMV) using variables obtained during the first 24 hours of the start of MV. An observational, prospective, multicenter cohort study. Thirteen Spanish medical-surgical intensive care units. Adult patients requiring mechanical ventilation for more than 24 hours. None. APACHE II, SOFA, demographic data, clinical data, reason for mechanical ventilation, comorbidity, and functional condition. A multivariate risk model was constructed. The model considered a dependent variable with three possible states: 1. Early mortality; 2. Early extubation; and 3. PMV. Of the 1661 included patients, 67.9% (n=1127) were men. Age: 62.1±16.2 years. APACHE II: 20.3±7.5. Total SOFA: 8.4±3.5. The APACHE II and SOFA scores were higher in patients ventilated for 7 or more days (p=0.04 and p=0.0001, respectively). Noninvasive ventilation failure was related to PMV (p=0.005). A multivariate model for the three outcomes described above was generated. The overall accuracy of the model in the training and validation samples was 0.763 (95%CI: 0.729-0.804) and 0.751 (95%CI: 0.672-0.816), respectively. The likelihood ratios (LRs) for early extubation, at a cutoff point of 0.65, in the training sample were LR(+): 2.37 (95%CI: 1.77-3.19) and LR(-): 0.47 (95%CI: 0.41-0.55). The LRs for the early mortality model, at a cutoff point of 0.73, in the training sample were LR(+): 2.64 (95%CI: 2.01-3.4) and LR(-): 0.39 (95%CI: 0.30-0.51). The proposed model could be a helpful tool in decision making. However, because of its moderate accuracy, it should be considered a first approach, and the results should be corroborated by further studies involving larger samples and the use of standardized criteria. Copyright © 2011 Elsevier España, S.L. y SEMICYUC. All rights reserved.
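
    A likelihood ratio updates a pre-test probability through the odds scale (post-test odds = pre-test odds × LR). Using the reported LR(+) of 2.64 for early mortality with a hypothetical 20% pre-test probability as an example:

```python
def posttest_prob(pretest_p, lr):
    """Convert probability to odds, apply the likelihood ratio, convert back."""
    pre_odds = pretest_p / (1.0 - pretest_p)
    post_odds = pre_odds * lr
    return post_odds / (1.0 + post_odds)

# With a 20% pre-test probability and the reported LR(+) = 2.64:
p = posttest_prob(0.20, 2.64)   # about 0.40
```

    LRs in the 2-3 range, as reported here, shift probabilities only moderately, which is consistent with the authors' caution about the model's moderate accuracy.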

  20. LES/PDF studies of joint statistics of mixture fraction and progress variable in piloted methane jet flames with inhomogeneous inlet flows

    NASA Astrophysics Data System (ADS)

    Zhang, Pei; Barlow, Robert; Masri, Assaad; Wang, Haifeng

    2016-11-01

    The mixture fraction and progress variable are often used as independent variables for describing turbulent premixed and non-premixed flames. There is a growing interest in using these two variables for describing partially premixed flames. The joint statistical distribution of the mixture fraction and progress variable is of great interest in developing models for partially premixed flames. In this work, we conduct predictive studies of the joint statistics of mixture fraction and progress variable in a series of piloted methane jet flames with inhomogeneous inlet flows. The employed models combine large eddy simulations with the Monte Carlo probability density function (PDF) method. The joint PDFs and marginal PDFs are examined in detail by comparing the model predictions and the measurements. Different presumed shapes of the joint PDFs are also evaluated.

  1. Dynamic properties of human incudostapedial joint-Experimental measurement and finite element modeling.

    PubMed

    Jiang, Shangyuan; Gan, Rong Z

    2018-04-01

    The incudostapedial joint (ISJ) is a synovial joint connecting the incus and stapes in the middle ear. Mechanical properties of the ISJ directly affect sound transmission from the tympanic membrane to the cochlea. However, how ISJ properties change with frequency has not been investigated. In this paper, we report the dynamic properties of the human ISJ measured in eight samples using a dynamic mechanical analyzer (DMA) for frequencies from 1 to 80 Hz at three temperatures of 5, 25 and 37 °C. The frequency-temperature superposition (FTS) principle was used to extrapolate the results to 8 kHz. The complex modulus of ISJ was measured with a mean storage modulus of 1.14 MPa at 1 Hz that increased to 3.01 MPa at 8 kHz, and a loss modulus that increased from 0.07 to 0.47 MPa. A 3-dimensional finite element (FE) model consisting of the articular cartilage, joint capsule and synovial fluid was then constructed to derive mechanical properties of ISJ components by matching the model results to experimental data. Modeling results showed that mechanical properties of the joint capsule and synovial fluid affected the dynamic behavior of the joint. This study contributes to a better understanding of the structure-function relationship of the ISJ for sound transmission. Copyright © 2018. Published by Elsevier Ltd.

  2. Efficient finite element modelling for the investigation of the dynamic behaviour of a structure with bolted joints

    NASA Astrophysics Data System (ADS)

    Omar, R.; Rani, M. N. Abdul; Yunus, M. A.; Mirza, W. I. I. Wan Iskandar; Zin, M. S. Mohd

    2018-04-01

    A simple structure with bolted joints consists of the structural components, bolts and nuts. There are several methods for modelling structures with bolted joints; however, there is no reliable, efficient and economical modelling method that can accurately predict their dynamic behaviour. Explained in this paper is an investigation that was conducted to obtain an appropriate modelling method for bolted joints. This was carried out by evaluating four different finite element (FE) models of the assembled plates and bolts, namely the solid plates-bolts model, the plates without bolt model, the hybrid plates-bolts model and the simplified plates-bolts model. FE modal analysis was conducted for all four initial FE models of the bolted joints. Results of the FE modal analysis were compared with experimental modal analysis (EMA) results. EMA was performed to extract the natural frequencies and mode shapes of the physical test structure with bolted joints. The evaluation was made by comparing the number of nodes, number of elements, elapsed computer processing unit (CPU) time, and the total percentage of errors of each initial FE model against the EMA result. The evaluation showed that the simplified plates-bolts model could most accurately predict the dynamic behaviour of the structure with bolted joints. This study proved that reliable, efficient and economical modelling of bolted joints, mainly the representation of the bolting, plays a crucial role in ensuring the accuracy of the dynamic behaviour prediction.

  3. Can generic knee joint models improve the measurement of osteoarthritic knee kinematics during squatting activity?

    PubMed

    Clément, Julien; Dumas, Raphaël; Hagemeister, Nicola; de Guise, Jaques A

    2017-01-01

    Knee joint kinematics derived from multi-body optimisation (MBO) still requires evaluation. The objective of this study was to corroborate model-derived kinematics of osteoarthritic knees obtained using four generic knee joint models used in musculoskeletal modelling - spherical, hinge, degree-of-freedom coupling curves and parallel mechanism - against reference knee kinematics measured by stereo-radiography. Root mean square errors ranged from 0.7° to 23.4° for knee rotations and from 0.6 to 9.0 mm for knee displacements. Model-derived knee kinematics computed from generic knee joint models was inaccurate. Future developments and experiments should improve the reliability of osteoarthritic knee models in MBO and musculoskeletal modelling.

  4. Modeling of correlated data with informative cluster sizes: An evaluation of joint modeling and within-cluster resampling approaches.

    PubMed

    Zhang, Bo; Liu, Wei; Zhang, Zhiwei; Qu, Yanping; Chen, Zhen; Albert, Paul S

    2017-08-01

    Joint modeling and within-cluster resampling are two approaches that are used for analyzing correlated data with informative cluster sizes. Motivated by a developmental toxicity study, we examined the performances and validity of these two approaches in testing covariate effects in generalized linear mixed-effects models. We show that the joint modeling approach is robust to the misspecification of cluster size models in terms of Type I and Type II errors when the corresponding covariates are not included in the random effects structure; otherwise, statistical tests may be affected. We also evaluate the performance of the within-cluster resampling procedure and thoroughly investigate the validity of it in modeling correlated data with informative cluster sizes. We show that within-cluster resampling is a valid alternative to joint modeling for cluster-specific covariates, but it is invalid for time-dependent covariates. The two methods are applied to a developmental toxicity study that investigated the effect of exposure to diethylene glycol dimethyl ether.
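
    Within-cluster resampling draws one observation per cluster, so every cluster contributes equally regardless of its size, removing the weighting that makes a pooled estimate biased under informative cluster sizes. A small sketch with illustrative names:

```python
import random

def within_cluster_resample(clusters, n_resamples=1000, seed=0):
    """Within-cluster resampling sketch: draw one observation per cluster,
    compute the statistic of interest (here, a mean), repeat many times,
    and average the resampled estimates."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(n_resamples):
        sample = [rng.choice(obs) for obs in clusters.values()]
        estimates.append(sum(sample) / len(sample))
    return sum(estimates) / len(estimates)
```

    The per-resample statistic only sees cluster-level information, which is why the procedure is valid for cluster-specific covariates but, as the abstract notes, breaks down for time-dependent covariates.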

  5. How weak values emerge in joint measurements on cloned quantum systems.

    PubMed

    Hofmann, Holger F

    2012-07-13

    A statistical analysis of optimal universal cloning shows that it is possible to identify an ideal (but nonpositive) copying process that faithfully maps all properties of the original Hilbert space onto two separate quantum systems, resulting in perfect correlations for all observables. The joint probabilities for noncommuting measurements on separate clones then correspond to the real parts of the complex joint probabilities observed in weak measurements on a single system, where the measurements on the two clones replace the corresponding sequence of weak measurement and postselection. The imaginary parts of weak measurement statistics can be obtained by replacing the cloning process with a partial swap operation. A controlled-swap operation combines both processes, making the complete weak measurement statistics accessible as a well-defined contribution to the joint probabilities of fully resolved projective measurements on the two output systems.

  6. Modelling the Probability of Landslides Impacting Road Networks

    NASA Astrophysics Data System (ADS)

    Taylor, F. E.; Malamud, B. D.

    2012-04-01

    During a landslide triggering event, the threat of landslides blocking roads poses a risk to logistics, rescue efforts and communities dependent on those road networks. Here we present preliminary results of a stochastic model we have developed to evaluate the probability of landslides intersecting a simple road network during a landslide triggering event, and apply simple network indices to measure the state of the road network in the affected region. A 4000 x 4000 cell array with a 5 m x 5 m resolution was used, with a pre-defined simple road network laid onto it, and landslides 'randomly' dropped onto it. Landslide areas (AL) were randomly selected from a three-parameter inverse gamma probability density function, consisting of a power-law decay of about -2.4 for medium and large values of AL and an exponential rollover for small values of AL; the rollover (maximum probability) occurs at about AL = 400 m2. This statistical distribution was chosen based on three substantially complete triggered landslide inventories recorded in existing literature. The number of landslide areas (NL) selected for each triggered event iteration was chosen to have an average density of 1 landslide km-2, i.e. NL = 400 landslide areas chosen randomly for each iteration, and was based on several existing triggered landslide event inventories. A simple road network was chosen, in a 'T' shape configuration, with one road of 1 x 4000 cells (5 m x 20 km) joined by another road of 1 x 2000 cells (5 m x 10 km). The landslide areas were then randomly 'dropped' over the road array and indices such as the location, size (ABL) and number of road blockages (NBL) recorded. This process was performed 500 times (iterations) in a Monte-Carlo type simulation. Initial results show that for a landslide triggering event with 400 landslides over a 400 km2 region, the number of road blocks per iteration, NBL, ranges from 0 to 7. The average blockage area for the 500 iterations (ĀBL) is about 3000 m2.
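    A single triggering event of the kind described above can be sketched in a few lines. This is a simplification, not the authors' model: it uses one straight road instead of the 'T' network, treats landslides as circles, and the inverse-gamma parameter values are illustrative choices consistent with the stated power-law decay near -2.4 and rollover near 400 m2.

```python
import numpy as np

rng = np.random.default_rng(42)

# Three-parameter inverse-gamma landslide-area distribution (shape, scale,
# location in km^2). Illustrative values: decay exponent -(rho+1) = -2.4 and
# mode loc + scale/(rho+1) of roughly 400 m^2.
rho, scale_km2, loc_km2 = 1.4, 1.28e-3, -1.32e-4

def sample_landslide_areas(n):
    """Draw areas (m^2) as loc + scale / Gamma(rho), keeping positive draws."""
    g = rng.gamma(rho, size=n)
    areas_km2 = loc_km2 + scale_km2 / g
    return areas_km2[areas_km2 > 0] * 1e6

# One triggering event: N_L = 400 landslides over a 20 km x 20 km region,
# with a single straight road along the line y = 0.
region_m, n_slides = 20_000.0, 400
areas = sample_landslide_areas(n_slides)
radii = np.sqrt(areas / np.pi)            # circular landslide footprints
y = rng.uniform(-region_m / 2, region_m / 2, size=len(areas))

# A landslide blocks the road if its circle crosses the line y = 0.
n_blocked = int(np.sum(np.abs(y) < radii))
```

    Repeating the event many times (the paper uses 500 iterations) gives the distribution of blockage counts per event; with these illustrative parameters the per-event counts fall in the same low single-digit range the abstract reports.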

  7. Recent Advances in Model-Assisted Probability of Detection

    NASA Technical Reports Server (NTRS)

    Thompson, R. Bruce; Brasche, Lisa J.; Lindgren, Eric; Swindell, Paul; Winfree, William P.

    2009-01-01

    The increased role played by probability of detection (POD) in structural integrity programs, combined with the significant time and cost associated with the purely empirical determination of POD, provides motivation for alternate means to estimate this important metric of NDE techniques. One approach to make the process of POD estimation more efficient is to complement limited empirical experiments with information from physics-based models of the inspection process or controlled laboratory experiments. The Model-Assisted Probability of Detection (MAPOD) Working Group was formed by the Air Force Research Laboratory, the FAA Technical Center, and NASA to explore these possibilities. Since the 2004 inception of the MAPOD Working Group, 11 meetings have been held in conjunction with major NDE conferences. This paper will review the accomplishments of this group, which includes over 90 members from around the world. Included will be a discussion of strategies developed to combine physics-based and empirical understanding, draft protocols that have been developed to guide application of the strategies, and demonstrations that have been or are being carried out in a number of countries. The talk will conclude with a discussion of future directions, which will include documentation of benefits via case studies, development of formal protocols for engineering practice, as well as a number of specific technical issues.

  8. Computer simulation of random variables and vectors with arbitrary probability distribution laws

    NASA Technical Reports Server (NTRS)

    Bogdan, V. M.

    1981-01-01

    Assume that there is given an arbitrary n-dimensional probability distribution F. A recursive construction is found for a sequence of functions x_1 = f_1(U_1, ..., U_n), ..., x_n = f_n(U_1, ..., U_n) such that if U_1, ..., U_n are independent random variables having a uniform distribution over the open interval (0,1), then the joint distribution of the variables x_1, ..., x_n coincides with the distribution F. Since uniform independent random variables can be well simulated by means of a computer, this result allows one to simulate n arbitrary random variables if their joint probability distribution is known.
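    The recursive construction can be sketched for n = 2 as a conditional inverse-transform: first invert the marginal of X1, then invert the conditional distribution of X2 given X1. The target distribution here (X1 ~ Exp(1), X2 | X1 = x ~ Uniform(0, x)) is a hypothetical choice for illustration, picked because both inverse CDFs are closed-form.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
u1, u2 = rng.uniform(size=n), rng.uniform(size=n)

# Step 1: x1 = F1^{-1}(u1) for the marginal of X1 (here Exp(1)).
x1 = -np.log1p(-u1)

# Step 2: x2 = F_{2|1}^{-1}(u2 | x1) for the conditional distribution of X2
# given X1 (here X2 | X1 = x ~ Uniform(0, x), whose inverse CDF is u * x).
x2 = u2 * x1

# The pairs (x1, x2) now follow the target joint distribution F.
```

    Sanity checks on the sample: the mean of x1 is close to 1 (the Exp(1) mean) and the mean of x2 is close to 0.5 (since E[X2] = E[X1]/2 for this choice of F).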

  9. Probability distributions for multimeric systems.

    PubMed

    Albert, Jaroslav; Rooman, Marianne

    2016-01-01

    We propose a fast and accurate method of obtaining the equilibrium mono-modal joint probability distributions for multimeric systems. The method requires only two assumptions: that the copy number of all species of molecule may be treated as continuous, and that the probability density functions (pdfs) are well approximated by multivariate skew normal distributions (MSND). Starting from the master equation, we convert the problem into a set of equations for the statistical moments, which are then expressed in terms of the parameters intrinsic to the MSND. Using an optimization package in Mathematica, we minimize a Euclidean distance function comprising a sum of the squared differences between the left- and right-hand sides of these equations. Comparison of results obtained via our method with those rendered by the Gillespie algorithm demonstrates our method to be highly accurate as well as efficient.

  10. Modeling and optimization of joint quality for laser transmission joint of thermoplastic using an artificial neural network and a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Xiao; Zhang, Cheng; Li, Pin; Wang, Kai; Hu, Yang; Zhang, Peng; Liu, Huixia

    2012-11-01

    A central composite rotatable experimental design (CCRD) was used to design experiments for laser transmission joining of a thermoplastic, polycarbonate (PC). An artificial neural network was used to establish the relationships between the laser transmission joining process parameters (laser power, velocity, clamp pressure, scanning number) and the joint strength and joint seam width. The developed mathematical models are tested by the analysis of variance (ANOVA) method to check their adequacy, and the effects of process parameters on the responses and the interaction effects of key process parameters on joint quality are analyzed and discussed. Finally, a desirability function coupled with a genetic algorithm is used to carry out the optimization of the joint strength and joint width. The results show that the predicted results of the optimization are in good agreement with the experimental results, so this study provides an effective method to enhance joint quality.

  11. A three-dimensional model to assess the effect of ankle joint axis misalignments in ankle-foot orthoses.

    PubMed

    Fatone, Stefania; Johnson, William Brett; Tucker, Kerice

    2016-04-01

    Misalignment of an articulated ankle-foot orthosis joint axis with the anatomic joint axis may lead to discomfort, alterations in gait, and tissue damage. Theoretical, two-dimensional models describe the consequences of misalignments, but cannot capture the three-dimensional behavior of ankle-foot orthosis use. The purpose of this project was to develop a model to describe the effects of ankle-foot orthosis ankle joint misalignment in three dimensions. Computational simulation. Three-dimensional scans of a leg and ankle-foot orthosis were incorporated into a link segment model where the ankle-foot orthosis joint axis could be misaligned with the anatomic ankle joint axis. The leg/ankle-foot orthosis interface was modeled as a network of nodes connected by springs to estimate interface pressure. Motion between the leg and ankle-foot orthosis was calculated as the ankle joint moved through a gait cycle. While the three-dimensional model corroborated predictions of the previously published two-dimensional model that misalignments in the anterior-posterior direction would result in greater relative motion compared to misalignments in the proximal-distal direction, it provided greater insight showing that misalignments have asymmetrical effects. The three-dimensional model has been incorporated into a freely available computer program to assist others in understanding the consequences of joint misalignments. Models and simulations can be used to gain insight into functioning of systems of interest. We have developed a three-dimensional model to assess the effect of ankle joint axis misalignments in ankle-foot orthoses. The model has been incorporated into a freely available computer program to assist understanding of trainees and others interested in orthotics. © The International Society for Prosthetics and Orthotics 2014.

  12. A Biomechanical Model of the Scapulothoracic Joint to Accurately Capture Scapular Kinematics during Shoulder Movements

    PubMed Central

    Seth, Ajay; Matias, Ricardo; Veloso, António P.; Delp, Scott L.

    2016-01-01

    The complexity of shoulder mechanics combined with the movement of skin relative to the scapula makes it difficult to measure shoulder kinematics with sufficient accuracy to distinguish between symptomatic and asymptomatic individuals. Multibody skeletal models can improve motion capture accuracy by reducing the space of possible joint movements, and models are used widely to improve measurement of lower limb kinematics. In this study, we developed a rigid-body model of a scapulothoracic joint to describe the kinematics of the scapula relative to the thorax. This model describes scapular kinematics with four degrees of freedom: 1) elevation and 2) abduction of the scapula on an ellipsoidal thoracic surface, 3) upward rotation of the scapula normal to the thoracic surface, and 4) internal rotation of the scapula to lift the medial border of the scapula off the surface of the thorax. The surface dimensions and joint axes can be customized to match an individual’s anthropometry. We compared the model to “gold standard” bone-pin kinematics collected during three shoulder tasks and found modeled scapular kinematics to be accurate to within 2mm root-mean-squared error for individual bone-pin markers across all markers and movement tasks. As an additional test, we added random and systematic noise to the bone-pin marker data and found that the model reduced kinematic variability due to noise by 65% compared to Euler angles computed without the model. Our scapulothoracic joint model can be used for inverse and forward dynamics analyses and to compute joint reaction loads. The computational performance of the scapulothoracic joint model is well suited for real-time applications; it is freely available for use with OpenSim 3.2, and is customizable and usable with other OpenSim models. PMID:26734761

  13. A Biomechanical Model of the Scapulothoracic Joint to Accurately Capture Scapular Kinematics during Shoulder Movements.

    PubMed

    Seth, Ajay; Matias, Ricardo; Veloso, António P; Delp, Scott L

    2016-01-01

    The complexity of shoulder mechanics combined with the movement of skin relative to the scapula makes it difficult to measure shoulder kinematics with sufficient accuracy to distinguish between symptomatic and asymptomatic individuals. Multibody skeletal models can improve motion capture accuracy by reducing the space of possible joint movements, and models are used widely to improve measurement of lower limb kinematics. In this study, we developed a rigid-body model of a scapulothoracic joint to describe the kinematics of the scapula relative to the thorax. This model describes scapular kinematics with four degrees of freedom: 1) elevation and 2) abduction of the scapula on an ellipsoidal thoracic surface, 3) upward rotation of the scapula normal to the thoracic surface, and 4) internal rotation of the scapula to lift the medial border of the scapula off the surface of the thorax. The surface dimensions and joint axes can be customized to match an individual's anthropometry. We compared the model to "gold standard" bone-pin kinematics collected during three shoulder tasks and found modeled scapular kinematics to be accurate to within 2 mm root-mean-squared error for individual bone-pin markers across all markers and movement tasks. As an additional test, we added random and systematic noise to the bone-pin marker data and found that the model reduced kinematic variability due to noise by 65% compared to Euler angles computed without the model. Our scapulothoracic joint model can be used for inverse and forward dynamics analyses and to compute joint reaction loads. The computational performance of the scapulothoracic joint model is well suited for real-time applications; it is freely available for use with OpenSim 3.2, and is customizable and usable with other OpenSim models.
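    The first two degrees of freedom of the scapulothoracic joint described above (elevation and abduction on an ellipsoidal thoracic surface) can be sketched as a spherical-coordinate parameterisation of an ellipsoid. The semi-axis lengths below are hypothetical placeholders, not the model's anthropometric defaults, and this omits the two rotational degrees of freedom.

```python
import numpy as np

def scapula_point(abduction, elevation, a=0.10, b=0.08, c=0.15):
    """Point on an ellipsoidal thoracic surface with semi-axes a, b, c (m)
    for given abduction/elevation angles (radians). Hypothetical geometry."""
    x = a * np.cos(elevation) * np.sin(abduction)
    y = b * np.sin(elevation)
    z = c * np.cos(elevation) * np.cos(abduction)
    return np.array([x, y, z])

# Example: scapula 30 degrees abducted, 10 degrees elevated on the surface.
p = scapula_point(np.deg2rad(30.0), np.deg2rad(10.0))
```

    Because the two angles move the point along the surface, the constraint (x/a)^2 + (y/b)^2 + (z/c)^2 = 1 holds by construction, which is exactly what reduces the space of possible scapular positions in the model.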

  14. Posterior probability of linkage and maximal lod score.

    PubMed

    Génin, E; Martinez, M; Clerget-Darpoux, F

    1995-01-01

    To detect linkage between a trait and a marker, Morton (1955) proposed to calculate the lod score z(θ1) at a given value θ1 of the recombination fraction. If z(θ1) reaches +3, then linkage is concluded. However, in practice, lod scores are calculated for different values of the recombination fraction between 0 and 0.5 and the test is based on the maximum value of the lod score, Zmax. The impact of this deviation of the test on the probability that in fact linkage does not exist, when linkage was concluded, is documented here. This posterior probability of no linkage can be derived by using Bayes' theorem. It is less than 5% when the lod score at a predetermined θ1 is used for the test. But, for a Zmax of +3, we showed that it can reach 16.4%. Thus, considering a composite alternative hypothesis instead of a single one decreases the reliability of the test. The reliability decreases rapidly when Zmax is less than +3. Given a Zmax of +2.5, there is a 33% chance that linkage does not exist. Moreover, the posterior probability depends not only on the value of Zmax but also jointly on the family structures and on the genetic model. For a given Zmax, the chance that linkage exists may then vary.
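    The Bayes'-theorem step for a lod score at a fixed θ1 can be sketched as follows. The prior probability of linkage used here (0.05) is an assumed value for illustration; note also that this fixed-θ calculation does not capture the maximisation over θ that drives the posterior probability up to the 16.4% reported for Zmax.

```python
def posterior_no_linkage(z, prior_linkage=0.05):
    """P(no linkage | lod score z at a fixed theta_1) via Bayes' theorem.
    A lod score z corresponds to a likelihood ratio of 10**z in favour of
    linkage; prior_linkage = 0.05 is an assumed prior, for illustration."""
    likelihood_ratio = 10.0 ** z
    prior_no = 1.0 - prior_linkage
    return prior_no / (prior_no + prior_linkage * likelihood_ratio)
```

    With this prior, a fixed-θ lod score of +3 gives a posterior probability of no linkage of about 1.9%, consistent with the abstract's statement that the fixed-θ1 test keeps it under 5%.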

  15. A parameters optimization method for planar joint clearance model and its application for dynamics simulation of reciprocating compressor

    NASA Astrophysics Data System (ADS)

    Hai-yang, Zhao; Min-qiang, Xu; Jin-dong, Wang; Yong-bo, Li

    2015-05-01

    In order to improve the accuracy of dynamic response simulation for mechanisms with joint clearance, a parameter optimization method for a planar joint clearance contact force model is presented in this paper, and the optimized parameters are applied to the dynamic response simulation of a mechanism with an oversized joint clearance fault. By studying the effect of increased clearance on the parameters of the joint clearance contact force model, the relation between model parameters at different clearances was derived. The dynamic equation of a two-stage reciprocating compressor with four joint clearances was then developed using the Lagrange method, and a multi-body dynamic model built in ADAMS software was used to solve this equation. To obtain a simulated dynamic response much closer to that of experimental tests, the parameters of the joint clearance model, instead of the designed values, were optimized by a genetic algorithm approach. Finally, the optimized parameters were applied to simulate the dynamic response of the model with an oversized joint clearance fault according to the derived parameter relation. The dynamic response of experimental tests verified the effectiveness of this application.
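    The abstract does not name the specific contact force model whose parameters are optimized; a commonly used choice for planar clearance joints is the Lankarani-Nikravesh law, sketched below as one plausible form (a Hertzian stiffness term plus hysteresis damping tied to the restitution coefficient).

```python
def contact_force(delta, delta_dot, delta_dot_impact, K, e, n=1.5):
    """Lankarani-Nikravesh continuous contact force for a clearance joint:
    F = K * delta^n * (1 + 3(1 - e^2)/4 * delta_dot / delta_dot_impact).
    delta is penetration depth, delta_dot its rate, delta_dot_impact the
    initial impact velocity, K the Hertzian stiffness, e the restitution
    coefficient. K and e are the kind of parameters one would optimize."""
    if delta <= 0.0:
        return 0.0  # journal and bearing not in contact: no force
    damping = 3.0 * (1.0 - e * e) / 4.0 * delta_dot / delta_dot_impact
    return K * delta ** n * (1.0 + damping)
```

    During the approach phase (delta_dot > 0) the damping term adds to the elastic force, and it subtracts during restitution, which is how the model dissipates the energy lost in each clearance impact.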

  16. LOW ACTIVATION JOINING OF SIC/SIC COMPOSITES FOR FUSION APPLICATIONS: MODELING DUAL-PHASE MICROSTRUCTURES AND DISSIMILAR MATERIAL JOINTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Henager, Charles H.; Nguyen, Ba Nghiep; Kurtz, Richard J.

    2016-03-31

    Finite element continuum damage models (FE-CDM) have been developed to simulate and model dual-phase joints and cracked joints for improved analysis of SiC materials in nuclear environments. This report extends the analysis from the last reporting cycle by including results from dual-phase models and from cracked joint models.

  17. Modeling and analysis of passive dynamic bipedal walking with segmented feet and compliant joints

    NASA Astrophysics Data System (ADS)

    Huang, Yan; Wang, Qi-Ning; Gao, Yue; Xie, Guang-Ming

    2012-10-01

    Passive dynamic walking has been developed as a possible explanation for the efficiency of the human gait. This paper presents a passive dynamic walking model with segmented feet, which makes the bipedal walking gait closer to a natural human-like gait. The proposed model extends the simplest walking model with the addition of flat feet and torsional-spring-based compliance at the ankle and toe joints, to achieve stable walking on a slope driven by gravity. The push-off phase includes foot rotations around the toe joint and around the toe tip, which shows a great resemblance to normal human walking. This paper investigates the effects of the segmented foot structure on bipedal walking in simulations. The model achieves satisfactory walking results on even and uneven slopes.

  18. A Parallel and Distributed Processing Model of Joint Attention, Social-Cognition and Autism

    PubMed Central

    Mundy, Peter; Sullivan, Lisa; Mastergeorge, Ann M.

    2009-01-01

    The impaired development of joint attention is a cardinal feature of autism. Therefore, understanding the nature of joint attention is central to research on this disorder. Joint attention may be best defined in terms of an information processing system that begins to develop by 4–6 months of age. This system integrates the parallel processing of internal information about one's own visual attention with external information about the visual attention of other people. This type of joint encoding of information about self and other attention requires the activation of a distributed anterior and posterior cortical attention network. Genetic regulation, in conjunction with self-organizing behavioral activity, guides the development of functional connectivity in this network. With practice in infancy the joint processing of self-other attention becomes automatically engaged as an executive function. It can be argued that this executive joint attention is fundamental to human learning, as well as to the development of symbolic thought, social cognition and social competence throughout the life span. One advantage of this parallel and distributed processing model of joint attention (PDPM) is that it directly connects theory on social pathology to a range of phenomena in autism associated with neural connectivity, constructivist and connectionist models of cognitive development, early intervention, activity-dependent gene expression, and atypical ocular motor control. PMID:19358304

  19. Musculoskeletal modelling of human ankle complex: Estimation of ankle joint moments.

    PubMed

    Jamwal, Prashant K; Hussain, Shahid; Tsoi, Yun Ho; Ghayesh, Mergen H; Xie, Sheng Quan

    2017-05-01

    A musculoskeletal model for the ankle complex is vital in order to enhance the understanding of neuro-mechanical control of ankle motions, diagnose ankle disorders and assess subsequent treatments. Motions at the human ankle and foot, however, are complex due to simultaneous movements at the two joints namely, the ankle joint and the subtalar joint. The musculoskeletal elements at the ankle complex, such as ligaments, muscles and tendons, have intricate arrangements and exhibit transient and nonlinear behaviour. This paper develops a musculoskeletal model of the ankle complex considering the biaxial ankle structure. The model provides estimates of overall mechanical characteristics (motion and moments) of the ankle complex through consideration of forces applied along ligaments and muscle-tendon units. The dynamics of the ankle complex and its surrounding ligaments and muscle-tendon units is modelled and formulated into a state space model to facilitate simulations. A graphical user interface is also developed during this research in order to include the visual anatomical information by converting it to quantitative information on coordinates. Validation of the ankle model was carried out by comparing its outputs with those published in the literature as well as with experimental data obtained from an existing parallel ankle rehabilitation robot. Qualitative agreement was observed between the model and measured data for both passive and active ankle motions during trials, in terms of displacements and moments. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. A stochastic model for the probability of malaria extinction by mass drug administration.

    PubMed

    Pemberton-Ross, Peter; Chitnis, Nakul; Pothin, Emilie; Smith, Thomas A

    2017-09-18

    Mass drug administration (MDA) has been proposed as an intervention to achieve local extinction of malaria. Although its effect on the reproduction number is short lived, extinction may subsequently occur in a small population due to stochastic fluctuations. This paper examines how the probability of stochastic extinction depends on population size, MDA coverage and the reproduction number under control, Rc. A simple compartmental model is developed and used to compute the probability of extinction using probability generating functions. The expected time to extinction in small populations after MDA for various scenarios in this model is calculated analytically. The results indicate that two conditions are critical. Firstly, Rc must be sustained at Rc < 1.2 to avoid the rapid re-establishment of infections in the population. Secondly, the MDA must produce effective cure rates of >95% to have a non-negligible probability of successful elimination. Stochastic fluctuations only significantly affect the probability of extinction in populations of about 1000 individuals or less. The expected time to extinction via stochastic fluctuation is less than 10 years only in populations of fewer than about 150 individuals. Clustering of secondary infections and of MDA distribution both contribute positively to the probability of success, indicating that MDA would most effectively be administered at the household level. There are very limited circumstances in which MDA will lead to local malaria elimination with a substantial probability.
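    As a minimal illustration of the generating-function step (not the paper's compartmental model), the extinction probability of a simple branching process with Poisson(R) offspring is the smallest root of q = g(q), where g(s) = exp(R(s - 1)) is the offspring probability generating function. The Poisson offspring assumption is a simplification chosen for the sketch.

```python
import math

def extinction_prob(R, tol=1e-12):
    """Smallest root q of q = g(q) for a Poisson(R) offspring pgf
    g(s) = exp(R * (s - 1)), found by fixed-point iteration from q = 0.
    For R <= 1 the iteration converges to 1 (certain extinction)."""
    q = 0.0
    for _ in range(100_000):
        q_new = math.exp(R * (q - 1.0))
        if abs(q_new - q) < tol:
            return q_new
        q = q_new
    return q
```

    Just above threshold (R = 1.2) extinction is still likely (q is roughly 0.69), while at R = 2 it drops to about 0.2, which mirrors the abstract's point that Rc must be held close to 1 for stochastic extinction to be plausible.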

  1. Implementation of a gait cycle loading into healthy and meniscectomised knee joint models with fibril-reinforced articular cartilage.

    PubMed

    Mononen, Mika E; Jurvelin, Jukka S; Korhonen, Rami K

    2015-01-01

    Computational models can be used to evaluate the functional properties of knee joints and possible risk locations within joints. Current models with fibril-reinforced cartilage layers do not provide information about realistic human movement during walking. This study aimed to evaluate stresses and strains within a knee joint by implementing load data from a gait cycle in healthy and meniscectomised knee joint models with fibril-reinforced cartilages. A 3D finite element model of a knee joint with cartilages and menisci was created from magnetic resonance images. The gait cycle data from varying joint rotations, translations and axial forces were taken from experimental studies and implemented into the model. Cartilage layers were modelled as a fibril-reinforced poroviscoelastic material with the menisci considered as a transversely isotropic elastic material. In the normal knee joint model, relatively high maximum principal stresses were specifically predicted to occur in the medial condyle of the knee joint during the loading response. Bilateral meniscectomy increased stresses, strains and fluid pressures in cartilage on the lateral side, especially during the first 50% of the stance phase of the gait cycle. During the entire stance phase, the superficial collagen fibrils modulated stresses of cartilage, especially in the medial tibial cartilage. The present computational model with a gait cycle and fibril-reinforced biphasic cartilage revealed time- and location-dependent differences in stresses, strains and fluid pressures occurring in cartilage during walking. The lateral meniscus was observed to have a more significant role in distributing loads across the knee joint than the medial meniscus, suggesting that meniscectomy might initiate a post-traumatic process leading to osteoarthritis at the lateral compartment of the knee joint.

  2. Prescribing joint co-ordinates during model preparation to improve inverse kinematic estimates of elbow joint angles.

    PubMed

    Wells, D J M; Alderson, J A; Dunne, J; Elliott, B C; Donnelly, C J

    2017-01-25

    To appropriately use inverse kinematic (IK) modelling for the assessment of human motion, a musculoskeletal model must be prepared 1) to match participant segment lengths (scaling) and 2) to align the model's virtual marker positions with known, experimentally derived kinematic marker positions (marker registration). The purpose of this study was to investigate whether prescribing joint co-ordinates during the marker registration process (within the modelling framework OpenSim) will improve IK derived elbow kinematics during an overhead sporting task. To test this, the upper limb kinematics of eight cricket bowlers were recorded during two testing sessions, with a different tester each session. The bowling trials were IK modelled twice: once with an upper limb musculoskeletal model prepared with prescribed participant-specific co-ordinates during marker registration - MR PC - and once with the same model prepared without prescribed co-ordinates - MR; and by an established direct kinematic (DK) upper limb model. Whilst both skeletal model preparations had strong inter-tester repeatability (MR: Statistical Parametric Mapping (SPM1D)=0% different; MR PC: SPM1D=0% different), when compared with DK model elbow FE waveform estimates, IK estimates using the MR PC model (RMSD=5.2±2.0°, SPM1D=68% different) were in closer agreement than the estimates from the MR model (RMSD=44.5±18.5°, SPM1D=100% different). Results show that prescribing participant-specific joint co-ordinates during the marker registration phase of model preparation increases the accuracy and repeatability of IK solutions when modelling overhead sporting tasks in OpenSim. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. A microstructurally based model of solder joints under conditions of thermomechanical fatigue

    NASA Astrophysics Data System (ADS)

    Frear, D. R.; Burchett, S. N.; Rashid, M. M.

    The thermomechanical fatigue failure of solder joints is increasingly becoming an important reliability issue. We present two computational methodologies, developed to predict the behavior of near-eutectic Sn-Pb solder joints under fatigue conditions, that use metallurgical tests as fundamental input for constitutive relations. The two-phase model mathematically predicts the heterogeneous coarsening behavior of near-eutectic Sn-Pb solder. The finite element simulations from this model agree well with experimental thermomechanical fatigue tests. The simulations show that the presence of an initial heterogeneity in the solder microstructure can significantly degrade the fatigue lifetime. The single-phase model is a computational technique developed to predict solder joint behavior using materials data for constitutive relation constants that can be determined through straightforward metallurgical experiments. A shear/torsion test sample was developed to impose strain in two different orientations. Materials constants were derived from these tests and the results showed an adequate fit to experimental results. The single-phase model could be very useful for conditions where microstructural evolution is not a dominant factor in fatigue.

  4. A Probabilistic Approach to Predict Thermal Fatigue Life for Ball Grid Array Solder Joints

    NASA Astrophysics Data System (ADS)

    Wei, Helin; Wang, Kuisheng

    2011-11-01

    Numerous studies of the reliability of solder joints have been performed. Most life prediction models are limited to a deterministic approach. However, manufacturing induces uncertainty in the geometry parameters of solder joints, and the environmental temperature varies widely due to end-user diversity, creating uncertainties in the reliability of solder joints. In this study, a methodology for accounting for variation in the lifetime prediction for lead-free solder joints of ball grid array packages (PBGA) is demonstrated. The key aspects of the solder joint parameters and the cyclic temperature range related to reliability are involved. Probabilistic solutions of the inelastic strain range and thermal fatigue life based on the Engelmaier model are developed to determine the probability of solder joint failure. The results indicate that the standard deviation increases significantly when more random variations are involved. Using the probabilistic method, the influence of each variable on the thermal fatigue life is quantified. This information can be used to optimize product design and process validation acceptance criteria. The probabilistic approach creates the opportunity to identify the root causes of failed samples from product fatigue tests and field returns. The method can be applied to better understand how variation affects parameters of interest in an electronic package design with area array interconnections.
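    The probabilistic step can be sketched by propagating an uncertain inelastic shear strain range through an Engelmaier-type low-cycle-fatigue relation, Nf = 0.5 * (Δγ / (2 εf'))^(1/c). The strain-range distribution, the coefficient values, and the 2000-cycle requirement below are illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(7)

# Engelmaier-type relation: Nf = 0.5 * (dgamma / (2 * eps_f)) ** (1 / c).
# eps_f (fatigue ductility coefficient) and c (fatigue ductility exponent)
# are illustrative values, not the paper's fitted constants.
eps_f, c = 0.325, -0.442

def fatigue_life(dgamma):
    """Cycles to failure for a given inelastic shear strain range."""
    return 0.5 * (dgamma / (2.0 * eps_f)) ** (1.0 / c)

# Manufacturing and thermal-environment variation: a lognormal inelastic
# strain range centred near 1% (assumed spread).
dgamma = rng.lognormal(mean=np.log(0.01), sigma=0.15, size=100_000)
life = fatigue_life(dgamma)

# Probability of failing before a hypothetical 2000-cycle requirement.
p_fail = float(np.mean(life < 2000.0))
```

    Because c is negative, larger strain ranges give shorter lives, so the spread in Δγ maps directly into a spread in predicted life; the failure probability against a cycle requirement then falls out of the Monte Carlo sample, which is the essence of the probabilistic approach described above.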

  5. One-degree-of-freedom spherical model for the passive motion of the human ankle joint.

    PubMed

    Sancisi, Nicola; Baldisserri, Benedetta; Parenti-Castelli, Vincenzo; Belvedere, Claudio; Leardini, Alberto

    2014-04-01

    Mathematical modelling of mobility at the human ankle joint is essential for prosthetic and orthotic design. The scope of this study is to show that passive motion at the ankle joint can be represented by a one-degree-of-freedom spherical motion. Moreover, this motion is modelled by a one-degree-of-freedom spherical parallel mechanism model, and the optimal pivot-point position is determined. Passive motion and anatomical data were taken from in vitro experiments on nine lower limb specimens. For each of these, a spherical mechanism, including the tibiofibular and talocalcaneal segments connected by a spherical pair and by the calcaneofibular and tibiocalcaneal ligament links, was defined from the corresponding experimental kinematics and geometry. An iterative procedure was used to optimize the geometry of the model so that it could predict the original experimental motion. The results of the simulations showed a good replication of the original natural motion, despite the numerous model assumptions and simplifications, with mean differences between experiments and predictions smaller than 1.3 mm (average 0.33 mm) for the three joint position components and smaller than 0.7° (average 0.32°) for the two out-of-sagittal plane rotations, once plotted versus the full flexion arc. The relevant pivot-point position after model optimization was found within the tibial mortise, but not exactly in a central location. The present combined experimental and modelling analysis of passive motion at the human ankle joint shows that a one-degree-of-freedom spherical mechanism predicts well what is observed in real joints, although its computational complexity is comparable to the standard hinge joint model.

  6. Quantum Probability -- A New Direction for Modeling in Cognitive Science

    NASA Astrophysics Data System (ADS)

    Roy, Sisir

    2014-07-01

    Human cognition, and how to model it appropriately, remains a puzzling research issue. Cognition depends on how the brain behaves at a particular instant and identifies and responds to a signal among the myriad noises present in the surroundings (called external noise) as well as in the neurons themselves (called internal noise). Thus it is not surprising to assume that this functionality involves various uncertainties, possibly a mixture of aleatory and epistemic uncertainties. It is also possible that a complicated pathway consisting of both types of uncertainties in continuum plays a major role in human cognition. For more than 200 years mathematicians and philosophers have been using probability theory to describe human cognition. Recently, in several experiments with human subjects, violation of traditional probability theory has been clearly revealed in many cases. The literature clearly suggests that classical probability theory fails to model human cognition beyond a certain limit. While the Bayesian approach may seem to be a promising candidate for this problem, the complete success story of Bayesian methodology is yet to be written. The major problem seems to be the presence of epistemic uncertainty and its effect on cognition at any given time. Moreover, the stochasticity in the model arises due to the unknown path or trajectory (definite state of mind at each time point) a person is following. To this end, a generalized version of probability theory borrowing ideas from quantum mechanics may be a plausible approach. A superposition state in quantum theory permits a person to be in an indefinite state at each point of time. Such an indefinite state allows all the states to have the potential to be expressed at each moment. Thus a superposition state appears better able to represent the uncertainty, ambiguity or conflict experienced by a person at any moment, demonstrating that mental states follow quantum mechanics during perception and
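    The superposition argument can be illustrated with a toy calculation. Assuming two hypothetical "paths" with complex amplitudes (the weights and phase below are made up), adding amplitudes before squaring produces an interference term that is absent from the classical law of total probability:

```python
import numpy as np

# Two hypothetical paths with equal weight and a relative phase of 2 rad.
a1 = (1 / np.sqrt(2)) * np.exp(1j * 0.0)
a2 = (1 / np.sqrt(2)) * np.exp(1j * 2.0)

# Classical law of total probability: add the path probabilities.
p_classical = abs(a1)**2 + abs(a2)**2      # = 1.0 here

# Quantum rule: add amplitudes first, then take the squared modulus.
p_quantum = abs(a1 + a2)**2

# The difference is the interference term 2*Re(a1 * conj(a2)).
interference = p_quantum - p_classical
print(p_classical, p_quantum, interference)
```

    The nonzero interference term is exactly the kind of deviation from classical total probability that quantum cognition models exploit.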

  7. An energy-dependent numerical model for the condensation probability, γ j

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerby, Leslie Marie

    The “condensation” probability, γ j, is an important variable in the preequilibrium stage of nuclear spallation reactions. It represents the probability that p j excited nucleons (excitons) will “condense” to form complex particle type j in the excited residual nucleus. In addition, it has a significant impact on the emission width, or probability of emitting fragment type j from the residual nucleus. Previous formulations for γ j were energy-independent and valid for fragments up to 4He only. This paper explores the formulation of a new model for γ j, one which is energy-dependent and valid for up to 28Mg, and which provides improved fits compared to experimental fragment spectra.

  8. An energy-dependent numerical model for the condensation probability, γ j

    DOE PAGES

    Kerby, Leslie Marie

    2016-12-09

    The “condensation” probability, γ j, is an important variable in the preequilibrium stage of nuclear spallation reactions. It represents the probability that p j excited nucleons (excitons) will “condense” to form complex particle type j in the excited residual nucleus. In addition, it has a significant impact on the emission width, or probability of emitting fragment type j from the residual nucleus. Previous formulations for γ j were energy-independent and valid for fragments up to 4He only. This paper explores the formulation of a new model for γ j, one which is energy-dependent and valid for up to 28Mg, and which provides improved fits compared to experimental fragment spectra.

  9. Regional Permafrost Probability Modelling in the northwestern Cordillera, 59°N - 61°N, Canada

    NASA Astrophysics Data System (ADS)

    Bonnaventure, P. P.; Lewkowicz, A. G.

    2010-12-01

    High resolution (30 x 30 m) permafrost probability models were created for eight mountainous areas in the Yukon and northernmost British Columbia. Empirical-statistical modelling based on the Basal Temperature of Snow (BTS) method was used to develop spatial relationships. Model inputs include equivalent elevation (a variable that incorporates non-uniform temperature change with elevation), potential incoming solar radiation and slope. Probability relationships between predicted BTS and permafrost presence were developed for each area using late-summer physical observations in pits, or by using year-round ground temperature measurements. A high-resolution spatial model for the region has now been generated based on seven of the area models. Each was applied to the entire region, and their predictions were then blended based on a distance decay function from the model source area. The regional model is challenging to validate independently because there are few boreholes in the region. However, a comparison of results to a recently established inventory of rock glaciers for the Yukon suggests its validity because predicted permafrost probabilities were 0.8 or greater for almost 90% of these landforms. Furthermore, the regional model results have a similar spatial pattern to those modelled independently in the eighth area, although predicted probabilities using the regional model are generally higher. The regional model predicts that permafrost underlies about half of the non-glaciated terrain in the region, with probabilities increasing regionally from south to north and from east to west. Elevation is significant, but not always linked in a straightforward fashion because of weak or inverted trends in permafrost probability below treeline. Above treeline, however, permafrost probabilities increase and approach 1.0 in very high elevation areas throughout the study region. The regional model shows many similarities to previous Canadian permafrost maps (Heginbottom
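    The blending step described above can be sketched as a distance-decay weighted average of the per-area model predictions. The exponential weight and the 50 km decay scale below are illustrative assumptions, not the study's fitted function:

```python
import numpy as np

def blend_probabilities(preds, dists, decay=50.0):
    """Blend per-area permafrost probabilities for one map cell.

    preds: predicted probability from each area model at this cell
    dists: distance [km] from the cell to each model's source area
    decay: e-folding distance of the weighting (hypothetical value)
    """
    preds = np.asarray(preds, float)
    w = np.exp(-np.asarray(dists, float) / decay)  # nearby models weigh more
    return float(np.sum(w * preds) / np.sum(w))

# A cell 10 km from model A (p = 0.9) and 200 km from model B (p = 0.3):
p = blend_probabilities([0.9, 0.3], [10.0, 200.0])
print(round(p, 3))  # dominated by the nearby model
```

    The decay function ensures each area model dominates near its own source area while still contributing smoothly elsewhere.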

  10. A Study of Knee Joint Kinematics and Mechanics using a Human FE Model.

    PubMed

    Kitagawa, Yuichi; Hasegawa, Junji; Yasuki, Tsuyoshi; Iwamoto, Masami; Miki, Kazuo

    2005-11-01

    Posterior translation of the tibia with respect to the femur can stretch the posterior cruciate ligament (PCL). Fifteen millimeters of relative displacement between the femur and tibia is known as the Injury Assessment Reference Value (IARV) for PCL injury. Since the anterior protuberance of the tibial plateau can be the first site of contact when the knee is flexed, the knee bolster is generally designed with an inclined surface so as not to directly load the projection in frontal crashes. It should be noted, however, that the initial flexion angle of the occupant knee can vary among individuals and the knee flexion angle can change due to the occupant motion. The behavior of the tibial protuberance related to the knee flexion angle has not been described yet. The instantaneous angle of the knee joint at the timing of restraining the knee should be known to manage the geometry and functions of knee restraint devices. The purposes of this study are first to understand the kinematics of the knee joint during flexion, and second to characterize the mechanics of the knee joint under anterior-posterior loading. A finite element model of the knee joint, extracted from the Total Human Model for Safety (THUMS), was used to analyze the mechanism. The model was validated against kinematics and mechanical responses of the human knee joint. By tracking the relative positions and angles between the patella and the tibia in a knee flexing simulation, the magnitude of the tibial anterior protuberance was described as a function of the knee joint angle. The model revealed that the mechanics of the knee joint was characterized as a combination of the stiffness of the patella-femur structure and the PCL. It was also found that the magnitude of the tibial anterior protuberance determined the amount of initial stretch of the PCL in anterior-posterior loading. Based on the knee joint kinematics and mechanics, an interference boundary was proposed for different knee flexion angles, so

  11. Individual-tree probability of survival model for the Northeastern United States

    Treesearch

    Richard M. Teck; Donald E. Hilt

    1990-01-01

    Describes a distance-independent, individual-tree probability of survival model for the Northeastern United States. Survival is predicted using a six-parameter logistic function with species-specific coefficients. Coefficients are presented for 28 species groups. The model accounts for variability in annual survival due to species, tree size, site quality, and the tree...
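    A logistic survival function of this kind can be sketched as follows. The predictor variables, functional form of the linear predictor, and every coefficient value here are hypothetical placeholders; the published model uses species-specific fitted coefficients:

```python
import math

def annual_survival(dbh, site_index, basal_area,
                    b=(2.0, 0.15, -0.002, 0.05, -0.01, 0.5)):
    """Logistic annual survival probability (hypothetical coefficient set).

    dbh: tree diameter at breast height [cm]; site_index: site quality;
    basal_area: stand basal area [sq ft/acre]. Illustration only.
    """
    b0, b1, b2, b3, b4, b5 = b
    x = (b0 + b1 * dbh + b2 * dbh**2 + b3 * site_index
         + b4 * basal_area + b5 * math.log(dbh))
    return 1.0 / (1.0 + math.exp(-x))  # logistic link keeps p in (0, 1)

p = annual_survival(dbh=25.0, site_index=60.0, basal_area=120.0)
print(round(p, 4))
```

    The logistic link guarantees a valid probability regardless of coefficient values, which is why this form is common for survival models.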

  12. A Probability Model of Decompression Sickness at 4.3 Psia after Exercise Prebreathe

    NASA Technical Reports Server (NTRS)

    Conkin, Johnny; Gernhardt, Michael L.; Powell, Michael R.; Pollock, Neal

    2004-01-01

    Exercise PB can reduce the risk of decompression sickness on ascent to 4.3 psia when performed at the proper intensity and duration. Data are from seven tests. PB times ranged from 90 to 150 min. High intensity, short duration dual-cycle ergometry was done during the PB. This was done alone, or combined with intermittent low intensity exercise or periods of rest for the remaining PB. Nonambulating men and women performed light exercise from a semi-recumbent position at 4.3 psia for four hrs. The Research Model with age tested the probability that DCS increases with advancing age. The NASA Model with gender hypothesized that the probability of DCS increases if gender is female. Accounting for exercise and rest during PB with a variable half-time compartment for computed tissue N2 pressure advances our probability modeling of hypobaric DCS. Both models show that a small increase in exercise intensity during PB reduces the risk of DCS, and a larger increase in exercise intensity dramatically reduces risk. These models support the hypothesis that aerobic fitness is an important consideration for the risk of hypobaric DCS when exercise is performed during the PB.
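    The "variable half-time compartment" idea can be sketched with a single-compartment exponential model of tissue N2 washout during prebreathe. The half-time values below are assumptions for illustration (exercise is represented simply as a shorter half-time), not the fitted model parameters:

```python
import math

def tissue_n2(p0, pa, minutes, half_time):
    """Exponential washout/uptake of tissue N2 toward the ambient level pa."""
    k = math.log(2.0) / half_time
    return pa + (p0 - pa) * math.exp(-k * minutes)

# 150-min O2 prebreathe (ambient N2 partial pressure ~0 psia).
# Hypothetical half-times: 360 min at rest vs. 180 min with exercise.
p_rest     = tissue_n2(p0=11.6, pa=0.0, minutes=150, half_time=360)
p_exercise = tissue_n2(p0=11.6, pa=0.0, minutes=150, half_time=180)
print(round(p_rest, 2), round(p_exercise, 2))  # exercise washes out more N2
```

    Lowering the half-time (faster perfusion during exercise) leaves less computed tissue N2 at ascent, which is why exercise during prebreathe reduces the modeled DCS risk.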

  13. Non-terminal animal model of post-traumatic osteoarthritis induced by acute joint injury

    PubMed Central

    Boyce, Mary K.; Trumble, Troy N.; Carlson, Cathy S.; Groschen, Donna M.; Merritt, Kelly A.; Brown, Murray P.

    2013-01-01

    Objective: Develop a non-terminal animal model of acute joint injury that demonstrates clinical and morphological evidence of early post-traumatic osteoarthritis (PTOA). Methods: An osteochondral (OC) fragment was created arthroscopically in one metacarpophalangeal (MCP) joint of 11 horses and the contralateral joint was sham operated. Eleven additional horses served as unoperated controls. Every 2 weeks, force plate analysis, flexion response, joint circumference, and synovial effusion scores were recorded. At weeks 0 and 16, radiographs (all horses) and arthroscopic videos (OC injured and sham joints) were graded. At week 16, synovium and cartilage biopsies were taken arthroscopically from OC injured and sham joints for histologic evaluation and the OC fragment was removed. Results: Osteochondral fragments were successfully created and horses were free of clinical lameness after fragment removal. Forelimb gait asymmetry was observed at week 2 (P=0.0012), while joint circumference (P<0.0001) and effusion scores (P<0.0001) were increased in injured limbs compared to baseline from weeks 2 to 16. Positive flexion response of injured limbs was noted at multiple time points. Capsular enthesophytes were seen radiographically in injured limbs. Articular cartilage damage was demonstrated arthroscopically as mild wear-lines and histologically as superficial zone chondrocyte death accompanied by mild proliferation. Synovial hyperemia and fibrosis were present at the site of OC injury. Conclusion: Acute OC injury to the MCP joint resulted in clinical, imaging, and histologic changes in cartilage and synovium characteristic of early PTOA. This model will be useful for defining biomarkers of early osteoarthritis and for monitoring response to therapy and surgery. PMID:23467035

  14. Modeling knee joint endoprosthesis mode of deformation

    NASA Astrophysics Data System (ADS)

    Skeeba, V. Yu; Ivancivsky, V. V.

    2018-03-01

    The purpose of the work was to define the efficient design of the endoprosthesis, working in a multiple-cycle loading environment. Methodology and methods: triangulated surfaces of the base contact surfaces of endoprosthesis butt elements have been created using the PowerShape and SolidWorks software functional environment, and the assemblies of the possible combinations of the knee joint prosthetic designs have been prepared. The mode of deformation modeling took place in the multipurpose program complex ANSYS. Results and discussion: as a result of the numerical modeling, the following data were obtained for each of the developed knee joint versions: the distribution fields of absolute (total) and relative deformations; equivalent stress distribution fields; fatigue strength coefficient distribution fields. In the course of the studies, the following efficient design assembly has been established: 1) Ti-Al-V alloy composite femoral component with polymer inserts; 2) ceramic liners of the compound separator; 3) a Ti-Al-V alloy composite tibial component. The fatigue strength coefficient for the femoral component is 4.2; for the femoral component polymer inserts is 1.2; for the ceramic liners of the compound separator is 3.1; for the tibial component is 2.7. This promising endoprosthesis structure is recommended for further design and technological development.

  15. Modeling and simulating the neuromuscular mechanisms regulating ankle and knee joint stiffness during human locomotion.

    PubMed

    Sartori, Massimo; Maculan, Marco; Pizzolato, Claudio; Reggiani, Monica; Farina, Dario

    2015-10-01

    This work presents an electrophysiologically and dynamically consistent musculoskeletal model to predict stiffness in the human ankle and knee joints as derived from the joints' constituent biological tissues (i.e., the spanning musculotendon units). The modeling method we propose uses electromyography (EMG) recordings from 13 muscle groups to drive forward dynamic simulations of the human leg in five healthy subjects during overground walking and running. The EMG-driven musculoskeletal model estimates musculotendon and resulting joint stiffness that is consistent with experimental EMG data as well as with the experimental joint moments. This provides a framework that allows observing, for the first time, 1) the elastic interplay between the knee and ankle joints, 2) the individual muscle contribution to joint stiffness, and 3) the underlying co-contraction strategies. It provides a theoretical description of how stiffness modulates as a function of muscle activation, fiber contraction, and interacting tendon dynamics. Furthermore, it describes how this differs from currently available stiffness definitions, including quasi-stiffness and short-range stiffness. This work offers a theoretical and computational basis for describing and investigating the neuromuscular mechanisms underlying human locomotion. Copyright © 2015 the American Physiological Society.
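    A common simplification of the muscle-to-joint stiffness mapping, offered here as a sketch rather than the paper's full formulation (which also involves tendon dynamics and moment-arm derivative terms), sums musculotendon stiffnesses weighted by squared moment arms:

```python
def joint_rotational_stiffness(moment_arms, mt_stiffnesses):
    """Approximate joint stiffness [N·m/rad] as the sum of musculotendon
    stiffness contributions weighted by squared moment arms.

    Neglects the moment-arm derivative term; all numbers are made up.
    """
    return sum(r * r * k for r, k in zip(moment_arms, mt_stiffnesses))

# Hypothetical ankle example: two plantarflexors and one dorsiflexor.
arms = [0.05, 0.045, 0.04]   # moment arms [m]
ks   = [80e3, 60e3, 30e3]    # musculotendon stiffnesses [N/m]
print(joint_rotational_stiffness(arms, ks))  # N·m/rad
```

    Because each contribution scales with the square of the moment arm, co-contraction of antagonists raises joint stiffness without changing the net joint moment, which is the co-contraction strategy the model makes observable.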

  16. Modeling and simulating the neuromuscular mechanisms regulating ankle and knee joint stiffness during human locomotion

    PubMed Central

    Maculan, Marco; Pizzolato, Claudio; Reggiani, Monica; Farina, Dario

    2015-01-01

    This work presents an electrophysiologically and dynamically consistent musculoskeletal model to predict stiffness in the human ankle and knee joints as derived from the joints' constituent biological tissues (i.e., the spanning musculotendon units). The modeling method we propose uses electromyography (EMG) recordings from 13 muscle groups to drive forward dynamic simulations of the human leg in five healthy subjects during overground walking and running. The EMG-driven musculoskeletal model estimates musculotendon and resulting joint stiffness that is consistent with experimental EMG data as well as with the experimental joint moments. This provides a framework that allows observing, for the first time, 1) the elastic interplay between the knee and ankle joints, 2) the individual muscle contribution to joint stiffness, and 3) the underlying co-contraction strategies. It provides a theoretical description of how stiffness modulates as a function of muscle activation, fiber contraction, and interacting tendon dynamics. Furthermore, it describes how this differs from currently available stiffness definitions, including quasi-stiffness and short-range stiffness. This work offers a theoretical and computational basis for describing and investigating the neuromuscular mechanisms underlying human locomotion. PMID:26245321

  17. [Self-assembly tissue engineering fibrocartilage model of goat temporomandibular joint disc].

    PubMed

    Kang, Hong; Li, Zhen-Qiang; Bi, Yan-Da

    2011-06-01

    To construct a self-assembly fibrocartilage model of the goat temporomandibular joint disc, observe the biological characteristics of the self-assembled fibrocartilage constructs, and thereby provide a basis for tissue engineering of the temporomandibular joint disc and other fibrocartilage. Cells from temporomandibular joint discs of goats were harvested and cultured. 5.5 × 10^6 cells were seeded in each agarose well (diameter 5 mm × depth 10 mm), with daily medium replacement, and cultured for 2 weeks. One day after seeding, goat temporomandibular joint disc cells in the agarose wells gathered and began to self-assemble into a disc-shaped base, then gradually turned into a round shape. After 2 weeks of culture, hematoxylin-eosin staining showed that cells were round and surrounded by matrix. Positive Safranin-O/fast green staining for glycosaminoglycans was observed throughout the entire constructs, and picro-sirius red staining revealed abundant type I collagen. Immunohistochemical staining demonstrated brown-yellow particles in the cytoplasm and around the extracellular matrix, showing that the self-assembled construct can produce type I collagen like native temporomandibular joint disc tissue. Production of extracellular matrix in the self-assembled construct, as in native temporomandibular joint disc tissue, indicates that using agarose wells to engineer a temporomandibular joint disc is possible and practicable.

  18. Periprosthetic Joint Infections: Clinical and Bench Research

    PubMed Central

    Legout, Laurence; Senneville, Eric

    2013-01-01

    Prosthetic joint infection is a devastating complication with high morbidity and substantial cost. The incidence is low but probably underestimated. Despite significant basic and clinical research in this field, many questions concerning the definition of prosthetic infection, as well as the diagnosis and management of these infections, remain unanswered. We review the current literature on new diagnostic methods and on the management and prevention of prosthetic joint infections. PMID:24288493

  19. A model-based approach to stabilizing crutch supported paraplegic standing by artificial hip joint stiffness.

    PubMed

    van der Spek, Jaap H; Veltink, Peter H; Hermens, Hermie J; Koopman, Bart F J M; Boom, Herman B K

    2003-12-01

    The prerequisites for stable crutch-supported standing were analyzed in this paper. For this purpose, a biomechanical model of crutch-supported paraplegic stance was developed, assuming the patient was standing with extended knees. When crutches are used during stance, they put a position constraint on the shoulder, thus reducing the number of degrees of freedom. Additional hip-joint stiffness was applied to stabilize the hip joint and, therefore, to stabilize stance. The required hip-joint stiffness for changing crutch placement and hip-joint offset angle was studied under static and dynamic conditions. Modeling results indicate that, by using additional hip-joint stiffness, stable crutch-supported paraplegic standing can be achieved under both static and dynamic conditions. The static equilibrium postures and the stability under perturbations were calculated to be dependent on crutch placement and the stiffness applied. Postures in which the hip joint was in extension (C postures) appeared to be the most stable. Applying at least 60 N·m/rad of hip-joint stiffness gave stable equilibrium postures in all cases. By choosing appropriate hip-joint offset angles, the static equilibrium postures changed to more erect postures, without causing instability or excessive arm forces.

  20. Macro-level vulnerable road users crash analysis: A Bayesian joint modeling approach of frequency and proportion.

    PubMed

    Cai, Qing; Abdel-Aty, Mohamed; Lee, Jaeyoung

    2017-10-01

    This study aims at contributing to the literature on pedestrian and bicyclist safety by building on the conventional count regression models to explore exogenous factors affecting pedestrian and bicyclist crashes at the macroscopic level. In the traditional count models, effects of exogenous factors on non-motorist crashes were investigated directly. However, vulnerable road users' crashes are collisions between vehicles and non-motorists; thus, exogenous factors can affect non-motorist crashes through both the non-motorists and the vehicle drivers. To accommodate the potentially different impacts of exogenous factors, we express the non-motorist crash count as the product of the total crash count and the proportion of non-motorist crashes, and formulate a joint model of the negative binomial (NB) model and the logit model to deal with the two parts, respectively. The formulated joint model is estimated using non-motorist crash data based on the Traffic Analysis Districts (TADs) in Florida. Meanwhile, the traditional NB model is also estimated and compared with the joint model. The result indicates that the joint model provides better data fit and can identify more significant variables. Subsequently, a novel joint screening method is suggested based on the proposed model to identify hot zones for non-motorist crashes. The hot zones of non-motorist crashes are identified and divided into three types: hot zones with a more dangerous driving environment only, hot zones with more hazardous walking and cycling conditions only, and hot zones with both. It is expected that the joint model and screening method can help decision makers, transportation officials, and community planners make more efficient treatments to proactively improve pedestrian and bicyclist safety. Published by Elsevier Ltd.
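    The decomposition at the heart of the joint model can be sketched as a log-linked NB mean for total crashes multiplied by a logit-linked proportion of non-motorist crashes. The covariates and coefficient values below are hypothetical, chosen only to show the two-part structure:

```python
import math

def expected_nonmotorist_crashes(x, beta_total, beta_prop):
    """Joint-model mean: total crash count (NB mean, log link) times the
    proportion of non-motorist crashes (logit link). Coefficients hypothetical."""
    mu_total = math.exp(sum(b * v for b, v in zip(beta_total, x)))
    eta = sum(b * v for b, v in zip(beta_prop, x))
    p_nonmotorist = 1.0 / (1.0 + math.exp(-eta))
    return mu_total * p_nonmotorist, mu_total, p_nonmotorist

# x = [intercept, log(VMT), sidewalk share] for one district (made-up values).
nm, total, prop = expected_nonmotorist_crashes(
    x=[1.0, 9.2, 0.35],
    beta_total=[-2.0, 0.55, 0.4],
    beta_prop=[-3.0, 0.1, 2.5])
print(round(total, 1), round(prop, 3), round(nm, 2))
```

    Splitting the mean this way lets a covariate raise total crashes while lowering the non-motorist share (or vice versa), which a single direct count model cannot represent.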

  1. Quasi-probabilities in conditioned quantum measurement and a geometric/statistical interpretation of Aharonov's weak value

    NASA Astrophysics Data System (ADS)

    Lee, Jaeha; Tsutsui, Izumi

    2017-05-01

    We show that the joint behavior of an arbitrary pair of (generally noncommuting) quantum observables can be described by quasi-probabilities, which are an extended version of the standard probabilities used for describing the outcome of measurement for a single observable. The physical situations that require these quasi-probabilities arise when one considers quantum measurement of an observable conditioned by some other variable, with the notable example being the weak measurement employed to obtain Aharonov's weak value. Specifically, we present a general prescription for the construction of quasi-joint probability (QJP) distributions associated with a given combination of observables. These QJP distributions are introduced in two complementary approaches: one from a bottom-up, strictly operational construction realized by examining the mathematical framework of the conditioned measurement scheme, and the other from a top-down viewpoint realized by applying the results of the spectral theorem for normal operators and their Fourier transforms. It is then revealed that, for a pair of simultaneously measurable observables, the QJP distribution reduces to the unique standard joint probability distribution of the pair, whereas for a noncommuting pair there exists an inherent indefiniteness in the choice of such QJP distributions, admitting a multitude of candidates that may equally be used for describing the joint behavior of the pair. In the course of our argument, we find that the QJP distributions furnish the space of operators in the underlying Hilbert space with their characteristic geometric structures such that the orthogonal projections and inner products of observables can be given statistical interpretations as, respectively, “conditionings” and “correlations”. The weak value Aw for an observable A is then given a geometric/statistical interpretation as either the orthogonal projection of A onto the subspace generated by another observable B, or
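    One standard member of the family of QJP distributions described above is the Kirkwood-Dirac quasi-probability. As a sketch (for a pure state |ψ⟩ and eigenstates |a⟩, |b⟩ of observables A and B; this is a well-known example, not necessarily the construction used in the paper), the weak value arises as its conditional first moment:

```latex
q_\psi(a,b) = \langle b|a\rangle\langle a|\psi\rangle\langle\psi|b\rangle,
\qquad
A_w = \frac{\sum_a a\, q_\psi(a,b)}{\sum_a q_\psi(a,b)}
    = \frac{\langle b|A|\psi\rangle}{\langle b|\psi\rangle}.
```

    Since q_ψ(a,b) is generally complex for noncommuting A and B, the weak value can lie outside the eigenvalue range, reflecting the inherent indefiniteness in the choice of QJP distribution.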

  2. An extended car-following model considering random safety distance with different probabilities

    NASA Astrophysics Data System (ADS)

    Wang, Jufeng; Sun, Fengxin; Cheng, Rongjun; Ge, Hongxia; Wei, Qi

    2018-02-01

    Because of differences in vehicle type or driving skill, driving strategies are not exactly the same. The driving speeds of different vehicles may differ for the same headway. Since the optimal velocity function is determined by the safety distance in addition to the maximum velocity and headway, an extended car-following model accounting for random safety distances with different probabilities is proposed in this paper. The linear stability condition for this extended traffic model is obtained by using linear stability theory. Numerical simulations are carried out to explore the complex phenomena resulting from multiple safety distances in the optimal velocity function. Cases of multiple types of safety distances selected with different probabilities are presented. Numerical results show that traffic flow with multiple safety distances selected with different probabilities is more unstable than that with a single type of safety distance, and results in more stop-and-go phenomena.
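    The extended model can be sketched with the standard tanh-form optimal velocity (OV) function, where each vehicle draws its safety distance from a discrete distribution. The particular safety distances, probabilities, and ring-road parameters below are illustrative assumptions, not the paper's values:

```python
import numpy as np

rng = np.random.default_rng(1)

def optimal_velocity(headway, s_safe, v_max=2.0):
    """Standard tanh-form OV function with a safety-distance parameter."""
    return (v_max / 2.0) * (np.tanh(headway - s_safe) + np.tanh(s_safe))

# N cars on a ring road; each car draws one of two safety distances
# with different probabilities, as in the extended model (values made up).
N, L, dt, a = 50, 200.0, 0.1, 1.0
s_safe = rng.choice([3.0, 5.0], size=N, p=[0.7, 0.3])
x = np.arange(N) * (L / N) + rng.normal(0.0, 0.2, N)  # near-uniform start
v = np.full(N, 0.9)

for _ in range(2000):
    headway = (np.roll(x, -1) - x) % L              # distance to car ahead
    v += dt * a * (optimal_velocity(headway, s_safe) - v)  # OV relaxation
    v = np.clip(v, 0.0, None)
    x = (x + dt * v) % L

print(f"mean speed {v.mean():.2f}, speed std {v.std():.2f}")
```

    Mixing two safety distances gives the fleet heterogeneous target speeds at the same headway; a nonzero steady-state speed spread is the simplest signature of the destabilization the paper reports.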

  3. Biomechanical validation of finite element models for two silicone metacarpophalangeal joint implants.

    PubMed

    Hussein, A I; Stranart, J C; Meguid, S A; Bogoch, E R

    2011-02-01

    Silicone implants are used for prosthetic arthroplasty of metacarpophalangeal (MCP) joints severely damaged by rheumatoid arthritis. Different silicone elastomer MCP implant designs have been developed, including the Swanson and the NeuFlex implants. The goal of this study was to compare the in vitro mechanical behavior of Swanson and NeuFlex MCP joint implants. Three-dimensional (3D) finite element (FE) models of the silicone implants were developed using the commercial software ANSYS and subjected to angular displacement from 0 deg to 90 deg. FE models were validated using mechanical tests of implants incrementally bent from 0 deg to 90 deg in a joint simulator. Swanson size 2 and 4 implants were compared with NeuFlex size 10 and 30 implants, respectively. Good agreement was observed throughout the range of motion for the flexion bending moment derived from 3D FE models and mechanical tests. From 30 deg to 90 deg, the Swanson 2 demonstrated a greater resistance to deformation than the NeuFlex 10 and required a greater bending moment for joint flexion. For larger implant sizes, the NeuFlex 30 had a steeper moment-displacement curve, but required a lower moment than the Swanson 4, due to implant preflexion. On average, the stress generated at the implant hinge from 30 deg to 90 deg was lower in the NeuFlex than in the Swanson. On average, starting from the neutral position of 30 deg for the preflexed NeuFlex implant, higher moments were required to extend the NeuFlex implants to 0 deg compared with the Swanson implants, which returned spontaneously to resting position. Implant toggling within the medullary canals was less in the NeuFlex than in the Swanson. The differential performance of these implants may be useful in implant selection based on the preoperative condition(s) of the joint and specific patient functional needs.

  4. Morphogen-based simulation model of ray growth and joint patterning during fin development and regeneration.

    PubMed

    Rolland-Lagan, Anne-Gaëlle; Paquette, Mathieu; Tweedle, Valerie; Akimenko, Marie-Andrée

    2012-03-01

    The fact that some organisms are able to regenerate organs of the correct shape and size following amputation is particularly fascinating, but the mechanism by which this occurs remains poorly understood. The zebrafish (Danio rerio) caudal fin has emerged as a model system for the study of bone development and regeneration. The fin comprises 16 to 18 bony rays, each containing multiple joints along its proximodistal axis that give rise to segments. Experimental observations on fin ray growth, regeneration and joint formation have been described, but no unified theory has yet been put forward to explain how growth and joint patterns are controlled. We present a model for the control of fin ray growth during development and regeneration, integrated with a model for joint pattern formation, which is in agreement with published, as well as new, experimental data. We propose that fin ray growth and joint patterning are coordinated through the interaction of three morphogens. When the model is extended to incorporate multiple rays across the fin, it also accounts for how the caudal fin acquires its shape during development, and regains its correct size and shape following amputation.

  5. Protein single-model quality assessment by feature-based probability density functions.

    PubMed

    Cao, Renzhi; Cheng, Jianlin

    2016-04-04

    Protein quality assessment (QA) has played an important role in protein structure prediction. We developed a novel single-model quality assessment method, Qprob. Qprob calculates the absolute error for each protein feature value against the true quality scores (i.e. GDT-TS scores) of protein structural models, and uses them to estimate its probability density distribution for quality assessment. Qprob has been blindly tested on the 11th Critical Assessment of Techniques for Protein Structure Prediction (CASP11) as the MULTICOM-NOVEL server. The official CASP result shows that Qprob ranks as one of the top single-model QA methods. In addition, Qprob contributes to our protein tertiary structure predictor MULTICOM, which is officially ranked 3rd out of 143 predictors. The good performance shows that Qprob is good at assessing the quality of models of hard targets. These results demonstrate that this new probability-density-distribution-based method is effective for protein single-model quality assessment and is useful for protein structure prediction. The web server of Qprob is available at: http://calla.rnet.missouri.edu/qprob/. The software is freely available from the Qprob web server.

  6. Bounded influence function based inference in joint modelling of ordinal partial linear model and accelerated failure time model.

    PubMed

    Chakraborty, Arindom

    2016-12-01

    A common objective in longitudinal studies is to characterize the relationship between a longitudinal response process and time-to-event data. The ordinal nature of the response and possibly missing information on covariates add complications to the joint model. In such circumstances, some influential observations often present in the data may upset the analysis. In this paper, a joint model based on an ordinal partial mixed model and an accelerated failure time model is used, to account for the repeated ordered response and time-to-event data, respectively. Here, we propose an influence function-based robust estimation method. A Monte Carlo expectation-maximization algorithm is used for parameter estimation. A detailed simulation study has been done to evaluate the performance of the proposed method. As an application, data on muscular dystrophy among children are used. Robust estimates are then compared with classical maximum likelihood estimates. © The Author(s) 2014.

  7. Dual-joint modeling for estimation of total knee replacement contact forces during locomotion.

    PubMed

    Hast, Michael W; Piazza, Stephen J

    2013-02-01

    Model-based estimation of in vivo contact forces arising between components of a total knee replacement is challenging because such forces depend upon accurate modeling of muscles, tendons, ligaments, contact, and multibody dynamics. Here we describe an approach to solving this problem with results that are tested by comparison to knee loads measured in vivo for a single subject and made available through the Grand Challenge Competition to Predict in vivo Tibiofemoral Loads. The approach makes use of a "dual-joint" paradigm in which the knee joint is alternately represented by (1) a ball-joint knee for inverse dynamic computation of required muscle controls and (2) a 12 degree-of-freedom (DOF) knee with elastic foundation contact at the tibiofemoral and patellofemoral articulations for forward dynamic integration. Measured external forces and kinematics were applied, and a feedback controller with static optimization attempted to track measured knee flexion angles and electromyographic (EMG) activity. The resulting simulations showed excellent tracking of knee flexion (average RMS error of 2.53 deg) and EMG (muscle activations within ±10% envelopes of normalized measured EMG signals). Simulated tibiofemoral contact forces agreed qualitatively with measured contact forces, but their RMS errors were approximately 25% of the peak measured values. These results demonstrate the potential of a dual-joint modeling approach to predict joint contact forces from kinesiological data measured in the motion laboratory. It is anticipated that errors in the estimation of contact force will be reduced as more accurate subject-specific models of muscles and other soft tissues are developed.

  8. Bootstrap imputation with a disease probability model minimized bias from misclassification due to administrative database codes.

    PubMed

    van Walraven, Carl

    2017-04-01

    Diagnostic codes used in administrative databases cause bias due to misclassification of patient disease status, and it is unclear which methods minimize this bias. Serum creatinine measures were used to determine severe renal failure status in 50,074 hospitalized patients. The true prevalence of severe renal failure and its association with covariates were measured and compared to results in which renal failure status was determined using surrogate measures, including: (1) diagnostic codes; (2) categorized probability estimates of renal failure from a previously validated model; or (3) bootstrap imputation of disease status using model-derived probability estimates. Bias in estimates of severe renal failure prevalence and its association with covariates was minimal when bootstrap methods were used to impute renal failure status from model-based probability estimates. In contrast, biases were extensive when renal failure status was determined using codes or by categorizing model-based condition probabilities. Bias due to misclassification from inaccurate diagnostic codes can be minimized by using bootstrap methods to impute condition status from multivariable model-derived probability estimates. Copyright © 2017 Elsevier Inc. All rights reserved.
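
    The core of the bootstrap-imputation idea can be sketched in a few lines. This is a hedged illustration, not the study's actual procedure, and the probabilities below are invented: rather than thresholding a model-based disease probability into a yes/no code, disease status is repeatedly imputed as a Bernoulli draw from that probability, and the quantity of interest is averaged over draws.

```python
import random

def bootstrap_prevalence(probabilities, n_boot=2000, seed=0):
    """Average prevalence over bootstrap imputations of disease status."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_boot):
        # impute each patient's status from their model-derived probability
        statuses = [1 if rng.random() < p else 0 for p in probabilities]
        total += sum(statuses) / len(statuses)
    return total / n_boot

probs = [0.05, 0.90, 0.30, 0.10, 0.75]  # hypothetical model-derived probabilities
prev = bootstrap_prevalence(probs)      # close to mean(probs) = 0.42
```

    Unlike categorization (e.g., calling everyone above 0.5 "diseased"), the imputation preserves the uncertainty in each patient's status, which is why it avoids the misclassification bias the abstract describes.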

  9. Probability models for growth and aflatoxin B1 production as affected by intraspecies variability in Aspergillus flavus.

    PubMed

    Aldars-García, Laila; Berman, María; Ortiz, Jordi; Ramos, Antonio J; Marín, Sonia

    2018-06-01

    The probability of growth and aflatoxin B1 (AFB1) production of 20 isolates of Aspergillus flavus was studied using a full factorial design with eight water activity levels (0.84-0.98 aw) and six temperature levels (15-40 °C). Binary data obtained from the growth studies were modelled using linear logistic regression as a function of temperature, water activity, and time for each isolate. In parallel, AFB1 was extracted at different times from newly formed colonies (up to 20 mm in diameter). Although a total of 950 AFB1 values over time were recorded across all conditions studied, they were not considered sufficient to build probability models over time; therefore, only models at 30 days were built. The confidence intervals of the regression coefficients showed some differences among the 20 growth models. Further, to assess the growth/no-growth and AFB1/no-AFB1 production boundaries, the 0.05 and 0.5 probability contours were plotted at 30 days for all isolates. The boundaries showed that, in general, the conditions permitting growth were wider than those permitting AFB1 production. The probability of growth and AFB1 production seemed to be less variable among isolates than AFB1 accumulation. Apart from the AFB1 production probability models, using growth probability models to predict the probability of AFB1 production could be a suitable, although conservative, alternative. Predictive mycology should include a number of isolates when generating data to build predictive models, taking into account the genetic diversity of the species, so as to make predictions as similar as possible to real fungal food contamination. Copyright © 2017 Elsevier Ltd. All rights reserved.
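
    The growth/no-growth boundary described here follows directly from the logistic form. The coefficients in this sketch are invented for illustration, not the fitted values from the study: a linear logistic model expresses P(growth) in terms of temperature T (°C) and water activity aw, and the p = 0.5 boundary is where the linear predictor crosses zero.

```python
import math

B0, B_T, B_AW = -180.0, 0.8, 170.0  # hypothetical regression coefficients

def p_growth(T, aw):
    """Logistic probability of growth at temperature T (°C) and water activity aw."""
    eta = B0 + B_T * T + B_AW * aw  # linear predictor
    return 1.0 / (1.0 + math.exp(-eta))

# At T = 30 °C, the p = 0.5 growth/no-growth boundary solves
# B0 + B_T*30 + B_AW*aw = 0 for aw:
aw_boundary = -(B0 + B_T * 30.0) / B_AW  # ≈ 0.918
```

    Plotting such boundaries at p = 0.05 versus p = 0.5 (as the study does for all 20 isolates) amounts to solving the same linear equation for different values of the log-odds.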

  10. Mortality Probability Model III and Simplified Acute Physiology Score II

    PubMed Central

    Vasilevskis, Eduard E.; Kuzniewicz, Michael W.; Cason, Brian A.; Lane, Rondall K.; Dean, Mitzi L.; Clay, Ted; Rennie, Deborah J.; Vittinghoff, Eric; Dudley, R. Adams

    2009-01-01

    Background: To develop and compare ICU length-of-stay (LOS) risk-adjustment models using three commonly used mortality or LOS prediction models. Methods: Between 2001 and 2004, we performed a retrospective, observational study of 11,295 ICU patients from 35 hospitals in the California Intensive Care Outcomes Project. We compared the accuracy of the following three LOS models: a recalibrated acute physiology and chronic health evaluation (APACHE) IV-LOS model; and models developed using risk factors in the mortality probability model III at zero hours (MPM0) and the simplified acute physiology score (SAPS) II mortality prediction model. We evaluated models by calculating the following: (1) grouped coefficients of determination; (2) differences between observed and predicted LOS across subgroups; and (3) intraclass correlations of observed/expected LOS ratios between models. Results: The grouped coefficients of determination were APACHE IV with coefficients recalibrated to the LOS values of the study cohort (APACHE IVrecal) [R2 = 0.422], mortality probability model III at zero hours (MPM0 III) [R2 = 0.279], and simplified acute physiology score (SAPS II) [R2 = 0.008]. For each decile of predicted ICU LOS, the mean predicted LOS vs the observed LOS was significantly different (p ≤ 0.05) for three, two, and six deciles using APACHE IVrecal, MPM0 III, and SAPS II, respectively. Plots of the predicted vs the observed LOS ratios of the hospitals revealed a threefold variation in LOS among hospitals with high model correlations. Conclusions: APACHE IV and MPM0 III were more accurate than SAPS II for the prediction of ICU LOS. APACHE IV is the most accurate and best calibrated model. Although it is less accurate, MPM0 III may be a reasonable option if the data collection burden or the treatment effect bias is a consideration. PMID:19363210

  11. Numerical modeling of friction welding of bi-metal joints for electrical applications

    NASA Astrophysics Data System (ADS)

    Velu, P. Shenbaga; Hynes, N. Rajesh Jesudoss

    2018-05-01

    In the manufacturing industries, and especially in electrical engineering applications, the use of non-ferrous materials plays a vital role. Today's engineering applications rely upon significant properties such as good corrosion resistance, good mechanical properties, good heat conductivity, and high electrical conductivity. The copper-aluminium bi-metal joint is one such combination that meets the requirements of electrical applications. In this work, numerical simulation of friction welding of AA 6061-T6 alloy to copper was carried out. A finite element model was developed using the simulation tool ABAQUS; with this model, the temperature distribution along the length of the dissimilar joint is predicted and the time-temperature profile generated. The developed FEM is helpful in predicting various output parameters during friction welding of this dissimilar joint combination.

  12. Ankle Joint Intrinsic Dynamics is More Complex than a Mass-Spring-Damper Model.

    PubMed

    Sobhani Tehrani, Ehsan; Jalaleddini, Kian; Kearney, Robert E

    2017-09-01

    This paper describes a new small-signal parametric model of ankle joint intrinsic mechanics in normal subjects. We found that intrinsic ankle mechanics is a third-order system and that the second-order mass-spring-damper model, referred to as IBK, used by many researchers in the literature cannot adequately represent ankle dynamics at all frequencies in a number of important tasks. This was demonstrated using experimental data from five healthy subjects with no voluntary muscle contraction at seven ankle positions covering the range of motion. We showed that the difference between the new third-order model and the conventional IBK model increased from dorsiflexed to plantarflexed positions. The new model was obtained using a multi-step identification procedure applied to experimental input/output data of the ankle joint. The procedure first identifies a non-parametric model of intrinsic joint stiffness in which ankle position is the input and torque is the output. Then, in several steps, the model is converted into a continuous-time transfer function of ankle compliance, the inverse of stiffness. Finally, we showed that the third-order model is structurally consistent with the agonist-antagonist musculoskeletal structure of the human ankle, which is not the case for the IBK model.
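
    For context, the conventional second-order IBK description that the paper argues is insufficient can be sketched as a stiffness transfer function G(s) = I*s² + B*s + K (inertia, viscosity, elasticity). The parameter values below are illustrative placeholders, not estimates from the study.

```python
import math

def ibk_stiffness_gain(freq_hz, I=0.01, B=0.5, K=50.0):
    """|G(jw)| of the second-order mass-spring-damper (IBK) stiffness model."""
    w = 2.0 * math.pi * freq_hz
    # G(jw) = (K - I*w^2) + j*B*w
    return math.hypot(K - I * w * w, B * w)

# Low frequencies are elasticity-dominated (gain ≈ K); high frequencies are
# inertia-dominated (gain grows as I*w^2). A third-order compliance model
# adds dynamics that this two-pole-free quadratic form cannot reproduce.
```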

  13. Multilayer Joint Gait-Pose Manifolds for Human Gait Motion Modeling.

    PubMed

    Ding, Meng; Fan, Guolian

    2015-11-01

    We present new multilayer joint gait-pose manifolds (multilayer JGPMs) for complex human gait motion modeling, where three latent variables are defined jointly in a low-dimensional manifold to represent a variety of body configurations. Specifically, the pose variable (along the pose manifold) denotes a specific stage in a walking cycle; the gait variable (along the gait manifold) represents different walking styles; and the linear scale variable characterizes the maximum stride in a walking cycle. We discuss two kinds of topological priors for coupling the pose and gait manifolds, i.e., cylindrical and toroidal, to examine their effectiveness and suitability for motion modeling. We resort to a topologically-constrained Gaussian process (GP) latent variable model to learn the multilayer JGPMs where two new techniques are introduced to facilitate model learning under limited training data. First is training data diversification that creates a set of simulated motion data with different strides. Second is the topology-aware local learning to speed up model learning by taking advantage of the local topological structure. The experimental results on the Carnegie Mellon University motion capture data demonstrate the advantages of our proposed multilayer models over several existing GP-based motion models in terms of the overall performance of human gait motion modeling.

  14. Canine stifle joint biomechanics associated with tibial plateau leveling osteotomy predicted by use of a computer model.

    PubMed

    Brown, Nathan P; Bertocci, Gina E; Marcellin-Little, Denis J

    2014-07-01

    To evaluate effects of tibial plateau leveling osteotomy (TPLO) on canine stifle joint biomechanics in a cranial cruciate ligament (CrCL)-deficient stifle joint by use of a 3-D computer model simulating the stance phase of gait and to compare biomechanics in TPLO-managed, CrCL-intact, and CrCL-deficient stifle joints. Computer simulations of the pelvic limb of a Golden Retriever. A previously developed computer model of the canine pelvic limb was used to simulate TPLO stabilization to achieve a tibial plateau angle (TPA) of 5° (baseline value) in a CrCL-deficient stifle joint. Sensitivity analysis was conducted for tibial fragment rotation of 13° to -3°. Ligament loads, relative tibial translation, and relative tibial rotation were determined and compared with values for CrCL-intact and CrCL-deficient stifle joints. TPLO with a 5° TPA converted cranial tibial translation to caudal tibial translation and increased loads placed on the remaining stifle joint ligaments, compared with results for a CrCL-intact stifle joint. Lateral collateral ligament load was similar, medial collateral ligament load increased, and caudal cruciate ligament load decreased after TPLO, compared with loads for a CrCL-deficient stifle joint. Relative tibial rotation after TPLO was similar to that of a CrCL-deficient stifle joint. Stifle joint biomechanics were affected by TPLO fragment rotation. In the model, stifle joint biomechanics were partially improved after TPLO, compared with CrCL-deficient stifle joint biomechanics, but TPLO did not fully restore CrCL-intact stifle joint biomechanics. Overrotation of the tibial fragment negatively influenced stifle joint biomechanics by increasing caudal tibial translation.

  15. Non-terminal animal model of post-traumatic osteoarthritis induced by acute joint injury.

    PubMed

    Boyce, M K; Trumble, T N; Carlson, C S; Groschen, D M; Merritt, K A; Brown, M P

    2013-05-01

    Develop a non-terminal animal model of acute joint injury that demonstrates clinical and morphological evidence of early post-traumatic osteoarthritis (PTOA). An osteochondral (OC) fragment was created arthroscopically in one metacarpophalangeal (MCP) joint of 11 horses and the contralateral joint was sham operated. Eleven additional horses served as unoperated controls. Every 2 weeks, force plate analysis, flexion response, joint circumference, and synovial effusion scores were recorded. At weeks 0 and 16, radiographs (all horses) and arthroscopic videos (OC injured and sham joints) were graded. At week 16, synovium and cartilage biopsies were taken arthroscopically from OC injured and sham joints for histologic evaluation and the OC fragment was removed. OC fragments were successfully created and horses were free of clinical lameness after fragment removal. Forelimb gait asymmetry was observed at week 2 (P = 0.0012), while joint circumference (P < 0.0001) and effusion scores (P < 0.0001) were increased in injured limbs compared to baseline from weeks 2 to 16. Positive flexion response of injured limbs was noted at multiple time points. Capsular enthesophytes were seen radiographically in injured limbs. Articular cartilage damage was demonstrated arthroscopically as mild wear-lines and histologically as superficial zone chondrocyte death accompanied by mild proliferation. Synovial hyperemia and fibrosis were present at the site of OC injury. Acute OC injury to the MCP joint resulted in clinical, imaging, and histologic changes in cartilage and synovium characteristic of early PTOA. This model will be useful for defining biomarkers of early osteoarthritis and for monitoring response to therapy and surgery. Copyright © 2013 Osteoarthritis Research Society International. Published by Elsevier Ltd. All rights reserved.

  16. A microstructurally based model of solder joints under conditions of thermomechanical fatigue

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frear, D.R.; Burchett, S.N.; Rashid, M.M.

    The thermomechanical fatigue failure of solder joints is increasingly becoming an important reliability issue. In this paper we present two computational methodologies, developed to predict the behavior of near-eutectic Sn-Pb solder joints under fatigue conditions, that use metallurgical tests as fundamental input for constitutive relations. The two-phase model mathematically predicts the heterogeneous coarsening behavior of near-eutectic Sn-Pb solder. The finite element simulations from this model agree well with experimental thermomechanical fatigue tests. The simulations show that the presence of an initial heterogeneity in the solder microstructure could significantly degrade the fatigue lifetime. The single-phase model is a computational technique developed to predict solder joint behavior using materials data for constitutive-relation constants that can be determined through straightforward metallurgical experiments. A shear/torsion test sample was developed to impose strain in two different orientations. Materials constants were derived from these tests, and the results showed an adequate fit to experimental results. The single-phase model could be very useful for conditions where microstructural evolution is not a dominant factor in fatigue.

  17. A robust design mark-resight abundance estimator allowing heterogeneity in resighting probabilities

    USGS Publications Warehouse

    McClintock, B.T.; White, Gary C.; Burnham, K.P.

    2006-01-01

    This article introduces the beta-binomial estimator (BBE), a closed-population abundance mark-resight model combining the favorable qualities of maximum likelihood theory and the allowance of individual heterogeneity in sighting probability (p). The model may be parameterized for a robust sampling design consisting of multiple primary sampling occasions where closure need not be met between primary occasions. We applied the model to brown bear data from three study areas in Alaska and compared its performance to the joint hypergeometric estimator (JHE) and Bowden's estimator (BOWE). BBE estimates suggest heterogeneity levels were non-negligible and discourage the use of JHE for these data. Compared to JHE and BOWE, confidence intervals were considerably shorter for the AICc model-averaged BBE. To evaluate the properties of BBE relative to JHE and BOWE when sample sizes are small, simulations were performed with data from three primary occasions generated under both individual heterogeneity and temporal variation in p. All models remained consistent regardless of levels of variation in p. In terms of precision, the AICc model-averaged BBE showed advantages over JHE and BOWE when heterogeneity was present and mean sighting probabilities were similar between primary occasions. Based on the conditions examined, BBE is a reliable alternative to JHE or BOWE and provides a framework for further advances in mark-resight abundance estimation. © 2006 American Statistical Association and the International Biometric Society.
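
    The distributional core of the BBE can be sketched without the full mark-resight likelihood. This is a hedged illustration of the named beta-binomial idea, with made-up parameters: sighting probabilities vary across individuals as Beta(a, b), so resighting counts over k occasions follow a beta-binomial rather than a binomial distribution.

```python
from math import exp, lgamma

def log_beta(a, b):
    """log of the Beta function B(a, b)."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def betabinom_pmf(y, k, a, b):
    """P(Y = y resightings in k occasions) under Beta(a, b) heterogeneity in p."""
    log_comb = lgamma(k + 1) - lgamma(y + 1) - lgamma(k - y + 1)
    return exp(log_comb + log_beta(a + y, b + k - y) - log_beta(a, b))

# With a = b = 0.5 (strong heterogeneity, mean sighting probability 0.5),
# probability mass piles up at "never seen" and "always seen" relative to
# a plain binomial(k, 0.5), which is exactly what a binomial-based
# estimator like JHE fails to accommodate:
pmf = [betabinom_pmf(y, 5, 0.5, 0.5) for y in range(6)]
```

    For comparison, a binomial(5, 0.5) puts only about 0.031 at y = 0, while this U-shaped beta-binomial puts far more, illustrating why ignoring heterogeneity biases abundance estimates.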

  18. Influence of weak hip abductor muscles on joint contact forces during normal walking: probabilistic modeling analysis.

    PubMed

    Valente, Giordano; Taddei, Fulvia; Jonkers, Ilse

    2013-09-03

    The weakness of hip abductor muscles is related to lower-limb joint osteoarthritis, and joint overloading may increase the risk for disease progression. The relationship between muscle strength, structural joint deterioration and joint loading makes the latter an important parameter in the study of onset and follow-up of the disease. Since the relationship between hip abductor weakness and joint loading still remains an open question, the purpose of this study was to adopt a probabilistic modeling approach to give insights into how weakness of the hip abductor muscles, to the extent that normal gait can remain unaltered, affects ipsilateral joint contact forces. A generic musculoskeletal model was scaled to each healthy subject included in the study, and the maximum force-generating capacity of each hip abductor muscle in the model was perturbed to evaluate how all physiologically possible configurations of hip abductor weakness affected the joint contact forces during walking. In general, the muscular system was able to compensate for abductor weakness. The reduced force-generating capacity of the abductor muscles affected joint contact forces to a mild extent, with 50th percentile mean differences up to 0.5 BW (maximum 1.7 BW). There were greater increases in the peak knee joint loads than in loads at the hip or ankle. Gluteus medius, particularly the anterior compartment, was the abductor muscle with the most influence on hip and knee loads. Further studies should assess if these increases in joint loading may affect initiation and progression of osteoarthritis. Copyright © 2013 Elsevier Ltd. All rights reserved.

  19. Experimental Investigation of the Influence of Joint Geometric Configurations on the Mechanical Properties of Intermittent Jointed Rock Models Under Cyclic Uniaxial Compression

    NASA Astrophysics Data System (ADS)

    Liu, Yi; Dai, Feng; Fan, Pengxian; Xu, Nuwen; Dong, Lu

    2017-06-01

    Intermittent joints in rock masses are quite sensitive to cyclic loading conditions. Understanding the fatigue mechanical properties of jointed rocks is beneficial for rational design and stability analysis of rock engineering projects. This study experimentally investigated the influence of joint geometry (i.e., dip angle, persistency, density and spacing) on the fatigue mechanism of synthetic jointed rock models. Our results revealed that the stress-strain curve of jointed rock under cyclic loading is dominated by its curve under monotonic uniaxial loading; the terminal strain of the fatigue curve equals the post-peak strain corresponding to the maximum cyclic stress in the monotonic stress-strain curve. The four joint geometrical parameters studied significantly affect the fatigue properties of jointed rocks, including the irreversible strains, the fatigue deformation modulus, the energy evolution, the damage variable, and the crack coalescence patterns. The higher the values of the geometrical parameters, the lower the elastic energy stored in the jointed rock, the higher the fatigue damage accumulated in the first few cycles, and the lower the fatigue life. The elastic energy has a storage limit, at which fatigue failure occurs. Two basic micro-crack types, i.e., tensile wing cracks and shear cracks, are observed in the cyclic loading and unloading tests; their occurrence is controlled principally by joint dip angle and persistency. In general, shear cracks occur only in jointed rock with higher dip angle or higher persistency, and such jointed rock is characterized by lower fatigue strength, a larger damage variable, and lower fatigue life.

  20. Tissue material properties and computational modelling of the human tibiofemoral joint: a critical review

    PubMed Central

    Akhtar, Riaz; Comerford, Eithne J.; Bates, Karl T.

    2018-01-01

    Understanding how structural and functional alterations of individual tissues impact on whole-joint function is challenging, particularly in humans where direct invasive experimentation is difficult. Finite element (FE) computational models produce quantitative predictions of the mechanical and physiological behaviour of multiple tissues simultaneously, thereby providing a means to study changes that occur through healthy ageing and disease such as osteoarthritis (OA). As a result, significant research investment has been placed in developing such models of the human knee. Previous work has highlighted that model predictions are highly sensitive to the various inputs used to build them, particularly the mathematical definition of material properties of biological tissues. The goal of this systematic review is two-fold. First, we provide a comprehensive summation and evaluation of existing linear elastic material property data for human tibiofemoral joint tissues, tabulating numerical values as a reference resource for future studies. Second, we review efforts to model tibiofemoral joint mechanical behaviour through FE modelling with particular focus on how studies have sourced tissue material properties. The last decade has seen a renaissance in material testing fuelled by development of a variety of new engineering techniques that allow the mechanical behaviour of both soft and hard tissues to be characterised at a spectrum of scales from nano- to bulk tissue level. As a result, there now exists an extremely broad range of published values for human tibiofemoral joint tissues. However, our systematic review highlights gaps and ambiguities that mean quantitative understanding of how tissue material properties alter with age and OA is limited. It is therefore currently challenging to construct FE models of the knee that are truly representative of a specific age or disease-state. 
Consequently, recent tibiofemoral joint FE models have been highly generic in terms of

  2. Induction of osteoarthritis by injecting monosodium iodoacetate into the patellofemoral joint of an experimental rat model.

    PubMed

    Takahashi, Ikufumi; Matsuzaki, Taro; Kuroki, Hiroshi; Hoso, Masahiro

    2018-01-01

    This study aimed to investigate the histopathological changes in the patellofemoral joint using a rat model of osteoarthritis that was induced using monosodium iodoacetate, and to establish a novel model of patellofemoral osteoarthritis in a rat model using histopathological analysis. Sixty male rats were used. Osteoarthritis was induced through a single intra-articular injection of monosodium iodoacetate in both knee joints. Animals were equally divided into two experimental groups based on the monosodium iodoacetate dose: 0.2 mg and 1.0 mg. Histopathological changes in the articular cartilage of the patellofemoral joint and the infrapatellar fat pad were examined at 3 days, 1 week, 2 weeks, 4 weeks, 8 weeks, and 12 weeks after the monosodium iodoacetate injection. In the 1.0-mg group, the representative histopathological findings of osteoarthritis were observed in the articular cartilage of the patellofemoral joint over time. Additionally, the Osteoarthritis Research Society International scores of the patellofemoral joint increased over time. The synovitis scores of the infrapatellar fat pad in both groups were highest at 3 days, and then the values decreased over time. The fibrosis score of the infrapatellar fat pad in the 1.0-mg group increased with time, whereas the fibrosis score in the 0.2-mg group remained low. Representative histopathological findings of osteoarthritis were observed in the articular cartilage of the patellofemoral joint in a rat model of osteoarthritis induced using monosodium iodoacetate. With appropriate selection, this model may be regarded as an ideal patellofemoral osteoarthritis model.

  3. Factors affecting results of fluoroscopy-guided facet joint injection: Probable differences in the outcome of treatment between pure facet joint hypertrophy and concomitant diseases.

    PubMed

    Albayrak, Akif; Ozkul, Baris; Balioglu, Mehmet Bulent; Atici, Yunus; Gultekin, Muhammet Zeki; Albayrak, Merih Dilan

    2016-01-01

    Retrospective cohort study. Facet joints are considered a common source of chronic low-back pain. The aim was to determine whether pathologies accompanying facet joint arthritis have any effect on treatment failure. Facet joint injection was applied to 94 patients treated at our hospital between 2011 and 2012 (mean age 59.5 years; 80 women and 14 men). For the purpose of analysis, the patients were divided into two groups: patients who had only facet hypertrophy were placed in group A (47 patients, 41 women and 6 men, mean age 55.3 years), and patients who had any major pathology in addition to facet hypertrophy were placed in group B (47 patients, 39 women and 8 men, mean age 58.9 years). Injections were applied around the facet joint under surgical conditions with fluoroscopic guidance, using a mixture of methylprednisolone and lidocaine. In terms of Oswestry Disability Index (ODI) and visual analog scale (VAS) scores, no significant difference was found between preinjection and immediate postinjection values in either group, and the scores of group A patients were significantly lower (P < 0.005) than those of group B patients at the end of the third, sixth, and twelfth months. For low-back pain caused by facet hypertrophy, steroid injection around the facet joint is an effective treatment, but if an additional major pathology is present, it is not as effective.

  4. An empirical probability model of detecting species at low densities.

    PubMed

    Delaney, David G; Leung, Brian

    2010-06-01

    False negatives, not detecting things that are actually present, are an important but understudied problem. False negatives are the result of our inability to perfectly detect species, especially those at low density such as endangered species or newly arriving introduced species. They reduce our ability to interpret presence-absence survey data and make sound management decisions (e.g., rapid response). To reduce the probability of false negatives, we need to compare the efficacy and sensitivity of different sampling approaches and quantify an unbiased estimate of the probability of detection. We conducted field experiments in the intertidal zone of New England and New York to test the sensitivity of two sampling approaches (quadrat vs. total area search, TAS), given different target characteristics (mobile vs. sessile). Using logistic regression we built detection curves for each sampling approach that related the sampling intensity and the density of targets to the probability of detection. The TAS approach reduced the probability of false negatives and detected targets faster than the quadrat approach. Mobility of targets increased the time to detection but did not affect detection success. Finally, we interpreted two years of presence-absence data on the distribution of the Asian shore crab (Hemigrapsus sanguineus) in New England and New York, using our probability model for false negatives. The type of experimental approach in this paper can help to reduce false negatives and increase our ability to detect species at low densities by refining sampling approaches, which can guide conservation strategies and management decisions in various areas of ecology such as conservation biology and invasion ecology.
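
    The detection-curve idea in this record can be sketched as follows. The coefficients are illustrative, not the fitted values from the study: logistic regression relates sampling effort (minutes searched) and target density (targets per m²) to the probability of detection, and 1 − P(detect) is the false-negative rate of a single survey.

```python
import math

def p_detect(minutes, density, b0=-4.0, b_min=0.15, b_dens=2.0):
    """Logistic detection curve in sampling effort and target density."""
    eta = b0 + b_min * minutes + b_dens * density
    return 1.0 / (1.0 + math.exp(-eta))

def surveys_needed(minutes, density, target=0.95):
    """Independent surveys required so overall detection probability >= target."""
    miss = 1.0 - p_detect(minutes, density)  # single-survey false-negative rate
    return math.ceil(math.log(1.0 - target) / math.log(miss))
```

    One use of such a curve is exactly what the abstract describes for the Asian shore crab data: interpreting an absence record as "not detected at this effort and density" rather than "truly absent," and planning enough repeat surveys to drive the compounded false-negative rate below an acceptable threshold.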

  5. Questioning the Relevance of Model-Based Probability Statements on Extreme Weather and Future Climate

    NASA Astrophysics Data System (ADS)

    Smith, L. A.

    2007-12-01

    We question the relevance of climate-model-based Bayesian (or other) probability statements for decision support and impact assessment on spatial scales smaller than continental and temporal averages shorter than seasonal. Scientific assessment of higher-resolution space and time scale information is urgently needed, given the commercial availability of "products" at high spatiotemporal resolution, their provision by nationally funded agencies for use both in industry decision making and in governmental policy support, and their presentation to the public as matters of fact. Specifically, we seek to establish necessary conditions for probability forecasts (projections conditioned on a model structure and a forcing scenario) to be taken seriously as reflecting the probability of future real-world events. We illustrate how risk management can profitably employ imperfect models of complicated chaotic systems, following NASA's study of near-Earth PHOs (Potentially Hazardous Objects). Our climate models will never be perfect; nevertheless, the space and time scales on which they provide decision-support-relevant information are expected to improve with the models themselves. Our aim is to establish a set of baselines of internal consistency; these are merely necessary conditions (not sufficient conditions) that physics-based state-of-the-art models are expected to pass if their output is to be judged decision-support relevant. Probabilistic similarity is proposed as one goal which can be attained even when our models are not empirically adequate. In short, probabilistic similarity requires that, given inputs similar to today's empirical observations and observational uncertainties, we expect future models to produce similar forecast distributions. Expert opinion on the space and time scales on which we might reasonably expect probabilistic similarity may prove of much greater utility than expert elicitation of uncertainty in parameter values in a model that is not empirically adequate.

  6. Constitutive Modeling of a Glass Fiber-Reinforced PTFE Gasketed-Joint Under a Re-torque

    NASA Astrophysics Data System (ADS)

    Williams, James; Gordon, Ali P.

    Joints gasketed with viscoelastic seals often receive an application of a secondary torque, i.e., retorque, in order to ensure joint tightness and proper sealing. The motivation of this study is to characterize and analytically model the load and deflection re-torque response of a single 25% glass-fiber reinforced polytetrafluoroethylene (PTFE) gasket-bolted joint with serrated flange detail. The Burger-type viscoelastic modeling constants of the material are obtained by isolating the gasket from the bolt and performing a gasket creep test on an MTS electromechanical test frame. The re-load creep response is also investigated by re-loading the gasket after a period of initial creep to observe the response. The modeling constants obtained from the creep tests are used with a Burger-type viscoelastic model to predict the re-torque response of a single bolt-gasket test fixture in order to validate the ability of the model to simulate the re-torque response under various loading conditions and flange detail.
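    A minimal sketch of the Burger-type (four-element) creep response referred to above, under constant stress: an instantaneous elastic term, a viscous-flow term, and a delayed elastic (Kelvin-Voigt) term. The parameter values are hypothetical, not the constants fitted in the study.

```python
import math

def burgers_creep_strain(t, sigma, E1, eta1, E2, eta2):
    """Creep strain of a Burgers (four-element) viscoelastic model under
    constant stress sigma:
      eps(t) = sigma * (1/E1 + t/eta1 + (1 - exp(-E2*t/eta2))/E2)
    i.e., instantaneous elastic + viscous flow + delayed elastic terms."""
    return sigma * (1.0 / E1 + t / eta1 + (1.0 - math.exp(-E2 * t / eta2)) / E2)

# Hypothetical parameters (stress in MPa, moduli in MPa, viscosities in MPa*s).
eps_0 = burgers_creep_strain(0.0, sigma=10.0, E1=700.0, eta1=5e6, E2=1500.0, eta2=1e5)
eps_1h = burgers_creep_strain(3600.0, sigma=10.0, E1=700.0, eta1=5e6, E2=1500.0, eta2=1e5)
```

    Fitting the four constants to a measured creep curve, then re-running the model with the re-torque load history, is the validation strategy the abstract describes.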

  7. Knee joint transplantation combined with surgical angiogenesis in rabbits – a new experimental model

    PubMed Central

    Kremer, Thomas; Giusti, Guilherme; Friedrich, Patricia F.; Willems, Wouter; Bishop, Allen T.; Giessler, Goetz A.

    2012-01-01

    Summary Purpose We have previously described a means to maintain bone allotransplant viability, without long-term immune modulation, replacing allogenic bone vasculature with autogenous vessels. A rabbit model for whole knee joint transplantation was developed and tested using the same methodology, initially as an autotransplant. Materials/Methods Eight New Zealand White rabbit knee joints were elevated on a popliteal vessel pedicle to evaluate limb viability in a non-survival study. Ten additional joints were elevated and replaced orthotopically in a fashion identical to allotransplantation, obviating only microsurgical repairs and immunosuppression. A superficial inferior epigastric fascial (SIEF) flap and a saphenous arteriovenous (AV) bundle were introduced into the femur and tibia respectively, generating a neoangiogenic bone circulation. In allogenic transplantation, this step maintains viability after cessation of immunosuppression. Sixteen weeks later, x-rays, microangiography, histology, histomorphometry and biomechanical analysis were performed. Results Limb viability was preserved in the initial 8 animals. Both soft tissue and bone healing occurred in 10 orthotopic transplants. Surgical angiogenesis from the SIEF flap and AV bundle was always present. Bone and joint viability was maintained, with demonstrable new bone formation. Bone strength was less than the opposite side. Arthrosis and joint contractures were frequent. Conclusion We have developed a rabbit knee joint model and evaluation methods suitable for subsequent studies of whole joint allotransplantation. PMID:22113889

  8. A statistical model for investigating binding probabilities of DNA nucleotide sequences using microarrays.

    PubMed

    Lee, Mei-Ling Ting; Bulyk, Martha L; Whitmore, G A; Church, George M

    2002-12-01

    There is considerable scientific interest in knowing the probability that a site-specific transcription factor will bind to a given DNA sequence. Microarray methods provide an effective means for assessing the binding affinities of a large number of DNA sequences as demonstrated by Bulyk et al. (2001, Proceedings of the National Academy of Sciences, USA 98, 7158-7163) in their study of the DNA-binding specificities of Zif268 zinc fingers using microarray technology. In a follow-up investigation, Bulyk, Johnson, and Church (2002, Nucleic Acids Research 30, 1255-1261) studied the interdependence of nucleotides on the binding affinities of transcription proteins. Our article is motivated by this pair of studies. We present a general statistical methodology for analyzing microarray intensity measurements reflecting DNA-protein interactions. The log probability of a protein binding to a DNA sequence on an array is modeled using a linear ANOVA model. This model is convenient because it employs familiar statistical concepts and procedures and also because it is effective for investigating the probability structure of the binding mechanism.
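    The log-linear ANOVA formulation can be sketched on synthetic data: the log binding probability is modeled as an intercept plus additive position-by-nucleotide effects and fit by least squares on a one-hot design matrix. All names, dimensions, and effect sizes below are illustrative, not values from the study.

```python
import numpy as np

# Synthetic example: log P(bind) = mu + sum over positions of a
# position-specific nucleotide effect (an additive ANOVA model).
rng = np.random.default_rng(0)
positions, bases = 4, 4                      # 4 sequence positions, A/C/G/T
true_mu = -2.0
true_eff = rng.normal(0.0, 0.5, size=(positions, bases))

seqs = rng.integers(0, bases, size=(500, positions))     # random DNA sequences
logp = true_mu + true_eff[np.arange(positions), seqs].sum(axis=1)

# One-hot design matrix with an intercept column; lstsq returns the
# minimum-norm solution since the one-hot encoding is rank deficient.
X = np.zeros((len(seqs), 1 + positions * bases))
X[:, 0] = 1.0
for i, s in enumerate(seqs):
    for j, b in enumerate(s):
        X[i, 1 + j * bases + b] = 1.0

coef, *_ = np.linalg.lstsq(X, logp, rcond=None)
fitted = X @ coef
```

    Because the synthetic log-probabilities are exactly additive, the fitted values reproduce them; with real microarray intensities, residuals from this fit are what reveal nucleotide interdependence.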

  9. Solving probability reasoning based on DNA strand displacement and probability modules.

    PubMed

    Zhang, Qiang; Wang, Xiaobiao; Wang, Xiaojun; Zhou, Changjun

    2017-12-01

    In computational biology, DNA strand displacement technology is used to simulate the computation process and has shown strong computing ability. Most researchers use it to solve logic problems, but it is only rarely used in probabilistic reasoning. To process probabilistic reasoning, a conditional probability derivation model and a total probability model based on DNA strand displacement were established in this paper. The models were assessed through the game "read your mind." It has been shown to enable the application of probabilistic reasoning in genetic diagnosis. Copyright © 2017 Elsevier Ltd. All rights reserved.
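    The two probability modules named above implement standard identities; a minimal arithmetic sketch (in ordinary code rather than DNA strand displacement, with hypothetical numbers) is:

```python
# Hypothetical numbers: p_A is a prior partition of the sample space,
# p_B_given_A the corresponding conditional probabilities of an event B.
p_A = [0.5, 0.3, 0.2]
p_B_given_A = [0.9, 0.4, 0.1]

# Total probability module: P(B) = sum_i P(B|A_i) * P(A_i)
p_B = sum(pb * pa for pb, pa in zip(p_B_given_A, p_A))

# Conditional probability derivation (Bayes): P(A_0 | B)
p_A0_given_B = p_B_given_A[0] * p_A[0] / p_B
```

    In the strand-displacement setting, the multiplications and the normalizing division are realized by relative concentrations of competing strands rather than by floating-point arithmetic.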

  10. Modelling the probability of ionospheric irregularity occurrence over African low latitude region

    NASA Astrophysics Data System (ADS)

    Mungufeni, Patrick; Jurua, Edward; Bosco Habarulema, John; Anguma Katrini, Simon

    2015-06-01

    This study presents models of the geomagnetically quiet time probability of occurrence of ionospheric irregularities over the African low-latitude region. GNSS-derived ionospheric total electron content data from Mbarara, Uganda (0.60°S, 30.74°E geographic; 10.22°S magnetic) and Libreville, Gabon (0.35°N, 9.68°E geographic; 8.05°S magnetic) during the period 2001-2012 were used. First, we established the rate of change of total electron content index (ROTI) value associated with background ionospheric irregularity over the region. This was done by analysing GNSS carrier phases at the L-band frequencies L1 and L2 with the aim of identifying cycle-slip events associated with ionospheric irregularities. We identified a total of 699 cycle-slip events at the two stations; the median ROTI value at the epochs of these events was 0.54 TECU/min. The probability of occurrence of ionospheric irregularities associated with ROTI ≥ 0.5 TECU/min was then modelled by fitting cubic B-splines to the data. The model captured the diurnal, seasonal, and solar-flux dependence patterns of the probability of occurrence of ionospheric irregularities. The model developed over Mbarara was validated with data over Mt. Baker, Uganda (0.35°N, 29.90°E geographic; 9.25°S magnetic), Kigali, Rwanda (1.94°S, 30.09°E geographic; 11.62°S magnetic), and Kampala, Uganda (0.34°N, 32.60°E geographic; 9.29°S magnetic). For the periods validated at Mt. Baker (approximately 137.64 km north-west), Kigali (approximately 162.42 km south-west), and Kampala (approximately 237.61 km north-east), the percentages of errors (difference between the observed and modelled probability of occurrence of ionospheric irregularity) smaller than 0.05 were 97.3%, 89.4%, and 81.3%, respectively.

  11. Prototype integration of the joint munitions assessment and planning model with the OSD threat methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lynn, R.Y.S.; Bolmarcich, J.J.

    The purpose of this Memorandum is to propose a prototype procedure which the Office of Munitions might employ to exercise, in a supportive joint fashion, two of its High Level Conventional Munitions Models, namely, the OSD Threat Methodology and the Joint Munitions Assessment and Planning (JMAP) model. The joint application of JMAP and the OSD Threat Methodology provides a tool to optimize munitions stockpiles. The remainder of this Memorandum comprises five parts. The first is a description of the structure and use of the OSD Threat Methodology. The second is a description of JMAP and its use. The third discusses the concept of the joint application of JMAP and the OSD Threat Methodology. The fourth displays sample output of the joint application. The fifth is a summary and epilogue. Finally, three appendices contain details of the formulation, data, and computer code.

  12. Human Inferences about Sequences: A Minimal Transition Probability Model

    PubMed Central

    2016-01-01

    The brain constantly infers the causes of the inputs it receives and uses these inferences to generate statistical expectations about future observations. Experimental evidence for these expectations and their violations include explicit reports, sequential effects on reaction times, and mismatch or surprise signals recorded in electrophysiology and functional MRI. Here, we explore the hypothesis that the brain acts as a near-optimal inference device that constantly attempts to infer the time-varying matrix of transition probabilities between the stimuli it receives, even when those stimuli are in fact fully unpredictable. This parsimonious Bayesian model, with a single free parameter, accounts for a broad range of findings on surprise signals, sequential effects and the perception of randomness. Notably, it explains the pervasive asymmetry between repetitions and alternations encountered in those studies. Our analysis suggests that a neural machinery for inferring transition probabilities lies at the core of human sequence knowledge. PMID:28030543
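    A minimal sketch of this model class: estimate the first-order transition matrix with exponentially discounted counts (the decay being the single free parameter) and score each observation by its surprise, the negative log of its predicted probability. The decay value and the input sequence are illustrative assumptions, not the paper's fitted settings.

```python
import math

def transition_surprise(seq, decay=0.9):
    """Leaky estimation of first-order transition probabilities for a binary
    sequence. 'decay' exponentially discounts old observations, making the
    estimate time varying; returns -log2 predictive probability per item."""
    counts = [[1.0, 1.0], [1.0, 1.0]]        # Laplace prior on each row
    surprises = []
    for prev, cur in zip(seq, seq[1:]):
        row = counts[prev]
        p_cur = row[cur] / (row[0] + row[1])  # predictive probability
        surprises.append(-math.log2(p_cur))
        for r in counts:                      # forget old evidence...
            r[0] *= decay
            r[1] *= decay
        row[cur] += 1.0                       # ...then count the transition

    return surprises

# After a run of repetitions, another repetition is expected (low surprise)
# while an alternation is surprising.
s = transition_surprise([0, 0, 0, 0, 0, 0, 1])
```

    The repetition/alternation asymmetry the abstract highlights falls out directly: repeated stimuli inflate the corresponding transition count, so a repetition becomes progressively less surprising while an alternation becomes more so.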

  13. Modelling detection probabilities to evaluate management and control tools for an invasive species

    USGS Publications Warehouse

    Christy, M.T.; Yackel Adams, A.A.; Rodda, G.H.; Savidge, J.A.; Tyrrell, C.L.

    2010-01-01

    For most ecologists, detection probability (p) is a nuisance variable that must be modelled to estimate the state variable of interest (i.e. survival, abundance, or occupancy). However, in the realm of invasive species control, the rate of detection and removal is the rate-limiting step for management of this pervasive environmental problem. For strategic planning of an eradication (removal of every individual), one must identify the least likely individual to be removed, and determine the probability of removing it. To evaluate visual searching as a control tool for populations of the invasive brown treesnake Boiga irregularis, we designed a mark-recapture study to evaluate detection probability as a function of time, gender, size, body condition, recent detection history, residency status, searcher team and environmental covariates. We evaluated these factors using 654 captures resulting from visual detections of 117 snakes residing in a 5-ha semi-forested enclosure on Guam, fenced to prevent immigration and emigration of snakes but not their prey. Visual detection probability was low overall (0.07 per occasion) but reached 0.18 under optimal circumstances. Our results supported sex-specific differences in detectability that were a quadratic function of size, with both small and large females having lower detection probabilities than males of those sizes. There was strong evidence for individual periodic changes in detectability of a few days' duration, roughly doubling detection probability (comparing peak to non-elevated detections). Snakes in poor body condition had estimated mean detection probabilities greater than snakes with high body condition. Search teams with high average detection rates exhibited detection probabilities about twice those of search teams with low average detection rates. Surveys conducted with bright moonlight and strong wind gusts exhibited moderately decreased probabilities of detecting snakes. Synthesis and applications. By

  14. Exact Calculation of the Joint Allele Frequency Spectrum for Isolation with Migration Models.

    PubMed

    Kern, Andrew D; Hey, Jody

    2017-09-01

    Population genomic datasets collected over the past decade have spurred interest in developing methods that can utilize massive numbers of loci for inference of demographic and selective histories of populations. The allele frequency spectrum (AFS) provides a convenient statistic for such analysis, and, accordingly, much attention has been paid to predicting theoretical expectations of the AFS under a number of different models. However, to date, exact solutions for the joint AFS of two or more populations under models of migration and divergence have not been found. Here, we present a novel Markov chain representation of the coalescent on the state space of the joint AFS that allows for rapid, exact calculation of the joint AFS under isolation with migration (IM) models. In turn, we show how our Markov chain method, in the context of composite likelihood estimation, can be used for accurate inference of parameters of the IM model using SNP data. Lastly, we apply our method to recent whole genome datasets from African Drosophila melanogaster. Copyright © 2017 Kern and Hey.

  15. Uncertainty squared: Choosing among multiple input probability distributions and interpreting multiple output probability distributions in Monte Carlo climate risk models

    NASA Astrophysics Data System (ADS)

    Baer, P.; Mastrandrea, M.

    2006-12-01

    Simple probabilistic models which attempt to estimate likely transient temperature change from specified CO2 emissions scenarios must make assumptions about at least six uncertain aspects of the causal chain between emissions and temperature: current radiative forcing (including but not limited to aerosols), current land use emissions, carbon sinks, future non-CO2 forcing, ocean heat uptake, and climate sensitivity. Of these, multiple PDFs (probability density functions) have been published for the climate sensitivity, a couple for current forcing and ocean heat uptake, one for future non-CO2 forcing, and none for current land use emissions or carbon cycle uncertainty (which are interdependent). Different assumptions about these parameters, as well as different model structures, will lead to different estimates of likely temperature increase from the same emissions pathway. Thus policymakers will be faced with a range of temperature probability distributions for the same emissions scenarios, each described by a central tendency and spread. Because our conventional understanding of uncertainty and probability requires that a probabilistically defined variable of interest have only a single mean (or median, or modal) value and a well-defined spread, this "multidimensional" uncertainty defies straightforward utilization in policymaking. We suggest that there are no simple solutions to the questions raised. Crucially, we must dispel the notion that there is a "true" probability: probabilities of this type are necessarily subjective, and reasonable people may disagree. Indeed, we suggest that what is at stake is precisely the question: what is it reasonable to believe, and to act as if we believe? As a preliminary suggestion, we demonstrate how the output of a simple probabilistic climate model might be evaluated regarding the reasonableness of the outputs it calculates with different input PDFs. We suggest further that where there is insufficient evidence to clearly

  16. A Hierarchy of Compatibility and Comeasurability Levels in Quantum Logics with Unique Conditional Probabilities

    NASA Astrophysics Data System (ADS)

    Niestegge, Gerd

    2010-12-01

    In the quantum mechanical Hilbert space formalism, the probabilistic interpretation is a later ad-hoc add-on, more or less enforced by the experimental evidence, but not motivated by the mathematical model itself. A model involving a clear probabilistic interpretation from the very beginning is provided by the quantum logics with unique conditional probabilities. It includes the projection lattices in von Neumann algebras and here probability conditionalization becomes identical with the state transition of the Lüders-von Neumann measurement process. This motivates the definition of a hierarchy of five compatibility and comeasurability levels in the abstract setting of the quantum logics with unique conditional probabilities. Their meanings are: the absence of quantum interference or influence, the existence of a joint distribution, simultaneous measurability, and the independence of the final state after two successive measurements from the sequential order of these two measurements. A further level means that two elements of the quantum logic (events) belong to the same Boolean subalgebra. In the general case, the five compatibility and comeasurability levels appear to differ, but they all coincide in the common Hilbert space formalism of quantum mechanics, in von Neumann algebras, and in some other cases.

  17. A predictive model to estimate the pretest probability of metastasis in patients with osteosarcoma.

    PubMed

    Wang, Sisheng; Zheng, Shaoluan; Hu, Kongzu; Sun, Heyan; Zhang, Jinling; Rong, Genxiang; Gao, Jie; Ding, Nan; Gui, Binjie

    2017-01-01

    Osteosarcomas (OSs) represent a huge challenge to improving overall survival, especially in metastatic patients. Increasing evidence indicates that both tumor-associated and host-associated elements, especially the systemic inflammatory response, have a remarkable effect on the prognosis of cancer patients. By analyzing a series of prognostic factors, including age, gender, primary tumor size, tumor location, tumor grade, histological classification, monocyte ratio, and NLR ratio, a clinical predictive model involving circulating leukocytes was established using stepwise logistic regression to compute the estimated probabilities of metastases for OS patients. The clinical predictive model was described by the following equation: probability of developing metastases = e^x/(1 + e^x), where x = -2.150 + (1.680 × monocyte ratio) + (1.533 × NLR ratio), e is the base of the natural logarithm, and the assignment to each of the two variables is 1 if the ratio > 1 (otherwise 0). The calculated AUC of the receiver-operating characteristic curve of 0.793 revealed good accuracy of this model (95% CI, 0.740-0.845). The predicted probabilities that we generated with the cross-validation procedure had a similar AUC (0.743; 95% CI, 0.684-0.803). The present model, which considers the influence of circulating leukocytes, could be used to estimate the pretest probability of developing metastases in patients with OS and thereby improve outcomes.
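    The published equation translates directly into code; the example below reproduces the model as stated in the abstract, with the input ratios chosen arbitrarily for illustration.

```python
import math

def metastasis_probability(monocyte_ratio, nlr_ratio):
    """Pretest probability of metastases from the abstract's model:
    P = e^x / (1 + e^x),
    x = -2.150 + 1.680*I(monocyte ratio > 1) + 1.533*I(NLR ratio > 1),
    where I(.) is a 0/1 indicator."""
    m = 1 if monocyte_ratio > 1 else 0
    n = 1 if nlr_ratio > 1 else 0
    x = -2.150 + 1.680 * m + 1.533 * n
    return math.exp(x) / (1.0 + math.exp(x))

p_both = metastasis_probability(1.2, 1.4)   # both ratios elevated
p_none = metastasis_probability(0.8, 0.9)   # neither elevated
```

    With both indicators positive the model gives a probability of about 0.74, versus about 0.10 with neither, which is the spread a pretest stratification tool needs.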

  18. Induction of osteoarthritis by injecting monosodium iodoacetate into the patellofemoral joint of an experimental rat model

    PubMed Central

    Matsuzaki, Taro; Kuroki, Hiroshi

    2018-01-01

    This study aimed to investigate the histopathological changes in the patellofemoral joint using a rat model of osteoarthritis that was induced using monosodium iodoacetate, and to establish a novel model of patellofemoral osteoarthritis in a rat model using histopathological analysis. Sixty male rats were used. Osteoarthritis was induced through a single intra-articular injection of monosodium iodoacetate in both knee joints. Animals were equally divided into two experimental groups based on the monosodium iodoacetate dose: 0.2 mg and 1.0 mg. Histopathological changes in the articular cartilage of the patellofemoral joint and the infrapatellar fat pad were examined at 3 days, 1 week, 2 weeks, 4 weeks, 8 weeks, and 12 weeks after the monosodium iodoacetate injection. In the 1.0-mg group, the representative histopathological findings of osteoarthritis were observed in the articular cartilage of the patellofemoral joint over time. Additionally, the Osteoarthritis Research Society International scores of the patellofemoral joint increased over time. The synovitis scores of the infrapatellar fat pad in both groups were highest at 3 days, and then the values decreased over time. The fibrosis score of the infrapatellar fat pad in the 1.0-mg group increased with time, whereas the fibrosis score in the 0.2-mg group remained low. Representative histopathological findings of osteoarthritis were observed in the articular cartilage of the patellofemoral joint in a rat model of osteoarthritis induced using monosodium iodoacetate. With appropriate selection, this model may be regarded as an ideal patellofemoral osteoarthritis model. PMID:29698461

  19. Modeling highway travel time distribution with conditional probability models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oliveira Neto, Francisco Moraes; Chin, Shih-Miao; Hwang, Ho-Ling

    Under the sponsorship of the Federal Highway Administration's Office of Freight Management and Operations, the American Transportation Research Institute (ATRI) has developed performance measures through the Freight Performance Measures (FPM) initiative. Under this program, travel speed information is derived from data collected using wireless-based global positioning systems. These telemetric data systems are subscribed to and used by the trucking industry as an operations management tool. More than one telemetric operator submits data dumps to ATRI on a regular basis. Each data transmission contains a truck location, its travel time, and a clock time/date stamp. Data from the FPM program provide a unique opportunity for studying the upstream-downstream speed distributions at different locations, as well as at different times of the day and days of the week. This research is focused on the stochastic nature of successive link travel speed data on the continental United States Interstate network. Specifically, a method to estimate route probability distributions of travel time is proposed. This method uses the concepts of convolution of probability distributions and bivariate, link-to-link, conditional probability to estimate the expected distributions for the route travel time. A major contribution of this study is the consideration of speed correlation between upstream and downstream contiguous Interstate segments through conditional probability. The established conditional probability distributions between successive segments can be used to provide travel time reliability measures. This study also suggests an adaptive method for calculating and updating the route travel time distribution as new data or information are added. This methodology can be useful for estimating performance measures as required by the recent Moving Ahead for Progress in the 21st Century Act (MAP-21).
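    The convolution step of the proposed method can be sketched for the independent-links baseline; the study's contribution is to replace independence with link-to-link conditional distributions, but the aggregation mechanics are the same. The pmfs below are hypothetical.

```python
import numpy as np

# Discrete travel-time pmfs (minutes) for two successive links; hypothetical.
p_link1 = np.array([0.0, 0.2, 0.5, 0.3])   # P(T1 = 0, 1, 2, 3 min)
p_link2 = np.array([0.0, 0.1, 0.6, 0.3])   # P(T2 = 0, 1, 2, 3 min)

# Under independence, the route travel-time pmf is the convolution
# of the link pmfs: P(T1 + T2 = t).
p_route = np.convolve(p_link1, p_link2)

mean1 = np.arange(len(p_link1)) @ p_link1
mean2 = np.arange(len(p_link2)) @ p_link2
mean_route = np.arange(len(p_route)) @ p_route
```

    The route pmf sums to one and its mean equals the sum of the link means; when links are correlated, the convolution is applied to the conditional pmf of the downstream link given the upstream travel time instead.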

  20. Model-based sensorimotor integration for multi-joint control: development of a virtual arm model.

    PubMed

    Song, D; Lan, N; Loeb, G E; Gordon, J

    2008-06-01

    An integrated, sensorimotor virtual arm (VA) model has been developed and validated for simulation studies of control of human arm movements. Realistic anatomical features of shoulder, elbow and forearm joints were captured with a graphic modeling environment, SIMM. The model included 15 musculotendon elements acting at the shoulder, elbow and forearm. Muscle actions on joints were evaluated by SIMM generated moment arms that were matched to experimentally measured profiles. The Virtual Muscle (VM) model contained appropriate admixture of slow and fast twitch fibers with realistic physiological properties for force production. A realistic spindle model was embedded in each VM with inputs of fascicle length, gamma static (gamma(stat)) and dynamic (gamma(dyn)) controls and outputs of primary (I(a)) and secondary (II) afferents. A piecewise linear model of Golgi Tendon Organ (GTO) represented the ensemble sampling (I(b)) of the total muscle force at the tendon. All model components were integrated into a Simulink block using a special software tool. The complete VA model was validated with open-loop simulation at discrete hand positions within the full range of alpha and gamma drives to extrafusal and intrafusal muscle fibers. The model behaviors were consistent with a wide variety of physiological phenomena. Spindle afferents were effectively modulated by fusimotor drives and hand positions of the arm. These simulations validated the VA model as a computational tool for studying arm movement control. The VA model is available to researchers at the website http://pt.usc.edu/cel.

  1. A Proposed Approach for Joint Modeling of the Longitudinal and Time-To-Event Data in Heterogeneous Populations: An Application to HIV/AIDS's Disease.

    PubMed

    Roustaei, Narges; Ayatollahi, Seyyed Mohammad Taghi; Zare, Najaf

    2018-01-01

    In recent years, joint models have been widely used for modeling longitudinal and time-to-event data simultaneously. In this study, we proposed an approach (PA) to study the longitudinal and survival outcomes simultaneously in heterogeneous populations. PA relaxes the assumption of conditional independence (CI). We also compared PA with the joint latent class model (JLCM) and a separate approach (SA) for various sample sizes (150, 300, and 600) and different association parameters (0, 0.2, and 0.5). The average bias of parameter estimation (AB-PE), average SE of parameter estimation (ASE-PE), and coverage probability of the 95% confidence interval (CP) were compared among the three approaches. In most cases, as the sample size increased, AB-PE and ASE-PE decreased for the three approaches, and CP got closer to the nominal level of 0.95. When there was a considerable association, PA performed better than SA and JLCM in the sense that it had the smallest AB-PE and ASE-PE for the longitudinal submodel among the three approaches for the small and moderate sample sizes. Moreover, JLCM was preferable in the case of no association and a large sample size. Finally, the evaluated approaches were applied to a real HIV/AIDS dataset for validation, and the results were compared.

  2. Datamining approaches for modeling tumor control probability.

    PubMed

    El Naqa, Issam; Deasy, Joseph O; Mu, Yi; Huang, Ellen; Hope, Andrew J; Lindsay, Patricia E; Apte, Aditya; Alaly, James; Bradley, Jeffrey D

    2010-11-01

    Tumor control probability (TCP) to radiotherapy is determined by complex interactions between tumor biology, tumor microenvironment, radiation dosimetry, and patient-related variables. The complexity of these heterogeneous variable interactions constitutes a challenge for building predictive models for routine clinical practice. We describe a datamining framework that can unravel the higher order relationships among dosimetric dose-volume prognostic variables, interrogate various radiobiological processes, and generalize to unseen data when applied prospectively. Several datamining approaches are discussed that include dose-volume metrics, equivalent uniform dose, the mechanistic Poisson model, and model building methods using statistical regression and machine learning techniques. Institutional datasets of non-small cell lung cancer (NSCLC) patients are used to demonstrate these methods. The performance of the different methods was evaluated using bivariate Spearman rank correlations (rs). Over-fitting was controlled via resampling methods. Using a dataset of 56 patients with primary NSCLC tumors and 23 candidate variables, we estimated GTV volume and V75 to be the best model parameters for predicting TCP using statistical resampling and a logistic model. Using these variables, the support vector machine (SVM) kernel method provided superior performance for TCP prediction with an rs=0.68 on leave-one-out testing compared to logistic regression (rs=0.4), Poisson-based TCP (rs=0.33), and the cell kill equivalent uniform dose model (rs=0.17). The prediction of treatment response can be improved by utilizing datamining approaches, which are able to unravel important non-linear complex interactions among model variables and have the capacity to predict on unseen data for prospective clinical applications.
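    The mechanistic Poisson TCP model listed among the approaches can be sketched with linear-quadratic cell survival; the clonogen number and radiosensitivity parameters below are illustrative assumptions, not values fitted in the study.

```python
import math

def poisson_tcp(dose_per_fraction, n_fractions, n_clonogens, alpha, beta):
    """Mechanistic Poisson TCP with linear-quadratic per-fraction survival:
      S = exp(-n * (alpha*d + beta*d^2)),   TCP = exp(-N0 * S),
    i.e., the probability that zero clonogens survive, assuming the
    number of survivors is Poisson distributed."""
    d, n = dose_per_fraction, n_fractions
    surviving_fraction = math.exp(-n * (alpha * d + beta * d * d))
    return math.exp(-n_clonogens * surviving_fraction)

# Illustrative parameters: 2 Gy fractions, alpha = 0.30/Gy, beta = 0.03/Gy^2.
tcp_60 = poisson_tcp(2.0, 30, n_clonogens=1e7, alpha=0.30, beta=0.03)
tcp_70 = poisson_tcp(2.0, 35, n_clonogens=1e7, alpha=0.30, beta=0.03)
```

    The sigmoidal dose response of this model is what the datamining methods in the abstract are benchmarked against; its TCP rises monotonically with the delivered dose.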

  3. Assessment of different models for computing the probability of a clear line of sight

    NASA Astrophysics Data System (ADS)

    Bojin, Sorin; Paulescu, Marius; Badescu, Viorel

    2017-12-01

    This paper is focused on modeling the morphological properties of cloud fields in terms of the probability of a clear line of sight (PCLOS). PCLOS is defined as the probability that a line of sight between an observer and a given point of the celestial vault passes freely without intersecting a cloud. A variety of PCLOS models assuming hemispherical, semi-ellipsoidal, and ellipsoidal cloud shapes are tested. The effective parameters (cloud aspect ratio and absolute cloud fraction) are extracted from high-resolution series of sunshine number measurements. The performance of the PCLOS models is evaluated from the perspective of their ability to retrieve the point cloudiness. The advantages and disadvantages of the tested models are discussed, aiming at a simplified parameterization of PCLOS models.

  4. Joint Clustering and Component Analysis of Correspondenceless Point Sets: Application to Cardiac Statistical Modeling.

    PubMed

    Gooya, Ali; Lekadir, Karim; Alba, Xenia; Swift, Andrew J; Wild, Jim M; Frangi, Alejandro F

    2015-01-01

    Construction of Statistical Shape Models (SSMs) from arbitrary point sets is a challenging problem due to significant shape variation and lack of explicit point correspondence across the training data set. In medical imaging, point sets can generally represent different shape classes that span healthy and pathological exemplars. In such cases, the constructed SSM may not generalize well, largely because the probability density function (pdf) of the point sets deviates from the underlying assumption of Gaussian statistics. To this end, we propose a generative model for unsupervised learning of the pdf of point sets as a mixture of distinctive classes. A Variational Bayesian (VB) method is proposed for making joint inferences on the labels of point sets and the principal modes of variation in each cluster. The method provides a flexible framework to handle point sets with no explicit point-to-point correspondences. We also show that by maximizing the marginalized likelihood of the model, the optimal number of clusters of point sets can be determined. We illustrate this work in the context of understanding the anatomical phenotype of the left and right ventricles in the heart. To this end, we use a database containing hearts of healthy subjects, patients with Pulmonary Hypertension (PH), and patients with Hypertrophic Cardiomyopathy (HCM). We demonstrate that our method can outperform traditional PCA in both generalization and specificity measures.

  5. Fishnet model for failure probability tail of nacre-like imbricated lamellar materials

    NASA Astrophysics Data System (ADS)

    Luo, Wen; Bažant, Zdeněk P.

    2017-12-01

    Nacre, the iridescent material of the shells of pearl oysters and abalone, consists mostly of aragonite (a form of CaCO3), a brittle constituent of relatively low strength (≈10 MPa). Yet it has astonishing mean tensile strength (≈150 MPa) and fracture energy (≈350 to 1,240 J/m2). The reasons have recently become well understood: (i) the nanoscale thickness (≈300 nm) of nacre's building blocks, the aragonite lamellae (or platelets), and (ii) the imbricated, or staggered, arrangement of these lamellae, bound by biopolymer layers only ≈25 nm thick, occupying <5% of volume. These properties inspire manmade biomimetic materials. For engineering applications, however, a failure probability of ≤10^-6 is generally required. To guarantee it, the type of probability density function (pdf) of strength, including its tail, must be determined. This objective, not pursued previously, is hardly achievable by experiments alone, since >10^8 tests of specimens would be needed. Here we outline a statistical model of strength that resembles a fishnet pulled diagonally, captures the tail of the pdf of strength and, importantly, allows analytical safety assessments of nacreous materials. The analysis shows that, in terms of safety, the imbricated lamellar structure provides a major additional advantage: a ~10% strength increase at tail failure probability 10^-6 and a 1 to 2 orders of magnitude tail probability decrease at fixed stress. Another advantage is that a high scatter of microstructure properties diminishes the strength difference between the mean and the probability tail, compared with the weakest link model. These advantages of nacre-like materials are here justified analytically and supported by millions of Monte Carlo simulations.

  6. Joint nociceptor nerve activity and pain in an animal model of acute gout and its modulation by intra-articular hyaluronan

    PubMed Central

    Marcotti, Aida; Miralles, Ana; Dominguez, Eduardo; Pascual, Eliseo; Gomis, Ana; Belmonte, Carlos; de la Peña, Elvira

    2018-01-01

    The mechanisms whereby deposition of monosodium urate (MSU) crystals in gout activates nociceptors to induce joint pain are incompletely understood. We tried to reproduce the signs of painful gouty arthritis, injecting into the knee joint of rats suspensions containing amorphous or triclinic, needle-shaped MSU crystals. The magnitude of MSU-induced inflammation and pain behavior signs were correlated with the changes in firing frequency of spontaneous and movement-evoked nerve impulse activity recorded in single knee joint nociceptor saphenous nerve fibers. Joint swelling, mechanical and cold allodynia, and hyperalgesia appeared 3 hours after joint injection of MSU crystals. In parallel, spontaneous and movement-evoked joint nociceptor impulse activity rose significantly. Solutions containing amorphous or needle-shaped MSU crystals had similar inflammatory and electrophysiological effects. Intra-articular injection of hyaluronan (HA, Synvisc), a high-MW glycosaminoglycan present in the synovial fluid with analgesic effects in osteoarthritis, significantly reduced MSU-induced behavioral signs of pain and decreased the enhanced joint nociceptor activity. Our results support the interpretation that pain and nociceptor activation are not triggered by direct mechanical stimulation of nociceptors by MSU crystals, but are primarily caused by the release of excitatory mediators by inflammatory cells activated by MSU crystals. Intra-articular HA decreased behavioral and electrophysiological signs of pain, possibly through its viscoelastic filtering effect on the mechanical forces acting over sensitized joint sensory endings and probably also by a direct interaction of HA molecules with the transducing channels expressed in joint nociceptor terminals. PMID:29319609

  7. An extended car-following model considering the appearing probability of truck and driver's characteristics

    NASA Astrophysics Data System (ADS)

    Rong, Ying; Wen, Huiying

    2018-05-01

    In this paper, the appearing probability of truck is introduced and an extended car-following model is presented to analyze the traffic flow under a honk environment, based on the consideration of the driver's characteristics. The stability condition of the proposed model is obtained through linear stability analysis. In order to study the evolution properties of the traffic wave near the critical point, the mKdV equation is derived by the reductive perturbation method. The results show that the traffic flow becomes more disordered as the appearing probability of truck increases. Besides, the appearance of a leading truck affects not only the stability of traffic flow but also the influence of other factors on traffic flow, such as the driver's reaction and the honk effect; these influences are closely correlated with the appearing probability of truck. Finally, numerical simulations under the periodic boundary condition are carried out to verify the proposed model, and they are consistent with the theoretical findings.

  8. Improvement in latent variable indirect response joint modeling of a continuous and a categorical clinical endpoint in rheumatoid arthritis.

    PubMed

    Hu, Chuanpu; Zhou, Honghui

    2016-02-01

    Improving the quality of exposure-response modeling is important in clinical drug development. The general joint modeling of multiple endpoints is made possible in part by recent progress on the latent variable indirect response (IDR) modeling for ordered categorical endpoints. This manuscript aims to investigate, when modeling a continuous and a categorical clinical endpoint, the level of improvement achievable by joint modeling in the latent variable IDR modeling framework through the sharing of model parameters for the individual endpoints, guided by the appropriate representation of drug and placebo mechanism. This was illustrated with data from two phase III clinical trials of intravenously administered mAb X for the treatment of rheumatoid arthritis, with the 28-joint disease activity score (DAS28) and 20, 50, and 70% improvement in the American College of Rheumatology (ACR20, ACR50, and ACR70) disease severity criteria used as efficacy endpoints. The joint modeling framework led to a parsimonious final model with reasonable performance, evaluated by visual predictive check. The results showed that, compared with the more common approach of separately modeling the endpoints, it is possible for the joint model to be more parsimonious and yet better describe the individual endpoints. In particular, the joint model may better describe one endpoint through subject-specific random effects that would not have been estimable from data of this endpoint alone.

  9. Statistical learning of action: the role of conditional probability.

    PubMed

    Meyer, Meredith; Baldwin, Dare

    2011-12-01

    Identification of distinct units within a continuous flow of human action is fundamental to action processing. Such segmentation may rest in part on statistical learning. In a series of four experiments, we examined what types of statistics people can use to segment a continuous stream involving many brief, goal-directed action elements. The results of Experiment 1 showed no evidence for sensitivity to conditional probability, whereas Experiment 2 displayed learning based on joint probability. In Experiment 3, we demonstrated that additional exposure to the input failed to engender sensitivity to conditional probability. However, the results of Experiment 4 showed that a subset of adults-namely, those more successful at identifying actions that had been seen more frequently than comparison sequences-were also successful at learning conditional-probability statistics. These experiments help to clarify the mechanisms subserving processing of intentional action, and they highlight important differences from, as well as similarities to, prior studies of statistical learning in other domains, including language.
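The two statistics contrasted in these experiments can be computed directly from a symbol stream; the `pair_statistics` helper and the toy stream below are illustrative stand-ins, not the action stimuli used in the experiments:

```python
from collections import Counter

def pair_statistics(stream):
    """Joint and conditional probabilities of adjacent pairs in a stream.

    joint P(x, y): frequency of the pair (x, y) among all adjacent pairs.
    conditional P(y | x): frequency with which x is followed by y, given x.
    These are the two segmentation cues contrasted in the experiments.
    """
    pairs = list(zip(stream, stream[1:]))
    pair_counts = Counter(pairs)
    first_counts = Counter(x for x, _ in pairs)
    joint = {p: c / len(pairs) for p, c in pair_counts.items()}
    cond = {p: c / first_counts[p[0]] for p, c in pair_counts.items()}
    return joint, cond

stream = list("ababacab")
joint, cond = pair_statistics(stream)
print(joint[("a", "b")], cond[("a", "b")])
```

A pair can have a high conditional probability but a low joint probability (if its first element is rare), which is why sensitivity to one statistic does not imply sensitivity to the other.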

  10. Modeling microstructure of incudostapedial joint and the effect on cochlear input

    NASA Astrophysics Data System (ADS)

    Gan, Rong Z.; Wang, Xuelin

    2015-12-01

    The incudostapedial joint (ISJ) connects the incus to the stapes in the human ear and plays an important role in sound transmission from the tympanic membrane (TM) to the cochlea. The ISJ is a synovial joint composed of articular cartilage on the lenticular process and stapes head, with synovial fluid between them. However, there has been no study on how the synovial ISJ affects middle ear and cochlear functions. Recently, we developed a 3-dimensional finite element (FE) model of the synovial ISJ and connected it to our comprehensive FE model of the human ear. The motions of the TM, stapes footplate, and basilar membrane and the pressures in the scala vestibuli and scala tympani were derived over frequencies and compared with experimental measurements. Results show that the synovial ISJ affects sound transmission into the cochlea and that the frequency-dependent viscoelastic behavior of the ISJ protects the cochlea from high-intensity sound.

  11. Modeling summer month hydrological drought probabilities in the United States using antecedent flow conditions

    USGS Publications Warehouse

    Austin, Samuel H.; Nelms, David L.

    2017-01-01

    Climate change raises concern that risks of hydrological drought may be increasing. We estimate hydrological drought probabilities for rivers and streams in the United States (U.S.) using maximum likelihood logistic regression (MLLR). Streamflow data from winter months are used to estimate the chance of hydrological drought during summer months. Daily streamflow data collected from 9,144 stream gages from January 1, 1884 through January 9, 2014 provide hydrological drought streamflow probabilities for July, August, and September as functions of streamflows during October, November, December, January, and February, estimating outcomes 5-11 months ahead of their occurrence. Few drought prediction methods exploit temporal links among streamflows. We find MLLR modeling of drought streamflow probabilities exploits the explanatory power of temporally linked water flows. MLLR models with strong correct classification rates were produced for streams throughout the U.S. One ad hoc test of correct prediction rates of September 2013 hydrological droughts exceeded 90% correct classification. Some of the best-performing models coincide with areas of high concern including the West, the Midwest, Texas, the Southeast, and the Mid-Atlantic. Using hydrological drought MLLR probability estimates in a water management context can inform understanding of drought streamflow conditions, provide warning of future drought conditions, and aid water management decision making.
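A minimal sketch of the MLLR idea: fit a logistic regression of a summer-drought indicator on winter streamflow by maximum likelihood, then use it to issue probabilities months ahead. The data are synthetic (not the USGS gage records) and the fit is a plain gradient ascent, not the authors' production procedure:

```python
import math
import random

# Synthetic training set: low standardized winter flow raises drought odds.
random.seed(1)
data = []
for _ in range(500):
    winter_flow = random.uniform(0.0, 2.0)      # standardized winter flow
    p_true = 1.0 / (1.0 + math.exp(-(2.0 - 3.0 * winter_flow)))
    data.append((winter_flow, 1 if random.random() < p_true else 0))

# Maximum likelihood fit of logit P(drought) = b0 + b1 * winter_flow,
# by gradient ascent on the log-likelihood.
b0 = b1 = 0.0
for _ in range(2000):
    g0 = g1 = 0.0
    for x, y in data:
        p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
        g0 += y - p
        g1 += (y - p) * x
    b0 += 0.05 * g0 / len(data)
    b1 += 0.05 * g1 / len(data)

def drought_probability(winter_flow):
    """Estimated probability of summer hydrological drought."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * winter_flow)))

print(round(drought_probability(0.2), 2), round(drought_probability(1.8), 2))
```

The fitted slope is negative: higher antecedent winter flow lowers the predicted summer drought probability, which is the temporal link the abstract says most drought prediction methods leave unexploited.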

  12. Naive Probability: A Mental Model Theory of Extensional Reasoning.

    ERIC Educational Resources Information Center

    Johnson-Laird, P. N.; Legrenzi, Paolo; Girotto, Vittorio; Legrenzi, Maria Sonino; Caverni, Jean-Paul

    1999-01-01

    Outlines a theory of naive probability in which individuals who are unfamiliar with the probability calculus can infer the probabilities of events in an "extensional" way. The theory accommodates reasoning based on numerical premises, and explains how naive reasoners can infer posterior probabilities without relying on Bayes's theorem.…

  13. A short walk in quantum probability

    NASA Astrophysics Data System (ADS)

    Hudson, Robin

    2018-04-01

    This is a personal survey of aspects of quantum probability related to the Heisenberg commutation relation for canonical pairs. Using the failure, in general, of non-negativity of the Wigner distribution for canonical pairs to motivate a more satisfactory quantum notion of joint distribution, we visit a central limit theorem for such pairs and a resulting family of quantum planar Brownian motions which deform the classical planar Brownian motion, together with a corresponding family of quantum stochastic areas. This article is part of the themed issue `Hilbert's sixth problem'.

  14. A short walk in quantum probability.

    PubMed

    Hudson, Robin

    2018-04-28

    This is a personal survey of aspects of quantum probability related to the Heisenberg commutation relation for canonical pairs. Using the failure, in general, of non-negativity of the Wigner distribution for canonical pairs to motivate a more satisfactory quantum notion of joint distribution, we visit a central limit theorem for such pairs and a resulting family of quantum planar Brownian motions which deform the classical planar Brownian motion, together with a corresponding family of quantum stochastic areas. This article is part of the themed issue 'Hilbert's sixth problem'. © 2018 The Author(s).

  15. Error Modeling and Experimental Study of a Flexible Joint 6-UPUR Parallel Six-Axis Force Sensor.

    PubMed

    Zhao, Yanzhi; Cao, Yachao; Zhang, Caifeng; Zhang, Dan; Zhang, Jie

    2017-09-29

    By combining a parallel mechanism with integrated flexible joints, a large measurement range and high accuracy sensor is realized. However, the main errors of the sensor involve not only assembly errors, but also deformation errors of its flexible leg. Based on a flexible joint 6-UPUR (a mechanism configuration where U denotes a universal joint, P a prismatic joint, and R a revolute joint) parallel six-axis force sensor developed in the earlier phase of this work, assembly and deformation error modeling and analysis of the resulting large-measurement-range, high-accuracy sensor are presented in this paper. First, an assembly error model is established based on the imaginary kinematic joint method and the Denavit-Hartenberg (D-H) method. Next, a stiffness model is built to solve the stiffness matrix, and the deformation error model of the sensor is obtained. Then, the first order kinematic influence coefficient matrix when the synthetic error is taken into account is solved. Finally, measurement and calibration experiments of the sensor composed of the hardware and software system are performed. Forced deformation of the force-measuring platform is detected by using laser interferometry and analyzed to verify the correctness of the synthetic error model. In addition, the first order kinematic influence coefficient matrix in actual circumstances is calculated. By comparing the condition numbers and square norms of the coefficient matrices, it is concluded that taking the synthetic error into account is very important at the design stage of the sensor and helps improve its performance to meet the needs of actual working environments.

  16. Error Modeling and Experimental Study of a Flexible Joint 6-UPUR Parallel Six-Axis Force Sensor

    PubMed Central

    Zhao, Yanzhi; Cao, Yachao; Zhang, Caifeng; Zhang, Dan; Zhang, Jie

    2017-01-01

    By combining a parallel mechanism with integrated flexible joints, a large measurement range and high accuracy sensor is realized. However, the main errors of the sensor involve not only assembly errors, but also deformation errors of its flexible leg. Based on a flexible joint 6-UPUR (a mechanism configuration where U denotes a universal joint, P a prismatic joint, and R a revolute joint) parallel six-axis force sensor developed in the earlier phase of this work, assembly and deformation error modeling and analysis of the resulting large-measurement-range, high-accuracy sensor are presented in this paper. First, an assembly error model is established based on the imaginary kinematic joint method and the Denavit-Hartenberg (D-H) method. Next, a stiffness model is built to solve the stiffness matrix, and the deformation error model of the sensor is obtained. Then, the first order kinematic influence coefficient matrix when the synthetic error is taken into account is solved. Finally, measurement and calibration experiments of the sensor composed of the hardware and software system are performed. Forced deformation of the force-measuring platform is detected by using laser interferometry and analyzed to verify the correctness of the synthetic error model. In addition, the first order kinematic influence coefficient matrix in actual circumstances is calculated. By comparing the condition numbers and square norms of the coefficient matrices, it is concluded that taking the synthetic error into account is very important at the design stage of the sensor and helps improve its performance to meet the needs of actual working environments. PMID:28961209

  17. Neural mechanisms and models underlying joint action.

    PubMed

    Chersi, Fabian

    2011-06-01

    Humans, in particular, and to a lesser extent also other species of animals, possess the impressive capability of smoothly coordinating their actions with those of others. The great amount of work done in recent years in neuroscience has provided new insights into the processes involved in joint action, intention understanding, and task sharing. In particular, the discovery of mirror neurons, which fire both when animals execute actions and when they observe the same actions done by other individuals, has shed light on the intimate relationship between perception and action, elucidating the direct contribution of motor knowledge to action understanding. To date, however, a detailed description of the neural processes involved in these phenomena is still mostly lacking. Building upon data from single-neuron recordings in monkeys observing the actions of a demonstrator and then executing the same or a complementary action, this paper describes the functioning of a biologically constrained neural network model of the motor and mirror systems during joint action. In this model, motor sequences are encoded as independent neuronal chains that represent concatenations of elementary motor acts leading to a specific goal. Action execution and recognition are achieved through the propagation of activity within specific chains. Due to the dual property of mirror neurons, the same architecture is capable of smoothly integrating and switching between observed and self-generated action sequences, making it possible to evaluate multiple hypotheses simultaneously, understand actions done by others, and respond in an appropriate way.

  18. Numerical Simulation of Tension Properties for Al-Cu Alloy Friction Stir-Welded Joints with GTN Damage Model

    NASA Astrophysics Data System (ADS)

    Sun, Guo-Qin; Sun, Feng-Yang; Cao, Fang-Li; Chen, Shu-Jun; Barkey, Mark E.

    2015-11-01

    The numerical simulation of tensile fracture behavior on Al-Cu alloy friction stir-welded joint was performed with the Gurson-Tvergaard-Needleman (GTN) damage model. The parameters of the GTN model were studied in each region of the friction stir-welded joint by means of inverse identification. Based on the obtained parameters, the finite element model of the welded joint was built to predict the fracture behavior and tension properties. Good agreement can be found between the numerical and experimental results in the location of the tensile fracture and the mechanical properties.

  19. Statistic inversion of multi-zone transition probability models for aquifer characterization in alluvial fans

    DOE PAGES

    Zhu, Lin; Dai, Zhenxue; Gong, Huili; ...

    2015-06-12

    Understanding the heterogeneity arising from the complex architecture of sedimentary sequences in alluvial fans is challenging. This study develops a statistical inverse framework in a multi-zone transition probability approach for characterizing the heterogeneity in alluvial fans. An analytical solution of the transition probability matrix is used to define the statistical relationships among different hydrofacies and their mean lengths, integral scales, and volumetric proportions. A statistical inversion is conducted to identify the multi-zone transition probability models and estimate the optimal statistical parameters using the modified Gauss–Newton–Levenberg–Marquardt method. The Jacobian matrix is computed by the sensitivity equation method, which results in an accurate inverse solution with quantification of parameter uncertainty. We use the Chaobai River alluvial fan in the Beijing Plain, China, as an example for elucidating the methodology of alluvial fan characterization. The alluvial fan is divided into three sediment zones. In each zone, the explicit mathematical formulations of the transition probability models are constructed with different optimized integral scales and volumetric proportions. The hydrofacies distributions in the three zones are simulated sequentially by the multi-zone transition probability-based indicator simulations. Finally, the result of this study provides the heterogeneous structure of the alluvial fan for further study of flow and transport simulations.
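The transition-probability building block of such models can be illustrated by estimating a one-step facies transition matrix from a borehole log and normalizing each row; the three-facies log below is synthetic, not the Chaobai River data:

```python
from collections import Counter

def transition_matrix(sequence, states):
    """Empirical one-step transition probabilities between facies codes.

    Counts facies-to-facies transitions along a log and normalizes each
    row, the discrete analogue of the transition probability matrix whose
    analytical form the study inverts for. Synthetic illustration only.
    """
    counts = Counter(zip(sequence, sequence[1:]))
    matrix = {}
    for s in states:
        row_total = sum(counts[(s, t)] for t in states)
        matrix[s] = {t: counts[(s, t)] / row_total if row_total else 0.0
                     for t in states}
    return matrix

log = list("SSSCCSSGGSSCC")   # S = sand, C = clay, G = gravel
tp = transition_matrix(log, "SCG")
print(tp["S"])
```

Each row sums to one; the diagonal entries reflect the mean lengths of the facies (long runs of the same facies give large self-transition probabilities), which is how mean lengths and volumetric proportions enter the statistical relationships described above.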

  20. A concise evidence-based physical examination for diagnosis of acromioclavicular joint pathology: a systematic review.

    PubMed

    Krill, Michael K; Rosas, Samuel; Kwon, KiHyun; Dakkak, Andrew; Nwachukwu, Benedict U; McCormick, Frank

    2018-02-01

    The clinical examination of the shoulder joint is an undervalued diagnostic tool for evaluating acromioclavicular (AC) joint pathology. Applying evidence-based clinical tests enables providers to make an accurate diagnosis and minimize costly imaging procedures and potential delays in care. The purpose of this study was to create a decision tree analysis enabling simple and accurate diagnosis of AC joint pathology. A systematic review of the Medline, Ovid and Cochrane Review databases was performed to identify level one and two diagnostic studies evaluating clinical tests for AC joint pathology. Individual test characteristics were combined in series and in parallel to improve sensitivities and specificities. A secondary analysis utilized subjective pre-test probabilities to create a clinical decision tree algorithm with post-test probabilities. The optimal special test combination to screen and confirm AC joint pathology combined Paxinos sign and O'Brien's Test, with a specificity of 95.8% when performed in series, whereas Paxinos sign and Hawkins-Kennedy Test demonstrated a sensitivity of 93.7% when performed in parallel. Paxinos sign and O'Brien's Test demonstrated the greatest positive likelihood ratio (2.71), whereas Paxinos sign and Hawkins-Kennedy Test reported the lowest negative likelihood ratio (0.35). No combination of special tests performed in series or in parallel creates more than a small impact on post-test probabilities to screen or confirm AC joint pathology. Paxinos sign and O'Brien's Test is the only special test combination that has a small and sometimes important impact when used both in series and in parallel. Physical examination testing is not beneficial for diagnosis of AC joint pathology when pretest probability is unequivocal. In these instances, it is of benefit to proceed with procedural tests to evaluate AC joint pathology. Ultrasound-guided corticosteroid injections are diagnostic and therapeutic. An ultrasound-guided AC joint
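The in-series and in-parallel combinations used here follow the standard formulas, which assume the combined tests are conditionally independent given disease status. A sketch with illustrative sensitivities and specificities (not the reported values for Paxinos sign or O'Brien's Test):

```python
def combine_in_series(se1, sp1, se2, sp2):
    """Positive only if both tests are positive (AND): specificity rises."""
    return se1 * se2, 1.0 - (1.0 - sp1) * (1.0 - sp2)

def combine_in_parallel(se1, sp1, se2, sp2):
    """Positive if either test is positive (OR): sensitivity rises."""
    return 1.0 - (1.0 - se1) * (1.0 - se2), sp1 * sp2

def post_test_probability(pretest, se, sp, positive=True):
    """Bayes in odds form: post-test odds = pretest odds x likelihood ratio."""
    lr = se / (1.0 - sp) if positive else (1.0 - se) / sp
    odds = pretest / (1.0 - pretest) * lr
    return odds / (1.0 + odds)

# Illustrative test characteristics (hypothetical, not the AC joint data):
se, sp = combine_in_series(0.8, 0.7, 0.9, 0.6)
print(round(se, 2), round(sp, 2), round(post_test_probability(0.5, se, sp), 2))
```

The asymmetry is the point of the decision tree: series combinations are used to confirm (high specificity, large LR+), parallel combinations to screen (high sensitivity, small LR-).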

  1. Repositioning the knee joint in human body FE models using a graphics-based technique.

    PubMed

    Jani, Dhaval; Chawla, Anoop; Mukherjee, Sudipto; Goyal, Rahul; Vusirikala, Nataraju; Jayaraman, Suresh

    2012-01-01

    Human body finite element models (FE-HBMs) are available in standard occupant or pedestrian postures. There is a need to have FE-HBMs in the same posture as a crash victim or to be configured in varying postures. Developing FE models for all possible positions is not practically viable. The current work aims at obtaining a posture-specific human lower extremity model by reconfiguring an existing one. A graphics-based technique was developed to reposition the lower extremity of an FE-HBM by specifying the flexion-extension angle. Elements of the model were segregated into rigid (bones) and deformable components (soft tissues). The bones were rotated about the flexion-extension axis followed by rotation about the longitudinal axis to capture the twisting of the tibia. The desired knee joint movement was thus achieved. Geometric heuristics were then used to reposition the skin. A mapping defined over the space between bones and the skin was used to regenerate the soft tissues. Mesh smoothing was then done to augment mesh quality. The developed method permits control over the kinematics of the joint and maintains the initial mesh quality of the model. For some critical areas (in the joint vicinity) where element distortion is large, mesh smoothing is done to improve mesh quality. A method to reposition the knee joint of a human body FE model was developed. Repositioning a model from 9 degrees of flexion to 90 degrees of flexion was demonstrated in just a few seconds, without subjective intervention. Because the mesh quality of the repositioned model was maintained to a predefined level (typically to the level of a well-made model in the initial configuration), the model was suitable for subsequent simulations.

  2. Survival modeling for the estimation of transition probabilities in model-based economic evaluations in the absence of individual patient data: a tutorial.

    PubMed

    Diaby, Vakaramoko; Adunlin, Georges; Montero, Alberto J

    2014-02-01

    Survival modeling techniques are increasingly being used as part of decision modeling for health economic evaluations. As many models are available, it is imperative for interested readers to know about the steps in selecting and using the most suitable ones. The objective of this paper is to propose a tutorial for the application of appropriate survival modeling techniques to estimate transition probabilities, for use in model-based economic evaluations, in the absence of individual patient data (IPD). An illustration of the use of the tutorial is provided based on the final progression-free survival (PFS) analysis of the BOLERO-2 trial in metastatic breast cancer (mBC). An algorithm was adopted from Guyot and colleagues, and was then run in the statistical package R to reconstruct IPD, based on the final PFS analysis of the BOLERO-2 trial. It should be emphasized that the reconstructed IPD represent an approximation of the original data. Afterwards, we fitted parametric models to the reconstructed IPD in the statistical package Stata. Both statistical and graphical tests were conducted to verify the relative and absolute validity of the findings. Finally, the equations for transition probabilities were derived using the general equation for transition probabilities used in model-based economic evaluations, and the parameters were estimated from fitted distributions. The results of the application of the tutorial suggest that the log-logistic model best fits the reconstructed data from the latest published Kaplan-Meier (KM) curves of the BOLERO-2 trial. Results from the regression analyses were confirmed graphically. An equation for transition probabilities was obtained for each arm of the BOLERO-2 trial. In this paper, a tutorial was proposed and used to estimate the transition probabilities for model-based economic evaluation, based on the results of the final PFS analysis of the BOLERO-2 trial in mBC. The results of our study can serve as a basis for any model
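The general equation referred to above can be sketched for a log-logistic fit: the transition probability for a model cycle of length u ending at time t is tp(t) = 1 - S(t)/S(t - u), the conditional probability of progressing during the cycle given progression-free status at its start. The survival parameterization and the example values below are hypothetical, not the BOLERO-2 estimates:

```python
def loglogistic_survival(t, scale, shape):
    """Log-logistic survival function S(t) = 1 / (1 + (t/scale)**shape)."""
    if t <= 0.0:
        return 1.0
    return 1.0 / (1.0 + (t / scale) ** shape)

def transition_probability(t, cycle, scale, shape):
    """tp(t) = 1 - S(t) / S(t - cycle): probability the event occurs during
    the cycle ending at t, given it had not occurred by the cycle start."""
    return 1.0 - (loglogistic_survival(t, scale, shape)
                  / loglogistic_survival(t - cycle, scale, shape))

# Hypothetical scale/shape values, one cycle of length 1 ending at t = 6:
print(round(transition_probability(6.0, 1.0, 10.0, 2.0), 3))
```

Because the survival function is not exponential, the resulting transition probability varies with t, which is exactly why model-based evaluations need the time-dependent form rather than a single constant probability.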

  3. A computational framework for simultaneous estimation of muscle and joint contact forces and body motion using optimization and surrogate modeling.

    PubMed

    Eskinazi, Ilan; Fregly, Benjamin J

    2018-04-01

    Concurrent estimation of muscle activations, joint contact forces, and joint kinematics by means of gradient-based optimization of musculoskeletal models is hindered by computationally expensive and non-smooth joint contact and muscle wrapping algorithms. We present a framework that simultaneously speeds up computation and removes sources of non-smoothness from muscle force optimizations using a combination of parallelization and surrogate modeling, with special emphasis on a novel method for modeling joint contact as a surrogate model of a static analysis. The approach allows one to efficiently introduce elastic joint contact models within static and dynamic optimizations of human motion. We demonstrate the approach by performing two optimizations, one static and one dynamic, using a pelvis-leg musculoskeletal model undergoing a gait cycle. We observed convergence on the order of seconds for a static optimization time frame and on the order of minutes for an entire dynamic optimization. The presented framework may facilitate model-based efforts to predict how planned surgical or rehabilitation interventions will affect post-treatment joint and muscle function. Copyright © 2018 IPEM. Published by Elsevier Ltd. All rights reserved.

  4. A Minimum Assumption Tornado-Hazard Probability Model.

    NASA Astrophysics Data System (ADS)

    Schaefer, Joseph T.; Kelly, Donald L.; Abbey, Robert F.

    1986-12-01

    One of the principal applications of climatological tornado data is tornado-hazard assessment. To perform such a hazard-potential determination, historical tornado characteristics in either a regional or local area are compiled. A model is then used to determine a site-specific point probability of a tornado greater than a specified intensity occurring. Various models require different climatological input. However, knowledge of the mean values of tornado track length, tornado track width, tornado-affected area, and tornado occurrence rate, as functions of both tornado intensity and geographic area, along with an intensity frequency distribution, enables most of the models to be applied. The NSSFC-NRC tornado data base is used to supply input for the determination of these parameters over the United States. This climatic data base has undergone extensive updating and quality control since it was last reported. For track parameters, internally redundant data were used to check consistency. Further, reports which deviated significantly from the mean were individually checked. Intensity data have been compared with the University of Chicago DAPPLE tornado data base. All tornadoes whose recorded intensities differed by more than one category were reclassified by an independent scientist so that the two data sets are consistent.
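The point-probability computation at the heart of such hazard models reduces to the expected tornado-affected area per year divided by the area of the region, which is where the mean track length, track width, and occurrence rate enter. A minimal sketch with hypothetical regional statistics (not NSSFC-NRC values):

```python
def point_strike_probability(tornadoes_per_year, mean_length_km,
                             mean_width_km, region_area_km2):
    """Annual probability that a fixed point is struck by a tornado:
    expected tornado-affected area per year divided by the region area.
    Numbers passed in below are hypothetical, for illustration only."""
    return tornadoes_per_year * mean_length_km * mean_width_km / region_area_km2

# 10 tornadoes/year, 5 km x 0.1 km mean track, over a 50,000 km^2 region:
p = point_strike_probability(10, 5.0, 0.1, 50000.0)
print(p)  # → 0.0001
```

Restricting the occurrence rate and mean track dimensions to tornadoes above a chosen intensity gives the intensity-conditional point probability the abstract describes.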

  5. Dependence in probabilistic modeling Dempster-Shafer theory and probability bounds analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferson, Scott; Nelsen, Roger B.; Hajagos, Janos

    2015-05-01

    This report summarizes methods to incorporate information (or lack of information) about inter-variable dependence into risk assessments that use Dempster-Shafer theory or probability bounds analysis to address epistemic and aleatory uncertainty. The report reviews techniques for simulating correlated variates for a given correlation measure and dependence model, computation of bounds on distribution functions under a specified dependence model, formulation of parametric and empirical dependence models, and bounding approaches that can be used when information about the intervariable dependence is incomplete. The report also reviews several of the most pervasive and dangerous myths among risk analysts about dependence in probabilistic models.
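One of the bounding approaches reviewed, the Fréchet-Hoeffding bounds, gives the best-possible limits on a joint probability when nothing is known about the dependence between the events; a minimal sketch:

```python
def frechet_bounds(p_a, p_b):
    """Frechet-Hoeffding bounds: best-possible limits on P(A and B)
    when the dependence between events A and B is unknown."""
    return max(0.0, p_a + p_b - 1.0), min(p_a, p_b)

# With P(A) = 0.7 and P(B) = 0.6, the joint probability can be anywhere
# in [0.3, 0.6]; assuming independence (0.7 * 0.6 = 0.42) silently picks
# one point inside that interval, which is the myth the report warns about.
lo, hi = frechet_bounds(0.7, 0.6)
print(lo, hi)
```

The width of the interval is a direct measure of how much a risk estimate can be distorted by an unjustified independence assumption.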

  6. A quantum probability framework for human probabilistic inference.

    PubMed

    Trueblood, Jennifer S; Yearsley, James M; Pothos, Emmanuel M

    2017-09-01

    There is considerable variety in human inference (e.g., a doctor inferring the presence of a disease, a juror inferring the guilt of a defendant, or someone inferring future weight loss based on diet and exercise). As such, people display a wide range of behaviors when making inference judgments. Sometimes, people's judgments appear Bayesian (i.e., normative), but in other cases, judgments deviate from the normative prescription of classical probability theory. How can we combine both Bayesian and non-Bayesian influences in a principled way? We propose a unified explanation of human inference using quantum probability theory. In our approach, we postulate a hierarchy of mental representations, from 'fully' quantum to 'fully' classical, which could be adopted in different situations. In our hierarchy of models, moving from the lowest level to the highest involves changing assumptions about compatibility (i.e., how joint events are represented). Using results from 3 experiments, we show that our modeling approach explains 5 key phenomena in human inference including order effects, reciprocity (i.e., the inverse fallacy), memorylessness, violations of the Markov condition, and antidiscounting. As far as we are aware, no existing theory or model can explain all 5 phenomena. We also explore transitions in our hierarchy, examining how representations change from more quantum to more classical. We show that classical representations provide a better account of data as individuals gain familiarity with a task. We also show that representations vary between individuals, in a way that relates to a simple measure of cognitive style, the Cognitive Reflection Test. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
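The order effects the quantum framework explains arise from incompatibility: when two questions are modeled as non-commuting projectors, the probability of answering "yes" to A and then "yes" to B differs from the reverse order. A minimal sketch with real 2-vectors (the projectors and state below are illustrative choices, not the paper's fitted representations):

```python
import math

def proj(v):
    """Rank-1 projector onto unit vector v, as a 2x2 matrix."""
    return [[v[0] * v[0], v[0] * v[1]], [v[1] * v[0], v[1] * v[1]]]

def apply(m, x):
    """Multiply a 2x2 matrix by a 2-vector."""
    return [m[0][0] * x[0] + m[0][1] * x[1],
            m[1][0] * x[0] + m[1][1] * x[1]]

def sequence_probability(first, second, state):
    """P('yes' to first, then 'yes' to second) = |P2 P1 psi|^2."""
    y = apply(proj(second), apply(proj(first), state))
    return y[0] ** 2 + y[1] ** 2

# Two incompatible questions: projectors at 0 and 45 degrees; state at 60.
a = [1.0, 0.0]
b = [math.cos(math.pi / 4), math.sin(math.pi / 4)]
psi = [math.cos(math.pi / 3), math.sin(math.pi / 3)]

p_ab = sequence_probability(a, b, psi)
p_ba = sequence_probability(b, a, psi)
print(p_ab, p_ba)   # order matters: the two probabilities differ
```

If the two projectors commuted (compatible questions), the two orderings would agree, which is the "fully classical" end of the hierarchy of representations described above.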

  7. Salient in space, salient in time: Fixation probability predicts fixation duration during natural scene viewing.

    PubMed

    Einhäuser, Wolfgang; Nuthmann, Antje

    2016-09-01

    During natural scene viewing, humans typically attend and fixate selected locations for about 200-400 ms. Two variables characterize such "overt" attention: the probability of a location being fixated, and the fixation's duration. Both variables have been widely researched, but little is known about their relation. We use a two-step approach to investigate the relation between fixation probability and duration. In the first step, we use a large corpus of fixation data. We demonstrate that fixation probability (empirical salience) predicts fixation duration across different observers and tasks. Linear mixed-effects modeling shows that this relation is explained neither by joint dependencies on simple image features (luminance, contrast, edge density) nor by spatial biases (central bias). In the second step, we experimentally manipulate some of these features. We find that fixation probability from the corpus data still predicts fixation duration for this new set of experimental data. This holds even if stimuli are deprived of low-level image features, as long as higher level scene structure remains intact. Together, this shows a robust relation between fixation duration and probability, which does not depend on simple image features. Moreover, the study exemplifies the combination of empirical research on a large corpus of data with targeted experimental manipulations.

  8. Predicting critical transitions in dynamical systems from time series using nonstationary probability density modeling.

    PubMed

    Kwasniok, Frank

    2013-11-01

    A time series analysis method for predicting the probability density of a dynamical system is proposed. A nonstationary parametric model of the probability density is estimated from data within a maximum likelihood framework and then extrapolated to forecast the future probability density and explore the system for critical transitions or tipping points. Parameter uncertainty is accounted for fully and systematically. The technique is generic, independent of the underlying dynamics of the system. The method is verified on simulated data and then applied to prediction of Arctic sea-ice extent.
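    The core idea, fitting a time-dependent parametric density by maximum likelihood and extrapolating it forward, can be sketched in a toy setting where the density is Gaussian with a linearly drifting mean; in that special case the MLE reduces to least squares. This is an illustrative stand-in under those assumptions, not the paper's model.

```python
import numpy as np

# Synthetic record from a system whose mean state drifts linearly in time.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 200)
x = 2.0 + 0.5 * t + rng.normal(0.0, 1.0, t.size)

# For a Gaussian density with mean mu(t) = a + b*t and constant sigma,
# maximum likelihood for (a, b) reduces to ordinary least squares.
A = np.column_stack([np.ones_like(t), t])
(a, b), *_ = np.linalg.lstsq(A, x, rcond=None)
sigma = np.std(x - A @ np.array([a, b]))  # MLE of the noise scale

# Extrapolate the fitted, nonstationary density to a future time.
mu_future = a + b * 15.0
print(round(float(b), 2), round(float(mu_future), 2))
```

    A richer parametric family (e.g., time-dependent variance or skew) follows the same recipe, with the extrapolated parameters flagging an approaching transition when, say, the variance grows.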

  9. A Patient-Specific Foot Model for the Estimate of Ankle Joint Forces in Patients with Juvenile Idiopathic Arthritis.

    PubMed

    Prinold, Joe A I; Mazzà, Claudia; Di Marco, Roberto; Hannah, Iain; Malattia, Clara; Magni-Manzoni, Silvia; Petrarca, Maurizio; Ronchetti, Anna B; Tanturri de Horatio, Laura; van Dijkhuizen, E H Pieter; Wesarg, Stefan; Viceconti, Marco

    2016-01-01

    Juvenile idiopathic arthritis (JIA) is the leading cause of childhood disability from a musculoskeletal disorder. It generally affects large joints such as the knee and the ankle, often causing structural damage. Different factors contribute to the damage onset, including altered joint loading and other mechanical factors, associated with pain and inflammation. The prediction of patients' joint loading can hence be a valuable tool in understanding the disease mechanisms involved in structural damage progression. A number of lower-limb musculoskeletal models have been proposed to analyse the hip and knee joints, but juvenile models of the foot are still lacking. This paper presents a modelling pipeline that allows the creation of juvenile patient-specific models starting from lower limb kinematics and foot and ankle MRI data. This pipeline has been applied to data from three children with JIA, and the importance of patient-specific parameters and modelling assumptions has been tested in a sensitivity analysis focused on the variation of the joint reaction forces. This analysis highlighted the criticality of patient-specific definition of the ankle joint axes and location of the Achilles tendon insertions. Patient-specific detection of the Tibialis Anterior, Tibialis Posterior, and Peroneus Longus origins and insertions was also shown to be important.

  10. Robust mislabel logistic regression without modeling mislabel probabilities.

    PubMed

    Hung, Hung; Jou, Zhi-Yu; Huang, Su-Yun

    2018-03-01

    Logistic regression is among the most widely used statistical methods for linear discriminant analysis. In many applications, we only observe possibly mislabeled responses. Fitting a conventional logistic regression can then lead to biased estimation. One common resolution is to fit a mislabel logistic regression model, which takes mislabeled responses into consideration. Another common method is to adopt a robust M-estimation by down-weighting suspected instances. In this work, we propose a new robust mislabel logistic regression based on γ-divergence. Our proposal possesses two advantageous features: (1) It does not need to model the mislabel probabilities. (2) The minimum γ-divergence estimation leads to a weighted estimating equation without the need to include any bias correction term; that is, it is automatically bias-corrected. These features make the proposed γ-logistic regression more robust in model fitting and more intuitive for model interpretation through a simple weighting scheme. Our method is also easy to implement, and two types of algorithms are included. Simulation studies and the Pima data application are presented to demonstrate the performance of γ-logistic regression. © 2017, The International Biometric Society.
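    The weighting scheme described, down-weighting suspected mislabels through the fitted likelihood itself, can be sketched as follows. This is a simplified gradient version of a γ-divergence-style weighted estimating equation; the function `gamma_logistic` and all tuning constants are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gamma_logistic(X, y, gamma=0.5, lr=0.5, iters=2000):
    """Weighted logistic fit: each point's score contribution is scaled
    by its fitted likelihood to the power gamma, so badly fitted
    (possibly mislabeled) points are automatically down-weighted."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = sigmoid(X @ beta)
        lik = np.where(y == 1.0, p, 1.0 - p)   # per-point likelihood
        w = lik ** gamma                        # small for suspected mislabels
        beta += lr * X.T @ (w * (y - p)) / len(y)
    return beta

rng = np.random.default_rng(5)
x = rng.normal(size=400)
X = np.column_stack([np.ones_like(x), x])
y = (rng.random(400) < sigmoid(2.0 * x)).astype(float)
flip = rng.random(400) < 0.1                    # contaminate 10% of labels
y[flip] = 1.0 - y[flip]
beta = gamma_logistic(X, y)
print(beta[1] > 0.5)  # positive slope recovered despite mislabels
```

    Setting gamma to zero recovers the ordinary (non-robust) logistic score equation, which makes the role of the weighting term easy to see.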

  11. Time- and temperature-dependent failures of a bonded joint

    NASA Astrophysics Data System (ADS)

    Sihn, Sangwook

    Time-temperature equivalence and the residual strength degradation model enable us to estimate the fatigue life of the bonded joint at different load levels, frequencies, and temperatures with a certain probability. A numerical example shows how to apply the life estimation method to a structure subjected to a random load history by rainflow cycle counting.

  12. Improving the accurate assessment of a layered shear-wave velocity model using joint inversion of the effective Rayleigh wave and Love wave dispersion curves

    NASA Astrophysics Data System (ADS)

    Yin, X.; Xia, J.; Xu, H.

    2016-12-01

    Rayleigh and Love waves are two types of surface waves that travel along a free surface. Based on the assumption of horizontally layered homogeneous media, Rayleigh-wave phase velocity can be defined as a function of frequency and four groups of earth parameters: P-wave velocity, SV-wave velocity, density and thickness of each layer. Unlike Rayleigh waves, Love-wave phase velocities of a layered homogeneous earth model can be calculated from frequency and three groups of earth properties: SH-wave velocity, density, and thickness of each layer. Because the dispersion of Love waves is independent of P-wave velocities, Love-wave dispersion curves are much simpler than those of Rayleigh waves. Research on joint inversion of Rayleigh- and Love-wave dispersion curves is therefore necessary. This dissertation combines theoretical analysis with practical applications. In both laterally homogeneous media and radially anisotropic media, joint inversion approaches for Rayleigh and Love waves are proposed to improve the accuracy of S-wave velocities. A 10% random white noise and a 20% random white noise are added to the synthetic dispersion curves to test the noise resistance of the proposed joint inversion method. Considering the influence of an anomalous layer, Rayleigh and Love waves are insensitive to the layers beneath a high-velocity or low-velocity layer and to the high-velocity layer itself. These low sensitivities give rise to a high degree of uncertainty in the inverted S-wave velocities of those layers. Considering that the sensitivity peaks of Rayleigh and Love waves lie in different frequency ranges, the theoretical analyses have demonstrated that joint inversion of these two types of waves would probably ameliorate the inverted model. The lack of surface-wave (Rayleigh or Love wave) dispersion data may lead to inaccurate S-wave velocities through the single inversion of Rayleigh or Love waves, so this dissertation presents the joint inversion method of

  13. A generic probability based model to derive regional patterns of crops in time and space

    NASA Astrophysics Data System (ADS)

    Wattenbach, Martin; Luedtke, Stefan; Redweik, Richard; van Oijen, Marcel; Balkovic, Juraj; Reinds, Gert Jan

    2015-04-01

    Croplands are not only the key to human food supply; they also change the biophysical and biogeochemical properties of the land surface, leading to changes in the water cycle and energy partitioning, and they influence soil erosion and substantially contribute to the amount of greenhouse gases entering the atmosphere. The effects of croplands on the environment depend on the type of crop and the associated management, both of which are related to site conditions and economic boundary settings as well as the preferences of individual farmers. The method described here is designed to predict the most probable crop to appear at a given location and time. The method uses statistical crop area information at the NUTS2 level from EUROSTAT and the Common Agricultural Policy Regionalized Impact Model (CAPRI) as observations. These crops are then spatially disaggregated to the 1 x 1 km grid scale within the region, using the assumption that the probability of a crop appearing at a given location and in a given year depends on (a) the suitability of the land for the cultivation of the crop, derived from the MARS Crop Yield Forecast System (MCYFS), and (b) expert knowledge of agricultural practices. The latter includes knowledge concerning the feasibility of one crop following another (e.g., a late-maturing crop might leave too little time for the establishment of a winter cereal crop) and the need to combat weed infestations or crop diseases. The model is implemented in R and PostGIS. The quality of the generated crop sequences per grid cell is evaluated on the basis of the statistics reported by the joint EU/CAPRI database. The assessment is given at the NUTS2 level using per cent bias as a measure, with a threshold of 15% as minimum quality. The results clearly indicate that crops with a large relative share within an administrative unit are not as error prone as crops that occupy only minor parts of the unit. However, roughly 40% still show an absolute per cent bias above the 15% threshold.
This

  14. Intra-Articular Knee Contact Force Estimation During Walking Using Force-Reaction Elements and Subject-Specific Joint Model.

    PubMed

    Jung, Yihwan; Phan, Cong-Bo; Koo, Seungbum

    2016-02-01

    Joint contact forces measured with instrumented knee implants have not only revealed general patterns of joint loading but also showed individual variations that could be due to differences in anatomy and joint kinematics. Musculoskeletal human models for dynamic simulation have been utilized to understand body kinetics including joint moments, muscle tension, and knee contact forces. The objectives of this study were to develop a knee contact model which can predict knee contact forces using an inverse dynamics-based optimization solver and to investigate the effect of joint constraints on knee contact force prediction. A knee contact model was developed to include 32 reaction force elements on the surface of a tibial insert of a total knee replacement (TKR), which was embedded in a full-body musculoskeletal model. Various external measurements including motion data and external force data during walking trials of a subject with an instrumented knee implant were provided from the Sixth Grand Challenge Competition to Predict in vivo Knee Loads. Knee contact forces in the medial and lateral portions of the instrumented knee implant were also provided for the same walking trials. A knee contact model with a hinge joint and normal alignment could predict knee contact forces with root mean square errors (RMSEs) of 165 N and 288 N for the medial and lateral portions of the knee, respectively, and coefficients of determination (R2) of 0.70 and -0.63. When the degrees-of-freedom (DOF) of the knee and locations of leg markers were adjusted to account for the valgus lower-limb alignment of the subject, RMSE values improved to 144 N and 179 N, and R2 values improved to 0.77 and 0.37, respectively. The proposed knee contact model with subject-specific joint model could predict in vivo knee contact forces with reasonable accuracy. This model may contribute to the development and improvement of knee arthroplasty.

  15. Evaluation of an intact, an ACL-deficient, and a reconstructed human knee joint finite element model.

    PubMed

    Vairis, Achilles; Stefanoudakis, George; Petousis, Markos; Vidakis, Nectarios; Tsainis, Andreas-Marios; Kandyla, Betina

    2016-02-01

    The human knee joint has a three-dimensional geometry with multiple body articulations that produce complex mechanical responses under loads that occur in everyday life and sports activities. Understanding the complex mechanical interactions of these load-bearing structures is of use when the treatment of relevant diseases is evaluated and assisting devices are designed. The anterior cruciate ligament (ACL) in the knee is one of four main ligaments that connect the femur to the tibia and is often torn during sudden twisting motions, resulting in knee instability. The objective of this work is to study the mechanical behavior of the human knee joint and evaluate the differences in its response for three different states, i.e., intact, ACL-deficient, and surgically treated (reconstructed) knee. The finite element models corresponding to these states were developed. For the reconstructed model, a novel repair device was developed and patented by the author in previous work. Static load cases, as presented in a previous work, were applied in order to compare the calculated results produced by the two models, the ACL-deficient and the surgically reconstructed knee joint, under the exact same loading conditions. Displacements were calculated in different directions for the load cases studied and were found to be very close to those from previous modeling work and in good agreement with experimental data presented in the literature. The developed finite element model for both the intact and the ACL-deficient human knee joint is a reliable tool to study the kinematics of the human knee, as the results of this study show. In addition, the reconstructed human knee joint model had kinematic behavior similar to the intact knee joint, showing that such reconstruction devices can restore human knee stability to an adequate extent.

  16. Three-dimensional temporomandibular joint modeling and animation.

    PubMed

    Cascone, Piero; Rinaldi, Fabrizio; Pagnoni, Mario; Marianetti, Tito Matteo; Tedaldi, Massimiliano

    2008-11-01

    The three-dimensional (3D) temporomandibular joint (TMJ) model derives from a study of the cranium by 3D virtual reality and mandibular function animation. The starting point of the project is high-fidelity digital acquisition of a human dry skull. The cooperation between the maxillofacial surgeon and the cartoonist enables the reconstruction of the fibroconnective components of the TMJ that are the keystone for comprehension of the anatomic and functional features of the mandible. The skeletal model is customized with the apposition of the temporomandibular ligament, the articular disk, the retrodiskal tissue, and the medial and the lateral ligament of the disk. The simulation of TMJ movement is the result of the integration of up-to-date data on the biomechanical restrictions. The 3D TMJ model is an easy-to-use application that may be run on a personal computer for the study of the TMJ and its biomechanics.

  17. The validation of a human force model to predict dynamic forces resulting from multi-joint motions

    NASA Technical Reports Server (NTRS)

    Pandya, Abhilash K.; Maida, James C.; Aldridge, Ann M.; Hasson, Scott M.; Woolford, Barbara J.

    1992-01-01

    The development and validation of a dynamic strength model for humans are examined. The model is based on empirical data. The shoulder, elbow, and wrist joints were characterized in terms of maximum isolated torque as a function of position and velocity in all rotational planes. These data were reduced by a least-squares regression technique into a table of single-variable second-degree polynomial equations determining torque as a function of position and velocity. The isolated-joint torque equations were then used to compute forces resulting from a composite motion, in this case a ratchet wrench push-and-pull operation. A comparison of the predicted results of the model with the actual measured values for the composite motion indicates that forces derived from a composite motion of joints (ratcheting) can be predicted from isolated-joint measures. Calculated T values comparing model versus measured values for 14 subjects were well within statistically acceptable limits, and regression analysis revealed coefficients of variation between actual and measured values of 0.72 to 0.80.
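    The reduction step, least-squares fitting of a single-variable second-degree polynomial for torque, can be sketched for one joint in one rotational plane; the torque-velocity data below are invented for illustration, not taken from the study.

```python
import numpy as np

# Invented isolated-joint data: torque falls off with angular velocity.
rng = np.random.default_rng(2)
velocity = np.linspace(-100.0, 100.0, 50)             # deg/s
torque = 40.0 - 0.1 * velocity - 0.001 * velocity**2  # assumed true curve
torque += rng.normal(0.0, 0.5, velocity.size)         # measurement noise

# Least-squares reduction to one single-variable second-degree polynomial,
# i.e., one entry of the kind of torque table the paper describes.
c2, c1, c0 = np.polyfit(velocity, torque, deg=2)
print(c2 < 0.0, 38.0 < c0 < 42.0)  # concave fit, isometric torque near 40
```

    Repeating this fit per joint, plane, and direction yields the lookup table from which composite-motion forces can then be composed.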

  18. To what extent is joint and muscle mechanics predicted by musculoskeletal models sensitive to soft tissue artefacts?

    PubMed

    Lamberto, Giuliano; Martelli, Saulo; Cappozzo, Aurelio; Mazzà, Claudia

    2017-09-06

    Musculoskeletal models are widely used to estimate joint kinematics, intersegmental loads, and muscle and joint contact forces during movement. These estimates can be heavily affected by the soft tissue artefact (STA) when input positional data are obtained using stereophotogrammetry, but this aspect has not yet been fully characterised for muscle and joint forces. This study aims to assess the sensitivity to the STA of three open-source musculoskeletal models, implemented in OpenSim. A baseline dataset of marker trajectories was created for each model from experimental data of one healthy volunteer. Five hundred STA realizations were then statistically generated using a marker-dependent model of the pelvis and lower limb artefact and added to the baseline data. The STA's impact on the musculoskeletal model estimates was finally quantified using a Monte Carlo analysis. The modelled STA distributions were in line with the literature. Observed output variations were comparable across the three models, and sensitivity to the STA was evident for most investigated quantities. Shape, magnitude and timing of the joint angle and moment time histories were not significantly affected throughout the entire gait cycle, whereas magnitude variations were observed for muscle and joint forces. Ranges of contact force variations differed between joints, with hip variations up to 1.8 times body weight observed. Variations of more than 30% were observed for some of the muscle forces. In conclusion, musculoskeletal simulations using stereophotogrammetry may be safely run when only interested in overall output patterns. Caution should be paid when more accurate estimated values are needed. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
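    The Monte Carlo scheme, adding statistically generated artefact realizations to baseline data and propagating each through the model, can be sketched with stand-ins: a sine "trajectory" and a range-of-motion "model output". The plain Gaussian artefact here is an assumption for illustration, not the marker-dependent STA model of the study.

```python
import numpy as np

rng = np.random.default_rng(4)
baseline = np.sin(np.linspace(0.0, 2.0 * np.pi, 101))  # stand-in trajectory

def model_output(traj):
    """Stand-in for the musculoskeletal model: a range-of-motion-like
    scalar computed from the (possibly perturbed) trajectory."""
    return traj.max() - traj.min()

# Monte Carlo: add an artefact realization to the baseline and record
# the resulting distribution of the model output.
outputs = np.array([
    model_output(baseline + rng.normal(0.0, 0.05, baseline.size))
    for _ in range(500)
])
print(len(outputs), outputs.std() < outputs.mean())  # spread vs. central value
```

    The spread of `outputs` relative to the baseline value is the sensitivity measure; in the study this comparison is made per joint and per muscle rather than for a single scalar.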

  19. Mixed-effects location and scale Tobit joint models for heterogeneous longitudinal data with skewness, detection limits, and measurement errors.

    PubMed

    Lu, Tao

    2017-01-01

    The joint modeling of mean and variance for longitudinal data is an active research area. This type of model has the advantage of accounting for the heteroscedasticity commonly observed in between- and within-subject variations. Most research focuses on improving estimation efficiency but ignores many data features frequently encountered in practice. In this article, we develop a mixed-effects location scale joint model that concurrently accounts for longitudinal data with multiple features. Specifically, our joint model handles heterogeneity, skewness, limits of detection, and measurement errors in covariates, which are typically observed in the collection of longitudinal data from many studies. We employ a Bayesian approach for making inference on the joint model. The proposed model and method are applied to an AIDS study. Simulation studies are performed to assess the performance of the proposed method. Alternative models under different conditions are compared.

  20. Joint Denoising/Compression of Image Contours via Shape Prior and Context Tree

    NASA Astrophysics Data System (ADS)

    Zheng, Amin; Cheung, Gene; Florencio, Dinei

    2018-07-01

    With the advent of depth sensing technologies, the extraction of object contours in images---a common and important pre-processing step for later higher-level computer vision tasks like object detection and human action recognition---has become easier. However, acquisition noise in captured depth images means that detected contours suffer from unavoidable errors. In this paper, we propose to jointly denoise and compress detected contours in an image for bandwidth-constrained transmission to a client, who can then carry out aforementioned application-specific tasks using the decoded contours as input. We first prove theoretically that in general a joint denoising / compression approach can outperform a separate two-stage approach that first denoises then encodes contours lossily. Adopting a joint approach, we first propose a burst error model that models typical errors encountered in an observed string y of directional edges. We then formulate a rate-constrained maximum a posteriori (MAP) problem that trades off the posterior probability p(x'|y) of an estimated string x' given y with its code rate R(x'). We design a dynamic programming (DP) algorithm that solves the posed problem optimally, and propose a compact context representation called total suffix tree (TST) that can reduce complexity of the algorithm dramatically. Experimental results show that our joint denoising / compression scheme outperformed a competing separate scheme in rate-distortion performance noticeably.

  1. N -tag probability law of the symmetric exclusion process

    NASA Astrophysics Data System (ADS)

    Poncet, Alexis; Bénichou, Olivier; Démery, Vincent; Oshanin, Gleb

    2018-06-01

    The symmetric exclusion process (SEP), in which particles hop symmetrically on a discrete line with hard-core constraints, is a paradigmatic model of subdiffusion in confined systems. This anomalous behavior is a direct consequence of strong spatial correlations induced by the requirement that the particles cannot overtake each other. Although this fact has been recognized qualitatively for a long time, there has so far been no full quantitative determination of these correlations. Here we study the joint probability distribution of an arbitrary number of tagged particles in the SEP. We determine analytically its large-time limit for an arbitrary density of particles, and its full dynamics in the high-density limit. In this limit, we obtain the time-dependent large deviation function of the problem and unveil a universal scaling form shared by the cumulants.
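    A minimal simulation of the SEP itself, hard-core particles hopping symmetrically on a ring, makes the suppressed spreading of tagged particles visible; the parameters are arbitrary, and the paper's analytical large-time results are not reproduced here.

```python
import numpy as np

def simulate_sep(L=200, n_particles=100, steps=20000, seed=3):
    """Symmetric exclusion process on a ring: at each step one random
    particle attempts a +/-1 hop, rejected if the target site is
    occupied. Returns each particle's signed displacement."""
    rng = np.random.default_rng(seed)
    pos = rng.choice(L, size=n_particles, replace=False)
    occupied = np.zeros(L, dtype=bool)
    occupied[pos] = True
    disp = np.zeros(n_particles)
    for _ in range(steps):
        i = rng.integers(n_particles)
        step = 1 if rng.random() < 0.5 else -1
        target = (pos[i] + step) % L
        if not occupied[target]:          # hard-core constraint
            occupied[pos[i]] = False
            occupied[target] = True
            pos[i] = target
            disp[i] += step
    return disp

disp = simulate_sep()
msd = float(np.mean(disp ** 2))
free = 20000 / 100   # MSD a non-interacting walker would reach per particle
print(0.0 < msd < free)  # crowding suppresses the spread
```

    Tracking several tagged particles from `disp` jointly, rather than one at a time, is exactly the correlation structure the paper characterizes analytically.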

  2. Biomechanics of an orthosis-managed cranial cruciate ligament-deficient canine stifle joint predicted by use of a computer model.

    PubMed

    Bertocci, Gina E; Brown, Nathan P; Mich, Patrice M

    2017-01-01

    OBJECTIVE To evaluate effects of an orthosis on biomechanics of a cranial cruciate ligament (CrCL)-deficient canine stifle joint by use of a 3-D quasistatic rigid-body pelvic limb computer model simulating the stance phase of gait and to investigate influences of orthosis hinge stiffness (durometer). SAMPLE A previously developed computer simulation model for a healthy 33-kg 5-year-old neutered Golden Retriever. PROCEDURES A custom stifle joint orthosis was implemented in the CrCL-deficient pelvic limb computer simulation model. Ligament loads, relative tibial translation, and relative tibial rotation in the orthosis-stabilized stifle joint (baseline scenario; high-durometer hinge) were determined and compared with values for CrCL-intact and CrCL-deficient stifle joints. Sensitivity analysis was conducted to evaluate the influence of orthosis hinge stiffness on model outcome measures. RESULTS The orthosis decreased loads placed on the caudal cruciate and lateral collateral ligaments and increased load placed on the medial collateral ligament, compared with loads for the CrCL-intact stifle joint. Ligament loads were decreased in the orthosis-managed CrCL-deficient stifle joint, compared with loads for the CrCL-deficient stifle joint. Relative tibial translation and rotation decreased but were not eliminated after orthosis management. Increased orthosis hinge stiffness reduced tibial translation and rotation, whereas decreased hinge stiffness increased internal tibial rotation, compared with values for the baseline scenario. CONCLUSIONS AND CLINICAL RELEVANCE Stifle joint biomechanics were improved following orthosis implementation, compared with biomechanics of the CrCL-deficient stifle joint. Orthosis hinge stiffness influenced stifle joint biomechanics. An orthosis may be a viable option to stabilize a CrCL-deficient canine stifle joint.

  3. Joint Inversion Modelling of Geophysical Data From Lough Neagh Basin

    NASA Astrophysics Data System (ADS)

    Vozar, J.; Moorkamp, M.; Jones, A. G.; Rath, V.; Muller, M. R.

    2015-12-01

    Multi-dimensional modelling of geophysical data collected in the Lough Neagh Basin is presented within the framework of the IRETHERM project. The Permo-Triassic Lough Neagh Basin, situated in the southeastern part of Northern Ireland, exhibits an elevated geothermal gradient (~30 °C/km) in exploratory boreholes. This is taken to indicate good geothermal exploitation potential in the Sherwood Sandstone aquifer for heating and possibly even electricity production. We have used a 3-D joint inversion framework for modelling the magnetotelluric (MT) and gravity data collected to the north of Lough Neagh to derive robust subsurface geological models. Comprehensive supporting geophysical and geological data (e.g., borehole logs and reflection seismic images) have been used to analyze and model the MT and gravity data. The geophysical data sets were provided by the Geological Survey of Northern Ireland (GSNI). Correct weighting of the objective function in favor of noise-free MT response functions is particularly important in joint inversion. There is no simple way to correct distortion effects in the 3-D responses, as can be done in the 1-D or 2-D case. We have therefore used the Tellus Project airborne EM data to constrain the magnetotelluric data and correct them for near-surface effects. The shallow models from the airborne data are used to constrain the uppermost part of the 3-D inversion model. Preliminary 3-D joint inversion modeling reveals that the Sherwood Sandstone Group and the Permian Sandstone Formation are imaged as a conductive zone at the depth range of 500 m to 2000 m with laterally varying thickness, depth, and conductance. The conductive target sediments become shallower and thinner to the north, and they are laterally continuous. To obtain a better characterization of the thermal transport properties of the investigated area, we used porosity and resistivity data from the Annaghmore and Ballymacilroy boreholes to estimate the relations between porosity

  4. Modeling Stochastic Complexity in Complex Adaptive Systems: Non-Kolmogorov Probability and the Process Algebra Approach.

    PubMed

    Sulis, William H

    2017-10-01

    Walter Freeman III pioneered the application of nonlinear dynamical systems theories and methodologies in his work on mesoscopic brain dynamics. Sadly, mainstream psychology and psychiatry still cling to linear correlation-based data analysis techniques, which threaten to subvert the process of experimentation and theory building. In order to progress, it is necessary to develop tools capable of managing the stochastic complexity of complex biopsychosocial systems, which includes multilevel feedback relationships, nonlinear interactions, chaotic dynamics and adaptability. In addition, however, these systems exhibit intrinsic randomness, non-Gaussian probability distributions, non-stationarity, contextuality, and non-Kolmogorov probabilities, as well as the absence of mean and/or variance and conditional probabilities. These properties and their implications for statistical analysis are discussed. An alternative approach, the Process Algebra approach, is described. It is a generative model, capable of generating non-Kolmogorov probabilities. It has proven useful in addressing fundamental problems in quantum mechanics and in the modeling of developing psychosocial systems.

  5. An empirical probability density distribution of planetary ionosphere storms with geomagnetic precursors

    NASA Astrophysics Data System (ADS)

    Gulyaeva, Tamara; Stanislawska, Iwona; Arikan, Feza; Arikan, Orhan

    The probability of occurrence of positive and negative planetary ionosphere storms is evaluated using the W index maps produced from Global Ionospheric Maps of Total Electron Content, GIM-TEC, provided by the Jet Propulsion Laboratory, and transformed from geographic coordinates to a magnetic coordinates frame. The auroral electrojet AE index and the equatorial disturbance storm time Dst index are investigated as precursors of the global ionosphere storm. The superposed epoch analysis is performed for 77 intense storms (Dst ≤ -100 nT) and 227 moderate storms (-100 nT < Dst). The average storm profiles for the AE and Dst indices, the positive storm probability per map, pW+, and the negative storm probability, pW-, are modeled with parameters determined using a Particle Swarm Optimization routine, with the best fit to the data in the least-squares sense. The normalized cross-correlation function is used to define the lag (time delay) between the probability of the positive phase pW+ (W = 3 and 4) and the negative phase pW- (W = -3 and -4) of the ionosphere storm, versus the AE index and Dst index. It is found that the AE index is better suited than the Dst index to serve as a precursor of the ionosphere storm, with onset of the average auroral AE storm occurring 6 h before the equatorial Dst storm onset for intense storms and 3 h in advance of moderate Dst storms. A similar advancement of the ionosphere storm across spatial zones is observed with the W index (pW+ and pW-), with a maximum localized in the polar magnetic zone and a minimum at the magnetic equator. An empirical relation for pW+ and pW- with the AE storm precursor is derived, which enables the probability of occurrence of the ionosphere storm to be predicted with a leading time of 1-2 h for the positive ionosphere storm and 9-10 h for the negative ionosphere storm. The ionosphere storm probability model is validated using data for 2 intense and 20
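    The lag-finding step, locating the peak of a normalized cross-correlation between a precursor index and a storm-probability series, can be sketched with synthetic signals (a sine stand-in rather than real AE or W index data):

```python
import numpy as np

def best_lag(a, b, max_lag=20):
    """Lag (in samples) maximizing the normalized cross-correlation of
    a against b; a positive result means a leads b."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    lags = list(range(-max_lag, max_lag + 1))
    scores = [np.corrcoef(np.roll(a, k), b)[0, 1] for k in lags]
    return lags[int(np.argmax(scores))]

t = np.arange(200)
precursor = np.sin(2.0 * np.pi * t / 50.0)  # stand-in for an AE-like index
response = np.roll(precursor, 6)            # same signal delayed by 6 samples
print(best_lag(precursor, response))        # 6: the precursor leads by 6
```

    With real indices the peak correlation is well below 1 and the recovered lag corresponds to the leading time quoted in the abstract.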

  6. Transition probabilities of health states for workers in Malaysia using a Markov chain model

    NASA Astrophysics Data System (ADS)

    Samsuddin, Shamshimah; Ismail, Noriszura

    2017-04-01

    The aim of our study is to estimate the transition probabilities of health states for workers in Malaysia who contribute to the Employment Injury Scheme under the Social Security Organization Malaysia, using a Markov chain model. Our study uses four states of health (active, temporary disability, permanent disability and death) based on data collected from longitudinal studies of workers in Malaysia over 5 years. The transition probabilities vary by health state, age and gender. The results show that male employees are more likely to have higher transition probabilities to any health state than female employees. The transition probabilities can be used to predict the future health of workers as a function of current age, gender and health state.
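    The counting estimator behind such a Markov chain analysis, row-normalized one-step transition counts, can be sketched as follows; the four states mirror the study's, but the toy paths are invented:

```python
import numpy as np

STATES = ["active", "temporary disability", "permanent disability", "death"]

def estimate_transition_matrix(sequences, n_states=4):
    """Maximum-likelihood transition probabilities: row-normalized
    counts of observed one-step transitions."""
    counts = np.zeros((n_states, n_states))
    for seq in sequences:
        for a, b in zip(seq[:-1], seq[1:]):
            counts[a, b] += 1.0
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

# Invented yearly paths through the four states (indices into STATES).
paths = [[0, 0, 1, 0, 0], [0, 1, 1, 2, 2], [0, 0, 0, 3], [0, 0, 1, 0]]
P = estimate_transition_matrix(paths)
print(np.round(P[0], 3))  # exits from "active": 5/9, 3/9, 0, 1/9
```

    Stratifying the sequences by age band and gender before counting yields the covariate-dependent transition probabilities the study reports.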

  7. Accounting for tagging-to-harvest mortality in a Brownie tag-recovery model by incorporating radio-telemetry data.

    PubMed

    Buderman, Frances E; Diefenbach, Duane R; Casalena, Mary Jo; Rosenberry, Christopher S; Wallingford, Bret D

    2014-04-01

    The Brownie tag-recovery model is useful for estimating harvest rates but assumes all tagged individuals survive to the first hunting season; otherwise, mortality between time of tagging and the hunting season will cause the Brownie estimator to be negatively biased. Alternatively, fitting animals with radio transmitters can be used to accurately estimate harvest rate but may be more costly. We developed a joint model to estimate harvest and annual survival rates that combines known-fate data from animals fitted with transmitters to estimate the probability of surviving the period from capture to the first hunting season, and data from reward-tagged animals in a Brownie tag-recovery model. We evaluated bias and precision of the joint estimator, and how to optimally allocate effort between animals fitted with radio transmitters and inexpensive ear tags or leg bands. Tagging-to-harvest survival rates from >20 individuals with radio transmitters combined with 50-100 reward tags resulted in an unbiased and precise estimator of harvest rates. In addition, the joint model can test whether transmitters affect an individual's probability of being harvested. We illustrate application of the model using data from wild turkey, Meleagris gallopavo, to estimate harvest rates, and data from white-tailed deer, Odocoileus virginianus, to evaluate whether the presence of a visible radio transmitter is related to the probability of a deer being harvested. The joint known-fate tag-recovery model eliminates the requirement to capture and mark animals immediately prior to the hunting season to obtain accurate and precise estimates of harvest rate. In addition, the joint model can assess whether marking animals with radio transmitters affects the individual's probability of being harvested, caused by hunter selectivity or changes in a marked animal's behavior.

  8. Accounting for tagging-to-harvest mortality in a Brownie tag-recovery model by incorporating radio-telemetry data

    USGS Publications Warehouse

    Buderman, Frances E.; Diefenbach, Duane R.; Casalena, Mary Jo; Rosenberry, Christopher S.; Wallingford, Bret D.

    2014-01-01

    The Brownie tag-recovery model is useful for estimating harvest rates but assumes all tagged individuals survive to the first hunting season; otherwise, mortality between time of tagging and the hunting season will cause the Brownie estimator to be negatively biased. Alternatively, fitting animals with radio transmitters can be used to accurately estimate harvest rate but may be more costly. We developed a joint model to estimate harvest and annual survival rates that combines known-fate data from animals fitted with transmitters to estimate the probability of surviving the period from capture to the first hunting season, and data from reward-tagged animals in a Brownie tag-recovery model. We evaluated bias and precision of the joint estimator, and how to optimally allocate effort between animals fitted with radio transmitters and inexpensive ear tags or leg bands. Tagging-to-harvest survival rates from >20 individuals with radio transmitters combined with 50–100 reward tags resulted in an unbiased and precise estimator of harvest rates. In addition, the joint model can test whether transmitters affect an individual's probability of being harvested. We illustrate application of the model using data from wild turkey, Meleagris gallopavo, to estimate harvest rates, and data from white-tailed deer, Odocoileus virginianus, to evaluate whether the presence of a visible radio transmitter is related to the probability of a deer being harvested. The joint known-fate tag-recovery model eliminates the requirement to capture and mark animals immediately prior to the hunting season to obtain accurate and precise estimates of harvest rate. In addition, the joint model can assess whether marking animals with radio transmitters affects the individual's probability of being harvested, caused by hunter selectivity or changes in a marked animal's behavior.
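The intuition behind the joint model can be illustrated with a simple moment-style calculation. This is a hedged sketch of the bias-correction idea only, not the authors' likelihood-based estimator: the naive Brownie recovery rate is divided by the tagging-to-harvest survival estimated from the radio-tagged sample.

```python
def joint_harvest_estimate(tagged, recovered, radios, radio_alive):
    """Moment-style sketch: the naive Brownie recovery rate
    understates harvest when some tagged animals die before the
    season; dividing by telemetry-estimated tagging-to-harvest
    survival removes that bias."""
    naive = recovered / tagged    # biased low if pre-season mortality > 0
    s_hat = radio_alive / radios  # known-fate survival to the season
    return naive / s_hat

# Hypothetical numbers: 100 tags, 18 recovered; 25 radios, 20 alive
# at the season opener.
h = joint_harvest_estimate(100, 18, 25, 20)
print(h)  # -> 0.225
```

The naive rate of 0.18 is corrected upward to 0.225 because only 80% of tagged animals were still alive when hunting began.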

  9. An accurate fatigue damage model for welded joints subjected to variable amplitude loading

    NASA Astrophysics Data System (ADS)

    Aeran, A.; Siriwardane, S. C.; Mikkelsen, O.; Langen, I.

    2017-12-01

    Researchers in the past have proposed several fatigue damage models to overcome the shortcomings of the commonly used Miner’s rule. However, requirements for material parameters or S-N curve modifications restrict their practical application. Moreover, applications of most of these models under variable amplitude loading conditions have not been reported. To overcome these restrictions, a new fatigue damage model is proposed in this paper. The proposed model can be applied by practicing engineers using only the S-N curve given in the standard codes of practice. The model is verified against experimentally derived damage evolution curves for C45 and 16Mn and gives better agreement than previous models. The fatigue lives predicted by the model also correlate better with experimental results than previous models, as shown in earlier published work by the authors. In this paper, the proposed model is applied to welded joints subjected to variable amplitude loadings. The model gives around 8% shorter fatigue lives than the Miner’s rule given in Eurocode. This shows the importance of applying accurate fatigue damage models to welded joints.
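For reference, the Miner's rule baseline that the proposed model improves upon is a simple linear damage sum over stress blocks; the paper's own model is not reproduced here. A sketch of the Miner baseline with hypothetical cycle counts:

```python
def miner_damage(blocks):
    """Linear (Miner) cumulative damage: sum of applied cycles over
    cycles-to-failure for each stress block; failure is predicted
    when the sum reaches 1."""
    return sum(n / N for n, N in blocks)

# Hypothetical variable-amplitude history:
# (cycles applied at this stress level, cycles to failure from S-N curve).
history = [(2.0e5, 1.0e6), (5.0e4, 2.0e5), (1.0e4, 4.0e4)]
d = miner_damage(history)
print(round(d, 2))  # -> 0.7
```

Models like the one proposed replace this linear, sequence-independent sum with a damage evolution that depends on load level and loading order.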

  10. Comparison of the Mortality Probability Admission Model III, National Quality Forum, and Acute Physiology and Chronic Health Evaluation IV hospital mortality models: implications for national benchmarking*.

    PubMed

    Kramer, Andrew A; Higgins, Thomas L; Zimmerman, Jack E

    2014-03-01

    To examine the accuracy of the original Mortality Probability Admission Model III, the ICU Outcomes Model/National Quality Forum modification of Mortality Probability Admission Model III, and the Acute Physiology and Chronic Health Evaluation IVa models for comparing observed and risk-adjusted hospital mortality predictions. Retrospective paired analyses of day 1 hospital mortality predictions using three prognostic models. Fifty-five ICUs at 38 U.S. hospitals from January 2008 to December 2012. Among 174,001 intensive care admissions, 109,926 met model inclusion criteria and 55,304 had data for mortality prediction using all three models. None. We compared patient exclusions and the discrimination, calibration, and accuracy of each model. Acute Physiology and Chronic Health Evaluation IVa excluded 10.7% of all patients, ICU Outcomes Model/National Quality Forum 20.1%, and Mortality Probability Admission Model III 24.1%. Discrimination of Acute Physiology and Chronic Health Evaluation IVa was superior, with an area under the receiver operating characteristic curve of 0.88, compared with Mortality Probability Admission Model III (0.81) and ICU Outcomes Model/National Quality Forum (0.80). Acute Physiology and Chronic Health Evaluation IVa was better calibrated (lowest Hosmer-Lemeshow statistic). The accuracy of Acute Physiology and Chronic Health Evaluation IVa was superior (adjusted Brier score = 31.0%) to that of Mortality Probability Admission Model III (16.1%) and ICU Outcomes Model/National Quality Forum (17.8%). Compared with observed mortality, Acute Physiology and Chronic Health Evaluation IVa overpredicted mortality by 1.5% and Mortality Probability Admission Model III by 3.1%; ICU Outcomes Model/National Quality Forum underpredicted mortality by 1.2%. Calibration curves showed that Acute Physiology and Chronic Health Evaluation performed well over the entire risk range, unlike the Mortality Probability Admission Model and ICU Outcomes Model/National Quality Forum models. Acute

  11. Estimation and prediction of maximum daily rainfall at Sagar Island using best fit probability models

    NASA Astrophysics Data System (ADS)

    Mandal, S.; Choudhury, B. U.

    2015-07-01

    Sagar Island, situated on the continental shelf of the Bay of Bengal, is one of the deltas most vulnerable to extreme rainfall-driven climatic hazards. Information on the probability of occurrence of maximum daily rainfall will be useful in devising risk management for sustaining the rainfed agrarian economy vis-a-vis food and livelihood security. Using six probability distribution models and long-term (1982-2010) daily rainfall data, we studied the probability of occurrence of annual, seasonal and monthly maximum daily rainfall (MDR) in the island. To select the best fit distribution models for the annual, seasonal and monthly time series based on maximum rank with minimum value of the test statistics, three statistical goodness of fit tests, viz. the Kolmogorov-Smirnov (K-S), Anderson-Darling (A²) and chi-square (χ²) tests, were employed. The best fit probability distribution was identified from the highest overall score obtained from the three goodness of fit tests. Results revealed that the normal probability distribution was best fitted for annual, post-monsoon and summer season MDR, while the Lognormal, Weibull and Pearson 5 distributions were best fitted for the pre-monsoon, monsoon and winter seasons, respectively. The estimated annual MDR were 50, 69, 86, 106 and 114 mm for return periods of 2, 5, 10, 20 and 25 years, respectively. The probabilities of an annual MDR of >50, >100, >150, >200 and >250 mm were estimated as 99, 85, 40, 12 and 3 % levels of exceedance, respectively. The monsoon, summer and winter seasons exhibited comparatively higher probabilities (78 to 85 %) for MDR of >100 mm and moderate probabilities (37 to 46 %) for >150 mm. For different recurrence intervals, the percent probability of MDR varied widely across intra- and inter-annual periods. In the island, rainfall anomaly can pose a climatic threat to the sustainability of agricultural production and thus needs adequate adaptation and mitigation measures.
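A goodness-of-fit workflow of this kind can be sketched in a few lines: fit a candidate distribution by maximum likelihood, then compute the Kolmogorov-Smirnov statistic as the largest gap between the empirical and model CDFs. The rainfall values below are hypothetical, not the Sagar Island record:

```python
import math
from statistics import mean, pstdev

def ks_statistic(sample, cdf):
    """Kolmogorov-Smirnov statistic: maximum gap between the
    empirical CDF of the sample and the candidate model CDF."""
    xs = sorted(sample)
    n = len(xs)
    return max(max(abs((i + 1) / n - cdf(x)), abs(i / n - cdf(x)))
               for i, x in enumerate(xs))

def fitted_normal_cdf(sample):
    """Normal CDF with MLE parameters (sample mean and std)."""
    mu, sigma = mean(sample), pstdev(sample)
    return lambda x: 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

# Hypothetical annual maximum daily rainfall sample (mm).
mdr = [48, 52, 55, 60, 63, 70, 72, 80, 85, 95]
d = ks_statistic(mdr, fitted_normal_cdf(mdr))
print(round(d, 3))
```

Repeating this for each candidate distribution and ranking the statistics (smaller is better) mirrors the selection procedure described in the abstract.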

  12. Inferring species interactions through joint mark–recapture analysis

    USGS Publications Warehouse

    Yackulic, Charles B.; Korman, Josh; Yard, Michael D.; Dzul, Maria C.

    2018-01-01

    Introduced species are frequently implicated in declines of native species. In many cases, however, evidence linking introduced species to native declines is weak. Failure to make strong inferences regarding the role of introduced species can hamper attempts to predict population viability and delay effective management responses. For many species, mark–recapture analysis is the most rigorous form of demographic analysis. However, to our knowledge, there are no mark–recapture models that allow for joint modeling of interacting species. Here, we introduce a two‐species mark–recapture population model in which the vital rates (and capture probabilities) of one species are allowed to vary in response to the abundance of the other species. We use a simulation study to explore bias and choose an approach to model selection. We then use the model to investigate species interactions between endangered humpback chub (Gila cypha) and introduced rainbow trout (Oncorhynchus mykiss) in the Colorado River between 2009 and 2016. In particular, we test hypotheses about how two environmental factors (turbidity and temperature), intraspecific density dependence, and rainbow trout abundance are related to survival, growth, and capture of juvenile humpback chub. We also project the long‐term effects of different rainbow trout abundances on adult humpback chub abundances. Our simulation study suggests this approach has minimal bias under potentially challenging circumstances (i.e., low capture probabilities) that characterized our application and that model selection using indicator variables could reliably identify the true generating model even when process error was high. When the model was applied to rainbow trout and humpback chub, we identified negative relationships between rainbow trout abundance and the survival, growth, and capture probability of juvenile humpback chub. Effects of interspecific interactions on survival and capture probability were strongly

  13. A Numerical Study on Toppling Failure of a Jointed Rock Slope by Using the Distinct Lattice Spring Model

    NASA Astrophysics Data System (ADS)

    Lian, Ji-Jian; Li, Qin; Deng, Xi-Fei; Zhao, Gao-Feng; Chen, Zu-Yu

    2018-02-01

    In this work, toppling failure of a jointed rock slope is studied using the distinct lattice spring model (DLSM). The gravity increase method (GIM) with a sub-step loading scheme is implemented in the DLSM to mimic the loading conditions of a centrifuge test. A classical centrifuge test of a jointed rock slope, previously simulated with the finite element method and the discrete element model, is simulated using the GIM-DLSM. Reasonable boundary conditions are obtained through detailed comparisons of existing numerical solutions with experimental records. With calibrated boundary conditions, the influences of the tensile strength of the rock block, the cohesion and friction angles of the joints, and the spacing and inclination angles of the joints on the flexural toppling failure of the jointed rock slope are investigated using the GIM-DLSM, leading to some insight into evaluating the state of flexural toppling failure of a jointed slope and effectively preventing the flexural toppling failure of jointed rock slopes.

  14. Tsunami probability in the Caribbean Region

    USGS Publications Warehouse

    Parsons, T.; Geist, E.L.

    2008-01-01

    We calculated tsunami runup probability (in excess of 0.5 m) at coastal sites throughout the Caribbean region. We applied a Poissonian probability model because of the variety of uncorrelated tsunami sources in the region. Coastlines were discretized into 20 km by 20 km cells, and the mean tsunami runup rate was determined for each cell. The remarkable ~500-year empirical record compiled by O'Loughlin and Lander (2003) was used to calculate an empirical tsunami probability map, the first of three constructed for this study. However, it is unclear whether the 500-year record is complete, so we conducted a seismic moment-balance exercise using a finite-element model of the Caribbean-North American plate boundaries and the earthquake catalog, and found that moment could be balanced if the seismic coupling coefficient is c = 0.32. Modeled moment release was therefore used to generate synthetic earthquake sequences to calculate 50 tsunami runup scenarios for 500-year periods. We made a second probability map from numerically calculated runup rates in each cell. Differences between the first two probability maps based on empirical and numerically modeled rates suggest that each captured different aspects of tsunami generation; the empirical model may be deficient in primary plate-boundary events, whereas the numerical model rates lack backarc fault and landslide sources. We thus prepared a third probability map using Bayesian likelihood functions derived from the empirical and numerical rate models and their attendant uncertainty to weight a range of rates at each 20 km by 20 km coastal cell. Our best-estimate map gives a range of 30-year runup probability from 0 to 30% regionally. © Birkhäuser 2008.
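Under the Poissonian model used here, the probability of at least one runup exceedance in a time window follows directly from the cell's mean rate. A sketch with a hypothetical cell rate (not a rate from the study's maps):

```python
import math

def poisson_exceedance(rate_per_year, window_years):
    """Probability of at least one event in the window under a
    Poisson process with constant mean rate:
    P = 1 - exp(-rate * window)."""
    return 1.0 - math.exp(-rate_per_year * window_years)

# Hypothetical cell: one >0.5 m runup per 250 years, 30-year window.
p = poisson_exceedance(1 / 250, 30)
print(round(p, 3))  # -> 0.113
```

Evaluating this for each 20 km by 20 km cell's estimated rate is what turns a rate map into a 30-year probability map.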

  15. A joint frailty-copula model between tumour progression and death for meta-analysis.

    PubMed

    Emura, Takeshi; Nakatochi, Masahiro; Murotani, Kenta; Rondeau, Virginie

    2017-12-01

    Dependent censoring often arises in biomedical studies when time to tumour progression (e.g., relapse of cancer) is censored by an informative terminal event (e.g., death). For meta-analysis combining existing studies, a joint survival model between tumour progression and death has been considered under semicompeting risks, which induces dependence through the study-specific frailty. Our paper utilizes copulas to generalize the joint frailty model by introducing an additional source of dependence arising from intra-subject association between tumour progression and death. The practical value of the new model is particularly evident for meta-analyses in which only a few covariates are consistently measured across studies and hence residual dependence exists. The covariate effects are formulated through the Cox proportional hazards model, and the baseline hazards are nonparametrically modeled on a basis of splines. The estimator is then obtained by maximizing a penalized log-likelihood function. We also show that the present methodologies are easily modified for competing risks or recurrent event data, and are generalized to accommodate left-truncation. Simulations are performed to examine the performance of the proposed estimator. The method is applied to a meta-analysis assessing CXCL12, a recently suggested biomarker, for survival in ovarian cancer patients. We implement our proposed methods in the R package joint.Cox.

  16. Anticipating abrupt shifts in temporal evolution of probability of eruption

    NASA Astrophysics Data System (ADS)

    Rohmer, J.; Loschetter, A.

    2016-04-01

    Estimating the probability of eruption by jointly accounting for different sources of monitoring parameters over time is a key component of volcano risk management. In the present study, we are interested in the transition from a state of low-to-moderate probability to a state of high probability. Using data from the MESIMEX exercise at the Vesuvius volcano, we investigated the potential of time-varying indicators, related to the correlation structure or to the variability of the probability time series, for detecting this critical transition in advance. We found that changes in the power spectra and in the standard deviation estimated over a rolling time window both present an abrupt increase that marks the approaching shift. Our numerical experiments revealed that the transition from an eruption probability of 10-15% to >70% could be identified up to 1-3 h in advance. This additional lead time could be useful for placing different key services (e.g., emergency services for vulnerable groups, commandeering additional transportation means, etc.) on a higher level of alert before the actual call for evacuation.
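The rolling-window standard deviation indicator can be sketched directly; the probability series below is hypothetical, not the MESIMEX data:

```python
from statistics import pstdev

def rolling_std(series, window):
    """Standard deviation over a rolling window; an abrupt rise in
    this indicator can flag an approaching critical transition in
    the probability series."""
    return [pstdev(series[i - window:i])
            for i in range(window, len(series) + 1)]

# Hypothetical eruption-probability series: quiet, then volatile
# just before the shift to high probability.
series = [0.10, 0.11, 0.10, 0.12, 0.11, 0.10, 0.15, 0.30, 0.55, 0.72]
stds = rolling_std(series, 4)
print(stds[-1] > stds[0])  # -> True: variability grows before the shift
```

In practice a threshold or trend test on this indicator (and on the power spectrum) is what buys the 1-3 h of lead time.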

  17. New approach to probability estimate of femoral neck fracture by fall (Slovak regression model).

    PubMed

    Wendlova, J

    2009-01-01

    3,216 Slovak women with primary or secondary osteoporosis or osteopenia, aged 20-89 years (mean age 58.9, 95% C.I. 58.42-59.38), were examined with a DXA bone densitometer (dual energy X-ray absorptiometry; GE Prodigy Primo). The following variables were measured for each patient: FSI (femur strength index), T-score total hip left, alpha angle left, theta angle left, and HAL (hip axis length) left; BMI (body mass index) was calculated from the height and weight of the patients. The regression model determined the following order of independent variables according to the intensity of their influence upon the occurrence of values of the dependent FSI variable: 1. BMI, 2. theta angle, 3. T-score total hip, 4. alpha angle, 5. HAL. The regression model equation, calculated from the variables monitored in the study, enables a doctor in practice to determine the probability magnitude (absolute risk) of a pathological value of FSI (FSI < 1) in the femoral neck area, i.e., it allows a probability estimate of a femoral neck fracture by fall for Slovak women. 1. The Slovak regression model differs from previously published regression models in its chosen independent variables and a dependent variable belonging to the biomechanical variables characterising bone quality. 2. The Slovak regression model excludes the inaccuracies of other models, which are not able to define precisely the current and past clinical condition of tested patients (e.g., to define the length and dose of exposure to risk factors). 3. The Slovak regression model opens the way to a new method of estimating the probability (absolute risk) or the odds of a femoral neck fracture by fall, based upon determination of bone quality. 4. It is assumed that development will proceed by improving the methods enabling measurement of bone quality and determination of the probability of fracture by fall (Tab. 6, Fig. 3, Ref. 22). Full Text (Free, PDF) www.bmj.sk.

  18. Identifying Chinese Microblog Users With High Suicide Probability Using Internet-Based Profile and Linguistic Features: Classification Model.

    PubMed

    Guan, Li; Hao, Bibo; Cheng, Qijin; Yip, Paul Sf; Zhu, Tingshao

    2015-01-01

    Traditional offline assessment of suicide probability is time consuming, and it is difficult to convince at-risk individuals to participate. Identifying individuals with high suicide probability through online social media has an advantage in its efficiency and its potential to reach hidden individuals, yet little research has focused on this specific field. The objective of this study was to apply two classification models, Simple Logistic Regression (SLR) and Random Forest (RF), to examine the feasibility and effectiveness of identifying microblog users in China with high suicide probability through profile and linguistic features extracted from Internet-based data. Nine hundred and nine Chinese microblog users completed an Internet survey; those scoring one SD above the mean of the total Suicide Probability Scale (SPS) score, as well as one SD above the mean in each of the four subscale scores in the participant sample, were labeled as high-risk individuals, respectively. Profile and linguistic features were fed into the two machine learning algorithms (SLR and RF) to train models that aim to identify high-risk individuals in general suicide probability and in its four dimensions. Models were trained and then tested by 5-fold cross-validation, in which both the training set and the test set were generated under the stratified random sampling rule from the whole sample. Three classic performance metrics (precision, recall, F1 measure) and a specifically defined metric, "Screening Efficiency", were adopted to evaluate model effectiveness. Classification performance was generally matched between SLR and RF. Given the best performance of the classification models, we were able to retrieve over 70% of the labeled high-risk individuals in overall suicide probability as well as in the four dimensions. Screening Efficiency of most models varied from 1/4 to 1/2. Precision of the models was generally below 30%.
Individuals in China with high suicide
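The three classic metrics reported above are computed from confusion counts; a self-contained sketch with hypothetical labels (not the study's data):

```python
def classification_metrics(y_true, y_pred):
    """Precision, recall, and F1 for a binary 'high-risk' label,
    computed from raw confusion counts."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical labels for ten users (1 = labeled high-risk).
y_true = [1, 0, 0, 1, 1, 0, 0, 1, 0, 0]
y_pred = [1, 1, 0, 1, 0, 0, 1, 1, 0, 0]
p, r, f = classification_metrics(y_true, y_pred)
print(round(p, 2), round(r, 2))  # -> 0.6 0.75
```

The pattern in the abstract (recall above 70%, precision below 30%) corresponds to a screening-oriented operating point: few true cases missed, at the cost of many false alarms.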

  19. The Failure Models of Lead Free Sn-3.0Ag-0.5Cu Solder Joint Reliability Under Low-G and High-G Drop Impact

    NASA Astrophysics Data System (ADS)

    Gu, Jian; Lei, YongPing; Lin, Jian; Fu, HanGuang; Wu, Zhongwei

    2017-02-01

    The reliability of Sn-3.0Ag-0.5Cu (SAC 305) solder joints under a broad range of drop impact levels was studied. The failure performance, failure probability and failure position of the solder joints were analyzed under two shock test conditions, i.e., 1000 g for 1 ms and 300 g for 2 ms. The stress distribution on the solder joint was calculated with ABAQUS. The results revealed that the dominant cause of failure was the tension due to the difference in stiffness between the printed circuit board and the ball grid array; the maximum tension of 121.1 MPa and 31.1 MPa under the 1000 g and 300 g drop impacts, respectively, was concentrated at the corner of the solder joint located in the outermost corner of the solder ball row. The failure modes were summarized into the following four modes: crack initiation and propagation through (1) the intermetallic compound layer, (2) the Ni layer, (3) the Cu pad, or (4) the Sn matrix. The outermost corner of the solder ball row had a high failure probability under both the 1000 g and 300 g drop impacts. The number of solder ball failures under the 300 g drop impact was higher than under the 1000 g drop impact. The characteristic drop values for failure were 41 and 15,199, respectively, according to the statistics.

  20. MRI-based modeling for radiocarpal joint mechanics: validation criteria and results for four specimen-specific models.

    PubMed

    Fischer, Kenneth J; Johnson, Joshua E; Waller, Alexander J; McIff, Terence E; Toby, E Bruce; Bilgen, Mehmet

    2011-10-01

    The objective of this study was to validate the MRI-based joint contact modeling methodology in the radiocarpal joints by comparison of model results with invasive specimen-specific radiocarpal contact measurements from four cadaver experiments. We used a single validation criterion for multiple outcome measures to characterize the utility and overall validity of the modeling approach. For each experiment, a Pressurex film and a Tekscan sensor were sequentially placed into the radiocarpal joints during simulated grasp. Computer models were constructed based on MRI visualization of the cadaver specimens without load. Images were also acquired during the loaded configuration used with the direct experimental measurements. Geometric surface models of the radius, scaphoid and lunate (including cartilage) were constructed from the images acquired without the load. The carpal bone motions from the unloaded state to the loaded state were determined using a series of 3D image registrations. Cartilage thickness was assumed uniform at 1.0 mm with an effective compressive modulus of 4 MPa. Validation was based on experimental versus model contact area, contact force, average contact pressure and peak contact pressure for the radioscaphoid and radiolunate articulations. Contact area was also measured directly from images acquired under load and compared to the experimental and model data. Qualitatively, there was good correspondence between the MRI-based model data and experimental data, with consistent relative size, shape and location of radioscaphoid and radiolunate contact regions. Quantitative data from the model generally compared well with the experimental data for all specimens. Contact area from the MRI-based model was very similar to the contact area measured directly from the images. For all outcome measures except average and peak pressures, at least two specimen models met the validation criteria with respect to experimental measurements for both articulations

  1. Damage prognosis of adhesively-bonded joints in laminated composite structural components of unmanned aerial vehicles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farrar, Charles R; Gobbato, Maurizio; Conte, Joel

    2009-01-01

    The extensive use of lightweight advanced composite materials in unmanned aerial vehicles (UAVs) drastically increases the sensitivity to both fatigue- and impact-induced damage of their critical structural components (e.g., wings and tail stabilizers) during service life. The spar-to-skin adhesive joints are considered one of the most fatigue-sensitive subcomponents of a lightweight UAV composite wing, with damage progressively evolving from the wing root. This paper presents a comprehensive probabilistic methodology for predicting the remaining service life of adhesively-bonded joints in laminated composite structural components of UAVs. Non-destructive evaluation techniques and Bayesian inference are used to (i) assess the current state of damage of the system and (ii) update the probability distribution of the damage extent at various locations. A probabilistic model for future loads and a mechanics-based damage model are then used to stochastically propagate damage through the joint. Combined local (e.g., exceedance of a critical damage size) and global (e.g., flutter instability) failure criteria are finally used to compute the probability of component failure at future times. The applicability and the partial validation of the proposed methodology are then briefly discussed by analyzing the debonding propagation, along a pre-defined adhesive interface, in a simply supported laminated composite beam with a solid rectangular cross section, subjected to a concentrated load applied at mid-span. A specially developed Euler-Bernoulli beam finite element with interlaminar slip along the damageable interface is used in combination with a cohesive zone model to study the fatigue-induced degradation in the adhesive material. The preliminary numerical results presented are promising for the future validation of the methodology.
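The final step, computing a probability of component failure at a future time from stochastically propagated damage, can be caricatured by a Monte Carlo count of trajectories that exceed a critical damage size. All numbers below are hypothetical and the additive-noise growth model is a stand-in, not the paper's mechanics-based damage model:

```python
import random

def failure_probability(d0, d_crit, growth_mu, growth_sigma,
                        blocks, trials=2000):
    """Monte Carlo sketch: propagate a damage size with noisy
    per-block growth and return the fraction of trajectories that
    reach the critical size by the horizon."""
    random.seed(42)  # reproducible sketch
    failures = 0
    for _ in range(trials):
        d = d0
        for _ in range(blocks):
            d += max(0.0, random.gauss(growth_mu, growth_sigma))
        if d >= d_crit:
            failures += 1
    return failures / trials

# Hypothetical debond size (mm): start 2.0, critical 10.0,
# mean growth 0.08 mm per load block over 100 blocks.
p_fail = failure_probability(d0=2.0, d_crit=10.0, growth_mu=0.08,
                             growth_sigma=0.05, blocks=100)
print(0 < p_fail < 1)  # -> True
```

In the paper this counting is done against combined local and global failure criteria rather than a single critical size.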

  2. HABITAT ASSESSMENT USING A RANDOM PROBABILITY BASED SAMPLING DESIGN: ESCAMBIA RIVER DELTA, FLORIDA

    EPA Science Inventory

    Smith, Lisa M., Darrin D. Dantin and Steve Jordan. In press. Habitat Assessment Using a Random Probability Based Sampling Design: Escambia River Delta, Florida (Abstract). To be presented at the SWS/GERS Fall Joint Society Meeting: Communication and Collaboration: Coastal Systems...

  3. Pre-Service Mathematics Teachers' Use of Probability Models in Making Informal Inferences about a Chance Game

    ERIC Educational Resources Information Center

    Kazak, Sibel; Pratt, Dave

    2017-01-01

    This study considers probability models as tools for both making informal statistical inferences and building stronger conceptual connections between data and chance topics in teaching statistics. In this paper, we aim to explore pre-service mathematics teachers' use of probability models for a chance game, where the sum of two dice matters in…
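For the chance game mentioned, the exact probability model for the sum of two fair dice is small enough to enumerate directly:

```python
from collections import Counter

# Exact probability model for the sum of two fair six-sided dice:
# enumerate all 36 equally likely outcomes and tally each sum.
sums = Counter(a + b for a in range(1, 7) for b in range(1, 7))
probs = {s: c / 36 for s, c in sums.items()}
print(round(probs[7], 4))  # -> 0.1667, the most likely sum
```

Comparing these model probabilities against observed frequencies from repeated play is the kind of data-chance connection the study explores.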

  4. A Model-Free Machine Learning Method for Risk Classification and Survival Probability Prediction.

    PubMed

    Geng, Yuan; Lu, Wenbin; Zhang, Hao Helen

    2014-01-01

    Risk classification and survival probability prediction are two major goals in survival data analysis, since they play an important role in patients' risk stratification, long-term diagnosis, and treatment selection. In this article, we propose a new model-free machine learning framework for risk classification and survival probability prediction based on weighted support vector machines. The new procedure does not require any specific parametric or semiparametric model assumption on the data, and is therefore capable of capturing nonlinear covariate effects. We use numerous simulation examples to demonstrate the finite sample performance of the proposed method under various settings. Applications to glioma tumor data and breast cancer gene expression survival data illustrate the new methodology in real data analysis.

  5. Exploring the Subtleties of Inverse Probability Weighting and Marginal Structural Models.

    PubMed

    Breskin, Alexander; Cole, Stephen R; Westreich, Daniel

    2018-05-01

    Since being introduced to epidemiology in 2000, marginal structural models have become a commonly used method for causal inference in a wide range of epidemiologic settings. In this brief report, we aim to explore three subtleties of marginal structural models. First, we distinguish marginal structural models from the inverse probability weighting estimator, and we emphasize that marginal structural models are not only for longitudinal exposures. Second, we explore the meaning of the word "marginal" in "marginal structural model." Finally, we show that the specification of a marginal structural model can have important implications for the interpretation of its parameters. Each of these concepts has important implications for the use and understanding of marginal structural models, and thus providing detailed explanations of them may lead to better practices for the field of epidemiology.
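To make the first distinction concrete, the inverse probability weighting estimator assigns each subject the reciprocal of the probability of the exposure they actually received, given covariates; the marginal structural model is the causal model those weighted data are then used to fit. A minimal sketch of the unstabilized weights, with hypothetical propensity scores:

```python
def ip_weights(exposed, propensity):
    """Unstabilized inverse probability weights: 1 / P(A = a | L)
    for each subject's observed exposure a, where propensity is
    P(A = 1 | L)."""
    return [1 / p if a else 1 / (1 - p)
            for a, p in zip(exposed, propensity)]

# Hypothetical exposures and propensity scores P(A=1 | L).
exposed    = [1, 0, 1, 0]
propensity = [0.8, 0.8, 0.25, 0.5]
w = ip_weights(exposed, propensity)
print([round(x, 2) for x in w])  # -> [1.25, 5.0, 4.0, 2.0]
```

Subjects whose observed exposure was unlikely given their covariates get large weights, creating a pseudo-population in which exposure is independent of those covariates.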

  6. Aggregate and Individual Replication Probability within an Explicit Model of the Research Process

    ERIC Educational Resources Information Center

    Miller, Jeff; Schwarz, Wolf

    2011-01-01

    We study a model of the research process in which the true effect size, the replication jitter due to changes in experimental procedure, and the statistical error of effect size measurement are all normally distributed random variables. Within this model, we analyze the probability of successfully replicating an initial experimental result by…

  7. Time-dependent earthquake probabilities

    USGS Publications Warehouse

    Gomberg, J.; Belardinelli, M.E.; Cocco, M.; Reasenberg, P.

    2005-01-01

    We have attempted to provide a careful examination of a class of approaches for estimating the conditional probability of failure of a single large earthquake, particularly approaches that account for static stress perturbations to tectonic loading, as in the approaches of Stein et al. (1997) and Hardebeck (2004). We have developed a framework based on a simple, generalized rate-change formulation and applied it to these two approaches to show how they relate to one another. We also have attempted to show the connection between models of seismicity rate changes applied to (1) populations of independent faults, as in background and aftershock seismicity, and (2) changes in estimates of the conditional probability of failure of a single fault, in which the notion of failure rate corresponds to successive failures of different members of a population of fault patches. The latter application requires specification of some probability distribution (density function, or PDF) that describes the population of potential recurrence times. This PDF may reflect our imperfect knowledge of when past earthquakes have occurred on a fault (epistemic uncertainty), the true natural variability in failure times, or some combination of both. We suggest two end-member conceptual single-fault models that may explain natural variability in recurrence times and suggest how they might be distinguished observationally. When viewed deterministically, these single-fault patch models differ significantly in their physical attributes, and when faults are immature, they differ in their responses to stress perturbations. Estimates of conditional failure probabilities effectively integrate over a range of possible deterministic fault models, usually with ranges that correspond to mature faults. Thus conditional failure probability estimates usually should not differ significantly for these models. Copyright 2005 by the American Geophysical Union.
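
    The conditional failure probability described above can be sketched for a lognormal recurrence-time PDF: the probability of failure in the next interval, given quiescence up to the present. The mean recurrence time and aperiodicity below are invented for illustration:

```python
import math

def lognormal_cdf(t, mu, sigma):
    """CDF of a lognormal recurrence-time distribution (t in years)."""
    return 0.5 * (1 + math.erf((math.log(t) - mu) / (sigma * math.sqrt(2))))

def conditional_probability(t_elapsed, dt, mu, sigma):
    """P(failure in (t, t+dt] | no failure up to t)."""
    F = lognormal_cdf
    return (F(t_elapsed + dt, mu, sigma) - F(t_elapsed, mu, sigma)) / \
           (1 - F(t_elapsed, mu, sigma))

# Illustrative fault: ~150-year mean recurrence, moderate aperiodicity.
mu, sigma = math.log(150), 0.5

# 30-year conditional probability after 100 years of quiescence.
p30 = conditional_probability(100, 30, mu, sigma)
```

For this PDF the conditional probability grows as elapsed time since the last event increases, which is the time-dependent behavior the abstract contrasts with a time-independent (Poisson) model.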

  8. A Joint Modeling Approach for Reaction Time and Accuracy in Psycholinguistic Experiments

    ERIC Educational Resources Information Center

    Loeys, T.; Rosseel, Y.; Baten, K.

    2011-01-01

    In the psycholinguistic literature, reaction times and accuracy can be analyzed separately using mixed (logistic) effects models with crossed random effects for item and subject. Given the potential correlation between these two outcomes, a joint model for the reaction time and accuracy may provide further insight. In this paper, a Bayesian…

  9. Physical Modeling of Shear Behavior of Infilled Rock Joints Under CNL and CNS Boundary Conditions

    NASA Astrophysics Data System (ADS)

    Shrivastava, Amit Kumar; Rao, K. Seshagiri

    2018-01-01

    Despite their frequent natural occurrence, filled discontinuities under constant normal stiffness (CNS) boundary conditions have been studied much less systematically, perhaps because of the difficulties arising from the increased number of variable parameters. Because of the lack of reliable and realistic theoretical or empirical relations, and the difficulties in obtaining and testing representative samples, engineers rely on judgment and often take the shear strength of the infill material itself as the shear strength of the rock joint. This assumption leads to uneconomical and sometimes unsafe design of underground structures, slopes, rock-socketed piles and foundations. To study the effect of infill on the shear behavior of rock joints, tests were performed on modeled infilled rock joints of different joint roughness under constant normal load (CNL) and CNS boundary conditions at various initial normal stresses and varying thickness of the infill material. The test results indicate that shear strength decreases with an increase in the t/a ratio under both CNL and CNS conditions, but the reduction in shear strength is greater for the CNL than for the CNS condition at a given initial normal stress. A detailed account of the effect of infill thickness on the shear and deformation behavior of infilled rock joints is given in this paper, and a model is proposed to predict the shear strength of infilled rock joints.

  10. A Joint Model for Longitudinal Measurements and Survival Data in the Presence of Multiple Failure Types

    PubMed Central

    Elashoff, Robert M.; Li, Gang; Li, Ning

    2009-01-01

    In this article we study a joint model for longitudinal measurements and competing risks survival data. Our joint model provides a flexible approach to handle possible nonignorable missing data in the longitudinal measurements due to dropout. It is also an extension of previous joint models with a single failure type, offering a possible way to model informatively censored events as a competing risk. Our model consists of a linear mixed effects submodel for the longitudinal outcome and a proportional cause-specific hazards frailty submodel (Prentice et al., 1978, Biometrics 34, 541-554) for the competing risks survival data, linked together by some latent random effects. We propose to obtain the maximum likelihood estimates of the parameters by an expectation maximization (EM) algorithm and estimate their standard errors using a profile likelihood method. The developed method works well in our simulation studies and is applied to a clinical trial for scleroderma lung disease. PMID:18162112

  11. Asymmetric biotic interactions and abiotic niche differences revealed by a dynamic joint species distribution model.

    PubMed

    Lany, Nina K; Zarnetske, Phoebe L; Schliep, Erin M; Schaeffer, Robert N; Orians, Colin M; Orwig, David A; Preisser, Evan L

    2018-05-01

    A species' distribution and abundance are determined by abiotic conditions and biotic interactions with other species in the community. Most species distribution models correlate the occurrence of a single species with environmental variables only, and leave out biotic interactions. To test the importance of biotic interactions on occurrence and abundance, we compared a multivariate spatiotemporal model of the joint abundance of two invasive insects that share a host plant, hemlock woolly adelgid (HWA; Adelges tsugae) and elongate hemlock scale (EHS; Fiorinia externa), to independent models that do not account for dependence among co-occurring species. The joint model revealed that HWA responded more strongly to abiotic conditions than EHS. Additionally, HWA appeared to predispose stands to subsequent increase of EHS, but HWA abundance was not strongly dependent on EHS abundance. This study demonstrates how incorporating spatial and temporal dependence into a species distribution model can reveal the dependence of a species' abundance on other species in the community. Accounting for dependence among co-occurring species with a joint distribution model can also improve estimation of the abiotic niche for species affected by interspecific interactions. © 2018 by the Ecological Society of America.

  12. Sampling the stream landscape: Improving the applicability of an ecoregion-level capture probability model for stream fishes

    USGS Publications Warehouse

    Mollenhauer, Robert; Mouser, Joshua B.; Brewer, Shannon K.

    2018-01-01

    Temporal and spatial variability in streams result in heterogeneous gear capture probability (i.e., the proportion of available individuals identified) that confounds interpretation of data used to monitor fish abundance. We modeled tow-barge electrofishing capture probability at multiple spatial scales for nine Ozark Highland stream fishes. In addition to fish size, we identified seven reach-scale environmental characteristics associated with variable capture probability: stream discharge, water depth, conductivity, water clarity, emergent vegetation, wetted width–depth ratio, and proportion of riffle habitat. The magnitude of the relationship between capture probability and both discharge and depth varied among stream fishes. We also identified lithological characteristics among stream segments as a coarse-scale source of variable capture probability. The resulting capture probability model can be used to adjust catch data and derive reach-scale absolute abundance estimates across a wide range of sampling conditions with similar effort as used in more traditional fisheries surveys (i.e., catch per unit effort). Adjusting catch data based on variable capture probability improves the comparability of data sets, thus promoting both well-informed conservation and management decisions and advances in stream-fish ecology.
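
    The adjustment of catch data by modeled capture probability can be sketched as follows. The logistic form is standard for detection models, but the coefficient and covariate values here are hypothetical, not the study's estimates:

```python
import math

def capture_probability(intercept, coefs, covariates):
    """Logistic model for gear capture probability as a function of
    standardized reach-scale covariates (values here are hypothetical)."""
    eta = intercept + sum(b * x for b, x in zip(coefs, covariates))
    return 1 / (1 + math.exp(-eta))

# Hypothetical reach: standardized discharge, water depth, conductivity.
p_hat = capture_probability(-0.5, [-0.8, -0.6, 0.2], [0.3, 1.1, -0.4])

# Adjust raw catch to a reach-scale absolute abundance estimate: N = C / p.
catch = 42
n_hat = catch / p_hat
```

Because capture probability varies across reaches, the adjusted estimate n_hat, rather than raw catch per unit effort, is what makes data sets comparable across sampling conditions.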

  13. Electrical stimulation on joint contracture: an experiment in rat model with direct current.

    PubMed

    Akai, M; Shirasaki, Y; Tateishi, T

    1997-04-01

    To examine whether electrical stimulation could decrease the degree of joint stiffness in a rat lower-extremity model, rat knee joints were surgically immobilized in a flexed position for 3 weeks. Two groups of rats were stimulated with 20 microA and 50 microA constant direct current. Another group had surgical intervention and sham electrodes without electricity. The hind leg was extirpated and prepared as a sample comprising the femur-knee joint-tibia unit. Recording the knee flexion angle against extension torque, the degree of joint contracture was assessed biomechanically by measuring the bone-joint-bone sample as a cantilever. Measurement was performed with (1) spectral analysis of transfer function measurement using random mechanical noise with a frequency range from 1 to 50 Hz, and (2) dynamic stiffness and loss tangent with steady-state sinusoidal excitation (11 and 35 Hz). The results showed no significant difference or trend in vibration analysis among the three groups. However, spectral analysis of transfer function measurement revealed more deformation under load, and a more viscous nature, in the stimulation groups than in the sham group, especially in the low frequency band. Electrical stimulation with constant direct current may therefore reduce the degree of joint contracture.

  14. Electromigration analysis of solder joints under ac load: A mean time to failure model

    NASA Astrophysics Data System (ADS)

    Yao, Wei; Basaran, Cemal

    2012-03-01

    In this study, alternating current (ac) electromigration (EM) degradation simulations were carried out for Sn95.5%Ag4.0%Cu0.5 (SAC405, by weight) solder joints. Mass transport analysis was conducted with viscoplastic material properties to quantify the damage mechanism in solder joints. Square, sine, and triangle ac current wave forms were used as input signals. dc and pulsed dc (PDC) electromigration analyses were conducted for comparison purposes. The maximum current density ranged from 2.2 × 10^6 A/cm^2 to 5.0 × 10^6 A/cm^2, and frequency ranged from 0.05 Hz to 5 Hz, with ambient temperature varying from 350 K to 450 K. Because room temperature is nearly two-thirds of the SAC solder joint's melting point on the absolute temperature scale (494.15 K), a viscoplastic material model is essential. An entropy-based damage evolution model was used to investigate the mean time to failure (MTF) behavior of solder joints subjected to ac stressing. It was observed that MTF was inversely proportional to ambient temperature as T^1.1 (in Celsius) and to current density as j^0.27 (in A/cm^2). Higher frequency leads to a shorter lifetime within the frequency range we studied, and the relationship MTF ∝ f^-0.41 is proposed. The lifetime of a solder joint subjected to ac is longer than under dc and PDC loading conditions. By introducing frequency, ambient temperature, and current density dependency terms, a modified MTF equation was proposed for solder joints subjected to ac current stressing.
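
    The reported scaling relations can be collected into a single power-law sketch. The prefactor is left as a free parameter, since its fitted value is not given here:

```python
def mean_time_to_failure(T_celsius, j, f, c=1.0):
    """Scaling form of the reported ac-stressing MTF relation:
    MTF ∝ T^-1.1 * j^-0.27 * f^-0.41, with T in Celsius, j in A/cm^2,
    f in Hz.  The prefactor c must be fitted to experimental data and
    is set to 1 here purely for illustration."""
    return c * T_celsius**-1.1 * j**-0.27 * f**-0.41

base = mean_time_to_failure(100, 3.0e6, 0.5)

# Raising temperature, current density, or frequency each shortens lifetime.
hot = mean_time_to_failure(120, 3.0e6, 0.5)
dense = mean_time_to_failure(100, 4.0e6, 0.5)
fast = mean_time_to_failure(100, 3.0e6, 1.0)
```

The negative exponents encode the abstract's observations: each stressor is inversely related to lifetime, with temperature the strongest influence among the three.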

  15. Modeling co-occurrence of northern spotted and barred owls: accounting for detection probability differences

    USGS Publications Warehouse

    Bailey, Larissa L.; Reid, Janice A.; Forsman, Eric D.; Nichols, James D.

    2009-01-01

    Barred owls (Strix varia) have recently expanded their range and now encompass the entire range of the northern spotted owl (Strix occidentalis caurina). This expansion has led to two important issues of concern for management of northern spotted owls: (1) possible competitive interactions between the two species that could contribute to population declines of northern spotted owls, and (2) possible changes in vocalization behavior and detection probabilities of northern spotted owls induced by presence of barred owls. We used a two-species occupancy model to investigate whether there was evidence of competitive exclusion between the two species at study locations in Oregon, USA. We simultaneously estimated detection probabilities for both species and determined if the presence of one species influenced the detection of the other species. Model selection results and associated parameter estimates provided no evidence that barred owls excluded spotted owls from territories. We found strong evidence that detection probabilities differed for the two species, with higher probabilities for northern spotted owls that are the object of current surveys. Non-detection of barred owls is very common in surveys for northern spotted owls, and detection of both owl species was negatively influenced by the presence of the congeneric species. Our results suggest that analyses directed at hypotheses of barred owl effects on demographic or occupancy vital rates of northern spotted owls need to deal adequately with imperfect and variable detection probabilities for both species.
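
    The role of imperfect detection can be illustrated with the single-season occupancy likelihood that underlies two-species models such as the one above; the psi and p values are illustrative, not estimates from the owl surveys:

```python
from math import prod

def history_probability(history, psi, p):
    """P(detection history) under a single-season occupancy model.
    psi: occupancy probability; p: per-survey detection probability.
    Assumes independent surveys; values below are illustrative."""
    p_given_present = prod(p if y else 1 - p for y in history)
    p_all_zero = 1.0 if not any(history) else 0.0
    # Either the site is occupied and yields this history, or it is
    # unoccupied, which can only produce the all-zero history.
    return psi * p_given_present + (1 - psi) * p_all_zero

# With low detection probability, an all-zero history is common even
# at a site that is very likely occupied:
p_000 = history_probability([0, 0, 0], psi=0.8, p=0.3)
```

This is why non-detection of barred owls in spotted owl surveys cannot be read as absence without an explicit detection model.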

  16. Probability-based collaborative filtering model for predicting gene-disease associations.

    PubMed

    Zeng, Xiangxiang; Ding, Ningxiang; Rodríguez-Patón, Alfonso; Zou, Quan

    2017-12-28

    Accurately predicting pathogenic human genes has been challenging in recent research. Considering the extensive gene-disease data verified by biological experiments, we can apply computational methods to perform accurate predictions with reduced time and expense. We propose a probability-based collaborative filtering model (PCFM) to predict pathogenic human genes. Several kinds of data sets, containing data on humans and on other, nonhuman species, are integrated in our model. Firstly, on the basis of a typical latent factorization model, we propose model I with an average heterogeneous regularization. Secondly, we develop modified model II with personal heterogeneous regularization to enhance the accuracy of the aforementioned model. In this model, vector space similarity or Pearson correlation coefficient metrics and data on related species are also used. We compared the results of PCFM with those of four state-of-the-art approaches, and PCFM performs better. PCFM can be leveraged for prediction of disease genes, especially for new human genes or diseases with no known relationships.

  17. Estimation of State Transition Probabilities: A Neural Network Model

    NASA Astrophysics Data System (ADS)

    Saito, Hiroshi; Takiyama, Ken; Okada, Masato

    2015-12-01

    Humans and animals can predict future states on the basis of acquired knowledge. This prediction of the state transition is important for choosing the best action, and the prediction is only possible if the state transition probability has already been learned. However, how our brains learn the state transition probability is unknown. Here, we propose a simple algorithm for estimating the state transition probability by utilizing the state prediction error. We analytically and numerically confirmed that our algorithm is able to learn the probability completely with an appropriate learning rate. Furthermore, our learning rule reproduced experimentally reported psychometric functions and neural activities in the lateral intraparietal area in a decision-making task. Thus, our algorithm might describe the manner in which our brains learn state transition probabilities and predict future states.
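
    A minimal sketch of such a learning rule, assuming a simple delta-rule form driven by the state prediction error (a simplification of the article's algorithm; the transition probability, learning rate, and trial count are invented):

```python
import random

random.seed(1)

true_p = 0.7    # true probability of transitioning to state 1
p_hat = 0.5     # initial estimate of the transition probability
alpha = 0.02    # learning rate

for _ in range(5000):
    # Observe one transition, then update the estimate in proportion
    # to the state prediction error (observed outcome minus estimate).
    outcome = 1.0 if random.random() < true_p else 0.0
    p_hat += alpha * (outcome - p_hat)
```

With an appropriately small learning rate, the estimate fluctuates around the true transition probability, which is the convergence property the abstract confirms analytically.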

  18. A study of cervical spine kinematics and joint capsule strain in rear impacts using a human FE model.

    PubMed

    Kitagawa, Yuichi; Yasuki, Tsuyoshi; Hasegawa, Junji

    2006-11-01

    Many efforts have been made to understand the mechanism of whiplash injury. Recently, the cervical facet joint capsules have attracted attention as a potential site of injury. An experimental approach has been taken to analyze the vertebral motion and to estimate joint capsule stretch, which was thought to be a potential cause of pain. The purpose of this study is to analyze the kinematics of the cervical facet joint using a human FE model in order to better understand the injury mechanism. The Total Human Model for Safety (THUMS) was used to visually analyze the local and global kinematics of the spine. Soft tissues in the neck were newly modeled and introduced into THUMS for estimating the loading level in rear impacts. The model was first validated against human test data in the literature by comparing vertebral motion as well as head and neck responses. Joint capsule strain was estimated from the maximum principal strain output of the elements representing the capsule tissues. A rear-end collision was then simulated using THUMS and a prototype seat model, assuming a delta-V of 25 km/h. The trajectory of the vertebrae was analyzed in a local coordinate system defined along the joint surface. Strain growth in the joint capsules was explained in relation to contact events between the occupant and the seat. A new seat concept was proposed to help lessen the loading on the neck soft tissues. The foam material of the seat back was softened, the initial gap behind the head was reduced and the head restraint was stiffened for firm support. The lower seat back frame was also reinforced to withstand the impact severity at the given delta-V. Another rear impact simulation was conducted using the new seat concept model to examine its effectiveness. The joint capsule strain was found to be relatively lower with the new seat concept. The study also discusses the influence of seat parameters on the vertebral motion and the resultant strain in the joint…

  19. Arthroscopic Management of Scaphoid-Trapezium-Trapezoid Joint Arthritis.

    PubMed

    Pegoli, Loris; Pozzi, Alessandro

    2017-11-01

    Scaphoid-trapezium-trapezoid (STT) joint arthritis is a common condition consisting of pain on the radial side of the wrist and base of the thumb, swelling, and tenderness over the STT joint. Common symptoms are loss of grip strength and thumb function. There are several treatments, from symptomatic conservative treatment to surgical solutions, such as arthrodesis, arthroplasties, and prosthesis implants. The role of arthroscopy has grown, and it is probably the best treatment for this condition. Advantages of arthroscopic management of STT arthritis are faster recovery, a better view of the joint during surgery, and the possibility of causing less damage to the capsular and ligamentous structures. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. Wildland fire probabilities estimated from weather model-deduced monthly mean fire danger indices

    Treesearch

    Haiganoush K. Preisler; Shyh-Chin Chen; Francis Fujioka; John W. Benoit; Anthony L. Westerling

    2008-01-01

    The National Fire Danger Rating System indices deduced from a regional simulation weather model were used to estimate probabilities and numbers of large fire events on monthly and 1-degree grid scales. The weather model simulations and forecasts are ongoing experimental products from the Experimental Climate Prediction Center at the Scripps Institution of Oceanography...

  1. Spatial capture–recapture models for jointly estimating population density and landscape connectivity.

    PubMed

    Royle, J Andrew; Chandler, Richard B; Gazenski, Kimberly D; Graves, Tabitha A

    2013-02-01

    Population size and landscape connectivity are key determinants of population viability, yet no methods exist for simultaneously estimating density and connectivity parameters. Recently developed spatial capture–recapture (SCR) models provide a framework for estimating density of animal populations but thus far have not been used to study connectivity. Rather, all applications of SCR models have used encounter probability models based on the Euclidean distance between traps and animal activity centers, which implies that home ranges are stationary, symmetric, and unaffected by landscape structure. In this paper we devise encounter probability models based on "ecological distance," i.e., the least-cost path between traps and activity centers, which is a function of both Euclidean distance and animal movement behavior in resistant landscapes. We integrate least-cost path models into a likelihood-based estimation scheme for spatial capture–recapture models in order to estimate population density and parameters of the least-cost encounter probability model. Therefore, it is possible to make explicit inferences about animal density, distribution, and landscape connectivity as it relates to animal movement from standard capture–recapture data. Furthermore, a simulation study demonstrated that ignoring landscape connectivity can result in negatively biased density estimators under the naive SCR model.
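
    The "ecological distance" idea can be sketched with a least-cost path on a resistance grid feeding a half-normal encounter model; the grid, resistance values, and encounter parameters below are invented for illustration:

```python
import heapq
import math

def least_cost_distance(resistance, start, goal):
    """Dijkstra least-cost path length on a resistance grid (4-neighbour
    moves); this plays the role of 'ecological distance' between a trap
    and an activity center.  Grid values are illustrative."""
    rows, cols = len(resistance), len(resistance[0])
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        if d > dist[(r, c)]:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                # Cost of a step: mean resistance of the two cells.
                nd = d + 0.5 * (resistance[r][c] + resistance[nr][nc])
                if nd < dist.get((nr, nc), math.inf):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return math.inf

def encounter_probability(d, p0=0.9, sigma=2.0):
    """Half-normal encounter model evaluated at ecological distance d."""
    return p0 * math.exp(-d * d / (2 * sigma * sigma))

# A landscape where the upper-right block is costly to cross.
grid = [[1, 1, 5],
        [1, 5, 5],
        [1, 1, 1]]
d_eco = least_cost_distance(grid, (0, 0), (2, 2))  # detours around the block
p = encounter_probability(d_eco)
```

Because the least-cost path detours around high-resistance cells, d_eco exceeds the Euclidean distance, lowering the encounter probability relative to a naive SCR model.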

  2. Spatial capture-recapture models for jointly estimating population density and landscape connectivity

    USGS Publications Warehouse

    Royle, J. Andrew; Chandler, Richard B.; Gazenski, Kimberly D.; Graves, Tabitha A.

    2013-01-01

    Population size and landscape connectivity are key determinants of population viability, yet no methods exist for simultaneously estimating density and connectivity parameters. Recently developed spatial capture–recapture (SCR) models provide a framework for estimating density of animal populations but thus far have not been used to study connectivity. Rather, all applications of SCR models have used encounter probability models based on the Euclidean distance between traps and animal activity centers, which implies that home ranges are stationary, symmetric, and unaffected by landscape structure. In this paper we devise encounter probability models based on “ecological distance,” i.e., the least-cost path between traps and activity centers, which is a function of both Euclidean distance and animal movement behavior in resistant landscapes. We integrate least-cost path models into a likelihood-based estimation scheme for spatial capture–recapture models in order to estimate population density and parameters of the least-cost encounter probability model. Therefore, it is possible to make explicit inferences about animal density, distribution, and landscape connectivity as it relates to animal movement from standard capture–recapture data. Furthermore, a simulation study demonstrated that ignoring landscape connectivity can result in negatively biased density estimators under the naive SCR model.

  3. Inverse-Probability-Weighted Estimation for Monotone and Nonmonotone Missing Data.

    PubMed

    Sun, BaoLuo; Perkins, Neil J; Cole, Stephen R; Harel, Ofer; Mitchell, Emily M; Schisterman, Enrique F; Tchetgen Tchetgen, Eric J

    2018-03-01

    Missing data is a common occurrence in epidemiologic research. In this paper, 3 data sets with induced missing values from the Collaborative Perinatal Project, a multisite US study conducted from 1959 to 1974, are provided as examples of prototypical epidemiologic studies with missing data. Our goal was to estimate the association of maternal smoking behavior with spontaneous abortion while adjusting for numerous confounders. At the same time, we did not necessarily wish to evaluate the joint distribution among potentially unobserved covariates, which is seldom the subject of substantive scientific interest. The inverse probability weighting (IPW) approach preserves the semiparametric structure of the underlying model of substantive interest and clearly separates the model of substantive interest from the model used to account for the missing data. However, IPW often will not result in valid inference if the missing-data pattern is nonmonotone, even if the data are missing at random. We describe a recently proposed approach to modeling nonmonotone missing-data mechanisms under missingness at random to use in constructing the weights in IPW complete-case estimation, and we illustrate the approach using 3 data sets described in a companion article (Am J Epidemiol. 2018;187(3):568-575).
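
    A minimal sketch of IPW complete-case estimation under missingness at random with a single fully observed covariate. The data-generating values are invented, and the observation model is taken as known rather than estimated:

```python
import random

random.seed(2)

# Y is observed (R = 1) with a probability that depends only on the
# fully observed X, i.e. the data are missing at random.
n = 40_000
rows = []
for _ in range(n):
    X = random.random() < 0.5
    Y = random.gauss(2.0 if X else 1.0, 1.0)   # overall E[Y] = 1.5
    p_obs = 0.9 if X else 0.4                  # P(R = 1 | X), assumed known
    R = random.random() < p_obs
    rows.append((X, Y, R))

# The naive complete-case mean over-represents the X = 1 stratum...
naive = sum(Y for X, Y, R in rows if R) / sum(1 for _, _, R in rows if R)

# ...while weighting each complete case by 1 / P(R = 1 | X) restores E[Y].
def w(X):
    return 1 / (0.9 if X else 0.4)

ipw = (sum(w(X) * Y for X, Y, R in rows if R) /
       sum(w(X) for X, Y, R in rows if R))
```

Here each observed record stands in for 1/P(R=1|X) records, observed and unobserved, which removes the selection bias of the complete-case analysis.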

  4. A spray flamelet/progress variable approach combined with a transported joint PDF model for turbulent spray flames

    NASA Astrophysics Data System (ADS)

    Hu, Yong; Olguin, Hernan; Gutheil, Eva

    2017-05-01

    A spray flamelet/progress variable approach is developed for use in spray combustion with partly pre-vaporised liquid fuel, where a laminar spray flamelet library accounts for evaporation within the laminar flame structures. For this purpose, the standard spray flamelet formulation for pure evaporating liquid fuel and oxidiser is extended by a chemical reaction progress variable in both the turbulent spray flame model and the laminar spray flame structures, in order to account for the effect of pre-vaporised liquid fuel, for instance through use of a pilot flame. This new approach is combined with a transported joint probability density function (PDF) method for the simulation of a turbulent piloted ethanol/air spray flame, and the extension requires the formulation of a joint three-variate PDF depending on the gas phase mixture fraction, the chemical reaction progress variable, and gas enthalpy. The molecular mixing is modelled with the extended interaction-by-exchange-with-the-mean (IEM) model, where source terms account for spray evaporation and heat exchange due to evaporation as well as the chemical reaction rate for the chemical reaction progress variable. This is the first formulation using a spray flamelet model that considers both evaporation and partly pre-vaporised liquid fuel within the laminar spray flamelets. Results with this new formulation show good agreement with the experimental data provided by A.R. Masri, Sydney, Australia. The analysis of the Lagrangian statistics of the gas temperature and the OH mass fraction indicates that partially premixed combustion prevails near the nozzle exit of the spray, whereas further downstream, the non-premixed flame is promoted towards the inner rich side of the spray jet since the pilot flame heats up the premixed inner spray zone. In summary, the simulation with the new formulation considering the reaction progress variable shows good performance, greatly improving the standard formulation, and it provides new…

  5. Maximum aposteriori joint source/channel coding

    NASA Technical Reports Server (NTRS)

    Sayood, Khalid; Gibson, Jerry D.

    1991-01-01

    A maximum aposteriori probability (MAP) approach to joint source/channel coder design is presented in this paper. This method attempts to explore a technique for designing joint source/channel codes, rather than ways of distributing bits between source coders and channel coders. For a nonideal source coder, MAP arguments are used to design a decoder which takes advantage of redundancy in the source coder output to perform error correction. Once the decoder is obtained, it is analyzed with the purpose of obtaining 'desirable properties' of the channel input sequence for improving overall system performance. Finally, an encoder design which incorporates these properties is proposed.

  6. A fault tree model to assess probability of contaminant discharge from shipwrecks.

    PubMed

    Landquist, H; Rosén, L; Lindhe, A; Norberg, T; Hassellöv, I-M; Lindgren, J F; Dahllöf, I

    2014-11-15

    Shipwrecks on the sea floor around the world may contain hazardous substances that can cause harm to the marine environment. Today there are no comprehensive methods for environmental risk assessment of shipwrecks, and thus there is poor support for decision-making on prioritization of mitigation measures. The purpose of this study was to develop a tool for quantitative risk estimation of potentially polluting shipwrecks, and in particular an estimation of the annual probability of hazardous substance discharge. The assessment of the probability of discharge is performed using fault tree analysis, facilitating quantification of the probability with respect to a set of identified hazardous events. This approach enables a structured assessment providing transparent uncertainty and sensitivity analyses. The model facilitates quantification of risk, quantification of the uncertainties in the risk calculation and identification of parameters to be investigated further in order to obtain a more reliable risk calculation. Copyright © 2014 Elsevier Ltd. All rights reserved.
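
    The fault-tree combination of basic events can be sketched with independent OR and AND gates; the hazardous events and their annual probabilities below are invented, not values from the model:

```python
def or_gate(probs):
    """Probability that at least one independent basic event occurs."""
    q = 1.0
    for p in probs:
        q *= 1.0 - p
    return 1.0 - q

def and_gate(probs):
    """Probability that all independent basic events occur."""
    out = 1.0
    for p in probs:
        out *= p
    return out

# Hypothetical annual probabilities of opening events for one wreck:
# hull corrosion breach, trawling damage, anchor strike.
p_opening = or_gate([0.02, 0.005, 0.001])

# Discharge requires both an opening and contaminant remaining on board
# (the 0.8 is likewise an invented illustrative value).
p_discharge = and_gate([p_opening, 0.8])
```

The gate structure is what makes the uncertainty and sensitivity analyses transparent: each basic-event probability can be perturbed to see its effect on the annual discharge probability at the top of the tree.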

  7. Modelling the strength of an aluminium-steel nailed joint

    NASA Astrophysics Data System (ADS)

    Goldspiegel, Fabien; Mocellin, Katia; Michel, Philippe

    2018-05-01

    For multi-material applications in the automotive industry, a superposition of cast aluminium (upper layer) and dual-phase steel (lower layer) joined with the High-Speed Nailing process is investigated through a combined experimental and numerical framework. Using the FORGE® finite-element software, results from joining simulations have been inserted into models of nailed-joint mechanical tests. Numerical shear and cross-tensile tests are compared to experimental ones to discuss discrepancies and possible improvements.

  8. Dynamic simulation of knee-joint loading during gait using force-feedback control and surrogate contact modelling.

    PubMed

    Walter, Jonathan P; Pandy, Marcus G

    2017-10-01

    The aim of this study was to perform multi-body, muscle-driven, forward-dynamics simulations of human gait using a 6-degree-of-freedom (6-DOF) model of the knee in tandem with a surrogate model of articular contact and force control. A forward-dynamics simulation incorporating position, velocity and contact force-feedback control (FFC) was used to track full-body motion capture data recorded for multiple trials of level walking and stair descent performed by two individuals with instrumented knee implants. Tibiofemoral contact force errors for FFC were compared against those obtained from a standard computed muscle control algorithm (CMC) with a 6-DOF knee contact model (CMC6); CMC with a 1-DOF translating hinge-knee model (CMC1); and static optimization with a 1-DOF translating hinge-knee model (SO). Tibiofemoral joint loads predicted by FFC and CMC6 were comparable for level walking, however FFC produced more accurate results for stair descent. SO yielded reasonable predictions of joint contact loading for level walking but significant differences between model and experiment were observed for stair descent. CMC1 produced the least accurate predictions of tibiofemoral contact loads for both tasks. Our findings suggest that reliable estimates of knee-joint loading may be obtained by incorporating position, velocity and force-feedback control with a multi-DOF model of joint contact in a forward-dynamics simulation of gait. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.

  9. A joint-space numerical model of metabolic energy expenditure for human multibody dynamic system.

    PubMed

    Kim, Joo H; Roberts, Dustyn

    2015-09-01

    Metabolic energy expenditure (MEE) is a critical performance measure of human motion. In this study, a general joint-space numerical model of MEE is derived by integrating the laws of thermodynamics and principles of multibody system dynamics, which can evaluate MEE without the limitations inherent in experimental measurements (phase delays, steady state and task restrictions, and limited range of motion) or muscle-space models (complexities and indeterminacies from excessive DOFs, contacts and wrapping interactions, and reliance on in vitro parameters). Muscle energetic components are mapped to the joint space, in which the MEE model is formulated. A constrained multi-objective optimization algorithm is established to estimate the model parameters from experimental walking data also used for initial validation. The joint-space parameters estimated directly from active subjects provide reliable MEE estimates with a mean absolute error of 3.6 ± 3.6% relative to validation values, which can be used to evaluate MEE for complex non-periodic tasks that may not be experimentally verifiable. This model also enables real-time calculations of instantaneous MEE rate as a function of time for transient evaluations. Although experimental measurements may not be completely replaced by model evaluations, predicted quantities can be used as strong complements to increase reliability of the results and yield unique insights for various applications. Copyright © 2015 John Wiley & Sons, Ltd.

  10. Study on Effects of the Stochastic Delay Probability for 1d CA Model of Traffic Flow

    NASA Astrophysics Data System (ADS)

    Xue, Yu; Chen, Yan-Hong; Kong, Ling-Jiang

    Considering the effects of different factors on the stochastic delay probability, the delay probability is classified into three cases. The first case, corresponding to the braking state, has a large delay probability when the anticipated velocity is larger than the gap between successive cars. The second, corresponding to the follow-the-leader rule, has an intermediate delay probability when the anticipated velocity equals the gap. The third case is acceleration, which has the minimum delay probability. The fundamental diagram obtained by numerical simulation shows different properties compared with that of the NaSch model: there exist two distinct regions, corresponding to the coexistence state and the jamming state, respectively.
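
    The three-case delay probability can be sketched as a modification of the NaSch update rule. The probability values and the ring-road bookkeeping below are assumptions for illustration, not the authors' parameters:

```python
import random

def step(pos, vel, L, vmax, p_brake=0.6, p_follow=0.25, p_acc=0.05):
    """One update of a circular-road NaSch-type CA in which the
    stochastic delay probability depends on how the anticipated
    velocity compares with the headway (illustrative reading of
    the abstract).  pos must be sorted along the ring of length L."""
    n = len(pos)
    gaps = [(pos[(i + 1) % n] - pos[i] - 1) % L for i in range(n)]
    new_vel = []
    for v, gap in zip(vel, gaps):
        v_ant = min(v + 1, vmax)            # anticipated velocity
        if v_ant > gap:
            p = p_brake                     # braking: large delay prob.
        elif v_ant == gap:
            p = p_follow                    # car following: intermediate
        else:
            p = p_acc                       # free acceleration: minimal
        v = min(v_ant, gap)                 # deterministic slowing
        if v > 0 and random.random() < p:   # stochastic delay
            v -= 1
        new_vel.append(v)
    new_pos = [(x + v) % L for x, v in zip(pos, new_vel)]
    return new_pos, new_vel
```

    Iterating `step` over many cars and densities and averaging flow yields the fundamental diagram discussed in the abstract.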

  11. Effect of intraarticular tramadol administration in the rat model of knee joint inflammation.

    PubMed

    Garlicki, Jarosław; Dorazil-Dudzik, Magdalena; Wordliczek, Jerzy; Przewłocka, Barbara

    2006-01-01

    Local administration of exogenous opioids may provide effective analgesia without adverse central nervous system effects. Experiments show that the peripheral antinociceptive effect of opioids is observed especially in inflammatory pain. The aim of this research was to estimate the effect of tramadol on nociceptive processing at the level of the peripheral nervous system after its local administration in a model of knee joint inflammation. Tramadol was administered intraarticularly into the rat knee joint, either before the induction of inflammation as preemptive analgesia or, for comparison, after the intraarticular injection of carrageenan. We determined the influence of tramadol injection on the pain threshold for thermal stimuli, on the development of inflammation as measured by joint edema, and on motor function following the induction of knee joint inflammation in the rat. Functional assessment of the inflamed knee joint included the rats' mobility and body position as well as joint loading and mobility. The results show that local administration of tramadol induces an antinociceptive effect. This effect, which includes a decrease in inflammatory edema, appears not only when tramadol is administered after carrageenan, when inflammation is already present, but also when it is injected prior to carrageenan in a preemptive analgesia scheme. These results show that not only morphine but also another opioid, tramadol, widely used in clinical practice, inhibits nociception, edema and functional impairment of the paw after local application directly to the inflamed knee joint.

  12. Modeling the effect of preexisting joints on normal fault geometries using a brittle and cohesive material

    NASA Astrophysics Data System (ADS)

    Kettermann, M.; van Gent, H. W.; Urai, J. L.

    2012-04-01

    Brittle rocks, such as those hosting many carbonate or sandstone reservoirs, are often affected by different kinds of fractures that influence each other. Understanding the effects of these interactions on fault geometries and on the formation of cavities and potential fluid pathways may be useful for reservoir quality prediction and production. Analogue modeling has proven to be a useful tool for studying faulting processes, although the materials typically used provide neither cohesion nor tensile strength, both of which are essential to create open fractures. Therefore, a very fine-grained, cohesive, hemihydrate powder was used in our experiments. The mechanical properties of the material scale well to natural prototypes, and owing to the fine grain size, structures are preserved in great detail. The deformation box used allows the formation of a half-graben and has initial dimensions of 30 cm width, 28 cm length and 20 cm height. The maximum dip-slip along the 60°-dipping predefined basement fault is 4.5 cm and was fully used in all experiments. To set up open joints prior to faulting, sheets of paper were placed vertically within the box to a depth of about 5 cm from the top. The powder was then sieved into the box, embedding the paper almost entirely. Finally, strings were used to remove the paper carefully, leaving open voids. This method allows the creation of cohesionless open joints while ensuring minimal impact on the sensitive surrounding material. The presented series of experiments aims to investigate the effect of different angles between the strike of a rigid basement fault and a distinct joint set. All experiments were performed with a joint spacing of 2.5 cm, and the fault-joint angles incrementally covered 0°, 4°, 8°, 12°, 16°, 20° and 25°. During deformation, time-lapse photography from the top and side captured every structural change and provided data for post-processing analysis using particle imaging velocimetry (PIV). Additionally

  13. Accuracy of clinicians and models for estimating the probability that a pulmonary nodule is malignant.

    PubMed

    Balekian, Alex A; Silvestri, Gerard A; Simkovich, Suzanne M; Mestaz, Peter J; Sanders, Gillian D; Daniel, Jamie; Porcel, Jackie; Gould, Michael K

    2013-12-01

    Management of pulmonary nodules depends critically on the probability of malignancy. Models to estimate probability have been developed and validated, but most clinicians rely on judgment. The aim of this study was to compare the accuracy of clinical judgment with that of two prediction models. Physician participants reviewed up to five clinical vignettes, selected at random from a larger pool of 35 vignettes, all based on actual patients with lung nodules of known final diagnosis. Vignettes included clinical information and a representative slice from computed tomography. Clinicians estimated the probability of malignancy for each vignette. To examine agreement with models, we calculated intraclass correlation coefficients (ICC) and kappa statistics. To examine accuracy, we compared areas under the receiver operating characteristic curve (AUC). Thirty-six participants completed 179 vignettes, 47% of which described patients with malignant nodules. Agreement between participants and models was fair for the Mayo Clinic model (ICC, 0.37; 95% confidence interval [CI], 0.23-0.50) and moderate for the Veterans Affairs model (ICC, 0.46; 95% CI, 0.34-0.57). There was no difference in accuracy between participants (AUC, 0.70; 95% CI, 0.62-0.77) and the Mayo Clinic model (AUC, 0.71; 95% CI, 0.62-0.80; P = 0.90) or the Veterans Affairs model (AUC, 0.72; 95% CI, 0.64-0.80; P = 0.54). In this vignette-based study, clinical judgment and models appeared to have similar accuracy for lung nodule characterization, but agreement between judgment and the models was modest, suggesting that qualitative and quantitative approaches may provide complementary information.
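
    For reference, the Mayo Clinic model compared here is a logistic model. A sketch using the coefficients as commonly cited from Swensen et al. (1997), shown for illustration only and not for clinical use:

```python
import math

def mayo_malignancy_probability(age, diameter_mm, smoker, cancer_history,
                                spiculation, upper_lobe):
    """Pretest probability that a solitary pulmonary nodule is
    malignant, using the Mayo Clinic logistic model with coefficients
    as commonly cited from Swensen et al. (1997)."""
    x = (-6.8272
         + 0.0391 * age
         + 0.7917 * (1 if smoker else 0)
         + 1.3388 * (1 if cancer_history else 0)
         + 0.1274 * diameter_mm
         + 1.0407 * (1 if spiculation else 0)
         + 0.7838 * (1 if upper_lobe else 0))
    return 1.0 / (1.0 + math.exp(-x))
```

    Each risk factor shifts the logit additively, so, for example, a positive smoking history multiplies the odds of malignancy by about e^0.79 ≈ 2.2.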

  14. Lane detection based on color probability model and fuzzy clustering

    NASA Astrophysics Data System (ADS)

    Yu, Yang; Jo, Kang-Hyun

    2018-04-01

    In the vehicle driver assistance systems, the accuracy and speed of lane line detection are the most important. This paper is based on color probability model and Fuzzy Local Information C-Means (FLICM) clustering algorithm. The Hough transform and the constraints of structural road are used to detect the lane line accurately. The global map of the lane line is drawn by the lane curve fitting equation. The experimental results show that the algorithm has good robustness.

  15. Time-dependent landslide probability mapping

    USGS Publications Warehouse

    Campbell, Russell H.; Bernknopf, Richard L.; ,

    1993-01-01

    Case studies where time of failure is known for rainfall-triggered debris flows can be used to estimate the parameters of a hazard model in which the probability of failure is a function of time. As an example, a time-dependent function for the conditional probability of a soil slip is estimated from independent variables representing hillside morphology, approximations of material properties, and the duration and rate of rainfall. If probabilities are calculated in a GIS (geographic information system) environment, the spatial distribution of the result for any given hour can be displayed on a map. Although the probability levels in this example are uncalibrated, the method offers a potential for evaluating different physical models and different earth-science variables by comparing the map distribution of predicted probabilities with inventory maps for different areas and different storms. If linked with spatial and temporal socio-economic variables, this method could be used for short-term risk assessment.
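
    The time-dependent failure probability described here can be sketched as a discrete-time hazard: each hour contributes a conditional failure probability, and the cumulative probability follows from the product of hourly survival terms. The logistic link and coefficients below are hypothetical stand-ins for the study's calibration:

```python
import math

def hourly_hazard(rain_mm, slope_deg, b0=-8.0, b_rain=0.25, b_slope=0.10):
    """Conditional probability of soil-slip failure within one hour,
    from a hypothetical logistic link on hourly rainfall and slope
    (coefficients are illustrative, not the study's calibration)."""
    z = b0 + b_rain * rain_mm + b_slope * slope_deg
    return 1.0 / (1.0 + math.exp(-z))

def cumulative_failure_probability(rain_series_mm, slope_deg):
    """P(failure by end of storm) = 1 - prod_t (1 - h_t)."""
    survive = 1.0
    for rain in rain_series_mm:
        survive *= 1.0 - hourly_hazard(rain, slope_deg)
    return 1.0 - survive
```

    Evaluating this on every grid cell of a GIS layer for a given hour would produce the time-stamped probability maps the abstract describes.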

  16. Evidence of cartilage repair by joint distraction in a canine model of osteoarthritis.

    PubMed

    Wiegant, Karen; Intema, Femke; van Roermund, Peter M; Barten-van Rijbroek, Angelique D; Doornebal, Arie; Hazewinkel, Herman A W; Lafeber, Floris P J G; Mastbergen, Simon C

    2015-02-01

    Knee osteoarthritis (OA) is a degenerative joint disorder characterized by cartilage, bone, and synovial tissue changes that lead to pain and functional impairment. Joint distraction is a treatment that provides long-term improvement in pain and function accompanied by cartilage repair, as evaluated indirectly by imaging studies and measurement of biochemical markers. The purpose of this study was to evaluate cartilage tissue repair directly by histologic and biochemical assessments after joint distraction treatment. In 27 dogs, OA was induced in the right knee joint (groove model; surgical damage to the femoral cartilage). After 10 weeks of OA development, the animals were randomized to 1 of 3 groups. Two groups were fitted with an external fixator, which they wore for a subsequent 10 weeks (one group with and one without joint distraction), and the third group had no external fixation (OA control group). Pain/function was studied by force plate analysis. Cartilage integrity and chondrocyte activity of the surgically untouched tibial plateaus were analyzed 25 weeks after removal of the fixator. Changes in force plate analysis values between the different treatment groups were not conclusive. Features of OA were present in the OA control group, in contrast to the generally less severe damage after joint distraction. Those treated with joint distraction had lower macroscopic and histologic damage scores, higher proteoglycan content, better retention of newly formed proteoglycans, and less collagen damage. In the fixator group without distraction, similarly diminished joint damage was found, although it was less pronounced. Joint distraction as a treatment of experimentally induced OA results in cartilage repair activity, which corroborates the structural observations of cartilage repair indicated by surrogate markers in humans. Copyright © 2015 by the American College of Rheumatology.

  17. Dynamic Analyses Including Joints Of Truss Structures

    NASA Technical Reports Server (NTRS)

    Belvin, W. Keith

    1991-01-01

    Method for mathematically modeling joints to assess influences of joints on dynamic response of truss structures developed in study. Only structures with low-frequency oscillations considered; only Coulomb friction and viscous damping included in analysis. Focus of effort to obtain finite-element mathematical models of joints exhibiting load-vs.-deflection behavior similar to measured load-vs.-deflection behavior of real joints. Experiments performed to determine stiffness and damping nonlinearities typical of joint hardware. Algorithm for computing coefficients of analytical joint models based on test data developed to enable study of linear and nonlinear effects of joints on global structural response. Besides intended application to large space structures, applications in nonaerospace community include ground-based antennas and earthquake-resistant steel-framed buildings.

  18. Identifying Chinese Microblog Users With High Suicide Probability Using Internet-Based Profile and Linguistic Features: Classification Model

    PubMed Central

    Guan, Li; Hao, Bibo; Cheng, Qijin; Yip, Paul SF

    2015-01-01

    Background Traditional offline assessment of suicide probability is time consuming, and it is difficult to convince at-risk individuals to participate. Identifying individuals with high suicide probability through online social media has advantages in efficiency and in its potential to reach hidden individuals, yet little research has focused on this specific field. Objective The objective of this study was to apply two classification models, Simple Logistic Regression (SLR) and Random Forest (RF), to examine the feasibility and effectiveness of identifying microblog users in China with high suicide probability through profile and linguistic features extracted from Internet-based data. Methods A total of 909 Chinese microblog users completed an Internet survey, and those scoring one SD above the mean of the total Suicide Probability Scale (SPS) score, as well as one SD above the mean in each of the four subscale scores, were labeled as high-risk individuals. Profile and linguistic features were fed into the two machine learning algorithms (SLR and RF) to train models that identify high-risk individuals in general suicide probability and in its four dimensions. Models were trained and then tested by 5-fold cross-validation, in which both training and test sets were generated by stratified random sampling from the whole sample. Three classic performance metrics (Precision, Recall, F1 measure) and a specifically defined metric, “Screening Efficiency”, were adopted to evaluate model effectiveness. Results Classification performance was generally matched between SLR and RF. Given the best performance of the classification models, we were able to retrieve over 70% of the labeled high-risk individuals in overall suicide probability as well as in the four dimensions. Screening Efficiency of most models varied from 1/4 to 1/2. Precision of the models was generally below 30

  19. Probability of atrial fibrillation after ablation: Using a parametric nonlinear temporal decomposition mixed effects model.

    PubMed

    Rajeswaran, Jeevanantham; Blackstone, Eugene H; Ehrlinger, John; Li, Liang; Ishwaran, Hemant; Parides, Michael K

    2018-01-01

    Atrial fibrillation is an arrhythmic disorder where the electrical signals of the heart become irregular. The probability of atrial fibrillation (binary response) is often time varying in a structured fashion, as is the influence of associated risk factors. A generalized nonlinear mixed effects model is presented to estimate the time-related probability of atrial fibrillation using a temporal decomposition approach to reveal the pattern of the probability of atrial fibrillation and their determinants. This methodology generalizes to patient-specific analysis of longitudinal binary data with possibly time-varying effects of covariates and with different patient-specific random effects influencing different temporal phases. The motivation and application of this model is illustrated using longitudinally measured atrial fibrillation data obtained through weekly trans-telephonic monitoring from an NIH sponsored clinical trial being conducted by the Cardiothoracic Surgery Clinical Trials Network.

  20. The JOINT model of nurse absenteeism and turnover: a systematic review.

    PubMed

    Daouk-Öyry, Lina; Anouze, Abdel-Latef; Otaki, Farah; Dumit, Nuhad Yazbik; Osman, Ibrahim

    2014-01-01

    Absenteeism and turnover among healthcare workers have a significant impact on overall healthcare system performance. The literature captures variables from different levels of measurement and analysis as being associated with attendance behavior among nurses. Yet, it remains unclear how variables from different contextual levels interact to impact nurses' attendance behaviors. The purpose of this review is to develop an integrative multilevel framework that optimizes our understanding of absenteeism and turnover among nurses in hospital settings. We therefore systematically examined English-only studies retrieved from two major databases, PubMed and CINAHL Plus, and published between January 2007 and January 2013 (inclusive). Our review led to the identification of 7619 articles, of which 41 matched the inclusion criteria. The analysis yielded a total of 91 antecedent variables and 12 outcome variables for turnover, and 29 antecedent variables and 9 outcome variables for absenteeism. The various manifested variables were analyzed using content analysis and grouped into 11 categories, and further into five main factors: Job, Organization, Individual, National and inTerpersonal (JOINT). Thus, we propose the JOINT multilevel conceptual model for investigating absenteeism and turnover among nurses. The JOINT model can be adapted by researchers for fitting their hypothesized multilevel relationships. It can also be used by nursing managers as a lens for holistically managing nurses' attendance behaviors. Copyright © 2013 Elsevier Ltd. All rights reserved.

  1. The Gaussian atmospheric transport model and its sensitivity to the joint frequency distribution and parametric variability.

    PubMed

    Hamby, D M

    2002-01-01

    Reconstructed meteorological data are often used in some form of long-term wind trajectory models for estimating the historical impacts of atmospheric emissions. Meteorological data for the straight-line Gaussian plume model are put into a joint frequency distribution, a three-dimensional array describing atmospheric wind direction, speed, and stability. Methods using the Gaussian model and joint frequency distribution inputs provide reasonable estimates of downwind concentration and have been shown to be accurate to within a factor of four. We have used multiple joint frequency distributions and probabilistic techniques to assess the Gaussian plume model and determine concentration-estimate uncertainty and model sensitivity. We examine the straight-line Gaussian model while calculating both sector-averaged and annual-averaged relative concentrations at various downwind distances. The sector-average concentration model was found to be most sensitive to wind speed, followed by vertical dispersion (sigma-z), the importance of which increases as stability increases. The Gaussian model is not sensitive to stack height uncertainty. Precision of the frequency data appears to be most important to meteorological inputs when calculations are made for near-field receptors, increasing as stack height increases.
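
    The sector-averaged relative concentration used with joint frequency distribution inputs has a standard closed form for a 22.5° wind sector with ground reflection. A sketch, assuming a ground-level receptor and no plume depletion:

```python
import math

def sector_avg_chi_over_q(x, u, sigma_z, f, h=0.0):
    """Sector-averaged relative concentration chi/Q (s/m^3) for one
    22.5-degree sector of a joint frequency distribution: downwind
    distance x (m), wind speed u (m/s), vertical dispersion sigma_z
    (m), sector frequency f, release height h (m).  The constant
    2.032 = 2 / (sqrt(2*pi) * (2*pi/16)): factor 2 for ground
    reflection, sqrt(2*pi)*sigma_z from the vertical Gaussian, and
    x*(2*pi/16) is the sector arc width at distance x."""
    return 2.032 * f / (u * sigma_z * x) * math.exp(-h**2 / (2.0 * sigma_z**2))
```

    Summing this expression over all wind-speed and stability classes of a sector (each with its own frequency, speed and sigma-z) gives the annual-average chi/Q at that receptor.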

  2. Joint Probability Models of Radiology Images and Clinical Annotations

    ERIC Educational Resources Information Center

    Arnold, Corey Wells

    2009-01-01

    Radiology data, in the form of images and reports, is growing at a high rate due to the introduction of new imaging modalities, new uses of existing modalities, and the growing importance of objective image information in the diagnosis and treatment of patients. This increase has resulted in an enormous set of image data that is richly annotated…

  3. Characterization of 3D joint space morphology using an electrostatic model (with application to osteoarthritis)

    NASA Astrophysics Data System (ADS)

    Cao, Qian; Thawait, Gaurav; Gang, Grace J.; Zbijewski, Wojciech; Reigel, Thomas; Brown, Tyler; Corner, Brian; Demehri, Shadpour; Siewerdsen, Jeffrey H.

    2015-02-01

    Joint space morphology can be indicative of the risk, presence, progression, and/or treatment response of disease or trauma. We describe a novel methodology of characterizing joint space morphology in high-resolution 3D images (e.g. cone-beam CT (CBCT)) using a model based on elementary electrostatics that overcomes a variety of basic limitations of existing 2D and 3D methods. The method models each surface of a joint as a conductor at fixed electrostatic potential and characterizes the intra-articular space in terms of the electric field lines resulting from the solution of Gauss’ Law and the Laplace equation. As a test case, the method was applied to discrimination of healthy and osteoarthritic subjects (N = 39) in 3D images of the knee acquired on an extremity CBCT system. The method demonstrated improved diagnostic performance (area under the receiver operating characteristic curve, AUC > 0.98) compared to simpler methods of quantitative measurement and qualitative image-based assessment by three expert musculoskeletal radiologists (AUC = 0.87, p-value = 0.007). The method is applicable to simple (e.g. the knee or elbow) or multi-axial joints (e.g. the wrist or ankle) and may provide a useful means of quantitatively assessing a variety of joint pathologies.
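
    The electrostatic analogy can be illustrated in miniature: hold one bone surface at potential 1 and the other at 0, solve the Laplace equation between them, and read the local joint-space width from the field strength, since the potential drops by 1 across the gap and so width ≈ 1/|∇φ|. A minimal 2-D flat-plate sketch (the actual method operates on segmented 3-D CBCT surfaces):

```python
def joint_space_width(gap_rows, ncols=8, niter=5000):
    """Estimate the gap between two flat 'conductor' surfaces (in cell
    units) from a Jacobi solution of the Laplace equation, with
    periodic side boundaries.  For flat plates the potential falls
    linearly, so 1/|grad(phi)| recovers the plate separation."""
    nrows = gap_rows + 2
    phi = [[0.0] * ncols for _ in range(nrows)]
    phi[0] = [1.0] * ncols                    # bone 1 held at potential 1
    for _ in range(niter):                    # Jacobi relaxation
        new = [row[:] for row in phi]
        for i in range(1, nrows - 1):
            for j in range(ncols):
                left, right = phi[i][(j - 1) % ncols], phi[i][(j + 1) % ncols]
                new[i][j] = 0.25 * (phi[i - 1][j] + phi[i + 1][j] + left + right)
        phi = new
    mid = nrows // 2                          # central vertical gradient
    grad = abs(phi[mid - 1][0] - phi[mid + 1][0]) / 2.0
    return 1.0 / grad                         # width ~ 1/|grad(phi)|
```

    On real anatomy the surfaces are curved, and the method in the paper traces electric field lines between them, which handles multi-axial joints where a straight-line distance is ambiguous.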

  4. Probability Theory Plus Noise: Descriptive Estimation and Inferential Judgment.

    PubMed

    Costello, Fintan; Watts, Paul

    2018-01-01

    We describe a computational model of two central aspects of people's probabilistic reasoning: descriptive probability estimation and inferential probability judgment. This model assumes that people's reasoning follows standard frequentist probability theory, but it is subject to random noise. This random noise has a regressive effect in descriptive probability estimation, moving probability estimates away from normative probabilities and toward the center of the probability scale. This random noise has an anti-regressive effect in inferential judgment, however. These regressive and anti-regressive effects explain various reliable and systematic biases seen in people's descriptive probability estimation and inferential probability judgment. This model predicts that these contrary effects will tend to cancel out in tasks that involve both descriptive estimation and inferential judgment, leading to unbiased responses in those tasks. We test this model by applying it to one such task, described by Gallistel et al. Participants' median responses in this task were unbiased, agreeing with normative probability theory over the full range of responses. Our model captures the pattern of unbiased responses in this task, while simultaneously explaining systematic biases away from normatively correct probabilities seen in other tasks. Copyright © 2018 Cognitive Science Society, Inc.
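
    The regressive effect in descriptive estimation can be reproduced with a small simulation: if each remembered sample of the event is read incorrectly with probability d, the expected estimate is (1 − 2d)p + d, pulled toward 0.5. A sketch (the value of d and the sampling scheme are illustrative):

```python
import random

def noisy_estimate(p, d, n=10000, rng=random):
    """Descriptive probability estimation under a probability-theory-
    plus-noise account: the reasoner counts occurrences of the event
    across n remembered samples, but each sample is read incorrectly
    with probability d.  The expected estimate is (1 - 2d)*p + d."""
    flagged = 0
    for _ in range(n):
        occurred = rng.random() < p          # true sample of the event
        if rng.random() < d:                 # random read error
            occurred = not occurred
        flagged += occurred
    return flagged / n
```

    For p = 0.9 and d = 0.2 the estimate regresses to about 0.74, illustrating how noise alone produces systematic conservatism without any bias in the underlying probability theory.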

  5. Computational modeling to predict mechanical function of joints: application to the lower leg with simulation of two cadaver studies.

    PubMed

    Liacouras, Peter C; Wayne, Jennifer S

    2007-12-01

    Computational models of musculoskeletal joints and limbs can provide useful information about joint mechanics. Validated models can be used as predictive devices for understanding joint function and serve as clinical tools for predicting the outcome of surgical procedures. A new computational modeling approach was developed for simulating joint kinematics that are dictated by bone/joint anatomy, ligamentous constraints, and applied loading. Three-dimensional computational models of the lower leg were created to illustrate the application of this new approach. Model development began with generating three-dimensional surfaces of each bone from CT images, which were then imported into the three-dimensional solid modeling software SOLIDWORKS and the motion simulation package COSMOSMOTION. Through SOLIDWORKS and COSMOSMOTION, each bone surface file was filled to create a solid object and positioned, necessary components were added, and simulations were executed. Three-dimensional contacts were added to inhibit intersection of the bones during motion. Ligaments were represented as linear springs. Model predictions were then validated by comparison to two different cadaver studies, syndesmotic injury and repair and ankle inversion following ligament transection. The syndesmotic injury model was able to predict tibial rotation, fibular rotation, and anterior/posterior displacement. In the inversion simulation, calcaneofibular ligament extension and angles of inversion compared well. Some experimental data proved harder to simulate accurately, due to certain software limitations and lack of complete experimental data. Other parameters that could not be easily obtained experimentally can be predicted and analyzed by the computational simulations. In the syndesmotic injury study, the force generated in the tibionavicular and calcaneofibular ligaments reduced with the insertion of the staple, indicating how this repair technique changes joint function. After transection of the calcaneofibular

  6. Risk estimation using probability machines

    PubMed Central

    2014-01-01

    Background Logistic regression has been the de facto, and often the only, model used in the description and analysis of relationships between a binary outcome and observed features. It is widely used to obtain the conditional probabilities of the outcome given predictors, as well as predictor effect size estimates using conditional odds ratios. Results We show how statistical learning machines for binary outcomes, provably consistent for the nonparametric regression problem, can be used to provide both consistent conditional probability estimation and conditional effect size estimates. Effect size estimates from learning machines leverage our understanding of counterfactual arguments central to the interpretation of such estimates. We show that, if the data generating model is logistic, we can recover accurate probability predictions and effect size estimates with nearly the same efficiency as a correct logistic model, both for main effects and interactions. We also propose a method using learning machines to scan for possible interaction effects quickly and efficiently. Simulations using random forest probability machines are presented. Conclusions The models we propose make no assumptions about the data structure, and capture the patterns in the data by just specifying the predictors involved and not any particular model structure. So they do not run the same risks of model mis-specification and the resultant estimation biases as a logistic model. This methodology, which we call a “risk machine”, will share properties from the statistical machine that it is derived from. PMID:24581306

  7. Risk estimation using probability machines.

    PubMed

    Dasgupta, Abhijit; Szymczak, Silke; Moore, Jason H; Bailey-Wilson, Joan E; Malley, James D

    2014-03-01

    Logistic regression has been the de facto, and often the only, model used in the description and analysis of relationships between a binary outcome and observed features. It is widely used to obtain the conditional probabilities of the outcome given predictors, as well as predictor effect size estimates using conditional odds ratios. We show how statistical learning machines for binary outcomes, provably consistent for the nonparametric regression problem, can be used to provide both consistent conditional probability estimation and conditional effect size estimates. Effect size estimates from learning machines leverage our understanding of counterfactual arguments central to the interpretation of such estimates. We show that, if the data generating model is logistic, we can recover accurate probability predictions and effect size estimates with nearly the same efficiency as a correct logistic model, both for main effects and interactions. We also propose a method using learning machines to scan for possible interaction effects quickly and efficiently. Simulations using random forest probability machines are presented. The models we propose make no assumptions about the data structure, and capture the patterns in the data by just specifying the predictors involved and not any particular model structure. So they do not run the same risks of model mis-specification and the resultant estimation biases as a logistic model. This methodology, which we call a "risk machine", will share properties from the statistical machine that it is derived from.
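
    The probability-machine idea, estimating P(y = 1 | x) nonparametrically and deriving effect sizes as counterfactual differences in predicted probability, can be sketched with a k-nearest-neighbour estimator standing in for the random forest used in the paper:

```python
def knn_probability(X, y, x_query, k=15):
    """Nonparametric conditional probability estimate P(y=1 | x):
    fraction of positive outcomes among the k nearest neighbours
    (squared Euclidean distance).  A simple stand-in for the
    random-forest probability machine described in the abstract."""
    order = sorted(range(len(X)),
                   key=lambda i: sum((a - b) ** 2
                                     for a, b in zip(X[i], x_query)))
    top = order[:k]
    return sum(y[i] for i in top) / k

def risk_difference(X, y, x_query, feature, k=15):
    """Counterfactual effect size: change in predicted probability
    when binary `feature` of the query is toggled from 0 to 1."""
    x0, x1 = list(x_query), list(x_query)
    x0[feature], x1[feature] = 0, 1
    return knn_probability(X, y, x1, k) - knn_probability(X, y, x0, k)
```

    Because the effect size is read off the fitted probability surface rather than from model coefficients, the same two functions work unchanged for main effects and interactions.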

  8. Progressive Damage Analysis of Bonded Composite Joints

    NASA Technical Reports Server (NTRS)

    Leone, Frank A., Jr.; Girolamo, Donato; Davila, Carlos G.

    2012-01-01

    The present work is related to the development and application of progressive damage modeling techniques to bonded joint technology. The joint designs studied in this work include a conventional composite splice joint and a NASA-patented durable redundant joint. Both designs involve honeycomb sandwich structures with carbon/epoxy facesheets joined using adhesively bonded doublers. Progressive damage modeling allows for the prediction of the initiation and evolution of damage within a structure. For structures that include multiple material systems, such as the joint designs under consideration, the number of potential failure mechanisms that must be accounted for drastically increases the complexity of the analyses. Potential failure mechanisms include fiber fracture, intraply matrix cracking, delamination, core crushing, adhesive failure, and their interactions. The bonded joints were modeled using highly parametric, explicitly solved finite element models, with damage modeling implemented via custom user-written subroutines. Each ply was discretely meshed using three-dimensional solid elements. Layers of cohesive elements were included between each ply to account for the possibility of delaminations and were used to model the adhesive layers forming the joint. Good correlation with experimental results was achieved both in terms of load-displacement history and the predicted failure mechanism(s).

  9. A neural joint model for entity and relation extraction from biomedical text.

    PubMed

    Li, Fei; Zhang, Meishan; Fu, Guohong; Ji, Donghong

    2017-03-31

    Extracting biomedical entities and their relations from text has important applications on biomedical research. Previous work primarily utilized feature-based pipeline models to process this task. Many efforts need to be made on feature engineering when feature-based models are employed. Moreover, pipeline models may suffer error propagation and are not able to utilize the interactions between subtasks. Therefore, we propose a neural joint model to extract biomedical entities as well as their relations simultaneously, and it can alleviate the problems above. Our model was evaluated on two tasks, i.e., the task of extracting adverse drug events between drug and disease entities, and the task of extracting resident relations between bacteria and location entities. Compared with the state-of-the-art systems in these tasks, our model improved the F1 scores of the first task by 5.1% in entity recognition and 8.0% in relation extraction, and that of the second task by 9.2% in relation extraction. The proposed model achieves competitive performances with less work on feature engineering. We demonstrate that the model based on neural networks is effective for biomedical entity and relation extraction. In addition, parameter sharing is an alternative method for neural models to jointly process this task. Our work can facilitate the research on biomedical text mining.

  10. Blocking probability in the hose-model optical VPN with different number of wavelengths

    NASA Astrophysics Data System (ADS)

    Roslyakov, Alexander V.

    2017-04-01

    Connection setup with guaranteed quality of service (QoS) in an optical virtual private network (OVPN) is a major goal for network providers. To support this, we propose a QoS-based OVPN connection setup mechanism over a WDM network to the end customer. The proposed WDM network model can be specified in terms of a QoS parameter such as blocking probability. We estimated this QoS parameter based on the hose-model OVPN. In this mechanism, OVPN connections can also be created or deleted according to the availability of wavelengths on the optical path. In this paper we consider the impact of the number of wavelengths on the computation of blocking probability. The goal of this work is to dynamically provide the best OVPN connection during frequent arrivals of connection requests with QoS requirements.
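
    The abstract does not state how blocking probability is computed; for a full-availability WDM link offered Poisson traffic, the standard loss model is Erlang B, which shows directly how adding wavelengths reduces blocking:

```python
def erlang_b(traffic_erlangs, wavelengths):
    """Blocking probability of a link with `wavelengths` servers
    offered Poisson traffic A (Erlangs), via the numerically stable
    Erlang-B recursion:
        B(A, 0) = 1
        B(A, n) = A*B(A, n-1) / (n + A*B(A, n-1))
    (Illustrative standard model; the paper's hose-model analysis
    may use a different formulation.)"""
    b = 1.0
    for n in range(1, wavelengths + 1):
        b = traffic_erlangs * b / (n + traffic_erlangs * b)
    return b
```

    Sweeping `wavelengths` for a fixed offered load reproduces the kind of blocking-versus-wavelength-count comparison the abstract investigates.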

  11. Alternative probability theories for cognitive psychology.

    PubMed

    Narens, Louis

    2014-01-01

    Various proposals for generalizing event spaces for probability functions have been put forth in the mathematical, scientific, and philosophic literatures. In cognitive psychology such generalizations are used for explaining puzzling results in decision theory and for modeling the influence of context effects. This commentary discusses proposals for generalizing probability theory to event spaces that are not necessarily Boolean algebras. Two prominent examples are quantum probability theory, which is based on the set of closed subspaces of a Hilbert space, and topological probability theory, which is based on the set of open sets of a topology. Both have been applied to a variety of cognitive situations. This commentary focuses on how event space properties can influence probability concepts and impact cognitive modeling. Copyright © 2013 Cognitive Science Society, Inc.

  12. Modelling of spatial contaminant probabilities of occurrence of chlorinated hydrocarbons in an urban aquifer.

    PubMed

    Greis, Tillman; Helmholz, Kathrin; Schöniger, Hans Matthias; Haarstrick, Andreas

    2012-06-01

    In this study, a 3D urban groundwater model is presented that serves for the calculation of multispecies contaminant transport in the subsurface at the regional scale. The total model consists of two submodels, a groundwater flow model and a reactive transport model, and is validated against field data. The model equations are solved with the finite element method. A sensitivity analysis is carried out to identify the governing flow, transport and reaction parameters. Building on the latter, stochastic variation of flow, transport, and reaction input parameters together with Monte Carlo simulation is used to calculate probabilities of pollutant occurrence in the domain. These probabilities can help identify future contamination hotspots and gauge their potential damage. Application and validation are exemplarily shown for a contaminated site in Braunschweig (Germany), where a vast plume of chlorinated ethenes pollutes the groundwater. With respect to field application, the modelling methods prove to be feasible and helpful tools for assessing monitored natural attenuation (MNA) and the risk that might be reduced by remediation actions.
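    The probability-of-occurrence step can be sketched in a few lines: sample the uncertain inputs, run the transport model, and count exceedances. The decay model, parameter distributions, and threshold below are hypothetical placeholders, not values from the Braunschweig site:

```python
import math
import random

def exceedance_probability(n_runs: int = 10_000, threshold: float = 5.0) -> float:
    """Monte Carlo estimate of P(concentration > threshold) at a receptor,
    propagating parameter uncertainty through a first-order decay model."""
    rng = random.Random(42)                   # fixed seed for reproducibility
    hits = 0
    for _ in range(n_runs):
        source = rng.gauss(100.0, 20.0)       # source concentration [ug/L]
        decay = rng.uniform(0.05, 0.15)       # first-order decay rate [1/d]
        travel_time = rng.gauss(30.0, 5.0)    # travel time to receptor [d]
        conc = source * math.exp(-decay * travel_time)
        if conc > threshold:
            hits += 1
    return hits / n_runs

print(f"P(conc > threshold) ~ {exceedance_probability():.3f}")
```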

  13. A joint source-channel distortion model for JPEG compressed images.

    PubMed

    Sabir, Muhammad F; Sheikh, Hamid Rahim; Heath, Robert W; Bovik, Alan C

    2006-06-01

    The need for efficient joint source-channel coding (JSCC) is growing as new multimedia services are introduced in commercial wireless communication systems. An important component of practical JSCC schemes is a distortion model that can predict the quality of compressed digital multimedia such as images and videos. The usual approach in the JSCC literature for quantifying the distortion due to quantization and channel errors is to estimate it for each image using the statistics of the image for a given signal-to-noise ratio (SNR). This is not an efficient approach in the design of real-time systems because of the computational complexity. A more useful and practical approach would be to design JSCC techniques that minimize average distortion for a large set of images based on some distortion model rather than carrying out per-image optimizations. However, models for estimating average distortion due to quantization and channel bit errors in a combined fashion for a large set of images are not available for practical image or video coding standards employing entropy coding and differential coding. This paper presents a statistical model for estimating the distortion introduced in progressive JPEG compressed images due to quantization and channel bit errors in a joint manner. Statistical modeling of important compression techniques such as Huffman coding, differential pulse-code modulation, and run-length coding is included in the model. Examples show that the distortion in terms of peak signal-to-noise ratio (PSNR) can be predicted within a 2-dB maximum error over a variety of compression ratios and bit-error rates. To illustrate the utility of the proposed model, we present an unequal power allocation scheme as a simple application of our model. Results show that it gives a PSNR gain of around 6.5 dB at low SNRs, as compared to equal power allocation.
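    The quality metric referenced above, PSNR, is a simple function of the mean squared error between reference and degraded images; a minimal helper (illustrative, not part of the paper's distortion model):

```python
import math

def psnr(original, degraded, max_val=255.0):
    """Peak signal-to-noise ratio, in dB, between two equal-length
    pixel sequences: 10 * log10(max_val^2 / MSE)."""
    mse = sum((a - b) ** 2 for a, b in zip(original, degraded)) / len(original)
    if mse == 0:
        return math.inf  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

ref = [50, 100, 150, 200]
noisy = [52, 97, 151, 198]
print(f"PSNR = {psnr(ref, noisy):.1f} dB")
```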

  14. Sulforaphane Modulates Joint Inflammation in a Murine Model of Complete Freund's Adjuvant-Induced Mono-Arthritis.

    PubMed

    Silva Rodrigues, João Francisco; Silva E Silva, Cristiane; França Muniz, Thayanne; de Aquino, Alana Fernanda; Neuza da Silva Nina, Larissa; Fialho Sousa, Nagila Caroline; Nascimento da Silva, Luis Claudio; de Souza, Breno Glaessner Gomes Fernandes; da Penha, Tatiana Aranha; Abreu-Silva, Ana Lúcia; de Sá, Joicy Cortez; Soares Fernandes, Elizabeth; Grisotto, Marcos Augusto Grigolin

    2018-04-24

    Rheumatoid arthritis (RA) is characterized by inflammation of one or more joints, and affects ~1% of the adult population worldwide. Sulforaphane (SFN) is a natural compound that has been suggested as an antioxidant. Here, SFN’s effects were evaluated in a murine mono-arthritis model. Mono-arthritis was induced in mice by a single intra-articular injection of Complete Freund’s Adjuvant (CFA-10 µg/joint, in 10 µL) into the ipsilateral joint. The contralateral joint received an equal volume of PBS. On the 4th day post-joint inflammation induction, animals received either SFN (10 mg/kg) or vehicle (3% DMSO in saline), intraperitoneally (i.p.), twice a day for 3 days. Joint swelling and secondary mechanical allodynia and hyperalgesia were evaluated over 7 days post-CFA. After this period, animals were culled and their blood and synovial fluid samples were collected for analysis of cell populations, cytokine release and thioredoxin reductase (TrxR) activity. Knee joint samples were also collected for histology. SFN reduced joint swelling and damage whilst increasing the recruitment of Ly6C⁺ and Ly6G⁺ cells to CFA-injected joints. SFN-treated animals presented down-regulation of CD11b and CD62L on synovial fluid Ly6G⁺ cells. Synovial fluid samples obtained from CFA-injected joints and plasma samples of SFN-treated mice presented higher levels of IL-6 and increased activity of TrxR, in comparison with controls. These results indicate that SFN reduces knee joint damage by modulating cell activation/migration to the joints, cytokine production and increasing the activity of TrxR, and therefore, may represent an alternative treatment to joint inflammation.

  15. Low Activation Joining of SiC/SiC Composites for Fusion Applications: Thermomechanical Modeling of Dual-Phase Microstructures and Dissimilar Material Joints

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen, Ba Nghiep; Henager, Charles H.; Kurtz, Richard J.

    2016-09-30

    Finite element (FE) continuum damage mechanics (CDM) models have been developed to simulate and model dual-phase joints and cracked joints for improved analysis of SiC materials in nuclear environments. This report extends the analysis from the last reporting cycle by including preliminary thermomechanical analyses of cracked joints and implementation of dual-phase damage models.

  16. Transition probability spaces in loop quantum gravity

    NASA Astrophysics Data System (ADS)

    Guo, Xiao-Kan

    2018-03-01

    We study the (generalized) transition probability spaces, in the sense of Mielnik and Cantoni, for spacetime quantum states in loop quantum gravity. First, we show that loop quantum gravity admits the structures of transition probability spaces. This is exemplified by first checking such structures in covariant quantum mechanics and then identifying the transition probability spaces in spin foam models via a simplified version of general boundary formulation. The transition probability space thus defined gives a simple way to reconstruct the discrete analog of the Hilbert space of the canonical theory and the relevant quantum logical structures. Second, we show that the transition probability space and in particular the spin foam model are 2-categories. Then we discuss how to realize in spin foam models two proposals by Crane about the mathematical structures of quantum gravity, namely, the quantum topos and causal sites. We conclude that transition probability spaces provide us with an alternative framework to understand various foundational questions of loop quantum gravity.

  17. TaggerOne: joint named entity recognition and normalization with semi-Markov Models

    PubMed Central

    Leaman, Robert; Lu, Zhiyong

    2016-01-01

    Motivation: Text mining is increasingly used to manage the accelerating pace of the biomedical literature. Many text mining applications depend on accurate named entity recognition (NER) and normalization (grounding). While high performing machine learning methods trainable for many entity types exist for NER, normalization methods are usually specialized to a single entity type. NER and normalization systems are also typically used in a serial pipeline, causing cascading errors and limiting the ability of the NER system to directly exploit the lexical information provided by the normalization. Methods: We propose the first machine learning model for joint NER and normalization during both training and prediction. The model is trainable for arbitrary entity types and consists of a semi-Markov structured linear classifier, with a rich feature approach for NER and supervised semantic indexing for normalization. We also introduce TaggerOne, a Java implementation of our model as a general toolkit for joint NER and normalization. TaggerOne is not specific to any entity type, requiring only annotated training data and a corresponding lexicon, and has been optimized for high throughput. Results: We validated TaggerOne with multiple gold-standard corpora containing both mention- and concept-level annotations. Benchmarking results show that TaggerOne achieves high performance on diseases (NCBI Disease corpus, NER f-score: 0.829, normalization f-score: 0.807) and chemicals (BioCreative 5 CDR corpus, NER f-score: 0.914, normalization f-score 0.895). These results compare favorably to the previous state of the art, notwithstanding the greater flexibility of the model. We conclude that jointly modeling NER and normalization greatly improves performance. 
Availability and Implementation: The TaggerOne source code and an online demonstration are available at: http://www.ncbi.nlm.nih.gov/bionlp/taggerone Contact: zhiyong.lu@nih.gov Supplementary information: Supplementary data are
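    The f-scores quoted above are the harmonic mean of precision and recall; for reference, a generic F-measure helper (not TaggerOne code):

```python
def f_score(precision: float, recall: float, beta: float = 1.0) -> float:
    """F-measure; with beta = 1 this is the harmonic mean of precision and recall."""
    if precision == 0.0 and recall == 0.0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# The harmonic mean penalizes imbalance between precision and recall:
print(f_score(0.9, 0.9), f_score(0.99, 0.81))
```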

  18. A spatial model of bird abundance as adjusted for detection probability

    USGS Publications Warehouse

    Gorresen, P.M.; Mcmillan, G.P.; Camp, R.J.; Pratt, T.K.

    2009-01-01

    Modeling the spatial distribution of animals can be complicated by spatial and temporal effects (i.e. spatial autocorrelation and trends in abundance over time) and other factors such as imperfect detection probabilities and observation-related nuisance variables. Recent advances in modeling have demonstrated various approaches that handle most of these factors but which require a degree of sampling effort (e.g. replication) not available to many field studies. We present a two-step approach that addresses these challenges to spatially model species abundance. Habitat, spatial and temporal variables were handled with a Bayesian approach which facilitated modeling hierarchically structured data. Predicted abundance was subsequently adjusted to account for imperfect detection and the area effectively sampled for each species. We provide examples of our modeling approach for two endemic Hawaiian nectarivorous honeycreepers: 'i'iwi Vestiaria coccinea and 'apapane Himatione sanguinea. © 2009 Ecography.
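    The second step of the approach, adjusting predicted abundance for imperfect detection and the area effectively sampled, reduces in its simplest form to dividing counts by the product of the two; a toy sketch with hypothetical values:

```python
def adjusted_abundance(count: int, detection_prob: float, area_sampled: float) -> float:
    """Estimated true density: observed count corrected for imperfect
    detection and the area effectively sampled (animals per unit area)."""
    return count / (detection_prob * area_sampled)

# 12 birds detected, detection probability 0.6, 2.5 ha effectively sampled:
print(adjusted_abundance(12, 0.6, 2.5), "birds per hectare")
```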

  19. Influence of manual therapy on functional mobility after joint injury in a rat model.

    PubMed

    Ruhlen, Rachel L; Snider, Eric J; Sargentini, Neil J; Worthington, Bart D; Singh, Vineet K; Pazdernik, Vanessa K; Johnson, Jane C; Degenhardt, Brian F

    2013-10-01

    Animal models can be used to investigate manual therapy mechanisms, but testing manipulation in animal models is problematic because animals cannot directly report their pain. Our objective was to develop a rat model of inflammatory joint injury and test the efficacy of manual therapy in reducing nociception and restoring function. The authors induced acute inflammatory joint injury in rats by injecting carrageenan into the ankle and then measured voluntary running wheel activity in treated and untreated rats. Treatments included manual therapy applied to the ankle and knee of the injured limb and several analgesic medications (eg, morphine, ketorolac, prednisone). Intra-articular injection of carrageenan to the ankle produced significant swelling (diameter of the ankle increased by 64% after injection; P=.004) and a robust reduction in voluntary running wheel activity (running distance reduced by 91% compared with controls; P<.001). Injured rats gradually returned to running levels equal to controls over 10 days. Neither manual therapy nor analgesic medications increased running wheel activity relative to untreated rats. Voluntary running wheel activity appears to be an appropriate functional measure to evaluate the impact of an acute inflammatory joint injury. However, efforts to treat the injury did not restore running relative to untreated rats.

  20. Identifying Changes in the Probability of High Temperature, High Humidity Heat Wave Events

    NASA Astrophysics Data System (ADS)

    Ballard, T.; Diffenbaugh, N. S.

    2016-12-01

    Understanding how heat waves will respond to climate change is critical for adequate planning and adaptation. While temperature is the primary determinant of heat wave severity, humidity has been shown to play a key role in heat wave intensity with direct links to human health and safety. Here we investigate the individual contributions of temperature and specific humidity to extreme heat wave conditions in recent decades. Using global NCEP-DOE Reanalysis II daily data, we identify regional variability in the joint probability distribution of humidity and temperature. We also identify a statistically significant positive trend in humidity over the eastern U.S. during heat wave events, leading to an increased probability of high humidity, high temperature events. The extent to which we can expect this trend to continue under climate change is complicated due to variability between CMIP5 models, in particular among projections of humidity. However, our results support the notion that heat wave dynamics are characterized by more than high temperatures alone, and understanding and quantifying the various components of the heat wave system is crucial for forecasting future impacts.
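    A joint probability distribution of temperature and specific humidity, of the kind examined here, can be estimated empirically by binning paired observations; a minimal sketch with made-up values (the study itself uses NCEP-DOE Reanalysis II daily data):

```python
from collections import Counter

def joint_pmf(temps, humids, t_edges, q_edges):
    """Empirical joint probability mass over temperature/humidity bins.
    Assumes all values lie within [edges[0], edges[-1]]."""
    def bin_of(x, edges):
        for i in range(len(edges) - 1):
            if edges[i] <= x < edges[i + 1]:
                return i
        return len(edges) - 2  # clamp the right edge into the last bin
    counts = Counter(
        (bin_of(t, t_edges), bin_of(q, q_edges)) for t, q in zip(temps, humids)
    )
    n = len(temps)
    return {cell: c / n for cell, c in counts.items()}

pmf = joint_pmf(
    temps=[28, 31, 33, 35, 34, 29],      # daily temperature, deg C
    humids=[12, 14, 18, 19, 17, 11],     # specific humidity, g/kg
    t_edges=[25, 30, 35, 40],
    q_edges=[10, 15, 20],
)
print(pmf)  # (temperature bin, humidity bin) -> probability
```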

  1. Modeling the Influence of Stitching on Delamination Growth in Stitched Warp-Knit Composite Lap Joints

    NASA Technical Reports Server (NTRS)

    Glaessgen, E. H.; Raju, I. S.; Poe, C. C., Jr.

    1999-01-01

    The effect of stitches on the failure of a single lap joint configuration was determined in a combined experimental and analytical study. The experimental study was conducted to determine debond growth under static monotonic loading. The stitches were shown to delay the initiation of the debond and provide load transfer beyond the load necessary to completely debond the stitched lap joint. The strain energy release rates at the debond front were calculated using a finite element-based technique. Models of the unstitched configuration showed significant values of modes I and II across the width of the joint and showed that mode III is zero at the centerline but increases near the free edge. Models of the stitched configuration showed that the stitches effectively reduced mode I to zero, but had less of an effect on modes II and III.

  2. Development of a hip joint model for finite volume simulations.

    PubMed

    Cardiff, P; Karač, A; FitzPatrick, D; Ivanković, A

    2014-01-01

    This paper establishes a procedure for numerical analysis of a hip joint using the finite volume method. Patient-specific hip joint geometry is segmented directly from computed tomography and magnetic resonance imaging datasets and the resulting bone surfaces are processed into a form suitable for volume meshing. A high resolution continuum tetrahedral mesh has been generated, where a sandwich model approach is adopted; the bones are represented as stiffer cortical shells surrounding more flexible cancellous cores. Cartilage is included as a uniform-thickness extruded layer and the effect of layer thickness is investigated. To realistically position the bones, gait analysis has been performed giving the 3D positions of the bones for the full gait cycle. Three phases of the gait cycle are examined using a finite volume based custom structural contact solver implemented in the open-source software OpenFOAM.

  3. Diagnostics for Confounding of Time-varying and Other Joint Exposures.

    PubMed

    Jackson, John W

    2016-11-01

    The effects of joint exposures (or exposure regimes) include those of adhering to assigned treatment versus placebo in a randomized controlled trial, duration of exposure in a cohort study, interactions between exposures, and direct effects of exposure, among others. Unlike the setting of a single point exposure (e.g., propensity score matching), there are few tools to describe confounding for joint exposures or how well a method resolves it. Investigators need tools that describe confounding in ways that are conceptually grounded and intuitive for those who read, review, and use applied research to guide policy. We revisit the implications of exchangeability conditions that hold in sequentially randomized trials, and the bias structure that motivates the use of g-methods, such as marginal structural models. From these, we develop covariate balance diagnostics for joint exposures that can (1) describe time-varying confounding, (2) assess whether covariates are predicted by prior exposures given their past, the indication for g-methods, and (3) describe residual confounding after inverse probability weighting. For each diagnostic, we present time-specific metrics that encompass a wide class of joint exposures, including regimes of multivariate time-varying exposures in censored data, with multivariate point exposures as a special case. We outline how to estimate these directly or with regression and how to average them over person-time. Using a simulated example, we show how these metrics can be presented graphically. This conceptually grounded framework can potentially aid the transparent design, analysis, and reporting of studies that examine joint exposures. We provide easy-to-use tools to implement it.
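    Diagnostic (3) rests on inverse probability weighting; a minimal sketch of the weights and a weighted covariate mean difference, with hypothetical data and without the paper's time-specific metrics:

```python
def ip_weights(treated, propensity):
    """Inverse probability weights: 1/p for the treated, 1/(1-p) for the
    untreated, given propensity scores p = P(A=1 | covariates)."""
    return [1.0 / p if a == 1 else 1.0 / (1.0 - p)
            for a, p in zip(treated, propensity)]

def weighted_mean(x, w):
    return sum(xi * wi for xi, wi in zip(x, w)) / sum(w)

# Toy check of covariate balance after weighting (hypothetical data):
a = [1, 1, 0, 0, 0, 1]               # exposure
p = [0.8, 0.6, 0.3, 0.2, 0.4, 0.7]   # propensity scores
x = [1.2, 0.9, 0.3, 0.1, 0.5, 1.0]   # a confounder
w = ip_weights(a, p)
diff = (weighted_mean([xi for xi, ai in zip(x, a) if ai == 1],
                      [wi for wi, ai in zip(w, a) if ai == 1])
        - weighted_mean([xi for xi, ai in zip(x, a) if ai == 0],
                        [wi for wi, ai in zip(w, a) if ai == 0]))
print(f"weighted mean difference in x: {diff:.3f}")
```

A weighted difference near zero would indicate that weighting balanced this covariate across exposure groups.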

  4. Obtaining Accurate Probabilities Using Classifier Calibration

    ERIC Educational Resources Information Center

    Pakdaman Naeini, Mahdi

    2016-01-01

    Learning probabilistic classification and prediction models that generate accurate probabilities is essential in many prediction and decision-making tasks in machine learning and data mining. One way to achieve this goal is to post-process the output of classification models to obtain more accurate probabilities. These post-processing methods are…
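    One simple post-processing calibrator of the kind described is histogram binning, which replaces a raw score with the observed positive rate of its bin; this is an illustrative method, not necessarily the one developed in the dissertation:

```python
def histogram_binning(scores, labels, n_bins=5):
    """Map raw classifier scores in [0, 1] to calibrated probabilities:
    each bin's output is the observed positive rate among the training
    scores that fell into that bin."""
    bins = [[] for _ in range(n_bins)]
    for s, y in zip(scores, labels):
        bins[min(int(s * n_bins), n_bins - 1)].append(y)
    rates = [sum(b) / len(b) if b else None for b in bins]

    def calibrate(s):
        r = rates[min(int(s * n_bins), n_bins - 1)]
        return s if r is None else r  # empty bin: fall back to the raw score

    return calibrate

cal = histogram_binning(
    scores=[0.10, 0.15, 0.40, 0.45, 0.80, 0.90, 0.85],
    labels=[0, 0, 0, 1, 1, 1, 1],
)
print(cal(0.12), cal(0.42), cal(0.88))
```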

  5. A review of typical thermal fatigue failure models for solder joints of electronic components

    NASA Astrophysics Data System (ADS)

    Li, Xiaoyan; Sun, Ruifeng; Wang, Yongdong

    2017-09-01

    For electronic components, cyclic plastic strain accumulates fatigue damage more readily than elastic strain. When solder joints undergo thermal expansion or contraction, the mismatch in coefficients of thermal expansion between an electronic component and its substrate produces differential thermal strain, leading to stress concentration. Under repeated cycling, cracks initiate and gradually extend [1]. In this paper, the typical thermal fatigue failure models for solder joints of electronic components are classified, and the methods of obtaining the model parameters are summarized based on domestic and foreign literature.
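    A representative member of this model class is a Coffin-Manson-type relation, N_f = C (Δγ)^(−n), mapping cyclic plastic strain range to cycles to failure; the constants below are illustrative, not fitted values from the surveyed literature:

```python
def cycles_to_failure(delta_gamma: float, c_coeff: float = 1.0, n_exp: float = 2.0) -> float:
    """Coffin-Manson-type estimate N_f = C * (plastic strain range)^(-n).
    C and n are material/joint fitting constants (illustrative values here)."""
    return c_coeff * delta_gamma ** (-n_exp)

# Halving the cyclic plastic strain range quadruples predicted life when n = 2:
print(cycles_to_failure(0.02) / cycles_to_failure(0.04))
```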

  6. Soft tissue artifact compensation in knee kinematics by multi-body optimization: Performance of subject-specific knee joint models.

    PubMed

    Clément, Julien; Dumas, Raphaël; Hagemeister, Nicola; de Guise, Jaques A

    2015-11-05

    Soft tissue artifact (STA) distorts marker-based knee kinematics measures and makes them difficult to use in clinical practice. None of the current methods designed to compensate for STA is suitable, but multi-body optimization (MBO) has demonstrated encouraging results and can be improved. The goal of this study was to develop and validate the performance of knee joint models, with anatomical and subject-specific kinematic constraints, used in MBO to reduce STA errors. Twenty subjects were recruited: 10 healthy and 10 osteoarthritis (OA) subjects. Subject-specific knee joint models were evaluated by comparing dynamic knee kinematics recorded by a motion capture system (KneeKG™) and optimized with MBO to quasi-static knee kinematics measured by a low-dose, upright, biplanar radiographic imaging system (EOS(®)). Errors due to STA ranged from 1.6° to 22.4° for knee rotations and from 0.8 mm to 14.9 mm for knee displacements in healthy and OA subjects. Subject-specific knee joint models were most effective in compensating for STA in terms of abduction-adduction, inter-external rotation and antero-posterior displacement. Root mean square errors with subject-specific knee joint models ranged from 2.2±1.2° to 6.0±3.9° for knee rotations and from 2.4±1.1 mm to 4.3±2.4 mm for knee displacements in healthy and OA subjects, respectively. Our study shows that MBO can be improved with subject-specific knee joint models, and that the quality of the motion capture calibration is critical. Future investigations should focus on more refined knee joint models to reproduce specific OA knee geometry and physiology. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. Non-Kolmogorovian Approach to the Context-Dependent Systems Breaking the Classical Probability Law

    NASA Astrophysics Data System (ADS)

    Asano, Masanari; Basieva, Irina; Khrennikov, Andrei; Ohya, Masanori; Yamato, Ichiro

    2013-07-01

    There exist several phenomena breaking the classical probability laws. The systems related to such phenomena are context-dependent, so that they are adaptive to other systems. In this paper, we present a new mathematical formalism to compute the joint probability distribution for two event-systems by using concepts of the adaptive dynamics and quantum information theory, e.g., quantum channels and liftings. In physics the basic example of the context-dependent phenomena is the famous double-slit experiment. Recently similar examples have been found in biological and psychological sciences. Our approach is an extension of traditional quantum probability theory, and it is general enough to describe aforementioned contextual phenomena outside of quantum physics.

  8. Skill of Ensemble Seasonal Probability Forecasts

    NASA Astrophysics Data System (ADS)

    Smith, Leonard A.; Binter, Roman; Du, Hailiang; Niehoerster, Falk

    2010-05-01

    In operational forecasting, the computational complexity of large simulation models is, ideally, justified by enhanced performance over simpler models. We will consider probability forecasts and contrast the skill of ENSEMBLES-based seasonal probability forecasts of interest to the finance sector (specifically temperature forecasts for Nino 3.4 and the Atlantic Main Development Region (MDR)). The ENSEMBLES model simulations will be contrasted against forecasts from statistical models based on the observations (climatological distributions) and empirical dynamics based on the observations but conditioned on the current state (dynamical climatology). For some start dates, individual ENSEMBLES models yield significant skill even at a lead-time of 14 months. The nature of this skill is discussed, and prospects for application are noted. Questions surrounding the interpretation of probability forecasts based on these multi-model ensemble simulations are then considered; the distributions considered are formed by kernel dressing the ensemble and blending with the climatology. The sources of apparent (RMS) skill in distributions based on multi-model simulations are discussed, and it is demonstrated that the inclusion of "zero-skill" models in the long range can improve Root-Mean-Square-Error scores, casting some doubt on the common justification for the claim that all models should be included in forming an operational probability forecast. It is argued that the rational response varies with lead time.
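    The kernel dressing and blending step can be sketched as a Gaussian mixture centered on the ensemble members, mixed with a Gaussian climatology; the kernel width, blend weight, and member values below are hypothetical:

```python
import math

def dressed_density(x, ensemble, clim_mean, clim_std, sigma=0.5, alpha=0.7):
    """Forecast density at x: Gaussian-kernel-dressed ensemble blended with a
    Gaussian climatology, p = alpha * p_ens + (1 - alpha) * p_clim."""
    def gauss(x, mu, sd):
        return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))
    p_ens = sum(gauss(x, m, sigma) for m in ensemble) / len(ensemble)
    p_clim = gauss(x, clim_mean, clim_std)
    return alpha * p_ens + (1 - alpha) * p_clim

members = [26.1, 26.8, 27.2, 27.5]   # e.g. Nino 3.4 SST forecasts, deg C
grid = [20 + 0.01 * i for i in range(1400)]
total = sum(dressed_density(x, members, clim_mean=26.5, clim_std=1.2) for x in grid) * 0.01
print(f"density integrates to ~{total:.3f}")
```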

  9. Modeling Anisotropic Elastic Wave Propagation in Jointed Rock Masses

    NASA Astrophysics Data System (ADS)

    Hurley, R.; Vorobiev, O.; Ezzedine, S. M.; Antoun, T.

    2016-12-01

    We present a numerical approach for determining the anisotropic stiffness of materials with nonlinearly-compliant joints capable of sliding. The proposed method extends existing ones for upscaling the behavior of a medium with open cracks and inclusions to cases relevant to natural fractured and jointed rocks, where nonlinearly-compliant joints can undergo plastic slip. The method deviates from existing techniques by incorporating the friction and closure states of the joints, and recovers an anisotropic elastic form in the small-strain limit when joints are not sliding. We present the mathematical formulation of our method and use Representative Volume Element (RVE) simulations to evaluate its accuracy for joint sets with varying complexity. We then apply the formulation to determine anisotropic elastic constants of jointed granite found at the Nevada Nuclear Security Site (NNSS) where the Source Physics Experiments (SPE), a campaign of underground chemical explosions, are performed. Finally, we discuss the implementation of our numerical approach in a massively parallel Lagrangian code Geodyn-L and its use for studying wave propagation from underground explosions. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  10. Kartogenin treatment prevented joint degeneration in a rodent model of osteoarthritis: A pilot study.

    PubMed

    Mohan, Geetha; Magnitsky, Sergey; Melkus, Gerd; Subburaj, Karupppasamy; Kazakia, Galateia; Burghardt, Andrew J; Dang, Alexis; Lane, Nancy E; Majumdar, Sharmila

    2016-10-01

    Osteoarthritis (OA) is a major degenerative joint disease characterized by progressive loss of articular cartilage, synovitis, subchondral bone changes, and osteophyte formation. Currently there is no treatment for OA except temporary pain relief and end-stage joint replacement surgery. We performed a pilot study to determine the effect of kartogenin (KGN, a small molecule) on both cartilage and subchondral bone in a rat model of OA using multimodal imaging techniques. OA was induced in rats (OA and KGN treatment group) by anterior cruciate ligament transection (ACLT) surgery in the right knee joint. Sham surgery was performed on the right knee joint of control group rats. KGN group rats received weekly intra-articular injection of 125 μM KGN 1 week after surgery until week 12. All rats underwent in vivo magnetic resonance imaging (MRI) at 3, 6, and 12 weeks after surgery. Quantitative MR relaxation measures (T1ρ and T2) were determined to evaluate changes in articular cartilage. Cartilage and bone turnover markers (COMP and CTX-I) were determined at baseline, 3, 6, and 12 weeks. Animals were sacrificed at week 12 and the knee joints were removed for micro-computed tomography (micro-CT) and histology. KGN treatment significantly lowered the T1ρ and T2 relaxation times indicating decreased cartilage degradation. KGN treatment significantly decreased COMP and CTX-I levels indicating decreased cartilage and bone turnover rate. KGN treatment also prevented subchondral bone changes in the ACLT rat model of OA. Thus, kartogenin is a potential drug to prevent joint deterioration in post-traumatic OA. © 2016 Orthopaedic Research Society. Published by Wiley Periodicals, Inc. J Orthop Res 34:1780-1789, 2016.
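    The T2 values reported in such studies are typically obtained by fitting a mono-exponential decay S = S0 exp(−TE/T2) to multi-echo signals; a minimal log-linear fit (standard relaxometry practice, not the study's specific pipeline):

```python
import math

def fit_t2(echo_times, signals):
    """Estimate T2 from a mono-exponential decay S = S0 * exp(-TE / T2)
    by least-squares on log(S) versus TE; the slope equals -1/T2."""
    xs = echo_times
    ys = [math.log(s) for s in signals]
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
            sum((x - xbar) ** 2 for x in xs)
    return -1.0 / slope

# Synthetic decay with T2 = 40 ms recovers the input value:
te = [10.0, 20.0, 40.0, 80.0]                      # echo times, ms
sig = [100.0 * math.exp(-t / 40.0) for t in te]    # noiseless signals
print(f"fitted T2 = {fit_t2(te, sig):.1f} ms")
```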

  11. A Model for a Single Unmanned Aircraft Systems (UAS) Program Office Managing Joint ISR Capabilities

    DTIC Science & Technology

    2017-10-01

    reduction in manning from the multiple program office structure to the new single program management model. Additional information regarding this...OFFICE MANAGING JOINT ISR CAPABILITIES by Angela E. Burris A Research Report Submitted to the Faculty In Partial Fulfillment of...research paper is to answer how a single management office could provide greater agility for unmanned aircraft systems (UAS); supporting Joint concepts

  12. Blind Students' Learning of Probability through the Use of a Tactile Model

    ERIC Educational Resources Information Center

    Vita, Aida Carvalho; Kataoka, Verônica Yumi

    2014-01-01

    The objective of this paper is to discuss how blind students learn basic concepts of probability using the tactile model proposed by Vita (2012). Among the activities were part of the teaching sequence "Jefferson's Random Walk", in which students built a tree diagram (using plastic trays, foam cards, and toys), and pictograms in 3D…
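    The tree diagram underlying such a random-walk activity encodes binomial endpoint probabilities; for a symmetric n-step walk they can be computed directly (a sketch of the underlying mathematics, not of the tactile model):

```python
from math import comb

def walk_endpoint_probs(n_steps: int):
    """Probabilities of each endpoint of a symmetric random walk after
    n_steps fair coin flips: the binomial distribution read off the tree
    diagram, with endpoint = (#steps right - #steps left)."""
    return {2 * k - n_steps: comb(n_steps, k) / 2 ** n_steps
            for k in range(n_steps + 1)}

print(walk_endpoint_probs(3))
```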

  13. Probability Distributome: A Web Computational Infrastructure for Exploring the Properties, Interrelations, and Applications of Probability Distributions.

    PubMed

    Dinov, Ivo D; Siegrist, Kyle; Pearl, Dennis K; Kalinin, Alexandr; Christou, Nicolas

    2016-06-01

    Probability distributions are useful for modeling, simulation, analysis, and inference on varieties of natural processes and physical phenomena. There are uncountably many probability distributions. However, a few dozen families of distributions are commonly defined and are frequently used in practice for problem solving, experimental applications, and theoretical studies. In this paper, we present a new computational and graphical infrastructure, the Distributome , which facilitates the discovery, exploration and application of diverse spectra of probability distributions. The extensible Distributome infrastructure provides interfaces for (human and machine) traversal, search, and navigation of all common probability distributions. It also enables distribution modeling, applications, investigation of inter-distribution relations, as well as their analytical representations and computational utilization. The entire Distributome framework is designed and implemented as an open-source, community-built, and Internet-accessible infrastructure. It is portable, extensible and compatible with HTML5 and Web2.0 standards (http://Distributome.org). We demonstrate two types of applications of the probability Distributome resources: computational research and science education. The Distributome tools may be employed to address five complementary computational modeling applications (simulation, data-analysis and inference, model-fitting, examination of the analytical, mathematical and computational properties of specific probability distributions, and exploration of the inter-distributional relations). Many high school and college science, technology, engineering and mathematics (STEM) courses may be enriched by the use of modern pedagogical approaches and technology-enhanced methods. The Distributome resources provide enhancements for blended STEM education by improving student motivation, augmenting the classical curriculum with interactive webapps, and overhauling the

  14. Probability Distributome: A Web Computational Infrastructure for Exploring the Properties, Interrelations, and Applications of Probability Distributions

    PubMed Central

    Dinov, Ivo D.; Siegrist, Kyle; Pearl, Dennis K.; Kalinin, Alexandr; Christou, Nicolas

    2015-01-01

    Probability distributions are useful for modeling, simulation, analysis, and inference on varieties of natural processes and physical phenomena. There are uncountably many probability distributions. However, a few dozen families of distributions are commonly defined and are frequently used in practice for problem solving, experimental applications, and theoretical studies. In this paper, we present a new computational and graphical infrastructure, the Distributome, which facilitates the discovery, exploration and application of diverse spectra of probability distributions. The extensible Distributome infrastructure provides interfaces for (human and machine) traversal, search, and navigation of all common probability distributions. It also enables distribution modeling, applications, investigation of inter-distribution relations, as well as their analytical representations and computational utilization. The entire Distributome framework is designed and implemented as an open-source, community-built, and Internet-accessible infrastructure. It is portable, extensible and compatible with HTML5 and Web2.0 standards (http://Distributome.org). We demonstrate two types of applications of the probability Distributome resources: computational research and science education. The Distributome tools may be employed to address five complementary computational modeling applications (simulation, data-analysis and inference, model-fitting, examination of the analytical, mathematical and computational properties of specific probability distributions, and exploration of the inter-distributional relations). Many high school and college science, technology, engineering and mathematics (STEM) courses may be enriched by the use of modern pedagogical approaches and technology-enhanced methods. The Distributome resources provide enhancements for blended STEM education by improving student motivation, augmenting the classical curriculum with interactive webapps, and overhauling the

  15. [Biometric bases: basic concepts of probability calculation].

    PubMed

    Dinya, E

    1998-04-26

The author gives an outline of the basic concepts of probability theory. The foundations of event algebra, the definition of probability, the classical probability model and the random variable are presented.
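The classical probability model mentioned in this record assigns P(A) = |A| / |Ω| when all outcomes are equally likely. A minimal sketch (the two-dice example is illustrative, not from the record):

```python
from fractions import Fraction
from itertools import product

def classical_probability(event, sample_space):
    """Classical model: P(A) = |A| / |Omega| for equally likely outcomes."""
    favorable = sum(1 for outcome in sample_space if event(outcome))
    return Fraction(favorable, len(sample_space))

# Sample space: all ordered outcomes of rolling two fair dice.
omega = list(product(range(1, 7), repeat=2))

# Event: the sum of the two dice equals 7.
p_sum7 = classical_probability(lambda o: o[0] + o[1] == 7, omega)
print(p_sum7)  # 1/6
```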

  16. Application of a semi-automatic cartilage segmentation method for biomechanical modeling of the knee joint.

    PubMed

    Liukkonen, Mimmi K; Mononen, Mika E; Tanska, Petri; Saarakkala, Simo; Nieminen, Miika T; Korhonen, Rami K

    2017-10-01

Manual segmentation of articular cartilage from knee joint 3D magnetic resonance images (MRI) is a time-consuming and laborious task. Thus, automatic methods are needed for faster and reproducible segmentations. In the present study, we developed a semi-automatic segmentation method based on radial intensity profiles to generate 3D geometries of knee joint cartilage, which were then used in computational biomechanical models of the knee joint. Six healthy volunteers were imaged with a 3T MRI device and their knee cartilages were segmented both manually and semi-automatically. The values of cartilage thicknesses and volumes produced by these two methods were compared. Furthermore, the influences of possible geometrical differences on cartilage stresses and strains in the knee were evaluated with finite element modeling. The semi-automatic segmentation and 3D geometry construction of one knee joint (menisci, femoral and tibial cartilages) was approximately two times faster than with manual segmentation. Differences in cartilage thicknesses, volumes, contact pressures, stresses, and strains between segmentation methods in femoral and tibial cartilage were mostly insignificant (p > 0.05) and random, i.e. there were no systematic differences between the methods. In conclusion, the devised semi-automatic segmentation method is a quick and accurate way to determine cartilage geometries; it may become a valuable tool for biomechanical modeling applications with large patient groups.
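The core idea of a radial-intensity-profile segmentation is to walk outward along each ray and locate where the image intensity indicates a tissue boundary. A deliberately crude one-dimensional caricature of that rule (the threshold rule and the sample values are hypothetical, not the paper's method):

```python
def boundary_index(profile, threshold):
    """Locate the first sample along a radial intensity profile that
    crosses the threshold: a crude stand-in for a boundary-detection rule."""
    for i, v in enumerate(profile):
        if v >= threshold:
            return i
    return None  # no boundary found along this ray

profile = [10, 12, 15, 40, 90, 95, 60, 20]  # intensities along one ray
print(boundary_index(profile, 50))  # 4
```

A full method would repeat this along many rays around the joint centre and fit a smooth surface through the detected boundary points.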

  17. ONR Tidal Flats DRI: Planning Joint Modeling and Field Exercises

    DTIC Science & Technology

    2007-01-01

ONR Tidal Flats DRI: Planning Joint Modeling and Field Exercises. Lyle Hibler, Battelle/Marine Research Operations, 1529 West Sequim Bay Road, Sequim, WA 98382. Phone: (360) 681-4591; Fax: (360) 681-4598; Email: adam.maxwell@pnl.gov. Award Number: N000140710694.

  18. Joint Bayesian Estimation of Quasar Continua and the Lyα Forest Flux Probability Distribution Function

    NASA Astrophysics Data System (ADS)

    Eilers, Anna-Christina; Hennawi, Joseph F.; Lee, Khee-Gan

    2017-08-01

We present a new Bayesian algorithm making use of Markov Chain Monte Carlo sampling that allows us to simultaneously estimate the unknown continuum level of each quasar in an ensemble of high-resolution spectra, as well as their common probability distribution function (PDF) for the transmitted Lyα forest flux. This fully automated PDF-regulated continuum-fitting method models the unknown quasar continuum with a linear principal component analysis (PCA) basis, with the PCA coefficients treated as nuisance parameters. The method allows one to estimate parameters governing the thermal state of the intergalactic medium (IGM), such as the slope of the temperature-density relation γ − 1, while marginalizing out continuum uncertainties in a fully Bayesian way. Using realistic mock quasar spectra created from a simplified semi-numerical model of the IGM, we show that this method recovers the underlying quasar continua to a precision of ≃7% and ≃10% at z = 3 and z = 5, respectively. Given the number of principal component spectra, this is comparable to the underlying accuracy of the PCA model itself. Most importantly, we show that we can achieve a nearly unbiased estimate of the slope γ − 1 of the IGM temperature-density relation with a precision of ±8.6% at z = 3 and ±6.1% at z = 5, for an ensemble of ten mock high-resolution quasar spectra. Applying this method to real quasar spectra and comparing to a more realistic IGM model from hydrodynamical simulations would enable precise measurements of the thermal and cosmological parameters governing the IGM, albeit with somewhat larger uncertainties, given the increased flexibility of the model.
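The marginalization strategy described here — sampling the joint posterior of the parameter of interest and the nuisance parameters, then keeping only the parameter of interest — can be illustrated with a toy Metropolis-Hastings sampler on a made-up two-parameter posterior (this is not the paper's pipeline; the target density, step size, and seed are all illustrative):

```python
import math
import random

def log_post(theta, eta):
    # Toy joint log-posterior: theta is the parameter of interest
    # (posterior mean 2.0), eta is a nuisance parameter.
    return -0.5 * (theta - 2.0) ** 2 - 0.5 * (eta + 1.0) ** 2

def metropolis(n_samples, step=0.8, seed=0):
    rng = random.Random(seed)
    theta, eta = 0.0, 0.0
    draws = []
    for _ in range(n_samples):
        t_new = theta + rng.gauss(0, step)
        e_new = eta + rng.gauss(0, step)
        # Standard Metropolis accept/reject on the joint posterior.
        if math.log(rng.random()) < log_post(t_new, e_new) - log_post(theta, eta):
            theta, eta = t_new, e_new
        draws.append(theta)  # keeping only theta marginalizes out eta
    return draws

draws = metropolis(20000)
burn = draws[5000:]
print(sum(burn) / len(burn))  # close to 2.0
```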

  19. The World According to de Finetti: On de Finetti's Theory of Probability and Its Application to Quantum Mechanics

    NASA Astrophysics Data System (ADS)

    Berkovitz, Joseph

Bruno de Finetti is one of the founding fathers of the subjectivist school of probability, where probabilities are interpreted as rational degrees of belief. His work on the relation between the theorems of probability and rationality is among the cornerstones of modern subjective probability theory. De Finetti maintained that rationality requires that degrees of belief be coherent, and he argued that the whole of probability theory could be derived from these coherence conditions. De Finetti's interpretation of probability has been highly influential in science. This paper focuses on the application of this interpretation to quantum mechanics. We argue that de Finetti held that the coherence conditions of degrees of belief in events depend on their verifiability. Accordingly, the standard coherence conditions of degrees of belief that are familiar from the literature on subjective probability only apply to degrees of belief in events which could (in principle) be jointly verified; the coherence conditions of degrees of belief in events that cannot be jointly verified are weaker. While the most obvious explanation of de Finetti's verificationism is the influence of positivism, we argue that it could be motivated by the radically subjectivist and instrumental nature of probability in his interpretation; for, as it turns out, in this interpretation it is difficult to make sense of the idea of coherent degrees of belief in, and accordingly probabilities of, unverifiable events. We then consider the application of this interpretation to quantum mechanics, concentrating on the Einstein-Podolsky-Rosen experiment and Bell's theorem.

  20. Modelling the regional variability of the probability of high trihalomethane occurrence in municipal drinking water.

    PubMed

    Cool, Geneviève; Lebel, Alexandre; Sadiq, Rehan; Rodriguez, Manuel J

    2015-12-01

The regional variability of the probability of occurrence of high total trihalomethane (TTHM) levels was assessed using multilevel logistic regression models that incorporate environmental and infrastructure characteristics. The models were structured in a three-level hierarchical configuration: samples (first level), drinking water utilities (DWUs, second level) and natural regions, an ecological hierarchical division from the Quebec ecological framework of reference (third level). They considered six independent variables: precipitation, temperature, source type, seasons, treatment type and pH. The average probability of TTHM concentrations exceeding the targeted threshold was 18.1%. The probability was influenced by seasons, treatment type, precipitation and temperature. The variance at all levels was significant, showing that the probability of TTHM concentrations exceeding the threshold is most likely to be similar if located within the same DWU and within the same natural region. However, most of the variance initially attributed to natural regions was explained by treatment types and clarified by spatial aggregation on treatment types. Nevertheless, even after controlling for treatment type, there was still significant regional variability of the probability of TTHM concentrations exceeding the threshold. Regional variability was particularly important for DWUs using chlorination alone since they lack the appropriate treatment required to reduce the amount of natural organic matter (NOM) in source water prior to disinfection. Results presented herein could be of interest to authorities in identifying regions with specific needs regarding drinking water quality and for epidemiological studies identifying geographical variations in population exposure to disinfection by-products (DBPs).
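In a multilevel logistic model like the one described, the exceedance probability for a sample is the inverse-logit of a linear predictor combining fixed effects with region- and utility-level random intercepts. A minimal sketch (all coefficient values are hypothetical, not estimates from the study):

```python
import math

def exceedance_probability(fixed_effects, covariates, region_effect, utility_effect):
    """Probability that TTHM exceeds the threshold under a toy three-level
    logistic model: sample within utility within region."""
    eta = sum(b * x for b, x in zip(fixed_effects, covariates))
    eta += region_effect + utility_effect  # random intercepts
    return 1.0 / (1.0 + math.exp(-eta))   # inverse-logit link

# Hypothetical coefficients for (intercept, temperature, precipitation).
beta = [-1.5, 0.04, 0.02]
p = exceedance_probability(beta, [1.0, 20.0, 10.0],
                           region_effect=0.3, utility_effect=-0.1)
print(round(p, 3))  # 0.426
```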

  1. Joint models for longitudinal and time-to-event data: a review of reporting quality with a view to meta-analysis.

    PubMed

    Sudell, Maria; Kolamunnage-Dona, Ruwanthi; Tudur-Smith, Catrin

    2016-12-05

    Joint models for longitudinal and time-to-event data are commonly used to simultaneously analyse correlated data in single study cases. Synthesis of evidence from multiple studies using meta-analysis is a natural next step but its feasibility depends heavily on the standard of reporting of joint models in the medical literature. During this review we aim to assess the current standard of reporting of joint models applied in the literature, and to determine whether current reporting standards would allow or hinder future aggregate data meta-analyses of model results. We undertook a literature review of non-methodological studies that involved joint modelling of longitudinal and time-to-event medical data. Study characteristics were extracted and an assessment of whether separate meta-analyses for longitudinal, time-to-event and association parameters were possible was made. The 65 studies identified used a wide range of joint modelling methods in a selection of software. Identified studies concerned a variety of disease areas. The majority of studies reported adequate information to conduct a meta-analysis (67.7% for longitudinal parameter aggregate data meta-analysis, 69.2% for time-to-event parameter aggregate data meta-analysis, 76.9% for association parameter aggregate data meta-analysis). In some cases model structure was difficult to ascertain from the published reports. Whilst extraction of sufficient information to permit meta-analyses was possible in a majority of cases, the standard of reporting of joint models should be maintained and improved. Recommendations for future practice include clear statement of model structure, of values of estimated parameters, of software used and of statistical methods applied.

  2. Finite element modelling of the articular disc behaviour of the temporo-mandibular joint under dynamic loads.

    PubMed

    Jaisson, Maxime; Lestriez, Philippe; Taiar, Redha; Debray, Karl

    2011-01-01

The proposed biodynamic model of the articular disc joint can directly affect the complete chewing process and the related muscles that define its kinematics. When subjected to stresses from the mastication muscles, the disc absorbs one part and redistributes the other, becoming completely distorted. To develop a realistic model of this intricate joint, CT scan and MRI images from a patient were obtained to create sections (layers) and an anatomical joint CAD model, together with its corresponding mesh, using the finite element method. The boundary conditions are described by the external forces applied to the joint model through a decomposition of the maximum muscular force developed by the same individual. In this study, the maximum force was applied at frequencies close to the actual chewing frequency, measured through a cyclic loading condition. The reaction force at the glenoid fossa was found to be around 1035 N and is directly related to the frequency of indentation. It is also shown that over the years the areas of maximum stress are located at the lateral portion of the disc and on its posterior rim. These stresses can reach 13.2 MPa after a period of 32 seconds (s) at a frequency of 0.5 Hz. An important part of this study is to highlight resilience and the areas where stresses are at their maximum. This study provides a novel approach to improve the understanding of this complex joint, as well as to assess the different pathologies associated with disc disease that would be difficult to study otherwise.

  3. A joint modelling approach for multistate processes subject to resolution and under intermittent observations.

    PubMed

    Yiu, Sean; Tom, Brian

    2017-02-10

Multistate processes provide a convenient framework when interest lies in characterising the transition intensities between a set of defined states. If, however, there is an unobserved event of interest (it is not known if and when the event occurs) which, when it occurs, stops future transitions in the multistate process from occurring, then drawing inference from the joint multistate and event process can be problematic. In health studies, a particular example of this could be resolution, where a resolved patient can no longer experience any further symptoms, and this is explored here for illustration. A multistate model is proposed that includes the state space of the original multistate process but partitions the state representing absent symptoms into a latent absorbing resolved state and a temporary transient state of absent symptoms. The expanded state space explicitly distinguishes between resolved and temporary spells of absent symptoms through disjoint states and allows the uncertainty of not knowing whether resolution has occurred to be easily captured when constructing the likelihood; observations of absent symptoms can be considered to be temporary or to have resulted from resolution. The proposed methodology is illustrated on a psoriatic arthritis data set where the outcome of interest is a set of intermittently observed disability scores. Estimated probabilities of resolving are also obtained from the model. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
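The expanded state space can be pictured as a Markov chain in which "absent symptoms" is split into a transient state and a latent absorbing "resolved" state. A discrete-time toy version (the transition probabilities below are invented for illustration; the paper works with continuous-time intensities):

```python
def step(dist, P):
    """One discrete-time step of a Markov chain: new_dist = dist @ P."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

# States: 0 = symptoms present, 1 = symptoms absent (transient),
#         2 = resolved (latent absorbing; also presents as absent symptoms).
P = [[0.7, 0.25, 0.05],
     [0.3, 0.60, 0.10],
     [0.0, 0.00, 1.00]]

dist = [1.0, 0.0, 0.0]  # start with symptoms present
for _ in range(10):
    dist = step(dist, P)
print(round(dist[2], 3))  # probability of having resolved by step 10
```

Because states 1 and 2 are observationally identical (both show absent symptoms), a likelihood built on this chain must sum over both when absent symptoms are recorded, which is exactly the uncertainty the proposed model captures.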

  4. Homogenization of Electromagnetic and Seismic Wavefields for Joint Inverse Modeling

    NASA Astrophysics Data System (ADS)

    Newman, G. A.; Commer, M.; Petrov, P.; Um, E. S.

    2011-12-01

A significant obstacle in developing a robust joint imaging technology exploiting seismic and electromagnetic (EM) wave fields is the resolution at which these different geophysical measurements sense the subsurface. Imaging of seismic reflection data is an order of magnitude finer in resolution and scale compared to images produced with EM data. A consistent joint image of the subsurface geophysical attributes (velocity, electrical conductivity) requires that the different geophysical data types be similar in their resolution of the subsurface. The superior resolution of seismic data results from the fact that the energy propagates as a wave, while propagation of EM energy is diffusive and attenuates with distance. On the other hand, the complexity of the seismic wave field can be a significant problem due to high reflectivity of the subsurface and the generation of multiple scattering events. While seismic wave fields have been very useful in mapping the subsurface for energy resources, too much scattering and too many reflections can lead to difficulties in imaging and interpreting seismic data. To overcome these obstacles, a formulation for joint imaging of seismic and EM wave fields is introduced, where each data type is matched in resolution. In order to accomplish this, seismic data are first transformed into the Laplace-Fourier domain, which changes the modeling of the seismic wave field from wave propagation to diffusion. Though high frequency information (reflectivity) is lost with this transformation, several benefits follow: (1) seismic and EM data can be easily matched in resolution, governed by the same physics of diffusion, (2) standard least squares inversion works well with diffusive type problems including both transformed seismic and EM, (3) joint imaging of seismic and EM data may produce better starting velocity models critical for successful reverse time migration or full waveform imaging of seismic data (non transformed) and (4
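The Laplace-Fourier transform at the heart of this approach evaluates the wavefield at a complex frequency s + iω; the real part s damps the trace exponentially, which is what pushes the seismic data toward diffusive behaviour. A discrete one-trace sketch (the sample trace and damping value are illustrative):

```python
import cmath
import math

def laplace_fourier(signal, dt, s, omega):
    """Discrete Laplace-Fourier transform of a sampled trace f(k*dt):
    F(s + i*omega) = sum_k f_k * exp(-(s + i*omega) * k * dt) * dt."""
    return sum(f * cmath.exp(-(s + 1j * omega) * k * dt) * dt
               for k, f in enumerate(signal))

# A toy 1 Hz cosine "trace" sampled at 100 Hz for one second.
dt = 0.01
trace = [math.cos(2 * math.pi * 1.0 * k * dt) for k in range(100)]

undamped = laplace_fourier(trace, dt, s=0.0, omega=2 * math.pi)
damped = laplace_fourier(trace, dt, s=5.0, omega=2 * math.pi)
print(abs(undamped), abs(damped))  # damping shrinks the spectral peak
```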

  5. Time series modeling of pathogen-specific disease probabilities with subsampled data.

    PubMed

    Fisher, Leigh; Wakefield, Jon; Bauer, Cici; Self, Steve

    2017-03-01

Many diseases arise due to exposure to one of multiple possible pathogens. We consider the situation in which disease counts are available over time from a study region, along with a measure of clinical disease severity, for example, mild or severe. In addition, we suppose a subset of the cases are lab tested in order to determine the pathogen responsible for disease. In such a context, we focus interest on modeling the probabilities of disease incidence given pathogen type. The time course of these probabilities is of great interest, as is the association with time-varying covariates such as meteorological variables. In this setup, a natural Bayesian approach would be based on imputation of the unsampled pathogen information using Markov Chain Monte Carlo, but this is computationally challenging. We describe a practical approach to inference that is easy to implement. We use an empirical Bayes procedure in a first step to estimate summary statistics. We then treat these summary statistics as the observed data and develop a Bayesian generalized additive model. We analyze data on hand, foot, and mouth disease (HFMD) in China in which there are two pathogens of primary interest, enterovirus 71 (EV71) and Coxsackievirus A16 (CA16). We find that both EV71 and CA16 are associated with temperature, relative humidity, and wind speed, with reasonably similar functional forms for both pathogens. The important issue of confounding by time is modeled using a penalized B-spline model with a random effects representation. The level of smoothing is addressed by a careful choice of the prior on the tuning variance. © 2016, The International Biometric Society.
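The first stage of the two-step approach turns subsampled lab results into pathogen-specific summary statistics. A heavily simplified caricature of that idea — scaling the total case count by the pathogen fraction in the tested subsample — is shown below (the numbers are invented; the paper's actual first step is an empirical Bayes procedure, not this plug-in estimate):

```python
def pathogen_count_estimate(total_cases, tested, tested_positive):
    """Plug-in summary statistic: scale the total case count by the
    pathogen fraction observed in the lab-tested subsample."""
    fraction = tested_positive / tested
    return total_cases * fraction

# A week with 500 HFMD cases; 40 lab-tested, 25 attributed to EV71.
ev71_estimate = pathogen_count_estimate(500, 40, 25)
print(ev71_estimate)  # 312.5
```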

  6. Prediction of hip joint load and translation using musculoskeletal modelling with force-dependent kinematics and experimental validation.

    PubMed

    Zhang, Xuan; Chen, Zhenxian; Wang, Ling; Yang, Wenjian; Li, Dichen; Jin, Zhongmin

    2015-07-01

Musculoskeletal lower limb models are widely used to predict the resultant contact force in the hip joint as a non-invasive alternative to instrumented implants. Previous musculoskeletal models based on rigid body assumptions treated the hip joint as an ideal sphere with only three rotational degrees of freedom. A musculoskeletal model that considered force-dependent kinematics with three additional translational degrees of freedom was developed and validated in this study by comparing it with a previous experimental measurement. A 32-mm femoral head against a polyethylene cup was considered in the musculoskeletal model for calculating the contact forces. The changes in the main modelling parameters were found to have little influence on the hip joint forces (relative deviation of peak value < 10 BW%, mean trial deviation < 20 BW%). The centre of the hip joint translation was more sensitive to the changes in the main modelling parameters, especially muscle recruitment type (relative deviation of peak value < 20%, mean trial deviation < 0.02 mm). The predicted hip contact forces showed consistent profiles, compared with the experimental measurements, except in the lateral-medial direction. The ratio-average analysis, based on Bland-Altman plots, showed better limits of agreement in climbing stairs (mean limits of agreement: -2.0 to 6.3 in walking, mean limits of agreement: -0.5 to 3.1 in climbing stairs). Better agreement of the predicted hip contact forces was also found during the stance phase. The force-dependent kinematics approach underestimated the maximum hip contact force by a mean value of 6.68 ± 1.75% BW compared with the experimental measurements. The predicted maximum translations of the hip joint centres were 0.125 ± 0.03 mm in level walking and 0.123 ± 0.005 mm in climbing stairs. © IMechE 2015.

  7. Development of Mathematic Model of Cold Welding at Drawing-up the Flange Joint of Pneumohydraulic Systems

    NASA Astrophysics Data System (ADS)

    Boyko, Y. S.

    2002-01-01

Provision of high airtightness of joints of pipelines of pneumohydraulic systems (PHS) operating under high pressure is an important task in the design and operation of launch vehicles. In the process of assembly and testing of launch-vehicle PHS, it was found that detachable flange joints do not lose their airtightness after removal of the fastening elements, even under standard loads. The aim of this work is to study the phenomenon behind the observed adhesion effect, as well as the stresses in the contact zone when drawing up detachable flange joints with a plastic gasket. Investigations have shown that the tightness of the joint is maintained by cold welding, as the conditions created are favourable for that process. As a result of the investigations performed, we have developed a mathematical model based on the theory of metal bonds, which explains the essence of the observed effect. The basic factors that provide the optimum mode of cold welding are the force that can cause microplastic deformation and establish maximum contact, and the surface finish quality of the joined materials. The strength of the all-metal joint depends on the actual contact area. Surface finish quality therefore determines the configuration of micro-asperities, which come into contact non-simultaneously; their stress states differ, and this influences how the contact area depends on loading. Results of calculations with the mathematical model are expressed as dependencies of the actual contact area and the individual contact-spot diameter on the applied compressive load, for materials with various physical properties and surface finish qualities.
The mathematical model makes it possible to explain the common character of the cold-welding process in detachable flange joints with a plastic gasket, to determine the nature and character of the acting forces, and to define the kinetics and mechanism of formation of cold welding of

  8. Inverse-Probability-Weighted Estimation for Monotone and Nonmonotone Missing Data

    PubMed Central

    Sun, BaoLuo; Perkins, Neil J; Cole, Stephen R; Harel, Ofer; Mitchell, Emily M; Schisterman, Enrique F; Tchetgen Tchetgen, Eric J

    2018-01-01

Missing data is a common occurrence in epidemiologic research. In this paper, 3 data sets with induced missing values from the Collaborative Perinatal Project, a multisite US study conducted from 1959 to 1974, are provided as examples of prototypical epidemiologic studies with missing data. Our goal was to estimate the association of maternal smoking behavior with spontaneous abortion while adjusting for numerous confounders. At the same time, we did not necessarily wish to evaluate the joint distribution among potentially unobserved covariates, which is seldom the subject of substantive scientific interest. The inverse probability weighting (IPW) approach preserves the semiparametric structure of the underlying model of substantive interest and clearly separates the model of substantive interest from the model used to account for the missing data. However, IPW often will not result in valid inference if the missing-data pattern is nonmonotone, even if the data are missing at random. We describe a recently proposed approach to modeling nonmonotone missing-data mechanisms under missingness at random to use in constructing the weights in IPW complete-case estimation, and we illustrate the approach using 3 data sets described in a companion article (Am J Epidemiol. 2018;187(3):568–575). PMID:29165557
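The IPW complete-case principle is that each observed unit is up-weighted by the inverse of its modeled probability of being observed. A minimal Horvitz-Thompson-style sketch of that principle (the data and probabilities are invented; this toy does not address the nonmonotone weighting construction the paper proposes):

```python
def ipw_mean(values, observed, response_prob):
    """IPW estimate of a mean with missing values: each complete case
    is weighted by 1 / P(observed); missing cases contribute nothing."""
    total = sum(y / p for y, r, p in zip(values, observed, response_prob) if r)
    return total / len(values)

y = [2.0, 4.0, 6.0, 8.0]
r = [1, 1, 0, 1]           # third value is missing
pi = [0.8, 0.5, 0.5, 1.0]  # modeled probabilities of being observed
print(ipw_mean(y, r, pi))
```

Unbiasedness hinges on the response probabilities being correctly modeled, which is exactly why nonmonotone patterns, where those probabilities are hard to specify, are problematic.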

  9. Outcomes of a Joint Replacement Surgical Home Model Clinical Pathway

    PubMed Central

    Chaurasia, Avinash; Garson, Leslie; Kain, Zeev L.; Schwarzkopf, Ran

    2014-01-01

    Optimizing perioperative care to provide maximum benefit at minimum cost may be best achieved using a perioperative clinical pathway (PCP). Using our joint replacement surgical home (JSH) model PCP, we examined length of stay (LOS) following total joint arthroplasty (TJA) to evaluate patient care optimization. We reviewed a spectrum of clinical measurements in 190 consecutive patients who underwent TJA. Patients who had surgery earlier in the week and who were earlier cases of the day had a significantly lower LOS than patients whose cases started both later in the week and later in the day. Patients discharged home had significantly lower LOS than those discharged to a secondary care facility. Patients who received regional versus general anesthesia had a significantly lower LOS. Scheduling patients discharged to home and who will likely receive regional anesthesia for the earliest morning slot and earlier in the week may help decrease overall LOS. PMID:25025045

  10. Models for selecting GMA Welding Parameters for Improving Mechanical Properties of Weld Joints

    NASA Astrophysics Data System (ADS)

    Srinivasa Rao, P.; Ramachandran, Pragash; Jebaraj, S.

    2016-02-01

During gas metal arc welding (GMAW), the mechanical properties of the weld joint are influenced by welding parameters such as welding current and arc voltage. These parameters directly influence the quality of the weld in terms of mechanical properties. Even a small variation in any of the cited parameters may have an important effect on depth of penetration and on joint strength. In this study, S45C constructional steel is taken as the base metal to be tested using the parameters wire feed rate, voltage and type of shielding gas. Mechanical properties considered in the present study are tensile strength and hardness. The testing of weld specimens is carried out as per ASTM standards. Mathematical models to predict the tensile strength and depth of penetration of the weld joint have been developed by regression analysis using the experimental results.
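Regression models of the kind described fit coefficients by least squares over the experimental trials. A single-predictor ordinary-least-squares sketch (the voltage/strength pairs are hypothetical, not the study's data, which would also include wire feed rate and shielding gas terms):

```python
def fit_linear(xs, ys):
    """Ordinary least squares for y = a + b*x (one predictor)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

# Hypothetical (arc voltage in V, tensile strength in MPa) trial data.
volts = [20.0, 22.0, 24.0, 26.0]
uts = [410.0, 430.0, 450.0, 470.0]
a, b = fit_linear(volts, uts)
print(a, b)  # intercept 210.0, slope 10.0
```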

  11. Joint Spatio-Temporal Shared Component Model with an Application in Iran Cancer Data

    PubMed

    Mahaki, Behzad; Mehrabi, Yadollah; Kavousi, Amir; Schmid, Volker J

    2018-06-25

Background: Among the proposals for joint disease mapping, the shared component model has become increasingly popular. Another advance to strengthen inference from disease data is the extension of purely spatial models to include a time aspect. We aim to combine the idea of multivariate shared components with spatio-temporal modelling in a joint disease mapping model and apply it to incidence rates of seven prevalent cancers in Iran, which together account for approximately 50% of all cancers. Methods: In the proposed model, each component is shared by a different subset of diseases, spatial and temporal trends are considered for each component, and the relative weight of these trends for each component, for each relevant disease, can be estimated. Results: For esophagus and stomach cancers, the Northern provinces were the high-risk area. For colorectal cancer, Gilan, Semnan, Fars, Isfahan, Yazd and East-Azerbaijan were the highest-risk provinces. For bladder and lung cancer, the northwest was the highest-risk area. For prostate and breast cancers, Isfahan, Yazd, Fars, Tehran, Semnan, Mazandaran and Khorasane-Razavi were the highest-risk areas. The smoking component, shared by esophagus, stomach, bladder and lung cancers, had the largest effect in Gilan, Mazandaran, Chaharmahal and Bakhtiari, Kohgilouyeh and Boyerahmad, Ardebil and Tehran provinces, in turn. For the overweight and obesity component, shared by esophagus, colorectal, prostate and breast cancers, the largest effect was found for Tehran, Khorasane-Razavi, Semnan, Yazd, Isfahan, Fars, Mazandaran and Gilan, in turn. For the low physical activity component, shared by colorectal and breast cancers, North-Khorasan, Ardebil, Golestan, Ilam, Khorasane-Razavi and South-Khorasan had the largest effects, in turn. The smoking component is significantly more important for stomach than for esophagus, bladder and lung cancers. The overweight and obesity component had a significantly larger effect for colorectal than for esophagus cancer. Conclusions: The presented model is a
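In shared component models, the log relative risk of a disease in an area is decomposed into weighted shared components (e.g. a smoking surface shared by several cancers) plus a disease-specific residual. A stripped-down numeric sketch of that decomposition (all component values and weights are invented, and the full model is spatio-temporal and fitted in a Bayesian framework, none of which is shown here):

```python
import math

def relative_risk(shared_values, weights, specific):
    """Toy shared-component decomposition: the log relative risk of one
    disease in one area is a weighted sum of shared component values
    plus a disease-specific term."""
    log_rr = sum(w * s for w, s in zip(weights, shared_values)) + specific
    return math.exp(log_rr)

# Hypothetical smoking and obesity component values for one province,
# with disease-specific weights for, say, stomach cancer.
rr = relative_risk([0.2, 0.1], [1.5, 0.8], specific=-0.05)
print(round(rr, 3))
```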

  12. Imaging technologies for preclinical models of bone and joint disorders

    PubMed Central

    2011-01-01

Preclinical models for musculoskeletal disorders are critical for understanding the pathogenesis of bone and joint disorders in humans and for the development of effective therapies. The assessment of these models primarily relies on morphological analysis, which remains time-consuming and costly, requiring large numbers of animals to be tested through different stages of the disease. The implementation of preclinical imaging represents a keystone in the refinement of animal models, allowing longitudinal studies and enabling a powerful, non-invasive and clinically translatable way of monitoring disease progression in real time. Our aim is to highlight examples that demonstrate the advantages and limitations of different imaging modalities, including magnetic resonance imaging (MRI), computed tomography (CT), positron emission tomography (PET), single-photon emission computed tomography (SPECT) and optical imaging, all of which are in current use in preclinical skeletal research. MRI can provide high resolution of soft tissue structures, but imaging requires comparatively long acquisition times; hence, animals require long-term anaesthesia. CT is extensively used in bone and joint disorders, providing excellent spatial resolution and good contrast for bone imaging. Despite its excellent structural assessment of mineralized structures, CT does not provide in vivo functional information on ongoing biological processes. Nuclear medicine is a very promising tool for investigating functional and molecular processes in vivo, with new tracers becoming available as biomarkers. The combined use of imaging modalities also holds significant potential for the assessment of disease pathogenesis in animal models of musculoskeletal disorders, minimising the use of conventional invasive methods and animal redundancy. PMID:22214535

  13. Incorporating imperfect detection into joint models of communities: A response to Warton et al.

    USGS Publications Warehouse

    Beissinger, Steven R.; Iknayan, Kelly J.; Guillera-Arroita, Gurutzeta; Zipkin, Elise; Dorazio, Robert; Royle, Andy; Kery, Marc

    2016-01-01

    Warton et al. [1] advance community ecology by describing a statistical framework that can jointly model abundances (or distributions) across many taxa to quantify how community properties respond to environmental variables. This framework specifies the effects of both measured and unmeasured (latent) variables on the abundance (or occurrence) of each species. Latent variables are random effects that capture the effects of both missing environmental predictors and correlations in parameter values among different species. As presented in Warton et al., however, the joint modeling framework fails to account for the common problem of detection or measurement errors that always accompany field sampling of abundance or occupancy, and are well known to obscure species- and community-level inferences.

  14. A consistent NPMLE of the joint distribution function with competing risks data under the dependent masking and right-censoring model.

    PubMed

    Li, Jiahui; Yu, Qiqing

    2016-01-01

    Dinse (Biometrics, 38:417-431, 1982) provides a special type of right-censored and masked competing risks data and proposes a non-parametric maximum likelihood estimator (NPMLE) and a pseudo MLE of the joint distribution function [Formula: see text] with such data. However, their asymptotic properties have not been studied so far. Under the extension of either the conditional masking probability (CMP) model or the random partition masking (RPM) model (Yu and Li, J Nonparametr Stat 24:753-764, 2012), we show that (1) Dinse's estimators are consistent if [Formula: see text] takes on finitely many values and each point in the support set of [Formula: see text] can be observed; (2) if the failure time is continuous, the NPMLE is not uniquely determined, and the standard approach (which puts weights only on one element in each observed set) leads to an inconsistent NPMLE; (3) in general, Dinse's estimators are not consistent even under the discrete assumption; (4) we construct a consistent NPMLE. The consistency is given under a new model called the dependent masking and right-censoring model. The CMP model and the RPM model are indeed special cases of the new model. We compare our estimator to Dinse's estimators through simulation and real data. The simulation study indicates that the consistent NPMLE is a good approximation to the underlying distribution for moderate sample sizes.

  15. An equilibrium-point model of electromyographic patterns during single-joint movements based on experimentally reconstructed control signals.

    PubMed

    Latash, M L; Goodman, S R

    1994-01-01

    The purpose of this work has been to develop a model of electromyographic (EMG) patterns during single-joint movements based on a version of the equilibrium-point hypothesis, a method for experimental reconstruction of the joint compliant characteristics, the dual-strategy hypothesis, and a kinematic model of movement trajectory. EMG patterns are considered emergent properties of hypothetical control patterns that are equally affected by the control signals and by peripheral feedback reflecting the actual movement trajectory. A computer model generated the EMG patterns based on simulated movement kinematics and hypothetical control signals derived from the reconstructed joint compliant characteristics. The model predictions have been compared to published recordings of movement kinematics and EMG patterns in a variety of movement conditions, including movements over different distances, at different speeds, against different known inertial loads, and in conditions of a possible unexpected decrease in the inertial load. Changes in task parameters within the model led to simulated EMG patterns qualitatively similar to the experimentally recorded EMG patterns. The model's predictive power compares favourably with that of existing models of EMG patterns. Copyright © 1994. Published by Elsevier Ltd.

  16. Experimental Investigation on the Fatigue Mechanical Properties of Intermittently Jointed Rock Models Under Cyclic Uniaxial Compression with Different Loading Parameters

    NASA Astrophysics Data System (ADS)

    Liu, Yi; Dai, Feng; Dong, Lu; Xu, Nuwen; Feng, Peng

    2018-01-01

    Intermittently jointed rocks, widely existing in many mining and civil engineering structures, are quite susceptible to cyclic loading. Understanding the fatigue mechanism of jointed rocks is vital to the rational design and the long-term stability analysis of rock structures. In this study, the fatigue mechanical properties of synthetic jointed rock models under different cyclic conditions are systematically investigated in the laboratory, including four loading frequencies, four maximum stresses, and four amplitudes. Our experimental results reveal the influence of the three cyclic loading parameters on the mechanical properties of jointed rock models, regarding the fatigue deformation characteristics, the fatigue energy and damage evolution, and the fatigue failure and progressive failure behavior. Under lower loading frequency or higher maximum stress and amplitude, the jointed specimen is characterized by higher fatigue deformation moduli and higher dissipated hysteresis energy, resulting in higher cumulative damage and lower fatigue life. However, the fatigue failure modes of jointed specimens are independent of cyclic loading parameters; all tested jointed specimens exhibit a prominent tensile splitting failure mode. Three different crack coalescence patterns are classified between two adjacent joints. Furthermore, different from the progressive failure under static monotonic loading, the jointed rock specimens under cyclic compression fail more abruptly without evident preceding signs. The tensile cracks on the front surface of jointed specimens always initiate from the joint tips and then propagate at a certain angle with the joints toward the direction of maximum compression.

  17. On the apparent insignificance of the randomness of flexible joints on large space truss dynamics

    NASA Technical Reports Server (NTRS)

    Koch, R. M.; Klosner, J. M.

    1993-01-01

    Deployable periodic large space structures have been shown to exhibit high dynamic sensitivity to period-breaking imperfections and uncertainties. These can be brought on by manufacturing or assembly errors, structural imperfections, as well as nonlinear and/or nonconservative joint behavior. In addition, the necessity of precise pointing and position capability can require the consideration of these usually negligible and unknown parametric uncertainties and their effect on the overall dynamic response of large space structures. This work describes the use of a new design approach for the global dynamic solution of beam-like periodic space structures possessing parametric uncertainties. Specifically, the effect of random flexible joints on the free vibrations of simply-supported periodic large space trusses is considered. The formulation is a hybrid approach in terms of an extended Timoshenko beam continuum model, Monte Carlo simulation scheme, and first-order perturbation methods. The mean and mean-square response statistics for a variety of free random vibration problems are derived for various input random joint stiffness probability distributions. The results of this effort show that, although joint flexibility has a substantial effect on the modal dynamic response of periodic large space trusses, the effect of any reasonable uncertainty or randomness associated with these joint flexibilities is insignificant.
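The paper's conclusion, that stiffness randomness attenuates strongly in the frequency response, follows from the square-root dependence of natural frequency on stiffness. A minimal Monte Carlo sketch of a one-degree-of-freedom idealization makes this visible; the mass, mean stiffness, and 10% coefficient of variation below are hypothetical, not the paper's truss data.

```python
import numpy as np

rng = np.random.default_rng(42)

m = 100.0          # effective modal mass, kg (hypothetical)
k_mean = 1.0e6     # mean joint-dominated stiffness, N/m (hypothetical)
cov = 0.10         # 10% coefficient of variation on joint stiffness

# Monte Carlo: sample stiffness, compute natural frequency f = sqrt(k/m)/(2*pi)
k = rng.normal(k_mean, cov * k_mean, size=100_000)
k = k[k > 0]                        # discard non-physical samples
f = np.sqrt(k / m) / (2 * np.pi)

f_nominal = np.sqrt(k_mean / m) / (2 * np.pi)
print(f"nominal f = {f_nominal:.2f} Hz")
print(f"mean f    = {f.mean():.2f} Hz, std = {f.std():.3f} Hz")
# A 10% stiffness scatter yields only ~5% frequency scatter: the square root
# roughly halves the relative uncertainty, consistent with the paper's finding
# that reasonable joint randomness has little effect on the modal response.
```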

  18. Joint PET-MR respiratory motion models for clinical PET motion correction

    NASA Astrophysics Data System (ADS)

    Manber, Richard; Thielemans, Kris; Hutton, Brian F.; Wan, Simon; McClelland, Jamie; Barnes, Anna; Arridge, Simon; Ourselin, Sébastien; Atkinson, David

    2016-09-01

    Patient motion due to respiration can lead to artefacts and blurring in positron emission tomography (PET) images, in addition to quantification errors. The integration of PET with magnetic resonance (MR) imaging in PET-MR scanners provides complementary clinical information, and allows the use of high spatial resolution and high contrast MR images to monitor and correct motion-corrupted PET data. In this paper we build on previous work to form a methodology for respiratory motion correction of PET data, and show it can improve PET image quality whilst having minimal impact on clinical PET-MR protocols. We introduce a joint PET-MR motion model, using only 1 min per PET bed position of simultaneously acquired PET and MR data to provide a respiratory motion correspondence model that captures inter-cycle and intra-cycle breathing variations. In the model setup, 2D multi-slice MR provides the dynamic imaging component, and PET data, via low spatial resolution framing and principal component analysis, provides the model surrogate. We evaluate different motion models (1D and 2D linear, and 1D and 2D polynomial) by computing model-fit and model-prediction errors on dynamic MR images on a data set of 45 patients. Finally we apply the motion model methodology to 5 clinical PET-MR oncology patient datasets. Qualitative PET reconstruction improvements and artefact reduction are assessed with visual analysis, and quantitative improvements are calculated using standardised uptake value (SUVpeak and SUVmax) changes in avid lesions. We demonstrate the capability of a joint PET-MR motion model to predict respiratory motion by showing significantly improved image quality of PET data acquired before the motion model data. The method can be used to incorporate motion into the reconstruction of any length of PET acquisition, with only 1 min of extra scan time, and with no external hardware required.
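The surrogate-extraction step described above (low-resolution PET framing followed by principal component analysis) can be sketched on synthetic data: treat the short frames as a frames-by-voxels matrix and take the first principal component score per frame as the respiratory surrogate. The frame counts and noise level below are invented, and this is a toy stand-in, not the authors' pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for low-resolution PET frames: n_frames x n_voxels,
# with a hidden respiratory signal modulating voxel intensities.
n_frames, n_voxels = 120, 500
t = np.arange(n_frames)
resp = np.sin(2 * np.pi * t / 20.0)            # hidden breathing trace
pattern = rng.normal(0, 1, n_voxels)           # spatial modulation pattern
frames = np.outer(resp, pattern) + 0.3 * rng.normal(0, 1, (n_frames, n_voxels))

# PCA via SVD of the mean-centred frame matrix; the first principal
# component score per frame serves as the motion-model surrogate.
centred = frames - frames.mean(axis=0)
u, s, vt = np.linalg.svd(centred, full_matrices=False)
surrogate = u[:, 0] * s[0]

# The surrogate should track the hidden respiratory trace (up to sign).
corr = np.corrcoef(surrogate, resp)[0, 1]
print(f"|correlation with hidden trace| = {abs(corr):.2f}")
```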

  19. Modeling Training Site Vegetation Coverage Probability with a Random Optimizing Procedure: An Artificial Neural Network Approach.

    DTIC Science & Technology

    1998-05-01

    Modeling Training Site Vegetation Coverage Probability with a Random Optimizing Procedure: An Artificial Neural Network Approach, by Biing T. Guan, George Z. Gertner, and Alan B... The study models vegetation coverage based on past coverage. Approach: a literature survey was conducted to identify artificial neural network analysis techniques applicable for

  20. Probability and possibility-based representations of uncertainty in fault tree analysis.

    PubMed

    Flage, Roger; Baraldi, Piero; Zio, Enrico; Aven, Terje

    2013-01-01

    Expert knowledge is an important source of input to risk analysis. In practice, experts might be reluctant to characterize their knowledge and the related (epistemic) uncertainty using precise probabilities. The theory of possibility allows for imprecision in probability assignments. The associated possibilistic representation of epistemic uncertainty can be combined with, and transformed into, a probabilistic representation; in this article, we show this with reference to a simple fault tree analysis. We apply an integrated (hybrid) probabilistic-possibilistic computational framework for the joint propagation of the epistemic uncertainty on the values of the (limiting relative frequency) probabilities of the basic events of the fault tree, and we use possibility-probability (probability-possibility) transformations for propagating the epistemic uncertainty within purely probabilistic and possibilistic settings. The results of the different approaches (hybrid, probabilistic, and possibilistic) are compared with respect to the representation of uncertainty about the top event (limiting relative frequency) probability. Both the rationale underpinning the approaches and the computational efforts they require are critically examined. We conclude that the approaches relevant in a given setting depend on the purpose of the risk analysis, and that further research is required to make the possibilistic approaches operational in a risk analysis context. © 2012 Society for Risk Analysis.
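A concrete form of the probability-possibility transformation mentioned above is the standard Dubois-Prade transformation for a discrete distribution: the possibility of an outcome is the total probability of all outcomes no more probable than it. This is a generic illustration of the transformation family, not the specific computational framework of the article.

```python
def prob_to_poss(p):
    """Dubois-Prade probability-to-possibility transformation for a
    discrete distribution: pi_i = sum of all p_j with p_j <= p_i
    (equally probable outcomes share the same possibility)."""
    return [sum(q for q in p if q <= pi) for pi in p]

p = [0.5, 0.3, 0.2]          # a discrete probability distribution
poss = prob_to_poss(p)
print([round(v, 3) for v in poss])   # the most probable outcome gets possibility 1
```

The most probable outcome always receives possibility 1, and the resulting possibility measure dominates the original probability measure, which is the consistency condition linking the two representations.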

  1. Resonances in the cumulative reaction probability for a model electronically nonadiabatic reaction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qi, J.; Bowman, J.M.

    1996-05-01

    The cumulative reaction probability, flux-flux correlation function, and rate constant are calculated for a model, two-state, electronically nonadiabatic reaction, given by Shin and Light [S. Shin and J. C. Light, J. Chem. Phys. 101, 2836 (1994)]. We apply straightforward generalizations of the flux matrix/absorbing boundary condition approach of Miller and co-workers to obtain these quantities. The upper adiabatic electronic potential supports bound states, and these manifest themselves as "recrossing" resonances in the cumulative reaction probability, at total energies above the barrier to reaction on the lower adiabatic potential. At energies below the barrier, the cumulative reaction probability for the coupled system is shifted to higher energies relative to the one obtained for the ground state potential. This is due to the effect of an additional effective barrier caused by the nuclear kinetic operator acting on the ground state, adiabatic electronic wave function, as discussed earlier by Shin and Light. Calculations are reported for five sets of electronically nonadiabatic coupling parameters. © 1996 American Institute of Physics.

  2. Traumatic synovitis in a classical guitarist: a study of joint laxity.

    PubMed

    Bird, H A; Wright, V

    1981-04-01

    A classical guitarist performing for at least 5 hours each day developed a traumatic synovitis at the left wrist joint that was first erroneously considered to be rheumatoid arthritis. Comparison with members of the same guitar class suggested that unusual joint laxity of the fingers and wrist, probably inherited from the patient's father, was of more importance in the aetiology of the synovitis than a wide range of movement acquired by regular practice. Hyperextension of the metacarpophalangeal joint of the left index finger, quantified by the hyperextensometer, was less marked in the guitarists than in 100 normal individuals. This may be attributed to greater muscular control of the fingers. Lateral instability in the loaded joint may be the most important factor in the aetiology of traumatic synovitis.

  3. Quantification of joint inflammation in rheumatoid arthritis by time-resolved diffuse optical spectroscopy and tracer kinetic modeling

    NASA Astrophysics Data System (ADS)

    Ioussoufovitch, Seva; Morrison, Laura B.; Lee, Ting-Yim; St. Lawrence, Keith; Diop, Mamadou

    2015-03-01

    Rheumatoid arthritis (RA) is characterized by chronic synovial inflammation, which can cause progressive joint damage and disability. Diffuse optical spectroscopy (DOS) and imaging have the potential to become potent monitoring tools for RA. We devised a method that combined time-resolved DOS and tracer kinetics modeling to rapidly and reliably quantify blood flow in the joint. Preliminary results obtained from two animals show that the technique can detect joint inflammation as early as 5 days after onset.

  4. A Conditional Joint Modeling Approach for Locally Dependent Item Responses and Response Times

    ERIC Educational Resources Information Center

    Meng, Xiang-Bin; Tao, Jian; Chang, Hua-Hua

    2015-01-01

    The assumption of conditional independence between the responses and the response times (RTs) for a given person is common in RT modeling. However, when the speed of a test taker is not constant, this assumption will be violated. In this article we propose a conditional joint model for item responses and RTs, which incorporates a covariance…

  5. Couple Consensus during Marital Joint Decision-Making: A Context, Process, Outcome Model.

    ERIC Educational Resources Information Center

    Godwin, Deborah D.; Scanzoni, John

    1989-01-01

    Tested conceptual model of context, processes, and outcomes of joint marital decision making of married couples (N=188) which specified spouses' process variables as individual-level measures and partners' consensus as a couple construct. Found context factor of spouses' emotional interdependence influenced both partners' coerciveness and degree…

  6. Multi-scale Characterization and Modeling of Surface Slope Probability Distribution for ~20-km Diameter Lunar Craters

    NASA Astrophysics Data System (ADS)

    Mahanti, P.; Robinson, M. S.; Boyd, A. K.

    2013-12-01

    Craters ~20-km diameter and above significantly shaped the lunar landscape. The statistical nature of the slope distribution on their walls and floors dominates the overall slope distribution statistics for the lunar surface. Slope statistics are inherently useful for characterizing the current topography of the surface, determining accurate photometric and surface scattering properties, and in defining lunar surface trafficability [1-4]. Earlier experimental studies on the statistical nature of lunar surface slopes were restricted either by resolution limits (Apollo era photogrammetric studies) or by model error considerations (photoclinometric and radar scattering studies), where the true nature of the slope probability distribution was not discernible at baselines smaller than a kilometer [2,3,5]. Accordingly, historical modeling of lunar surface slope probability distributions for applications such as scattering theory development or rover traversability assessment is more general in nature (use of simple statistical models such as the Gaussian distribution [1,2,5,6]). With the advent of high resolution, high precision topographic models of the Moon [7,8], slopes in lunar craters can now be obtained at baselines as low as 6 meters, allowing unprecedented multi-scale (multiple baselines) modeling possibilities for slope probability distributions. Topographic analysis (Lunar Reconnaissance Orbiter Camera (LROC) Narrow Angle Camera (NAC) 2-m digital elevation models (DEM)) of ~20-km diameter Copernican lunar craters revealed generally steep slopes on interior walls (30° to 36°, locally exceeding 40°) over 15-meter baselines [9]. In this work, we extend the analysis from a probability distribution modeling point-of-view with NAC DEMs to characterize the slope statistics for the floors and walls for the same ~20-km Copernican lunar craters. The difference in slope standard deviations between the Gaussian approximation and the actual distribution (2-meter sampling) was
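The basic workflow implied above, deriving a slope probability distribution from a gridded DEM and comparing it with a Gaussian approximation, can be sketched with synthetic terrain; a real analysis would load an LROC NAC DEM tile instead of the random surface generated here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a 2-m DEM tile (heights in metres).
cell = 2.0
z = np.cumsum(rng.normal(0, 0.1, (200, 200)), axis=0)  # correlated relief

# Slope at each cell from finite-difference gradients, in degrees.
dzdy, dzdx = np.gradient(z, cell)
slope = np.degrees(np.arctan(np.hypot(dzdx, dzdy)))

# Compare the empirical slope distribution with a Gaussian approximation.
mu, sigma = slope.mean(), slope.std()
p95_empirical = np.percentile(slope, 95)
p95_gaussian = mu + 1.645 * sigma       # Gaussian 95th percentile
print(f"mean={mu:.2f} deg, std={sigma:.2f} deg")
print(f"95th percentile: empirical={p95_empirical:.2f}, Gaussian={p95_gaussian:.2f}")
```

The gap between the empirical and Gaussian 95th percentiles is one simple way to quantify how far the actual slope distribution departs from the historical Gaussian assumption at a given baseline.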

  7. An assessment of finite-element modeling techniques for thick-solid/thin-shell joints analysis

    NASA Technical Reports Server (NTRS)

    Min, J. B.; Androlake, S. G.

    1993-01-01

    The subject of finite-element modeling has long been of critical importance to the practicing designer/analyst, who is often faced with obtaining an accurate and cost-effective structural analysis of a particular design. Typically, these two goals are in conflict. The purpose is to discuss the topic of finite-element modeling for solid/shell connections (joints), which is significant for the practicing modeler. Several approaches are currently in use, but frequently various assumptions restrict their use. Such techniques currently used in practical applications were tested, especially to see which technique is best suited for the computer aided design (CAD) environment. Some basic thoughts regarding each technique are also discussed. As a consequence, some suggestions based on the results are given for obtaining reliable results in geometrically complex joints where the deformation and stress behavior are complicated.

  8. Validation of a computational knee joint model using an alignment method for the knee laxity test and computed tomography.

    PubMed

    Kang, Kyoung-Tak; Kim, Sung-Hwan; Son, Juhyun; Lee, Young Han; Koh, Yong-Gon

    2017-01-01

    Computational models have been identified as efficient techniques in the clinical decision-making process. However, computational model was validated using published data in most previous studies, and the kinematic validation of such models still remains a challenge. Recently, studies using medical imaging have provided a more accurate visualization of knee joint kinematics. The purpose of the present study was to perform kinematic validation for the subject-specific computational knee joint model by comparison with subject's medical imaging under identical laxity condition. The laxity test was applied to the anterior-posterior drawer under 90° flexion and the varus-valgus under 20° flexion with a series of stress radiographs, a Telos device, and computed tomography. The loading condition in the computational subject-specific knee joint model was identical to the laxity test condition in the medical image. Our computational model showed knee laxity kinematic trends that were consistent with the computed tomography images, except for negligible differences because of the indirect application of the subject's in vivo material properties. Medical imaging based on computed tomography with the laxity test allowed us to measure not only the precise translation but also the rotation of the knee joint. This methodology will be beneficial in the validation of laxity tests for subject- or patient-specific computational models.

  9. The Finite-Size Scaling Relation for the Order-Parameter Probability Distribution of the Six-Dimensional Ising Model

    NASA Astrophysics Data System (ADS)

    Merdan, Ziya; Karakuş, Özlem

    2016-11-01

    The six-dimensional Ising model with nearest-neighbor pair interactions has been simulated and verified numerically on the Creutz Cellular Automaton by using five-bit demons near the infinite-lattice critical temperature with the linear dimensions L=4,6,8,10. The order-parameter probability distribution for the six-dimensional Ising model has been calculated at the critical temperature. The constants of the analytical function have been estimated by fitting to the probability function obtained numerically at the finite-size critical point.
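Estimating an order-parameter probability distribution from simulation, as done above, amounts to histogramming the magnetization per spin over many configurations. As a hedged, much smaller stand-in for the paper's six-dimensional Creutz cellular automaton, the sketch below uses a plain 2D Metropolis simulation; lattice size, temperature, and sweep counts are chosen only to keep the example fast.

```python
import numpy as np

rng = np.random.default_rng(7)

# Small 2D Ising model with Metropolis updates, near the 2D critical point.
L, T, sweeps, burn = 12, 2.3, 300, 100
spins = rng.choice([-1, 1], size=(L, L))

def sweep(s):
    """One Metropolis sweep: L*L single-spin-flip attempts."""
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        nb = s[(i+1) % L, j] + s[(i-1) % L, j] + s[i, (j+1) % L] + s[i, (j-1) % L]
        dE = 2 * s[i, j] * nb                 # energy change of flipping s[i, j]
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i, j] *= -1

m_samples = []
for n in range(sweeps):
    sweep(spins)
    if n >= burn:
        m_samples.append(spins.mean())        # magnetization per spin

# Normalised histogram approximates the order-parameter probability distribution.
hist, edges = np.histogram(m_samples, bins=21, range=(-1, 1), density=True)
print("P(m) integrates to", round(np.sum(hist) * (edges[1] - edges[0]), 3))
```

Fitting an analytical scaling function to such histograms at several lattice sizes is then a standard curve-fitting step on `hist` versus bin centres.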

  10. Recent advances in computational mechanics of the human knee joint.

    PubMed

    Kazemi, M; Dabiri, Y; Li, L P

    2013-01-01

    Computational mechanics has been advanced in every area of orthopedic biomechanics. The objective of this paper is to provide a general review of the computational models used in the analysis of the mechanical function of the knee joint in different loading and pathological conditions. Major review articles published in related areas are summarized first. The constitutive models for soft tissues of the knee are briefly discussed to facilitate understanding the joint modeling. A detailed review of the tibiofemoral joint models is presented thereafter. The geometry reconstruction procedures as well as some critical issues in finite element modeling are also discussed. Computational modeling can be a reliable and effective method for the study of mechanical behavior of the knee joint, if the model is constructed correctly. Single-phase material models have been used to predict the instantaneous load response for the healthy knees and repaired joints, such as total and partial meniscectomies, ACL and PCL reconstructions, and joint replacements. Recently, poromechanical models accounting for fluid pressurization in soft tissues have been proposed to study the viscoelastic response of the healthy and impaired knee joints. While the constitutive modeling has been considerably advanced at the tissue level, many challenges still exist in applying a good material model to three-dimensional joint simulations. A complete model validation at the joint level seems impossible presently, because only simple data can be obtained experimentally. Therefore, model validation may be concentrated on the constitutive laws using multiple mechanical tests of the tissues. Extensive model verifications at the joint level are still crucial for the accuracy of the modeling.

  11. Pressure application measurement (PAM): a novel behavioural technique for measuring hypersensitivity in a rat model of joint pain.

    PubMed

    Barton, Nicola J; Strickland, Iain T; Bond, Susan M; Brash, Harry M; Bate, Simon T; Wilson, Alex W; Chessell, Iain P; Reeve, Alison J; McQueen, Daniel S

    2007-06-15

    Chronic joint pain affects physical well being and can lead to severe psychological and social problems, therefore successful long-term management is highly sought-after. No current behavioural measures of pain used in pre-clinical models mimic the clinical dolorimeter, which provides an objective measure of joint hypersensitivity. In this study we aim to use a novel behavioural readout alongside an established measure to mimic the multifactorial measurements taken in the clinic. Using the pressure application measurement (PAM) device a gradually increasing squeeze was applied across the knee joint of rats until the animal gave an indication of pain or discomfort. PAM and the incapacitance tester were used to detect joint hypersensitivity in a well-established rodent model of adjuvant-induced arthritis. Subsequently, the analgesic effects of prednisolone (1, 3 or 10 mg kg(-1)), morphine (3 mg kg(-1)) and celecoxib (15 mg kg(-1)) were assessed. Both PAM and the incapacitance tester detected a reversal of hypersensitivity 1h post-drug administration. Furthermore, the two readouts were highly correlated, and power analysis indicated that PAM was highly reproducible. In conclusion, PAM provides a novel, accurate behavioural tool for detecting a primary mechanical hypersensitivity in a rat model of chronic inflammatory joint pain.

  12. Analysis of a semiclassical model for rotational transition probabilities. [in highly nonequilibrium flow of diatomic molecules

    NASA Technical Reports Server (NTRS)

    Deiwert, G. S.; Yoshikawa, K. K.

    1975-01-01

    A semiclassical model proposed by Pearson and Hansen (1974) for computing collision-induced transition probabilities in diatomic molecules is tested by the direct-simulation Monte Carlo method. Specifically, this model is described by point centers of repulsion for collision dynamics, and the resulting classical trajectories are used in conjunction with the Schroedinger equation for a rigid-rotator harmonic oscillator to compute the rotational energy transition probabilities necessary to evaluate the rotation-translation exchange phenomena. It is assumed that a single, average energy spacing exists between the initial state and possible final states for a given collision.

  13. Synchrony in Joint Action Is Directed by Each Participant’s Motor Control System

    PubMed Central

    Noy, Lior; Weiser, Netta; Friedman, Jason

    2017-01-01

    In this work, we ask how the probability of achieving synchrony in joint action is affected by the choice of motion parameters of each individual. We use the mirror game paradigm to study how changes in leader’s motion parameters, specifically frequency and peak velocity, affect the probability of entering the state of co-confidence (CC) motion: a dyadic state of synchronized, smooth and co-predictive motions. In order to systematically study this question, we used a one-person version of the mirror game, where the participant mirrored piece-wise rhythmic movements produced by a computer on a graphics tablet. We systematically varied the frequency and peak velocity of the movements to determine how these parameters affect the likelihood of synchronized joint action. To assess synchrony in the mirror game we used the previously developed marker of co-confident (CC) motions: smooth, jitter-less and synchronized motions indicative of co-predicative control. We found that when mirroring movements with low frequencies (i.e., long duration movements), the participants never showed CC, and as the frequency of the stimuli increased, the probability of observing CC also increased. This finding is discussed in the framework of motor control studies showing an upper limit on the duration of smooth motion. We confirmed the relationship between motion parameters and the probability to perform CC with three sets of data of open-ended two-player mirror games. These findings demonstrate that when performing movements together, there are optimal movement frequencies to use in order to maximize the possibility of entering a state of synchronized joint action. It also shows that the ability to perform synchronized joint action is constrained by the properties of our motor control systems. PMID:28443047

  14. Bayesian Estimation of the DINA Model with Gibbs Sampling

    ERIC Educational Resources Information Center

    Culpepper, Steven Andrew

    2015-01-01

    A Bayesian model formulation of the deterministic inputs, noisy "and" gate (DINA) model is presented. Gibbs sampling is employed to simulate from the joint posterior distribution of item guessing and slipping parameters, subject attribute parameters, and latent class probabilities. The procedure extends concepts in Béguin and Glas,…
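The DINA likelihood that the Gibbs sampler above repeatedly evaluates is easy to state: an examinee answers item j correctly with probability 1 - slip if they master every skill the Q-matrix requires for that item (the "and" gate), and with the guessing probability otherwise. The Q-matrix and parameter values below are hypothetical.

```python
import numpy as np

# Hypothetical Q-matrix: 3 items x 2 skills; row j lists the skills item j requires.
Q = np.array([[1, 0],
              [0, 1],
              [1, 1]])
guess = np.array([0.2, 0.1, 0.15])   # guessing parameters (hypothetical)
slip  = np.array([0.1, 0.2, 0.05])   # slipping parameters (hypothetical)

def p_correct(alpha):
    """DINA item response probabilities for attribute pattern alpha:
    eta_j = 1 iff the examinee has every skill item j requires; then
    P(correct on item j) = 1 - slip_j if eta_j else guess_j."""
    eta = np.all(Q <= alpha, axis=1)    # conjunctive ("and" gate) condition
    return np.where(eta, 1 - slip, guess)

print(p_correct(np.array([1, 0])))  # masters skill 1 only
print(p_correct(np.array([1, 1])))  # masters both skills
```

In a Gibbs sampler these probabilities enter the full conditionals for the attribute patterns, while conjugate beta updates handle the guessing and slipping parameters.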

  15. A comprehensive model to determine the effects of temperature and species fluctuations on reactions in turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Antaki, P. J.

    1981-01-01

    The joint probability distribution function (pdf), which is a modification of the bivariate Gaussian pdf, is discussed and results are presented for a global reaction model using the joint pdf. An alternative joint pdf is discussed. A criterion which permits the selection of temperature pdf's in different regions of turbulent, reacting flow fields is developed. Two principal approaches to the determination of reaction rates in computer programs containing detailed chemical kinetics are outlined. These models represent a practical solution to the modeling of species reaction rates in turbulent, reacting flows.
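The effect a joint temperature-species pdf has on a mean reaction rate can be sketched by quadrature: average an Arrhenius rate over a bivariate Gaussian pdf of temperature and mass fraction and compare it with the rate evaluated at the mean state. All numbers below are hypothetical, and the pdf here is a plain bivariate Gaussian rather than the paper's modified form.

```python
import numpy as np

A, Ta = 1.0e6, 15000.0           # pre-exponential factor, activation temperature
T_mean, T_std = 1500.0, 150.0    # temperature statistics, K
Y_mean, Y_std = 0.05, 0.01       # fuel mass fraction statistics
rho = 0.5                        # T-Y correlation coefficient

T = np.linspace(T_mean - 4*T_std, T_mean + 4*T_std, 201)
Y = np.linspace(Y_mean - 4*Y_std, Y_mean + 4*Y_std, 201)
dT, dY = T[1] - T[0], Y[1] - Y[0]
TT, YY = np.meshgrid(T, Y, indexing="ij")

# Bivariate Gaussian joint pdf p(T, Y).
zT = (TT - T_mean) / T_std
zY = (YY - Y_mean) / Y_std
norm = 1.0 / (2 * np.pi * T_std * Y_std * np.sqrt(1 - rho**2))
pdf = norm * np.exp(-(zT**2 - 2*rho*zT*zY + zY**2) / (2 * (1 - rho**2)))

rate = A * YY * np.exp(-Ta / TT)             # instantaneous reaction rate
mean_rate = np.sum(rate * pdf) * dT * dY     # pdf-weighted average
laminar = A * Y_mean * np.exp(-Ta / T_mean)  # rate at the mean state
print(f"mean/laminar rate ratio = {mean_rate / laminar:.2f}")
```

The ratio exceeds one because the exponential is strongly convex in T at these conditions and the positive T-Y correlation adds covariance, which is precisely why neglecting fluctuations underpredicts mean reaction rates in turbulent flames.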

  16. Match probabilities in a finite, subdivided population

    PubMed Central

    Malaspinas, Anna-Sapfo; Slatkin, Montgomery; Song, Yun S.

    2011-01-01

    We generalize a recently introduced graphical framework to compute the probability that haplotypes or genotypes of two individuals drawn from a finite, subdivided population match. As in the previous work, we assume an infinite-alleles model. We focus on the case of a population divided into two subpopulations, but the underlying framework can be applied to a general model of population subdivision. We examine the effect of population subdivision on the match probabilities and the accuracy of the product rule, which approximates multi-locus match probabilities as a product of one-locus match probabilities. We quantify the deviation from predictions of the product rule by R, the ratio of the multi-locus match probability to the product of the one-locus match probabilities. We carry out the computation for two loci and find that ignoring subdivision can lead to underestimation of the match probabilities if the population under consideration actually has subdivision structure and the individuals originate from the same subpopulation. On the other hand, under a given model of population subdivision, we find that the ratio R for two loci is only slightly greater than 1 for a large range of symmetric and asymmetric migration rates. Keeping in mind that the infinite-alleles model is not the appropriate mutation model for STR loci, we conclude that, for two loci and biologically reasonable parameter values, population subdivision may lead to results that disfavor innocent suspects because of an increase in identity-by-descent in finite populations. On the other hand, for the same range of parameters, population subdivision does not lead to a substantial increase in linkage disequilibrium between loci. Those results are consistent with established practice. PMID:21266180
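The ratio R defined above is easy to compute for a toy two-locus example: the match probability for a set of frequencies is the sum of squared frequencies, and R compares the joint-frequency calculation with the product-rule approximation built from the marginals. The haplotype frequencies below are invented for illustration.

```python
import numpy as np

# Toy two-locus haplotype frequencies (rows: alleles at locus 1,
# columns: alleles at locus 2); hypothetical numbers with linkage disequilibrium.
joint = np.array([[0.30, 0.10],
                  [0.15, 0.45]])
assert abs(joint.sum() - 1.0) < 1e-12

# Two-locus match probability: sum of squared haplotype frequencies.
match_two_locus = np.sum(joint**2)

# One-locus match probabilities from the marginal allele frequencies.
p1 = joint.sum(axis=1)
p2 = joint.sum(axis=0)
match_locus1 = np.sum(p1**2)
match_locus2 = np.sum(p2**2)

# R > 1 means the product rule underestimates the two-locus match probability.
R = match_two_locus / (match_locus1 * match_locus2)
print(f"R = {R:.3f}")
```

With independent loci the joint frequencies factorize into the marginals and R is exactly 1; any linkage disequilibrium, as in this toy table, pushes R above 1.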

  17. Shear Model Development of Limestone Joints with Incorporating Variations of Basic Friction Coefficient and Roughness Components During Shearing

    NASA Astrophysics Data System (ADS)

    Mehrishal, Seyedahmad; Sharifzadeh, Mostafa; Shahriar, Korosh; Song, Jae-Jon

    2017-04-01

    In relation to the shearing of rock joints, the precise and continuous evaluation of asperity interlocking, dilation, and basic friction properties has been the most important task in the modeling of shear strength. In this paper, in order to investigate these controlling factors, two types of limestone joint samples were prepared and CNL direct shear tests were performed on these joints under various shear conditions. One set of samples was travertine and the other was onyx marble; slickensided surfaces, surfaces ground to #80, and rough surfaces were tested. Direct shear experiments conducted on slickensided and ground surfaces of limestone indicated that by increasing the applied normal stress, under different shearing rates, the basic friction coefficient decreased. Moreover, in the shear tests under constant normal stress and shearing rate, the basic friction coefficient remained constant for the different contact sizes. The second series of direct shear experiments in this research was conducted on tension joint samples to evaluate the effect of surface roughness on the shear behavior of the rough joints. This paper deals with the dilation and roughness interlocking using a method that characterizes the surface roughness of the joint based on a fundamental combined surface roughness concept. The application of stress-dependent basic friction and quantitative roughness parameters in the continuous modeling of the shear behavior of rock joints is an important aspect of this research.

  18. Jointly modeling longitudinal proportional data and survival times with an application to the quality of life data in a breast cancer trial.

    PubMed

    Song, Hui; Peng, Yingwei; Tu, Dongsheng

    2017-04-01

    Motivated by the joint analysis of longitudinal quality of life data and recurrence free survival times from a cancer clinical trial, we present in this paper two approaches to jointly model the longitudinal proportional measurements, which are confined in a finite interval, and survival data. Both approaches assume a proportional hazards model for the survival times. For the longitudinal component, the first approach applies the classical linear mixed model to logit transformed responses, while the second approach directly models the responses using a simplex distribution. A semiparametric method based on a penalized joint likelihood generated by the Laplace approximation is derived to fit the joint model defined by the second approach. The proposed procedures are evaluated in a simulation study and applied to the analysis of the breast cancer data that motivated this research.
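
    The first approach above rests on a logit transform that maps proportional responses confined to (0, 1) onto the real line before a linear mixed model is fitted. Only the transform is sketched here, with hypothetical quality-of-life scores; the survival submodel and random effects are omitted.

```python
import numpy as np

def logit(y, eps=1e-6):
    """Map proportions in (0, 1) onto the real line; eps guards the bounds."""
    y = np.clip(np.asarray(y, dtype=float), eps, 1.0 - eps)
    return np.log(y / (1.0 - y))

def inv_logit(x):
    """Inverse transform, mapping back into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-np.asarray(x, dtype=float)))

qol = np.array([0.42, 0.55, 0.61, 0.70])   # hypothetical quality-of-life scores
z = logit(qol)                             # linear mixed model fitted on this scale
back = inv_logit(z)                        # predictions map back into (0, 1)
```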

  19. Modeling the effect of reward amount on probability discounting.

    PubMed

    Myerson, Joel; Green, Leonard; Morris, Joshua

    2011-03-01

    The present study with college students examined the effect of amount on the discounting of probabilistic monetary rewards. A hyperboloid function accurately described the discounting of hypothetical rewards ranging in amount from $20 to $10,000,000. The degree of discounting increased continuously with amount of probabilistic reward. This effect of amount was not due to changes in the rate parameter of the discounting function, but rather was due to increases in the exponent. These results stand in contrast to those observed with the discounting of delayed monetary rewards, in which the degree of discounting decreases with reward amount due to amount-dependent decreases in the rate parameter. Taken together, this pattern of results suggests that delay and probability discounting reflect different underlying mechanisms. That is, the fact that the exponent in the delay discounting function is independent of amount is consistent with a psychophysical scaling interpretation, whereas the finding that the exponent of the probability-discounting function is amount-dependent is inconsistent with such an interpretation. Instead, the present results are consistent with the idea that the probability-discounting function is itself the product of a value function and a weighting function. This idea was first suggested by Kahneman and Tversky (1979), although their prospect theory does not predict amount effects like those observed. The effect of amount on probability discounting was parsimoniously incorporated into our hyperboloid discounting function by assuming that the exponent was proportional to the amount raised to a power. The amount-dependent exponent of the probability-discounting function may be viewed as reflecting the effect of amount on the weighting of the probability with which the reward will be received.
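
    A minimal sketch of a hyperboloid probability-discounting function with an amount-dependent exponent, in the spirit described above. The exponent being proportional to the amount raised to a power follows the text; the specific functional form and all parameter values below are hypothetical.

```python
def discounted_value(amount, p, h=1.0, c=1.0, b=0.05):
    """Hyperboloid probability discounting with an amount-dependent exponent.

    V = A / (1 + h*theta)**s, where theta = (1 - p) / p is the odds
    against receiving the reward, and s = c * A**b makes the exponent
    proportional to the amount raised to a power (parameters hypothetical).
    """
    theta = (1.0 - p) / p
    s = c * amount ** b
    return amount / (1.0 + h * theta) ** s

# Larger amounts are discounted more steeply: relative value V/A drops.
small = discounted_value(20, p=0.5) / 20
large = discounted_value(10_000_000, p=0.5) / 10_000_000
```

    With b > 0 the exponent grows with amount, so the relative discounted value of $10,000,000 falls below that of $20 at the same probability, matching the amount effect reported above.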

  20. Modeling the finger joint moments in a hand at the maximal isometric grip: the effects of friction.

    PubMed

    Wu, John Z; Dong, Ren G; McDowell, Thomas W; Welcome, Daniel E

    2009-12-01

    The interaction between the handle and operator's hand affects the comfort and safety of tool and machine operations. In most of the previous studies, the investigators considered only the normal contact forces. The effect of friction on the joint moments in fingers has not been analyzed. Furthermore, the observed contact forces have not been linked to the internal musculoskeletal loading in the previous experimental studies. In the current study, we proposed a universal model of a hand to evaluate the joint moments in the fingers during grasping tasks. The hand model was developed on the platform of the commercial software package AnyBody. Only four fingers (index, long, ring, and little finger) were included in the model. The anatomical structure of each finger comprises four phalanges (distal, middle, proximal, and metacarpal phalange). The simulations were performed using an inverse dynamics technique. The joint angles and the normal contact forces on each finger section reported by previous researchers were used as inputs, while the joint moments of each finger were predicted. The predicted trends of the dependence of the distal interphalangeal (DIP) and proximal interphalangeal (PIP) joint moments on the cylinder diameter agree with those of the contact forces on the fingers observed in the previous experimental study. Our results show that the DIP and PIP joint moments reach their maxima at a cylinder diameter of about 31 mm, which is consistent with the trend of the finger contact forces measured in the experiments. The proposed approach will be useful for simulating musculoskeletal loading in the hand for occupational activities, thereby optimizing tool-handle design.
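
    The static core of the inverse-dynamics step can be illustrated in two dimensions: the moment at a finger joint is the sum of cross products r × F over the contact forces acting distal to that joint. The geometry and force values below are hypothetical, not taken from the cited experiments.

```python
import numpy as np

def joint_moment(joint_pos, contact_points, contact_forces):
    """Net 2D moment (N*m) about joint_pos; positive = counter-clockwise.

    Sums r x F over the contact forces, where r runs from the joint centre
    to each force's point of application.
    """
    m = 0.0
    for p, f in zip(contact_points, contact_forces):
        r = np.asarray(p) - np.asarray(joint_pos)
        m += r[0] * f[1] - r[1] * f[0]     # scalar cross product in 2D
    return m

# Hypothetical normal contact forces on the segments distal to the PIP joint.
pip_joint = (0.00, 0.00)                       # PIP joint centre (m)
points = [(0.025, 0.005), (0.050, 0.008)]      # force application points (m)
forces = [(0.0, -6.0), (0.0, -9.0)]            # forces into the grip (N)
m_pip = joint_moment(pip_joint, points, forces)   # -0.6 N*m (flexion demand)
```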

  1. Multivariate quantile mapping bias correction: an N-dimensional probability density function transform for climate model simulations of multiple variables

    NASA Astrophysics Data System (ADS)

    Cannon, Alex J.

    2018-01-01

    Most bias correction algorithms used in climatology, for example quantile mapping, are applied to univariate time series. They neglect the dependence between different variables. Those that are multivariate often correct only limited measures of joint dependence, such as Pearson or Spearman rank correlation. Here, an image processing technique designed to transfer colour information from one image to another (the N-dimensional probability density function transform) is adapted for use as a multivariate bias correction algorithm (MBCn) for climate model projections/predictions of multiple climate variables. MBCn is a multivariate generalization of quantile mapping that transfers all aspects of an observed continuous multivariate distribution to the corresponding multivariate distribution of variables from a climate model. When applied to climate model projections, changes in quantiles of each variable between the historical and projection period are also preserved. The MBCn algorithm is demonstrated on three case studies. First, the method is applied to an image processing example with characteristics that mimic a climate projection problem. Second, MBCn is used to correct a suite of 3-hourly surface meteorological variables from the Canadian Centre for Climate Modelling and Analysis Regional Climate Model (CanRCM4) across a North American domain. Components of the Canadian Forest Fire Weather Index (FWI) System, a complicated set of multivariate indices that characterizes the risk of wildfire, are then calculated and verified against observed values. Third, MBCn is used to correct biases in the spatial dependence structure of CanRCM4 precipitation fields. Results are compared against a univariate quantile mapping algorithm, which neglects the dependence between variables, and two multivariate bias correction algorithms, each of which corrects a different form of inter-variable correlation structure. MBCn outperforms these alternatives, often by a large margin.
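
    The univariate quantile mapping that MBCn generalizes can be sketched in a few lines: each model value is replaced by the observed value at the same empirical quantile. The data below are synthetic, and this is the univariate building block only, not the N-dimensional rotation iteration of MBCn.

```python
import numpy as np

def quantile_map(model, observed):
    """Map model values onto the observed distribution, quantile by quantile."""
    model = np.asarray(model, dtype=float)
    ranks = model.argsort().argsort()          # empirical rank of each value
    quantiles = (ranks + 0.5) / model.size     # plotting-position quantiles
    return np.quantile(observed, quantiles)

rng = np.random.default_rng(42)
obs = rng.gamma(shape=2.0, scale=3.0, size=5000)        # "observed" series
mod = rng.gamma(shape=2.0, scale=4.0, size=5000) + 1.0  # biased model output
corrected = quantile_map(mod, obs)
```

    After correction the marginal distribution matches the observations, but, as the abstract notes, a univariate map like this leaves inter-variable dependence uncorrected.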

  2. TaggerOne: joint named entity recognition and normalization with semi-Markov Models.

    PubMed

    Leaman, Robert; Lu, Zhiyong

    2016-09-15

    Text mining is increasingly used to manage the accelerating pace of the biomedical literature. Many text mining applications depend on accurate named entity recognition (NER) and normalization (grounding). While high performing machine learning methods trainable for many entity types exist for NER, normalization methods are usually specialized to a single entity type. NER and normalization systems are also typically used in a serial pipeline, causing cascading errors and limiting the ability of the NER system to directly exploit the lexical information provided by the normalization. We propose the first machine learning model for joint NER and normalization during both training and prediction. The model is trainable for arbitrary entity types and consists of a semi-Markov structured linear classifier, with a rich feature approach for NER and supervised semantic indexing for normalization. We also introduce TaggerOne, a Java implementation of our model as a general toolkit for joint NER and normalization. TaggerOne is not specific to any entity type, requiring only annotated training data and a corresponding lexicon, and has been optimized for high throughput. We validated TaggerOne with multiple gold-standard corpora containing both mention- and concept-level annotations. Benchmarking results show that TaggerOne achieves high performance on diseases (NCBI Disease corpus, NER f-score: 0.829, normalization f-score: 0.807) and chemicals (BioCreative 5 CDR corpus, NER f-score: 0.914, normalization f-score: 0.895). These results compare favorably to the previous state of the art, notwithstanding the greater flexibility of the model. We conclude that jointly modeling NER and normalization greatly improves performance. The TaggerOne source code and an online demonstration are available at http://www.ncbi.nlm.nih.gov/bionlp/taggerone. Contact: zhiyong.lu@nih.gov. Supplementary data are available at Bioinformatics online. Published by Oxford University Press 2016.

  3. Investigations of turbulent scalar fields using probability density function approach

    NASA Technical Reports Server (NTRS)

    Gao, Feng

    1991-01-01

    Scalar fields undergoing random advection have attracted much attention from researchers in both the theoretical and practical sectors. Research interest spans from the study of the small scale structures of turbulent scalar fields to the modeling and simulations of turbulent reacting flows. The probability density function (PDF) method is an effective tool in the study of turbulent scalar fields, especially for those which involve chemical reactions. It has been argued that a one-point, joint PDF approach is the one to choose from among many simulation and closure methods for turbulent combustion and chemically reacting flows based on its practical feasibility in the foreseeable future for multiple reactants. Instead of the multi-point PDF, the joint PDF of a scalar and its gradient, which represents the roles of both scalar and scalar diffusion, is introduced. A proper closure model for the molecular diffusion term in the PDF equation is investigated. Another direction in this research is to study the mapping closure method that has been recently proposed to deal with the PDFs in turbulent fields. This method seems to have captured the physics correctly when applied to diffusion problems. However, if the turbulent stretching is included, the amplitude mapping has to be supplemented by either adjusting the parameters representing turbulent stretching at each time step or by introducing the coordinate mapping. This technique is still under development and seems to be quite promising. The final objective of this project is to understand some fundamental properties of the turbulent scalar fields and to develop practical numerical schemes that are capable of handling turbulent reacting flows.

  4. Computer simulation of solder joint failure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burchett, S.N.; Frear, D.R.; Rashid, M.M.

    The thermomechanical fatigue failure of solder joints is increasingly becoming an important reliability issue for electronic packages. The purpose of this Laboratory Directed Research and Development (LDRD) project was to develop computational tools for simulating the behavior of solder joints under strain and temperature cycling, taking into account the microstructural heterogeneities that exist in as-solidified near eutectic Sn-Pb joints, as well as subsequent microstructural evolution. The authors present two computational constitutive models, a two-phase model and a single-phase model, that were developed to predict the behavior of near eutectic Sn-Pb solder joints under fatigue conditions. Unique metallurgical tests provide the fundamental input for the constitutive relations. The two-phase model mathematically predicts the heterogeneous coarsening behavior of near eutectic Sn-Pb solder. The finite element simulations with this model agree qualitatively with experimental thermomechanical fatigue tests. The simulations show that the presence of an initial heterogeneity in the solder microstructure could significantly degrade the fatigue lifetime. The single-phase model was developed to predict solder joint behavior using materials data for constitutive relation constants that could be determined through straightforward metallurgical experiments. Special thermomechanical fatigue tests were developed to give fundamental materials input to the models, and an in situ SEM thermomechanical fatigue test system was developed to characterize microstructural evolution and the mechanical behavior of solder joints during the test. A shear/torsion test sample was developed to impose strain in two different orientations. Materials constants were derived from these tests. The simulation results from the two-phase model showed good fit to the experimental test results.

  5. Biomechanical analysis comparing natural and alloplastic temporomandibular joint replacement using a finite element model.

    PubMed

    Mesnard, Michel; Ramos, Antonio; Ballu, Alex; Morlier, Julien; Cid, M; Simoes, J A

    2011-04-01

    Prosthetic materials and bone present quite different mechanical properties. Consequently, mandible reconstruction with metallic materials (or a mandible condyle implant) modifies the physiologic behavior of the mandible (stress, strain patterns, and condyle displacements). The change in bone strain distribution results in an adaptation of the temporomandibular joint, including articular contacts. Using a validated finite element model, the natural mandible strains and condyle displacements were evaluated. Modifications of strains and displacements were then assessed for 2 different temporomandibular joint implants. Because materials and geometry play key roles, mechanical properties of cortical bone were taken into account in models used in finite element analysis. The finite element model allowed verification of the worst loading configuration of the mandibular condyle. Replacing the natural condyle by 1 of the 2 tested implants, the results also show the importance of the implant geometry concerning biomechanical mandibular behavior. The implant geometry and stiffness influenced mainly strain distribution. The different forces applied to the mandible by the elevator muscles, teeth, and joint loads indicate that the finite element model is a relevant tool to optimize implant geometry or, in a subsequent study, to choose a more suitable distribution of the screws. Bone screws (number and position) have a significant influence on mandibular behavior and on implant stress pattern. Stress concentration and implant fracture must be avoided. Copyright © 2011 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.

  6. Design, analysis and verification of a knee joint oncological prosthesis finite element model.

    PubMed

    Zach, Lukáš; Kunčická, Lenka; Růžička, Pavel; Kocich, Radim

    2014-11-01

    The aim of this paper was to design a finite element model for a hinged PROSPON oncological knee endoprosthesis and to verify the model by comparison with ankle flexion angle using knee-bending experimental data obtained previously. Visible Human Project CT scans were used to create a general lower extremity bones model and to compose a 3D CAD knee joint model to which muscles and ligaments were added. Into the assembly the designed finite element PROSPON prosthesis model was integrated and an analysis focused on the PEEK-OPTIMA hinge pin bushing stress state was carried out. To confirm the stress state analysis results, contact pressure was investigated. The analysis was performed in the knee-bending position within 15.4-69.4° hip joint flexion range. The results showed that the maximum stress achieved during the analysis (46.6 MPa) did not exceed the yield strength of the material (90 MPa); the condition of plastic stability was therefore met. The stress state analysis results were confirmed by the distribution of contact pressure during knee-bending. The applicability of our designed finite element model for the real implant behaviour prediction was proven on the basis of good correlation of the analytical and experimental ankle flexion angle data. Copyright © 2014 Elsevier Ltd. All rights reserved.

  7. Kinematic models of the upper limb joints for multibody kinematics optimisation: An overview.

    PubMed

    Duprey, Sonia; Naaim, Alexandre; Moissenet, Florent; Begon, Mickaël; Chèze, Laurence

    2017-09-06

    Soft tissue artefact (STA), i.e. the motion of the skin, fat and muscles gliding on the underlying bone, may lead to a marker position error reaching up to 8.7 cm for the particular case of the scapula. Multibody kinematics optimisation (MKO) is one of the most efficient approaches used to reduce STA. It consists in minimising the distance between the positions of experimental markers on a subject skin and the simulated positions of the same markers embedded on a kinematic model. However, the efficiency of MKO directly relies on the chosen kinematic model. This paper proposes an overview of the different upper limb models available in the literature and a discussion about their applicability to MKO. The advantages of each joint model with respect to its biofidelity to functional anatomy are detailed both for the shoulder and the forearm areas. The models' capabilities for personalisation and for adaptation to pathological cases are also discussed. Concerning model efficiency in terms of STA reduction in MKO algorithms, a lack of quantitative assessment in the literature is noted. As a priority, future studies should address the evaluation and quantification of STA reduction depending on upper limb joint constraints. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. Comparative assessment of knee joint models used in multi-body kinematics optimisation for soft tissue artefact compensation.

    PubMed

    Richard, Vincent; Cappozzo, Aurelio; Dumas, Raphaël

    2017-09-06

    Estimating joint kinematics from skin-marker trajectories recorded using stereophotogrammetry is complicated by soft tissue artefact (STA), an inexorable source of error. One solution is to use a bone pose estimator based on multi-body kinematics optimisation (MKO) embedding joint constraints to compensate for STA. However, there is some debate over the effectiveness of this method. The present study aimed to quantitatively assess the degree of agreement between reference (i.e., artefact-free) knee joint kinematics and the same kinematics estimated using MKO embedding six different knee joint models. The following motor tasks were assessed: level walking, hopping, cutting, running, sit-to-stand, and step-up. Reference knee kinematics was taken from pin-marker or biplane fluoroscopic data acquired concurrently with skin-marker data, made available by the respective authors. For each motor task, Bland-Altman analysis revealed that the performance of MKO varied according to the joint model used, with a wide discrepancy in results across degrees of freedom (DoFs), models and motor tasks (with a bias between -10.2° and 13.2° and between -10.2 mm and 7.2 mm, and with a confidence interval up to ±14.8° and ±11.1 mm, for rotation and displacement, respectively). It can be concluded that, while MKO might occasionally improve kinematics estimation, as implemented to date it does not represent a reliable solution to the STA issue. Copyright © 2017 Elsevier Ltd. All rights reserved.
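
    The Bland-Altman statistics quoted above (bias and limits of agreement between an estimate and a reference) are straightforward to compute; the flexion angles below are synthetic, not data from the compared studies.

```python
import numpy as np

def bland_altman(reference, estimate):
    """Return (bias, lower, upper): mean difference and 95% limits of agreement."""
    diff = np.asarray(estimate, dtype=float) - np.asarray(reference, dtype=float)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)   # 95% limits of agreement
    return bias, bias - half_width, bias + half_width

# Synthetic example: an estimator with a 2 deg systematic bias and 3 deg noise.
rng = np.random.default_rng(1)
ref = rng.uniform(0.0, 60.0, size=200)               # reference knee flexion (deg)
est = ref + 2.0 + rng.normal(0.0, 3.0, size=200)     # MKO-like estimate (deg)
bias, lo, hi = bland_altman(ref, est)
```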

  9. Noninflammatory Joint Contractures Arising from Immobility: Animal Models to Future Treatments

    PubMed Central

    Wong, Kayleigh; Trudel, Guy; Laneuville, Odette

    2015-01-01

    Joint contractures, defined as the limitation in the passive range of motion of a mobile joint, can be classified as noninflammatory diseases of the musculoskeletal system. The pathophysiology is not well understood; limited information is available on causal factors, progression, the pathophysiology involved, and prediction of response to treatment. The clinical heterogeneity of joint contractures combined with the heterogeneous contribution of joint connective tissues to joint mobility presents challenges to the study of joint contractures. Furthermore, contractures are often a symptom of a wide variety of heterogeneous disorders that are in many cases multifactorial. Extended immobility has been identified as a causal factor and evidence is provided from both experimental and epidemiological studies. Of interest is the involvement of the joint capsule in the pathophysiology of joint contractures and lack of response to remobilization. While molecular pathways involved in the development of joint contractures are being investigated, current treatments focus on physiotherapy, which is ineffective on irreversible contractures. Future treatments may include early diagnosis and prevention. PMID:26247029

  10. Recent Advances in Computational Mechanics of the Human Knee Joint

    PubMed Central

    Kazemi, M.; Dabiri, Y.; Li, L. P.

    2013-01-01

    Computational mechanics has been advanced in every area of orthopedic biomechanics. The objective of this paper is to provide a general review of the computational models used in the analysis of the mechanical function of the knee joint in different loading and pathological conditions. Major review articles published in related areas are summarized first. The constitutive models for soft tissues of the knee are briefly discussed to facilitate understanding the joint modeling. A detailed review of the tibiofemoral joint models is presented thereafter. The geometry reconstruction procedures as well as some critical issues in finite element modeling are also discussed. Computational modeling can be a reliable and effective method for the study of mechanical behavior of the knee joint, if the model is constructed correctly. Single-phase material models have been used to predict the instantaneous load response for the healthy knees and repaired joints, such as total and partial meniscectomies, ACL and PCL reconstructions, and joint replacements. Recently, poromechanical models accounting for fluid pressurization in soft tissues have been proposed to study the viscoelastic response of the healthy and impaired knee joints. While the constitutive modeling has been considerably advanced at the tissue level, many challenges still exist in applying a good material model to three-dimensional joint simulations. A complete model validation at the joint level seems impossible presently, because only simple data can be obtained experimentally. Therefore, model validation may be concentrated on the constitutive laws using multiple mechanical tests of the tissues. Extensive model verifications at the joint level are still crucial for the accuracy of the modeling. PMID:23509602

  11. Patient-specific musculoskeletal modeling of the hip joint for preoperative planning of total hip arthroplasty: A validation study based on in vivo measurements

    PubMed Central

    Schick, Fabian; Asseln, Malte; Damm, Philipp; Radermacher, Klaus

    2018-01-01

    Validation of musculoskeletal models for application in preoperative planning is still a challenging task. Ideally, the simulation results of a patient-specific musculoskeletal model are compared to corresponding in vivo measurements. Currently, the only possibility to measure in vivo joint forces is to implant an instrumented prosthesis in patients undergoing a total joint replacement. In this study, a musculoskeletal model of the AnyBody Modeling System was adapted patient-specifically and validated against the in vivo hip joint force measurements of ten subjects performing one-leg stance and level walking. The impact of four model parameters was evaluated: hip joint width, muscle strength, muscle recruitment, and type of muscle model. The smallest difference between simulated and in vivo hip joint force was achieved by using the hip joint width measured in computed tomography images, a muscle strength of 90 N/cm2, a third order polynomial muscle recruitment, and a simple muscle model. This parameter combination reached mean deviations between simulation and in vivo measurement during the peak force phase of 12% ± 14% in magnitude and 11° ± 5° in orientation for one-leg stance and 8% ± 6% in magnitude and 10° ± 5° in orientation for level walking. PMID:29649235

  12. Probability modeling of high flow extremes in Yingluoxia watershed, the upper reaches of Heihe River basin

    NASA Astrophysics Data System (ADS)

    Li, Zhanling; Li, Zhanjie; Li, Chengcheng

    2014-05-01

    Probability modeling of hydrological extremes is one of the major research areas in hydrological science. Such studies have mostly concerned basins in the humid and semi-humid south and east of China. For the inland river basins, which occupy about 35% of the country's area, such studies are scarce, partly due to limited data availability and relatively low mean annual flows. The objective of this study is to carry out probability modeling of high flow extremes in the upper reach of the Heihe River basin, the second largest inland river basin in China, using the peak over threshold (POT) method and the Generalized Pareto Distribution (GPD), in which the selection of the threshold and the inherent assumptions for POT series are elaborated in detail. For comparison, other widely used probability distributions including the generalized extreme value (GEV), Lognormal, Log-logistic and Gamma are employed as well. Maximum likelihood estimation is used for parameter estimation. Daily flow data at Yingluoxia station from 1978 to 2008 are used. Results show that, synthesizing the approaches of the mean excess plot, stability features of model parameters, the return level plot and the inherent independence assumption of POT series, an optimum threshold of 340 m3/s is finally determined for high flow extremes in the Yingluoxia watershed. The resulting POT series is shown to be stationary and independent based on the Mann-Kendall test, the Pettitt test and an autocorrelation test. In terms of the Kolmogorov-Smirnov test, the Anderson-Darling test and several graphical diagnostics such as quantile and cumulative density function plots, the GPD provides the best fit to high flow extremes in the study area. The estimated high flows for long return periods demonstrate that, as the return period increases, the return level estimates become increasingly uncertain.
The frequency of high flow extremes exhibits a very slight but not significant decreasing trend from 1978 to
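
    A POT analysis of the kind described can be sketched with scipy: exceedances over a threshold are fitted with a GPD, and a return level is read off from the fitted tail. The daily flows below are synthetic stand-ins for the Yingluoxia record; only the 340 m3/s threshold is taken from the abstract.

```python
import numpy as np
from scipy.stats import genpareto

# Synthetic ~31 years of daily flows (m3/s), not the Yingluoxia data.
rng = np.random.default_rng(7)
flows = rng.gamma(shape=2.0, scale=60.0, size=31 * 365)

u = 340.0                               # threshold (m3/s), from the study
exceedances = flows[flows > u] - u

# Fit the GPD to the exceedances (location fixed at 0, as is standard for POT).
xi, _, sigma = genpareto.fit(exceedances, floc=0)

# m-year return level: u + (sigma/xi) * ((m * lam)**xi - 1),
# where lam is the mean number of exceedances per year.
lam = exceedances.size / 31.0
m = 100
return_level = u + (sigma / xi) * ((m * lam) ** xi - 1.0)
```

    A real analysis would first check the independence and stationarity of the POT series, as the study does with the Mann-Kendall, Pettitt and autocorrelation tests.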

  13. Bayesian Finite Mixtures for Nonlinear Modeling of Educational Data.

    ERIC Educational Resources Information Center

    Tirri, Henry; And Others

    A Bayesian approach for finding latent classes in data is discussed. The approach uses finite mixture models to describe the underlying structure in the data and demonstrates that the possibility of using full joint probability models raises interesting new prospects for exploratory data analysis. The concepts and methods discussed are illustrated…

  14. Dynamic analysis of clamp band joint system subjected to axial vibration

    NASA Astrophysics Data System (ADS)

    Qin, Z. Y.; Yan, S. Z.; Chu, F. L.

    2010-10-01

    Clamp band joints are commonly used for connecting circular components together in industry. Some of the systems joined by clamp bands are subjected to dynamic loads. However, very little research on the dynamic characteristics of this kind of joint can be found in the literature. In this paper, a dynamic model for a clamp band joint system is developed. Contact and frictional slip between the components are accommodated in this model. Nonlinear finite element analysis is conducted to identify the model parameters. Then static experiments are carried out on a scaled model of the clamp band joint to validate the joint model. Finally, the model is adopted to study the dynamic characteristics of the clamp band joint system subjected to axial harmonic excitation and the effects of the wedge angle of the clamp band joint and the preload on the response. The model proposed in this paper can represent the nonlinearity of the clamp band joint and be used conveniently to investigate the effects of the structural and loading parameters on the dynamic characteristics of this type of joint system.

  15. Stimulus probability effects in absolute identification.

    PubMed

    Kent, Christopher; Lamberts, Koen

    2016-05-01

    This study investigated the effect of stimulus presentation probability on accuracy and response times in an absolute identification task. Three schedules of presentation were used to investigate the interaction between presentation probability and stimulus position within the set. Data from individual participants indicated strong effects of presentation probability on both proportion correct and response times. The effects were moderated by the ubiquitous stimulus position effect. The accuracy and response time data were predicted by an exemplar-based model of perceptual cognition (Kent & Lamberts, 2005). The bow in discriminability was also attenuated when presentation probability for middle items was relatively high, an effect that will constrain future model development. The study provides evidence for item-specific learning in absolute identification. Implications for other theories of absolute identification are discussed. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  16. Probabilistic Simulation of Progressive Fracture in Bolted-Joint Composite Laminates

    NASA Technical Reports Server (NTRS)

    Minnetyan, L.; Singhal, S. N.; Chamis, C. C.

    1996-01-01

    This report describes computational methods to probabilistically simulate fracture in bolted composite structures. An innovative approach that is independent of stress intensity factors and fracture toughness was used to simulate progressive fracture. The effect of design variable uncertainties on structural damage was also quantified. A fast probability integrator assessed the scatter in the composite structure response before and after damage. Then the sensitivity of the response to design variables was computed. General-purpose methods were used that apply to bolted joints in all types of structures and to all stages of the fracture process, from damage initiation to unstable propagation and global structural collapse. These methods were demonstrated for a bolted joint of a polymer matrix composite panel under edge loads. The effects of the fabrication process were included in the simulation of damage in the bolted panel. Results showed that the most effective way to reduce end displacement at fracture is to control both the load and the ply thickness. The cumulative probability for longitudinal stress in all plies was most sensitive to the load; in the 0 deg. plies it was very sensitive to ply thickness. The cumulative probability for transverse stress was most sensitive to the matrix coefficient of thermal expansion. In addition, fiber volume ratio and fiber transverse modulus both contributed significantly to the cumulative probability for the transverse stresses in all the plies.
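
A plain Monte Carlo version of the same workflow (not the fast probability integrator used in the report) samples the uncertain design variables, propagates them through a response function, and ranks their influence on the response scatter. The response function, distributions, and numbers below are purely illustrative; the modulus is deliberately left out of this simplified response to show how an insensitive variable is ranked.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Illustrative uncertain design variables (hypothetical means and scatter)
load = rng.normal(100.0, 10.0, n)          # edge load, kN
thickness = rng.normal(0.005, 0.0004, n)   # ply thickness, m
modulus = rng.normal(150e9, 7.5e9, n)      # longitudinal modulus, Pa

# Hypothetical response: longitudinal ply stress for a unit-width section
stress = load * 1e3 / (thickness * 1.0)    # Pa

# Scatter of the response: cumulative probability of exceeding a limit
limit = 25e6
p_exceed = np.mean(stress > limit)

# Rank sensitivities by correlation between each variable and the response
sens = {name: abs(np.corrcoef(var, stress)[0, 1])
        for name, var in [("load", load), ("thickness", thickness),
                          ("modulus", modulus)]}
```

Brute-force sampling like this needs many response evaluations; fast probability integrators exist precisely to estimate such tail probabilities and sensitivities far more cheaply.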

  17. On the extinction probability in models of within-host infection: the role of latency and immunity.

    PubMed

    Yan, Ada W C; Cao, Pengxing; McCaw, James M

    2016-10-01

    Not every exposure to virus establishes infection in the host; instead, the small amount of initial virus could become extinct due to stochastic events. Different diseases and routes of transmission have a different average number of exposures required to establish an infection. Furthermore, the host immune response and antiviral treatment affect not only the time course of the viral load provided infection occurs, but can prevent infection altogether by increasing the extinction probability. We show that the extinction probability when there is a time-dependent immune response depends on the chosen form of the model: specifically, on the presence or absence of a delay between infection of a cell and production of virus, and on the distribution of the latent and infectious periods of an infected cell. We hypothesise that experimentally measuring the extinction probability when the virus is introduced at different stages of the immune response, alongside the viral load which is usually measured, will improve parameter estimates and determine the most suitable mathematical form of the model.
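
In branching-process models, extinction probabilities of this kind are computed as the smallest fixed point of the offspring probability generating function. The sketch below does this for a simple linear birth-death model of infected cells (geometric offspring distribution with mean R0); it illustrates the machinery only, not the time-dependent immune-response models discussed in the abstract, and R0 and the initial cell count are hypothetical.

```python
def extinction_prob(pgf, tol=1e-12, max_iter=100_000):
    """Smallest fixed point of an offspring pgf G, found by iterating
    q <- G(q) from q = 0 (the standard branching-process result)."""
    q = 0.0
    for _ in range(max_iter):
        q_next = pgf(q)
        if abs(q_next - q) < tol:
            return q_next
        q = q_next
    return q

# Linear birth-death model of infected cells: each cell's offspring count
# is geometric with mean R0, so G(s) = 1 / (1 + R0 * (1 - s)).
R0 = 2.0
q = extinction_prob(lambda s: 1.0 / (1.0 + R0 * (1.0 - s)))

# For this model the fixed point is 1/R0; starting from n0 independent
# infected cells, the infection goes extinct with probability q ** n0.
n0 = 3
p_extinct = q ** n0
```

Changing the assumed latent-period or infectious-period distribution changes the offspring pgf, which is exactly why the extinction probability is model-form dependent, as the abstract emphasises.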

  18. Joint Discussion 14 Modeling dense stellar systems

    NASA Astrophysics Data System (ADS)

    Sills, Alison I.; Subr, Ladislav; Portegies Zwart, Simon F.

    2007-08-01

    Joint Discussion 14 was held at the General Assembly of the International Astronomical Union from August 17 until 23 in the beautiful Bohemian capital, Prague. The blueprints for this meeting were laid out during the MODEST-5 workshop, held in the Canadian city of Hamilton, Ontario in August 2004. We were sitting in a nice cafe with local brew and food, discussing the future of the MODEST community, when we proposed the idea for this Joint Discussion at the General Assembly. The meeting was then coined MODEST-7.

  19. Estimation of Time-Varying, Intrinsic and Reflex Dynamic Joint Stiffness during Movement. Application to the Ankle Joint

    PubMed Central

    Guarín, Diego L.; Kearney, Robert E.

    2017-01-01

    Dynamic joint stiffness determines the relation between joint position and torque, and plays a vital role in the control of posture and movement. Dynamic joint stiffness can be quantified during quasi-stationary conditions using disturbance experiments, where small position perturbations are applied to the joint and the torque response is recorded. Dynamic joint stiffness is composed of intrinsic and reflex mechanisms that act and change together, so that nonlinear, mathematical models and specialized system identification techniques are necessary to estimate their relative contributions to overall joint stiffness. Quasi-stationary experiments have demonstrated that dynamic joint stiffness is heavily modulated by joint position and voluntary torque. Consequently, during movement, when joint position and torque change rapidly, dynamic joint stiffness will be Time-Varying (TV). This paper introduces a new method to quantify the TV intrinsic and reflex components of dynamic joint stiffness during movement. The algorithm combines ensemble and deterministic approaches for the estimation of TV systems, and uses a TV, parallel-cascade, nonlinear system identification technique to separate overall dynamic joint stiffness into intrinsic and reflex components from position and torque records. Simulation studies of a stiffness model, whose parameters varied with time as is expected during walking, demonstrated that the new algorithm accurately tracked the changes in dynamic joint stiffness using as little as 40 gait cycles. The method was also used to estimate the intrinsic and reflex dynamic ankle stiffness from an experiment with a healthy subject during which ankle movements were imposed while the subject maintained a constant muscle contraction. The method identified TV stiffness model parameters that predicted the measured torque very well, accounting for more than 95% of its variance.
Moreover, both intrinsic and reflex dynamic stiffness were heavily modulated through the
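
In parallel-cascade stiffness models of this kind, the intrinsic pathway is commonly approximated by a second-order inertia-viscosity-elasticity (IBK) relation between position and torque. The least-squares sketch below illustrates only that intrinsic, time-invariant piece on synthetic data; it is not the paper's TV ensemble algorithm, and all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 0.001
t = np.arange(0.0, 2.0, dt)

# Small imposed position perturbation (a sum of sines keeps this self-contained)
pos = 0.01 * (np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 7 * t))
vel = np.gradient(pos, dt)
acc = np.gradient(vel, dt)

# Hypothetical intrinsic parameters: inertia I, viscosity B, elasticity K
I_true, B_true, K_true = 0.02, 0.5, 30.0
torque = (I_true * acc + B_true * vel + K_true * pos
          + 0.001 * rng.standard_normal(t.size))   # measurement noise

# Least-squares estimate of the IBK parameters from position and torque
X = np.column_stack([acc, vel, pos])
I_hat, B_hat, K_hat = np.linalg.lstsq(X, torque, rcond=None)[0]
```

Tracking these parameters as they drift within a movement, and separating the reflex pathway, is the harder TV identification problem the paper addresses.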

  20. Fixation Probability in a Haploid-Diploid Population.

    PubMed

    Bessho, Kazuhiro; Otto, Sarah P

    2017-01-01

    Classical population genetic theory generally assumes either a fully haploid or fully diploid life cycle. However, many organisms exhibit more complex life cycles, with both free-living haploid and diploid stages. Here we ask what the probability of fixation is for selected alleles in organisms with haploid-diploid life cycles. We develop a genetic model that considers the population dynamics using both the Moran model and Wright-Fisher model. Applying a branching process approximation, we obtain an accurate fixation probability assuming that the population is large and the net effect of the mutation is beneficial. We also find the diffusion approximation for the fixation probability, which is accurate even in small populations and for deleterious alleles, as long as selection is weak. These fixation probabilities from branching process and diffusion approximations are similar when selection is weak for beneficial mutations that are not fully recessive. In many cases, particularly when one phase predominates, the fixation probability differs substantially for haploid-diploid organisms compared to either fully haploid or diploid species.
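
For the textbook fully haploid Wright-Fisher case, the two approximations named in the abstract can be placed side by side: a branching-process estimate (one minus the extinction probability of a Poisson offspring process with mean 1 + s) and Kimura's diffusion formula. This sketch is that baseline haploid case only, not the paper's haploid-diploid model; N and s are hypothetical.

```python
import math

def fixation_branching(s, tol=1e-12, max_iter=100_000):
    """Branching-process approximation: fixation probability of a single
    copy is 1 - q, where q is the smallest root of q = exp((1+s)(q-1)),
    the extinction probability of a Poisson(1 + s) offspring process."""
    q = 0.0
    for _ in range(max_iter):
        q_next = math.exp((1.0 + s) * (q - 1.0))
        if abs(q_next - q) < tol:
            break
        q = q_next
    return 1.0 - q_next

def fixation_diffusion(N, s):
    """Kimura's diffusion approximation for a haploid Wright-Fisher
    population, with initial frequency p = 1/N."""
    p = 1.0 / N
    if s == 0.0:
        return p
    return (1.0 - math.exp(-2 * N * s * p)) / (1.0 - math.exp(-2 * N * s))

N, s = 1000, 0.01
pi_branch = fixation_branching(s)        # close to Haldane's 2s for small s
pi_diff = fixation_diffusion(N, s)
```

As the abstract notes for the haploid-diploid case, the two estimates agree closely when selection is weak and the population is large; the diffusion form additionally remains usable for small N and deleterious alleles.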