Sample records for maximum posterior probability

  1. optBINS: Optimal Binning for histograms

    NASA Astrophysics Data System (ADS)

    Knuth, Kevin H.

    2018-03-01

    optBINS (optimal binning) determines the optimal number of bins in a uniform bin-width histogram by deriving the posterior probability for the number of bins in a piecewise-constant density model, after assigning a multinomial likelihood and a non-informative prior. The maximum of the posterior probability occurs at a point where the prior probability and the joint likelihood are balanced. The interplay between these opposing factors effectively implements Occam's razor by selecting the simplest model that best describes the data.
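
    The criterion is simple to evaluate directly. Below is a minimal sketch of the relative log-posterior for the number of bins M in the form Knuth published; the data sample and search range are illustrative.

```python
import numpy as np
from scipy.special import gammaln

def log_posterior_bins(data, m):
    """Relative log-posterior for an m-bin uniform-width histogram (Knuth)."""
    n = len(data)
    counts, _ = np.histogram(data, bins=m)
    return (n * np.log(m)
            + gammaln(m / 2.0)
            - m * gammaln(0.5)
            - gammaln(n + m / 2.0)
            + np.sum(gammaln(counts + 0.5)))

data = np.random.standard_normal(1000)           # illustrative sample
m_range = range(1, 101)
log_p = [log_posterior_bins(data, m) for m in m_range]
m_opt = m_range[int(np.argmax(log_p))]           # bin count at the posterior maximum
print(m_opt)
```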

  2. A three-step Maximum-A-Posterior probability method for InSAR data inversion of coseismic rupture with application to four recent large earthquakes in Asia

    NASA Astrophysics Data System (ADS)

    Sun, J.; Shen, Z.; Burgmann, R.; Liang, F.

    2012-12-01

    We develop a three-step maximum a posteriori probability (MAP) method for coseismic rupture inversion, which aims at maximizing the a posteriori probability density function (PDF) of elastic solutions of earthquake rupture. The method originates from the fully Bayesian inversion (FBI) and the mixed linear-nonlinear Bayesian inversion (MBI) methods, shares the same a posteriori PDF with them, and keeps most of their merits, while overcoming their convergence difficulty when large numbers of low-quality data are used and greatly improving the convergence rate through optimization procedures. A highly efficient global optimization algorithm, adaptive simulated annealing (ASA), is used to search for the maximum posterior probability in the first step. The non-slip parameters are determined by the global optimization method, and the slip parameters are inverted for using the least squares method, initially without a positivity constraint and then damped to a physically reasonable range. This first-step MAP inversion brings the inversion close to the 'true' solution quickly and jumps over local maxima in the high-dimensional parameter space. The second-step inversion approaches the 'true' solution further, with positivity constraints subsequently applied on the slip parameters using the Monte Carlo inversion (MCI) technique and with all parameters obtained from step one as the initial solution. The slip artifacts are then eliminated from the slip models in the third-step MAP inversion with the fault geometry parameters fixed. We first used a designed model with a 45-degree dipping angle and oblique slip, and corresponding synthetic InSAR data sets, to validate the efficiency and accuracy of the method. We then applied the method to four recent large earthquakes in Asia, namely the 2010 Yushu, China earthquake, the 2011 Burma earthquake, the 2011 New Zealand earthquake, and the 2008 Qinghai, China earthquake, and compared our results with those from other groups. Our results show the effectiveness of the method in earthquake studies and a number of its advantages over other methods. The details will be reported at the meeting.
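
    The first step (a global search over non-slip parameters with a least-squares solve for slip nested inside) can be sketched compactly. The code below is a stand-in, not the authors' implementation: it uses SciPy's dual_annealing in place of ASA, nnls for the positivity-constrained slip solve, and a hypothetical synthetic Green's-function builder.

```python
import numpy as np
from scipy.optimize import dual_annealing, nnls

rng = np.random.default_rng(0)

def greens(geom, npatch=20, nobs=200):
    """Hypothetical stand-in for elastic Green's functions of a fault with
    geometry vector (strike, dip, depth); slip enters the data linearly."""
    i, j = np.arange(nobs)[:, None], np.arange(1, npatch + 1)[None, :]
    return np.cos(i * j * (geom[1] + geom[0] / 10.0) / 9000.0) * np.exp(-geom[2] * j / 300.0)

true_geom = np.array([30.0, 45.0, 10.0])
true_slip = np.abs(rng.standard_normal(20))
d = greens(true_geom) @ true_slip + 0.01 * rng.standard_normal(200)

def misfit(geom):
    slip, rnorm = nnls(greens(geom), d)   # inner solve: slip >= 0
    return rnorm                          # residual norm drives the outer search

bounds = [(0.0, 360.0), (0.0, 90.0), (0.0, 30.0)]   # strike, dip (deg), depth (km)
result = dual_annealing(misfit, bounds, maxiter=100)
print(result.x, misfit(result.x))
```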

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    La Russa, D

    Purpose: The purpose of this project is to develop a robust method of parameter estimation for a Poisson-based TCP model using Bayesian inference. Methods: Bayesian inference was performed using the PyMC3 probabilistic programming framework written in Python. A Poisson-based TCP regression model that accounts for clonogen proliferation was fit to observed rates of local relapse as a function of equivalent dose in 2 Gy fractions for a population of 623 stage-I non-small-cell lung cancer patients. The Slice Markov chain Monte Carlo sampling algorithm was used to sample the posterior distributions, and was initiated at the maximum of the posterior distributions found by optimization. The calculation of TCP at each sample step required integration over the free parameter α, which was performed using an adaptive 24-point Gauss-Legendre quadrature. Convergence was verified via inspection of the trace plot and posterior distribution for each of the fit parameters, as well as by comparison of the most probable parameter values with their respective maximum likelihood estimates. Results: Posterior distributions for α, the standard deviation of α (σ), the average tumour cell-doubling time (Td), and the repopulation delay time (Tk) were generated assuming α/β = 10 Gy and a fixed clonogen density of 10^7 cm^-3. Posterior predictive plots generated from samples from these posterior distributions are in excellent agreement with the observed rates of local relapse used in the Bayesian inference. The most probable values of the model parameters also agree well with the maximum likelihood estimates. Conclusion: A robust method of performing Bayesian inference of TCP data using a complex TCP model has been established.
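
    A heavily simplified sketch of the workflow described (a PyMC3 model, MAP initialization via optimization, Slice sampling) is given below. The one-parameter Poisson-TCP form, priors, and dose-response data are illustrative stand-ins; the authors' model additionally integrates over the population spread of α.

```python
import numpy as np
import pymc3 as pm

# Illustrative data: patients per dose bin (EQD2, Gy) and number controlled.
eqd2 = np.array([60., 65., 70., 75., 80.])
n    = np.array([ 50,  60,  70,  40,  30])
ctrl = np.array([  2,  25,  54,  37,  29])

with pm.Model() as model:
    alpha = pm.Lognormal("alpha", mu=np.log(0.25), sigma=0.3)  # radiosensitivity, Gy^-1
    logN0 = pm.Normal("logN0", mu=np.log(1e7), sigma=1.0)      # log clonogen number
    # Poisson TCP: probability that no clonogen survives the dose.
    tcp = pm.math.exp(-pm.math.exp(logN0) * pm.math.exp(-alpha * eqd2))
    pm.Binomial("obs", n=n, p=tcp, observed=ctrl)

    start = pm.find_MAP()                       # initialize at the posterior maximum
    trace = pm.sample(2000, step=pm.Slice(), start=start, chains=2)
```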

  4. Stereotactic probability and variability of speech arrest and anomia sites during stimulation mapping of the language dominant hemisphere.

    PubMed

    Chang, Edward F; Breshears, Jonathan D; Raygor, Kunal P; Lau, Darryl; Molinaro, Annette M; Berger, Mitchel S

    2017-01-01

    OBJECTIVE Functional mapping using direct cortical stimulation is the gold standard for the prevention of postoperative morbidity during resective surgery in dominant-hemisphere perisylvian regions. Its role is necessitated by the significant interindividual variability that has been observed for essential language sites. The aim in this study was to determine the statistical probability distribution of eliciting aphasic errors for any given stereotactically based cortical position in a patient cohort and to quantify the variability at each cortical site. METHODS Patients undergoing awake craniotomy for dominant-hemisphere primary brain tumor resection between 1999 and 2014 at the authors' institution were included in this study, which included counting and picture-naming tasks during dense speech mapping via cortical stimulation. Positive and negative stimulation sites were collected using an intraoperative frameless stereotactic neuronavigation system and were converted to Montreal Neurological Institute coordinates. Data were iteratively resampled to create mean and standard deviation probability maps for speech arrest and anomia. Patients were divided into groups with a "classic" or an "atypical" location of speech function, based on the resultant probability maps. Patient and clinical factors were then assessed for their association with an atypical location of speech sites by univariate and multivariate analysis. RESULTS Across 102 patients undergoing speech mapping, the overall probabilities of speech arrest and anomia were 0.51 and 0.33, respectively. Speech arrest was most likely to occur with stimulation of the posterior inferior frontal gyrus (maximum probability from individual bin = 0.025), and variance was highest in the dorsal premotor cortex and the posterior superior temporal gyrus. In contrast, stimulation within the posterior perisylvian cortex resulted in the maximum mean probability of anomia (maximum probability = 0.012), with large variance in the regions surrounding the posterior superior temporal gyrus, including the posterior middle temporal, angular, and supramarginal gyri. Patients with atypical speech localization were far more likely to have tumors in canonical Broca's or Wernicke's areas (OR 7.21, 95% CI 1.67-31.09, p < 0.01) or to have multilobar tumors (OR 12.58, 95% CI 2.22-71.42, p < 0.01), than were patients with classic speech localization. CONCLUSIONS This study provides statistical probability distribution maps for aphasic errors during cortical stimulation mapping in a patient cohort. Thus, the authors provide an expected probability of inducing speech arrest and anomia from specific 10-mm² cortical bins in an individual patient. In addition, they highlight key regions of interindividual mapping variability that should be considered preoperatively. They believe these results will aid surgeons in their preoperative planning of eloquent cortex resection.
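
    The resampling construction of mean and standard-deviation probability maps can be sketched as a bootstrap over patients; the per-patient site data, grid size, and counts below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
# Illustrative per-patient stimulation data: for each patient, a dict of
# cortical bins (grid indices) -> True if stimulation evoked speech arrest.
n_pat, n_bins = 102, 400
sites = [dict(zip(rng.choice(n_bins, 20, replace=False),
                  rng.random(20) < 0.3)) for _ in range(n_pat)]

def prob_map(patients):
    hits, tested = np.zeros(n_bins), np.zeros(n_bins)
    for p in patients:
        for b, positive in p.items():
            tested[b] += 1
            hits[b] += positive
    with np.errstate(invalid="ignore"):
        return np.where(tested > 0, hits / tested, np.nan)

# Bootstrap over patients to get the mean and variability per cortical bin.
boot = np.array([prob_map([sites[i] for i in rng.integers(0, n_pat, n_pat)])
                 for _ in range(500)])
mean_map = np.nanmean(boot, axis=0)   # expected probability of speech arrest
sd_map = np.nanstd(boot, axis=0)      # inter-sample variability per bin
```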

  5. Bayesian approach to inverse statistical mechanics.

    PubMed

    Habeck, Michael

    2014-05-01

    Inverse statistical mechanics aims to determine particle interactions from ensemble properties. This article looks at this inverse problem from a Bayesian perspective and discusses several statistical estimators to solve it. In addition, a sequential Monte Carlo algorithm is proposed that draws the interaction parameters from their posterior probability distribution. The posterior probability involves an intractable partition function that is estimated along with the interactions. The method is illustrated for inverse problems of varying complexity, including the estimation of a temperature, the inverse Ising problem, maximum entropy fitting, and the reconstruction of molecular interaction potentials.

  6. Bayesian approach to inverse statistical mechanics

    NASA Astrophysics Data System (ADS)

    Habeck, Michael

    2014-05-01

    Inverse statistical mechanics aims to determine particle interactions from ensemble properties. This article looks at this inverse problem from a Bayesian perspective and discusses several statistical estimators to solve it. In addition, a sequential Monte Carlo algorithm is proposed that draws the interaction parameters from their posterior probability distribution. The posterior probability involves an intractable partition function that is estimated along with the interactions. The method is illustrated for inverse problems of varying complexity, including the estimation of a temperature, the inverse Ising problem, maximum entropy fitting, and the reconstruction of molecular interaction potentials.

  7. Back to Normal! Gaussianizing posterior distributions for cosmological probes

    NASA Astrophysics Data System (ADS)

    Schuhmann, Robert L.; Joachimi, Benjamin; Peiris, Hiranya V.

    2014-05-01

    We present a method to map multivariate non-Gaussian posterior probability densities into Gaussian ones via nonlinear Box-Cox transformations, and generalizations thereof. This is analogous to the search for normal parameters in the CMB, but can in principle be applied to any probability density that is continuous and unimodal. The search for the optimally Gaussianizing transformation amongst the Box-Cox family is performed via a maximum likelihood formalism. We can judge the quality of the found transformation a posteriori: qualitatively via statistical tests of Gaussianity, and more illustratively by how well it reproduces the credible regions. The method permits an analytical reconstruction of the posterior from a sample, e.g. a Markov chain, and simplifies the subsequent joint analysis with other experiments. Furthermore, it permits the characterization of a non-Gaussian posterior in a compact and efficient way. The expression for the non-Gaussian posterior can be employed to find analytic formulae for the Bayesian evidence, and consequently be used for model comparison.
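
    The one-dimensional Box-Cox step is available directly in SciPy, which selects the transformation parameter by maximum likelihood as in the formalism described; the skewed sample below stands in for a real Markov chain.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
chain = rng.gamma(shape=2.0, scale=1.5, size=20000)   # skewed stand-in posterior sample

# Box-Cox requires positive support; shift if necessary before transforming.
shift = 1e-9 - min(0.0, chain.min())
gaussianized, lam = stats.boxcox(chain + shift)       # lambda chosen by max likelihood

# Judge the quality of the transformation a posteriori with a normality test.
print("lambda =", lam)
print("skew before/after:", stats.skew(chain), stats.skew(gaussianized))
print("normality p-value:", stats.normaltest(gaussianized).pvalue)
```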

  8. A three-step maximum a posteriori probability method for InSAR data inversion of coseismic rupture with application to the 14 April 2010 Mw 6.9 Yushu, China, earthquake

    NASA Astrophysics Data System (ADS)

    Sun, Jianbao; Shen, Zheng-Kang; Bürgmann, Roland; Wang, Min; Chen, Lichun; Xu, Xiwei

    2013-08-01

    We develop a three-step maximum a posteriori probability method for coseismic rupture inversion, which aims at maximizing the a posteriori probability density function (PDF) of elastic deformation solutions of earthquake rupture. The method originates from the fully Bayesian inversion and mixed linear-nonlinear Bayesian inversion methods and shares the same posterior PDF with them, while overcoming difficulties with convergence when large numbers of low-quality data are used and greatly improving the convergence rate using optimization procedures. A highly efficient global optimization algorithm, adaptive simulated annealing, is used to search for the maximum of the a posteriori PDF ("mode" in statistics) in the first step. The second-step inversion approaches the "true" solution further using the Monte Carlo inversion technique with positivity constraints, with all parameters obtained from the first step as the initial solution. Slip artifacts are then eliminated from the slip models in the third step, using the same procedure as the second step with the fault geometry parameters fixed. We first design a fault model with a 45° dip angle and oblique slip, and produce corresponding synthetic interferometric synthetic aperture radar (InSAR) data sets, to validate the reliability and efficiency of the new method. We then apply this method to InSAR data inversion for the coseismic slip distribution of the 14 April 2010 Mw 6.9 Yushu, China, earthquake. Our preferred slip model is composed of three segments, with most of the slip occurring within 15 km depth and a maximum slip of 1.38 m at the surface. The seismic moment released is estimated to be 2.32 × 10^19 Nm, consistent with the seismic estimate of 2.50 × 10^19 Nm.

  9. Brain tumor segmentation from multimodal magnetic resonance images via sparse representation.

    PubMed

    Li, Yuhong; Jia, Fucang; Qin, Jing

    2016-10-01

    Accurately segmenting and quantifying brain gliomas from magnetic resonance (MR) images remains a challenging task because of the large spatial and structural variability among brain tumors. To develop a fully automatic and accurate brain tumor segmentation algorithm, we present a probabilistic model of multimodal MR brain tumor segmentation. This model combines sparse representation and the Markov random field (MRF) to solve the spatial and structural variability problem. We formulate the tumor segmentation problem as a multi-classification task by labeling each voxel according to the maximum posterior probability. We estimate the maximum a posteriori (MAP) probability by introducing the sparse representation into the likelihood probability and an MRF into the prior probability. Because MAP estimation is an NP-hard problem, we convert the maximum posterior probability estimation into a minimum energy optimization problem and employ graph cuts to find the solution to the MAP estimation. Our method is evaluated using the Brain Tumor Segmentation Challenge 2013 database (BRATS 2013) and obtained Dice coefficient metric values of 0.85, 0.75, and 0.69 on the high-grade Challenge data set, 0.73, 0.56, and 0.54 on the high-grade Challenge LeaderBoard data set, and 0.84, 0.54, and 0.57 on the low-grade Challenge data set for the complete, core, and enhancing regions. The experimental results show that the proposed algorithm is valid and ranks 2nd compared with the state-of-the-art tumor segmentation algorithms in the MICCAI BRATS 2013 challenge.
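
    The MAP-as-energy-minimization step can be illustrated with a small stand-in: unary terms from per-class likelihoods plus a Potts (MRF) smoothness term, minimized here by iterated conditional modes on a toy 2D grid rather than the graph cuts used in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
H, W, K = 64, 64, 3
unary = -np.log(rng.dirichlet(np.ones(K), size=(H, W)))  # -log p(voxel | class)
beta = 1.5                                               # Potts (MRF) weight
labels = unary.argmin(axis=-1)                           # max-likelihood init

for _ in range(10):                      # ICM sweeps: greedy local energy descent
    for i in range(H):
        for j in range(W):
            cost = unary[i, j].copy()
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < H and 0 <= nj < W:
                    cost += beta * (np.arange(K) != labels[ni, nj])
            labels[i, j] = int(cost.argmin())
```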

  10. Estimation from incomplete multinomial data. Ph.D. Thesis - Harvard Univ.

    NASA Technical Reports Server (NTRS)

    Credeur, K. R.

    1978-01-01

    The vector of multinomial cell probabilities was estimated from incomplete data, incomplete in that the data contained partially classified observations. Each such partially classified observation was observed to fall in one of two or more selected categories but was not classified further into a single category. The data were assumed to be incomplete at random. The estimation criterion was minimization of risk for quadratic loss. The estimators were the classical maximum likelihood estimate, the Bayesian posterior mode, and the posterior mean. An approximation was developed for the posterior mean. The Dirichlet distribution, the conjugate prior for the multinomial distribution, was assumed as the prior.
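
    Under a Dirichlet prior, the estimators compared have closed forms for the fully classified part of the data; a minimal sketch with illustrative counts:

```python
import numpy as np

counts = np.array([12, 7, 3, 8])        # fully classified multinomial counts
alpha = np.ones_like(counts, float)     # Dirichlet prior parameters

mle = counts / counts.sum()                                   # maximum likelihood
post_mean = (counts + alpha) / (counts.sum() + alpha.sum())   # posterior mean
post_mode = (counts + alpha - 1) / (counts.sum() + alpha.sum() - len(counts))
print(mle, post_mean, post_mode)        # with alpha = 1 the mode equals the MLE
```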

  11. Posterior probability of linkage and maximal lod score.

    PubMed

    Génin, E; Martinez, M; Clerget-Darpoux, F

    1995-01-01

    To detect linkage between a trait and a marker, Morton (1955) proposed to calculate the lod score z(θ1) at a given value θ1 of the recombination fraction. If z(θ1) reaches +3, then linkage is concluded. However, in practice, lod scores are calculated for different values of the recombination fraction between 0 and 0.5, and the test is based on the maximum value of the lod score, Zmax. The impact of this deviation from the original test on the probability that linkage does not in fact exist when linkage is concluded is documented here. This posterior probability of no linkage can be derived by using Bayes' theorem. It is less than 5% when the lod score at a predetermined θ1 is used for the test. But for a Zmax of +3, we showed that it can reach 16.4%. Thus, considering a composite alternative hypothesis instead of a single one decreases the reliability of the test. The reliability decreases rapidly when Zmax is less than +3. Given a Zmax of +2.5, there is a 33% chance that linkage does not exist. Moreover, the posterior probability depends not only on the value of Zmax but also jointly on the family structures and on the genetic model. For a given Zmax, the chance that linkage exists may therefore vary.
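
    The Bayes'-theorem calculation for a lod score at a predetermined θ1 can be sketched directly, treating 10^z as the likelihood ratio in favor of linkage; the prior probability of linkage below is illustrative, and maximizing over θ inflates these posterior probabilities, as the abstract reports.

```python
def posterior_no_linkage(z, prior_linked=0.05):
    """P(no linkage | lod score z at a predetermined theta), via Bayes' theorem.
    10**z is the likelihood ratio for linkage; prior_linked is illustrative."""
    bf = 10.0 ** z
    return (1 - prior_linked) / ((1 - prior_linked) + prior_linked * bf)

for z in (2.0, 2.5, 3.0):
    # At z = 3 this stays below 5%; the 16.4% figure arises from maximizing
    # the lod score over the recombination fraction, which this ignores.
    print(z, posterior_no_linkage(z))
```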

  12. Seismic imaging of Q structures by a trans-dimensional coda-wave analysis

    NASA Astrophysics Data System (ADS)

    Takahashi, Tsutomu

    2017-04-01

    Wave scattering and intrinsic attenuation are important processes for describing the incoherent and complex wave trains of high-frequency seismic waves (> 1 Hz). The multiple lapse time window analysis (MLTWA) has been used to estimate scattering and intrinsic Q values by assuming a constant Q in a study area (e.g., Hoshiba 1993). This study generalizes MLTWA to estimate lateral variations of Q values under the Bayesian framework in a dimension-variable space. The study area is partitioned into small areas by means of the Voronoi tessellation, with constant scattering and intrinsic Q in each small area. We define a misfit function for spatiotemporal variations of wave energy as in the original MLTWA, and maximize the posterior probability by changing not only the Q values but also the number and spatial layout of the Voronoi cells. This maximization is conducted by means of the reversible-jump Markov chain Monte Carlo (rjMCMC) method (Green 1995), since the number of unknown parameters (i.e., the dimension of the posterior probability) is variable. After convergence, we estimate the Q structures from ensemble averages of the MCMC samples around the maximum posterior probability. Synthetic tests showed stable reconstruction of the input structures with reasonable error distributions. We applied this method to seismic waveform data recorded by ocean-bottom seismographs in the outer-rise area off Tohoku, and estimated Q values at 4-8 Hz, 8-16 Hz, and 16-32 Hz. Intrinsic Q is nearly constant in all frequency bands, while scattering Q shows two distinct strong-scattering regions, at the petit-spot area and at a high-seismicity area. This strong scattering is probably related to magma inclusions and fractured structure, respectively. The difference between these two areas becomes clearer at high frequencies, which means that the scale dependence of inhomogeneities, or smaller-scale inhomogeneity, is important for discussing medium properties and the origins of structural variations. While the generalized MLTWA is based on classical waveform modeling in a constant-Q medium, this method can be a fundamental basis for Q-structure imaging in the crust.
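
    The Voronoi parameterization at the core of the trans-dimensional model is a nearest-nucleus lookup, which a k-d tree handles directly; the nuclei, Q values, and grid below are illustrative, and rjMCMC birth/death moves would add or delete rows of `nuclei`.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(4)
nuclei = rng.uniform(0, 100, size=(12, 2))     # Voronoi cell nuclei (km)
q_scat = rng.uniform(100, 1000, size=12)       # one scattering-Q value per cell

# Evaluate the piecewise-constant Q model on a map grid: each point takes
# the value of its nearest nucleus, i.e. its Voronoi cell.
gx, gy = np.meshgrid(np.linspace(0, 100, 200), np.linspace(0, 100, 200))
_, cell = cKDTree(nuclei).query(np.column_stack([gx.ravel(), gy.ravel()]))
q_map = q_scat[cell].reshape(gx.shape)
```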

  13. Entropic Inference

    NASA Astrophysics Data System (ADS)

    Caticha, Ariel

    2011-03-01

    In this tutorial we review the essential arguments behind entropic inference. We focus on the epistemological notion of information and its relation to the Bayesian beliefs of rational agents. The problem of updating from a prior to a posterior probability distribution is tackled through an eliminative induction process that singles out the logarithmic relative entropy as the unique tool for inference. The resulting method of Maximum relative Entropy (ME) includes as special cases both MaxEnt and Bayes' rule, and therefore unifies the two themes of these workshops, the Maximum Entropy and the Bayesian methods, into a single general inference scheme.

  14. Nested Sampling for Bayesian Model Comparison in the Context of Salmonella Disease Dynamics

    PubMed Central

    Dybowski, Richard; McKinley, Trevelyan J.; Mastroeni, Pietro; Restif, Olivier

    2013-01-01

    Understanding the mechanisms underlying the observed dynamics of complex biological systems requires the statistical assessment and comparison of multiple alternative models. Although this has traditionally been done using maximum likelihood-based methods such as Akaike's Information Criterion (AIC), Bayesian methods have gained in popularity because they provide more informative output in the form of posterior probability distributions. However, comparison between multiple models in a Bayesian framework is made difficult by the computational cost of numerical integration over large parameter spaces. A new, efficient method for the computation of posterior probabilities has recently been proposed and applied to complex problems from the physical sciences. Here we demonstrate how nested sampling can be used for inference and model comparison in biological sciences. We present a reanalysis of data from experimental infection of mice with Salmonella enterica showing the distribution of bacteria in liver cells. In addition to confirming the main finding of the original analysis, which relied on AIC, our approach provides: (a) integration across the parameter space, (b) estimation of the posterior parameter distributions (with visualisations of parameter correlations), and (c) estimation of the posterior predictive distributions for goodness-of-fit assessments of the models. The goodness-of-fit results suggest that alternative mechanistic models and a relaxation of the quasi-stationary assumption should be considered. PMID:24376528
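
    The quantity nested sampling delivers for model comparison is the marginal likelihood (evidence). A minimal sketch with the dynesty package on a toy one-parameter Gaussian problem, under an illustrative uniform prior:

```python
import numpy as np
from dynesty import NestedSampler

data = np.random.default_rng(5).normal(1.0, 1.0, size=50)

def loglike(theta):
    mu = theta[0]
    return -0.5 * np.sum((data - mu) ** 2 + np.log(2 * np.pi))

def prior_transform(u):
    return 10.0 * u - 5.0             # maps unit cube to a Uniform(-5, 5) prior

sampler = NestedSampler(loglike, prior_transform, ndim=1, nlive=200)
sampler.run_nested(print_progress=False)
logz = sampler.results.logz[-1]       # log evidence; compare across models
print(logz)
```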

  15. Stochastic static fault slip inversion from geodetic data with non-negativity and bounds constraints

    NASA Astrophysics Data System (ADS)

    Nocquet, J.-M.

    2018-04-01

    Although surface displacements observed by geodesy are linear combinations of slip at faults in an elastic medium, determining the spatial distribution of fault slip remains an ill-posed inverse problem. A widely used approach to circumvent the ill-posedness of the inversion is to add regularization constraints in terms of smoothing and/or damping so that the linear system becomes invertible. However, the choice of regularization parameters is often arbitrary and sometimes leads to significantly different results. Furthermore, the resolution analysis is usually empirical and cannot be made independently of the regularization. The stochastic approach to inverse problems (Tarantola & Valette 1982; Tarantola 2005) provides a rigorous framework where the a priori information about the searched parameters is combined with the observations in order to derive posterior probabilities of the unknown parameters. Here, I investigate an approach where the prior probability density function (pdf) is a multivariate Gaussian function, with single truncation to impose positivity of slip, or double truncation to impose positivity and upper bounds on slip for interseismic modeling. I show that the joint posterior pdf is similar to the linear untruncated Gaussian case and can be expressed as a truncated multivariate normal (TMVN) distribution. The TMVN form can then be used to obtain semi-analytical formulas for the single, two-dimensional, or n-dimensional marginal pdf. The semi-analytical formula involves the product of a Gaussian by an integral term that can be evaluated using recent developments in TMVN probability calculations (e.g. Genz & Bretz 2009). The posterior mean and covariance can also be efficiently derived. I show that the maximum a posteriori (MAP) solution can be obtained using a non-negative least-squares algorithm (Lawson & Hanson 1974) for the single-truncated case, or using the bounded-variable least-squares algorithm (Stark & Parker 1995) for the double-truncated case. I show that the case of independent uniform priors can be approximated using the TMVN. The numerical equivalence to Bayesian inversions using Markov chain Monte Carlo (MCMC) sampling is shown for a synthetic example and a real case of interseismic modeling in Central Peru. The TMVN method overcomes several limitations of the Bayesian approach using MCMC sampling. First, the need for computing power is largely reduced. Second, unlike the Bayesian MCMC-based approach, the marginal pdf, mean, variance, or covariance are obtained independently from one another. Third, the probability and cumulative density functions can be obtained with any density of points. Finally, determining the MAP solution is extremely fast.
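
    The MAP computations described reduce to constrained least squares. A minimal sketch using SciPy's solvers for the two truncation cases (nnls for positivity, lsq_linear for box bounds), with an illustrative Green's matrix and a unit Gaussian prior stacked into the system:

```python
import numpy as np
from scipy.optimize import nnls, lsq_linear

rng = np.random.default_rng(6)
G = rng.standard_normal((100, 15))            # illustrative Green's matrix
m_true = np.abs(rng.standard_normal(15))
d = G @ m_true + 0.05 * rng.standard_normal(100)

# Stack data misfit and a Gaussian prior (m0 = 0, sigma_m = 1) into one
# least-squares system: the truncated-Gaussian MAP is its constrained minimizer.
A = np.vstack([G, np.eye(15)])
b = np.concatenate([d, np.zeros(15)])

m_map_pos, _ = nnls(A, b)                     # single truncation: m >= 0
res = lsq_linear(A, b, bounds=(0.0, 1.5))     # double truncation: 0 <= m <= 1.5
m_map_box = res.x
```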

  16. The Estimation of Tree Posterior Probabilities Using Conditional Clade Probability Distributions

    PubMed Central

    Larget, Bret

    2013-01-01

    In this article I introduce the idea of conditional independence of separated subtrees as a principle by which to estimate the posterior probability of trees using conditional clade probability distributions rather than simple sample relative frequencies. I describe an algorithm for these calculations and software which implements these ideas. I show that these alternative calculations are very similar to simple sample relative frequencies for high probability trees but are substantially more accurate for relatively low probability trees. The method allows the posterior probability of unsampled trees to be calculated when these trees contain only clades that are in other sampled trees. Furthermore, the method can be used to estimate the total probability of the set of sampled trees which provides a measure of the thoroughness of a posterior sample. [Bayesian phylogenetics; conditional clade distributions; improved accuracy; posterior probabilities of trees.] PMID:23479066

  17. The maximum entropy method of moments and Bayesian probability theory

    NASA Astrophysics Data System (ADS)

    Bretthorst, G. Larry

    2013-08-01

    The problem of density estimation occurs in many disciplines. For example, in MRI it is often necessary to classify the types of tissues in an image. To perform this classification one must first identify the characteristics of the tissues to be classified. These characteristics might be the intensity of a T1-weighted image, and in MRI many other types of characteristic weightings (classifiers) may be generated. In a given tissue type there is no single intensity that characterizes the tissue; rather, there is a distribution of intensities. Often this distribution can be characterized by a Gaussian, but just as often it is much more complicated. Either way, estimating the distribution of intensities is an inference problem. In the case of a Gaussian distribution, one must estimate the mean and standard deviation. However, in the non-Gaussian case the shape of the density function itself must be inferred. Three common techniques for estimating density functions are binned histograms [1, 2], kernel density estimation [3, 4], and the maximum entropy method of moments [5, 6]. In the introduction, the maximum entropy method of moments will be reviewed. Some of its problems, and conditions under which it fails, will be discussed. Then in later sections, the functional form of the maximum entropy method of moments probability distribution will be incorporated into Bayesian probability theory. It will be shown that Bayesian probability theory solves all of the problems with the maximum entropy method of moments. One gets posterior probabilities for the Lagrange multipliers and, finally, one can put error bars on the resulting estimated density function.
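
    The classic maximum entropy method of moments solves for Lagrange multipliers so that p(x) ∝ exp(Σj λj x^j) reproduces the given moments, conveniently via its convex dual; a minimal grid-based sketch with illustrative first and second moments:

```python
import numpy as np
from scipy.optimize import minimize

x = np.linspace(-8, 8, 4001)                # support grid
mu = np.array([0.5, 1.5])                   # target moments E[x], E[x^2]
f = np.vstack([x, x**2])                    # moment functions

def dual(lam):
    # Convex dual of the MaxEnt problem: log Z(lam) - lam . mu.
    logp = lam @ f
    m = logp.max()
    return m + np.log(np.trapz(np.exp(logp - m), x)) - lam @ mu

lam = minimize(dual, x0=np.zeros(2), method="Nelder-Mead").x
p = np.exp(lam @ f)
p /= np.trapz(p, x)                         # MaxEnt density: a Gaussian here
print(lam, np.trapz(p * x, x), np.trapz(p * x**2, x))   # moments reproduced
```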

  18. Generalized Maximum Entropy

    NASA Technical Reports Server (NTRS)

    Cheeseman, Peter; Stutz, John

    2005-01-01

    A long-standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].

  19. The Efficacy of Consensus Tree Methods for Summarizing Phylogenetic Relationships from a Posterior Sample of Trees Estimated from Morphological Data.

    PubMed

    O'Reilly, Joseph E; Donoghue, Philip C J

    2018-03-01

    Consensus trees are required to summarize trees obtained through MCMC sampling of a posterior distribution, providing an overview of the distribution of estimated parameters such as topology, branch lengths, and divergence times. Numerous consensus tree construction methods are available, each presenting a different interpretation of the tree sample. The rise of morphological clock and sampled-ancestor methods of divergence time estimation, in which times and topology are coestimated, has increased the popularity of the maximum clade credibility (MCC) consensus tree method. The MCC method assumes that the sampled, fully resolved topology with the highest clade credibility is an adequate summary of the most probable clades, with parameter estimates from compatible sampled trees used to obtain the marginal distributions of parameters such as clade ages and branch lengths. Using both simulated and empirical data, we demonstrate that MCC trees, and trees constructed using the similar maximum a posteriori (MAP) method, often include poorly supported and incorrect clades when summarizing diffuse posterior samples of trees. We demonstrate that the paucity of information in morphological data sets contributes to the inability of MCC and MAP trees to accurately summarize the posterior distribution. Conversely, majority-rule consensus (MRC) trees represent a lower proportion of incorrect nodes when summarizing the same posterior samples of trees. Thus, we advocate the use of MRC trees, in place of MCC or MAP trees, in attempts to summarize the results of Bayesian phylogenetic analyses of morphological data.
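
    Majority-rule consensus keeps exactly the clades appearing in more than half of the posterior sample. A toy sketch, with trees represented as sets of clades (frozensets of taxa) and the sample itself illustrative:

```python
from collections import Counter

# Each sampled tree is summarized by its set of non-trivial clades.
trees = [
    {frozenset("AB"), frozenset("ABC")},
    {frozenset("AB"), frozenset("ABD")},
    {frozenset("AB"), frozenset("ABC")},
    {frozenset("CD"), frozenset("ABC")},
]

clade_counts = Counter(c for t in trees for c in t)
n = len(trees)
mrc = {c: clade_counts[c] / n for c in clade_counts if clade_counts[c] / n > 0.5}
print({"".join(sorted(c)): p for c, p in mrc.items()})
# AB and ABC each appear in 3/4 trees and are kept; ABD and CD are dropped.
```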

  20. The Efficacy of Consensus Tree Methods for Summarizing Phylogenetic Relationships from a Posterior Sample of Trees Estimated from Morphological Data

    PubMed Central

    O’Reilly, Joseph E; Donoghue, Philip C J

    2018-01-01

    Consensus trees are required to summarize trees obtained through MCMC sampling of a posterior distribution, providing an overview of the distribution of estimated parameters such as topology, branch lengths, and divergence times. Numerous consensus tree construction methods are available, each presenting a different interpretation of the tree sample. The rise of morphological clock and sampled-ancestor methods of divergence time estimation, in which times and topology are coestimated, has increased the popularity of the maximum clade credibility (MCC) consensus tree method. The MCC method assumes that the sampled, fully resolved topology with the highest clade credibility is an adequate summary of the most probable clades, with parameter estimates from compatible sampled trees used to obtain the marginal distributions of parameters such as clade ages and branch lengths. Using both simulated and empirical data, we demonstrate that MCC trees, and trees constructed using the similar maximum a posteriori (MAP) method, often include poorly supported and incorrect clades when summarizing diffuse posterior samples of trees. We demonstrate that the paucity of information in morphological data sets contributes to the inability of MCC and MAP trees to accurately summarize the posterior distribution. Conversely, majority-rule consensus (MRC) trees represent a lower proportion of incorrect nodes when summarizing the same posterior samples of trees. Thus, we advocate the use of MRC trees, in place of MCC or MAP trees, in attempts to summarize the results of Bayesian phylogenetic analyses of morphological data. PMID:29106675

  1. The estimation of tree posterior probabilities using conditional clade probability distributions.

    PubMed

    Larget, Bret

    2013-07-01

    In this article I introduce the idea of conditional independence of separated subtrees as a principle by which to estimate the posterior probability of trees using conditional clade probability distributions rather than simple sample relative frequencies. I describe an algorithm for these calculations and software which implements these ideas. I show that these alternative calculations are very similar to simple sample relative frequencies for high probability trees but are substantially more accurate for relatively low probability trees. The method allows the posterior probability of unsampled trees to be calculated when these trees contain only clades that are in other sampled trees. Furthermore, the method can be used to estimate the total probability of the set of sampled trees which provides a measure of the thoroughness of a posterior sample.

  2. The utility of Bayesian predictive probabilities for interim monitoring of clinical trials

    PubMed Central

    Connor, Jason T.; Ayers, Gregory D; Alvarez, JoAnn

    2014-01-01

    Background Bayesian predictive probabilities can be used for interim monitoring of clinical trials to estimate the probability of observing a statistically significant treatment effect if the trial were to continue to its predefined maximum sample size. Purpose We explore settings in which Bayesian predictive probabilities are advantageous for interim monitoring compared to Bayesian posterior probabilities, p-values, conditional power, or group sequential methods. Results For interim analyses that address prediction hypotheses, such as futility monitoring and efficacy monitoring with lagged outcomes, only predictive probabilities properly account for the amount of data remaining to be observed in a clinical trial and have the flexibility to incorporate additional information via auxiliary variables. Limitations Computational burdens limit the feasibility of predictive probabilities in many clinical trial settings. The specification of prior distributions brings additional challenges for regulatory approval. Conclusions The use of Bayesian predictive probabilities enables the choice of logical interim stopping rules that closely align with the clinical decision making process. PMID:24872363
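
    For a single-arm binary endpoint, the predictive probability of eventual trial success has a closed Beta-Binomial form; a minimal sketch with illustrative interim data, maximum sample size, and success rule:

```python
from scipy.stats import betabinom, beta

# Interim data: x successes in n patients; the trial continues to n_max.
x, n, n_max = 12, 20, 50
a0, b0 = 1.0, 1.0                      # Beta prior
p0, post_thresh = 0.5, 0.95            # success: P(p > p0 | final data) > 0.95

rem = n_max - n
pred = 0.0
for y in range(rem + 1):               # marginalize over future successes y
    prob_y = betabinom.pmf(y, rem, a0 + x, b0 + n - x)    # posterior predictive
    post = beta.sf(p0, a0 + x + y, b0 + n_max - x - y)    # final P(p > p0 | data)
    pred += prob_y * (post > post_thresh)
print("predictive probability of success:", pred)
```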

  3. RadVel: General toolkit for modeling Radial Velocities

    NASA Astrophysics Data System (ADS)

    Fulton, Benjamin J.; Petigura, Erik A.; Blunt, Sarah; Sinukoff, Evan

    2018-01-01

    RadVel models Keplerian orbits in radial velocity (RV) time series. The code is written in Python with a fast Kepler's equation solver written in C. It provides a framework for fitting RVs using maximum a posteriori optimization and computing robust confidence intervals by sampling the posterior probability density via Markov Chain Monte Carlo (MCMC). RadVel can perform Bayesian model comparison and produces publication quality plots and LaTeX tables.
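
    The performance-critical piece mentioned, a Kepler's-equation solver, fits in a few lines. A pure-NumPy Newton-iteration sketch (RadVel's own solver is written in C, and this is not its code):

```python
import numpy as np

def solve_kepler(M, e, tol=1e-12, max_iter=50):
    """Solve Kepler's equation M = E - e*sin(E) for the eccentric anomaly E
    by Newton's method (vectorized over mean anomaly M)."""
    M = np.asarray(M, dtype=float)
    E = M + e * np.sin(M)                      # standard starting guess
    for _ in range(max_iter):
        dE = (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
        E -= dE
        if np.all(np.abs(dE) < tol):
            break
    return E

E = solve_kepler(np.linspace(0, 2 * np.pi, 5), e=0.3)
print(E - 0.3 * np.sin(E))                     # recovers the input mean anomalies
```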

  4. A Bayesian Framework of Uncertainties Integration in 3D Geological Model

    NASA Astrophysics Data System (ADS)

    Liang, D.; Liu, X.

    2017-12-01

    3D geological models can describe complicated geological phenomena in an intuitive way, but their application may be limited by uncertain factors. Great progress has been made over the years, yet many studies decompose the uncertainties of a geological model and analyze them item by item from each source, ignoring the comprehensive impact of multi-source uncertainties. To evaluate the synthetical uncertainty, we choose probability distributions to quantify uncertainty and propose a Bayesian framework of uncertainty integration. With this framework, we integrate data errors, spatial randomness, and cognitive information into a posterior distribution to evaluate the synthetical uncertainty of a geological model. Uncertainties propagate and accumulate in the modeling process, and the gradual integration of multi-source uncertainty is a kind of simulation of this uncertainty propagation. Bayesian inference accomplishes the uncertainty updating in the modeling process. The maximum entropy principle works well for estimating the prior probability distribution, ensuring that the prior distribution is subject to the constraints supplied by the given information with minimum prejudice. In the end, we obtain a posterior distribution that evaluates the synthetical uncertainty of the geological model. This posterior distribution represents the synthetical impact of all the uncertain factors on the spatial structure of the geological model. The framework provides a solution for evaluating the synthetical impact of multi-source uncertainties on a geological model and an approach to studying the uncertainty propagation mechanism in geological modeling.

  5. Bayesian structural inference for hidden processes.

    PubMed

    Strelioff, Christopher C; Crutchfield, James P

    2014-04-01

    We introduce a Bayesian approach to discovering patterns in structurally complex processes. The proposed method of Bayesian structural inference (BSI) relies on a set of candidate unifilar hidden Markov model (uHMM) topologies for inference of process structure from a data series. We employ a recently developed exact enumeration of topological ε-machines. (A sequel then removes the topological restriction.) This subset of the uHMM topologies has the added benefit that inferred models are guaranteed to be ε-machines, irrespective of estimated transition probabilities. Properties of ε-machines and uHMMs allow for the derivation of analytic expressions for estimating transition probabilities, inferring start states, and comparing the posterior probability of candidate model topologies, despite process internal structure being only indirectly present in data. We demonstrate BSI's effectiveness in estimating a process's randomness, as reflected by the Shannon entropy rate, and its structure, as quantified by the statistical complexity. We also compare using the posterior distribution over candidate models and the single, maximum a posteriori model for point estimation and show that the former more accurately reflects uncertainty in estimated values. We apply BSI to in-class examples of finite- and infinite-order Markov processes, as well as to an out-of-class, infinite-state hidden process.
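
    Because a uHMM's state path is fixed by the start state and the observed symbols, the evidence for a candidate topology reduces to closed-form Dirichlet-multinomial counts. A toy sketch with an illustrative two-state topology and Dirichlet(1/2) emission priors:

```python
from collections import defaultdict
from math import lgamma

# Unifilar topology: topology[s][x] is the next state when s emits symbol x.
topology = {"A": {"0": "A", "1": "B"}, "B": {"1": "A", "0": "B"}}
data = "0110100110"
alpha = 0.5                                    # Dirichlet prior per emission

def log_evidence(topology, data, start):
    counts = defaultdict(lambda: defaultdict(int))
    s = start
    for x in data:
        if x not in topology[s]:
            return float("-inf")               # data impossible under topology
        counts[s][x] += 1
        s = topology[s][x]
    lp = 0.0
    for s, c in counts.items():                # Dirichlet-multinomial per state
        k, n = len(topology[s]), sum(c.values())
        lp += lgamma(k * alpha) - lgamma(k * alpha + n)
        lp += sum(lgamma(alpha + c[x]) - lgamma(alpha) for x in c)
    return lp

print(log_evidence(topology, data, start="A"))  # compare across topologies/starts
```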

  6. Bayesian structural inference for hidden processes

    NASA Astrophysics Data System (ADS)

    Strelioff, Christopher C.; Crutchfield, James P.

    2014-04-01

    We introduce a Bayesian approach to discovering patterns in structurally complex processes. The proposed method of Bayesian structural inference (BSI) relies on a set of candidate unifilar hidden Markov model (uHMM) topologies for inference of process structure from a data series. We employ a recently developed exact enumeration of topological ɛ-machines. (A sequel then removes the topological restriction.) This subset of the uHMM topologies has the added benefit that inferred models are guaranteed to be ɛ-machines, irrespective of estimated transition probabilities. Properties of ɛ-machines and uHMMs allow for the derivation of analytic expressions for estimating transition probabilities, inferring start states, and comparing the posterior probability of candidate model topologies, despite process internal structure being only indirectly present in data. We demonstrate BSI's effectiveness in estimating a process's randomness, as reflected by the Shannon entropy rate, and its structure, as quantified by the statistical complexity. We also compare using the posterior distribution over candidate models and the single, maximum a posteriori model for point estimation and show that the former more accurately reflects uncertainty in estimated values. We apply BSI to in-class examples of finite- and infinite-order Markov processes, as well as to an out-of-class, infinite-state hidden process.

  7. Multiclass Posterior Probability Twin SVM for Motor Imagery EEG Classification.

    PubMed

    She, Qingshan; Ma, Yuliang; Meng, Ming; Luo, Zhizeng

    2015-01-01

    Motor imagery electroencephalography is widely used in brain-computer interface systems. Due to the inherent characteristics of electroencephalography signals, accurate and real-time multiclass classification is always challenging. In order to solve this problem, a multiclass posterior probability solution for the twin SVM is proposed in this paper, based on ranking continuous outputs and pairwise coupling. First, a two-class posterior probability model is constructed to approximate the posterior probability using the ranking continuous output technique and Platt's estimation method. Second, a solution for multiclass probabilistic outputs of the twin SVM is provided by combining every pair of class probabilities according to the method of pairwise coupling. Finally, the proposed method is compared with multiclass SVM and twin SVM via voting, and with multiclass posterior probability SVM using different coupling approaches. The efficacy of the proposed method, in terms of classification accuracy and time complexity, has been demonstrated on both the UCI benchmark datasets and real-world EEG data from BCI Competition IV Dataset 2a.
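
    Pairwise coupling of two-class probability models into multiclass posteriors is the same construction scikit-learn's SVC uses (Platt scaling plus coupling), which makes a convenient stand-in for the twin-SVM variant proposed; the data below are synthetic.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=22, n_classes=4,
                           n_informative=8, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# probability=True fits Platt's sigmoid per class pair and couples the
# pairwise estimates into multiclass posterior probabilities.
clf = SVC(kernel="rbf", probability=True, random_state=0).fit(Xtr, ytr)
posteriors = clf.predict_proba(Xte)            # rows sum to 1 over the 4 classes
print(posteriors[:3].round(3), clf.score(Xte, yte))
```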

  8. PARTS: Probabilistic Alignment for RNA joinT Secondary structure prediction

    PubMed Central

    Harmanci, Arif Ozgun; Sharma, Gaurav; Mathews, David H.

    2008-01-01

    A novel method is presented for joint prediction of alignment and common secondary structures of two RNA sequences. The joint consideration of common secondary structures and alignment is accomplished by structural alignment over a search space defined by the newly introduced motif called matched helical regions. The matched helical region formulation generalizes previously employed constraints for structural alignment and thereby better accommodates the structural variability within RNA families. A probabilistic model based on pseudo free energies obtained from precomputed base pairing and alignment probabilities is utilized for scoring structural alignments. Maximum a posteriori (MAP) common secondary structures, sequence alignment and joint posterior probabilities of base pairing are obtained from the model via a dynamic programming algorithm called PARTS. The advantage of the more general structural alignment of PARTS is seen in secondary structure predictions for the RNase P family. For this family, the PARTS MAP predictions of secondary structures and alignment perform significantly better than prior methods that utilize a more restrictive structural alignment model. For the tRNA and 5S rRNA families, the richer structural alignment model of PARTS does not offer a benefit and the method therefore performs comparably with existing alternatives. For all RNA families studied, the posterior probability estimates obtained from PARTS offer an improvement over posterior probability estimates from a single sequence prediction. When considering the base pairings predicted over a threshold value of confidence, the combination of sensitivity and positive predictive value is superior for PARTS than for the single sequence prediction. PARTS source code is available for download under the GNU public license at http://rna.urmc.rochester.edu. PMID:18304945

  9. Learning maximum entropy models from finite-size data sets: A fast data-driven algorithm allows sampling from the posterior distribution.

    PubMed

    Ferrari, Ulisse

    2016-08-01

    Maximum entropy models provide the least constrained probability distributions that reproduce statistical properties of experimental datasets. In this work we characterize the learning dynamics that maximizes the log-likelihood in the case of large but finite datasets. We first show how the steepest descent dynamics is not optimal as it is slowed down by the inhomogeneous curvature of the model parameters' space. We then provide a way for rectifying this space which relies only on dataset properties and does not require large computational efforts. We conclude by solving the long-time limit of the parameters' dynamics including the randomness generated by the systematic use of Gibbs sampling. In this stochastic framework, rather than converging to a fixed point, the dynamics reaches a stationary distribution, which for the rectified dynamics reproduces the posterior distribution of the parameters. We sum up all these insights in a "rectified" data-driven algorithm that is fast and by sampling from the parameters' posterior avoids both under- and overfitting along all the directions of the parameters' space. Through the learning of pairwise Ising models from the recording of a large population of retina neurons, we show how our algorithm outperforms the steepest descent method.

  10. Efficiency of nuclear and mitochondrial markers recovering and supporting known amniote groups.

    PubMed

    Lambret-Frotté, Julia; Perini, Fernando Araújo; de Moraes Russo, Claudia Augusta

    2012-01-01

    We have analysed the efficiency of all mitochondrial protein-coding genes and six nuclear markers (Adora3, Adrb2, Bdnf, Irbp, Rag2 and Vwf) in reconstructing and statistically supporting known amniote groups (murines, rodents, primates, eutherians, metatherians, therians). The efficiencies of maximum likelihood, Bayesian inference, maximum parsimony, neighbor-joining, and UPGMA were also evaluated by assessing the number of correctly and incorrectly recovered groupings. In addition, we compared support values using the conservative bootstrap test and Bayesian posterior probabilities. First, no correlation was observed between gene size and marker efficiency in recovering or supporting correct nodes. As expected, tree-building methods performed similarly, even UPGMA, which in some cases outperformed other, more extensively used methods. Bayesian posterior probabilities tend to show much higher support values than the conservative bootstrap test, for correct and incorrect nodes alike. Our results also suggest that nuclear markers do not necessarily show a better performance than mitochondrial genes. The so-called dependency among mitochondrial markers was not observed when comparing genome performances. Finally, the amniote groups with the lowest recovery rates were therians and rodents, despite the morphological support for their monophyletic status. We suggest that, regardless of the tree-building method, a few carefully selected genes are able to unfold a detailed and robust scenario of phylogenetic hypotheses, particularly if taxon sampling is increased.

  11. RadVel: The Radial Velocity Modeling Toolkit

    NASA Astrophysics Data System (ADS)

    Fulton, Benjamin J.; Petigura, Erik A.; Blunt, Sarah; Sinukoff, Evan

    2018-04-01

    RadVel is an open-source Python package for modeling Keplerian orbits in radial velocity (RV) timeseries. RadVel provides a convenient framework to fit RVs using maximum a posteriori optimization and to compute robust confidence intervals by sampling the posterior probability density via Markov Chain Monte Carlo (MCMC). RadVel allows users to float or fix parameters, impose priors, and perform Bayesian model comparison. We have implemented real-time MCMC convergence tests to ensure adequate sampling of the posterior. RadVel can output a number of publication-quality plots and tables. Users may interface with RadVel through a convenient command-line interface or directly from Python. The code is object-oriented and thus naturally extensible. We encourage contributions from the community. Documentation is available at http://radvel.readthedocs.io.

  12. Efficient marginalization to compute protein posterior probabilities from shotgun mass spectrometry data

    PubMed Central

    Serang, Oliver; MacCoss, Michael J.; Noble, William Stafford

    2010-01-01

    The problem of identifying proteins from a shotgun proteomics experiment has not been definitively solved. Identifying the proteins in a sample requires ranking them, ideally with interpretable scores. In particular, “degenerate” peptides, which map to multiple proteins, have made such a ranking difficult to compute. The problem of computing posterior probabilities for the proteins, which can be interpreted as confidence in a protein’s presence, has been especially daunting. Previous approaches have either ignored the peptide degeneracy problem completely, addressed it by computing a heuristic set of proteins or heuristic posterior probabilities, or by estimating the posterior probabilities with sampling methods. We present a probabilistic model for protein identification in tandem mass spectrometry that recognizes peptide degeneracy. We then introduce graph-transforming algorithms that facilitate efficient computation of protein probabilities, even for large data sets. We evaluate our identification procedure on five different well-characterized data sets and demonstrate our ability to efficiently compute high-quality protein posteriors. PMID:20712337

  13. Statistics of Sxy estimates

    NASA Technical Reports Server (NTRS)

    Freilich, M. H.; Pawka, S. S.

    1987-01-01

    The statistics of Sxy estimates derived from orthogonal-component measurements are examined. Based on results of Goodman (1957), the probability density function (pdf) for Sxy(f) estimates is derived, and a closed-form solution for arbitrary moments of the distribution is obtained. Characteristic functions are used to derive the exact pdf of Sxy(tot). In practice, a simple Gaussian approximation is found to be highly accurate even for relatively few degrees of freedom. Implications for experiment design are discussed, and a maximum-likelihood estimator for a posteriori estimation is outlined.

  14. Robust Bayesian Experimental Design for Conceptual Model Discrimination

    NASA Astrophysics Data System (ADS)

    Pham, H. V.; Tsai, F. T. C.

    2015-12-01

    A robust Bayesian optimal experimental design under uncertainty is presented to provide firm information for model discrimination using the least number of pumping wells and observation wells. Firm information is the maximum information about a system that can be guaranteed from an experimental design. The design is based on the Box-Hill expected entropy decrease (EED) between before and after the experiment and on the Bayesian model averaging (BMA) framework. A max-min program is introduced to choose the robust design that maximizes the minimal Box-Hill EED, subject to the constraint that the highest expected posterior model probability satisfies a desired probability threshold. The EED is calculated by Gauss-Hermite quadrature. The BMA method is used to predict future observations and to quantify the future observation uncertainty arising from conceptual and parametric uncertainties when calculating the EED. A Monte Carlo approach is adopted to quantify the uncertainty in the posterior model probabilities. The optimal experimental design is tested on a synthetic 5-layer anisotropic confined aquifer. Nine conceptual groundwater models are constructed owing to uncertain geological architecture and boundary conditions. High-performance computing is used to enumerate all possible design solutions in order to identify the most plausible groundwater model. Results highlight the impacts of heteroscedasticity in future observation data, as well as of the uncertainty sources, on potential pumping and observation locations.

  15. A Bayesian pick-the-winner design in a randomized phase II clinical trial.

    PubMed

    Chen, Dung-Tsa; Huang, Po-Yu; Lin, Hui-Yi; Chiappori, Alberto A; Gabrilovich, Dmitry I; Haura, Eric B; Antonia, Scott J; Gray, Jhanelle E

    2017-10-24

    Many phase II clinical trials evaluate unique experimental drugs/combinations through a multi-arm design to expedite the screening process (early termination of ineffective drugs) and to identify the most effective drug (pick the winner) to warrant a phase III trial. Various statistical approaches have been developed for the pick-the-winner design but have been criticized for a lack of objective comparison among the drug agents. We developed a Bayesian pick-the-winner design by integrating a Bayesian posterior probability with the Simon two-stage design in a randomized two-arm clinical trial. The Bayesian posterior probability, as the rule to pick the winner, is defined as the probability that the response rate in one arm is higher than in the other arm. The posterior probability aims to determine the winner when both arms pass the second stage of the Simon two-stage design. When both arms are competitive (i.e., both passing the second stage), the Bayesian posterior probability performs better in correctly identifying the winner than the Fisher exact test in the simulation study. In comparison to a standard two-arm randomized design, the Bayesian pick-the-winner design has higher power to determine a clear winner. In application to two studies, the approach is able to perform a statistical comparison of the two treatment arms and provides a winner probability (the Bayesian posterior probability) to statistically justify the winning arm. We developed an integrated design that utilizes Bayesian posterior probability, the Simon two-stage design, and randomization in a unique setting. It gives objective comparisons between the arms to determine the winner.
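
    The winner rule, the posterior probability that one arm's response rate exceeds the other's, is straightforward with Beta posteriors; a Monte Carlo sketch with illustrative stage-2 counts and uniform priors:

```python
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(7)
x1, n1 = 14, 36            # responses / patients, arm 1 (illustrative)
x2, n2 = 9, 36             # arm 2

# Posterior of each response rate under a Beta(1, 1) prior.
p1 = beta.rvs(1 + x1, 1 + n1 - x1, size=200_000, random_state=rng)
p2 = beta.rvs(1 + x2, 1 + n2 - x2, size=200_000, random_state=rng)
print("P(p1 > p2 | data) =", (p1 > p2).mean())   # Bayesian winner probability
```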

  16. Maximum entropy approach to statistical inference for an ocean acoustic waveguide.

    PubMed

    Knobles, D P; Sagers, J D; Koch, R A

    2012-02-01

    A conditional probability distribution suitable for estimating the statistical properties of ocean seabed parameter values inferred from acoustic measurements is derived from a maximum entropy principle. The specification of the expectation value for an error function constrains the maximization of an entropy functional. This constraint determines the sensitivity factor (β) to the error function of the resulting probability distribution, which is a canonical form that provides a conservative estimate of the uncertainty of the parameter values. From the conditional distribution, marginal distributions for individual parameters can be determined from integration over the other parameters. The approach is an alternative to obtaining the posterior probability distribution without an intermediary determination of the likelihood function followed by an application of Bayes' rule. In this paper the expectation value that specifies the constraint is determined from the values of the error function for the model solutions obtained from a sparse number of data samples. The method is applied to ocean acoustic measurements taken on the New Jersey continental shelf. The marginal probability distribution for the values of the sound speed ratio at the surface of the seabed and the source levels of a towed source are examined for different geoacoustic model representations.
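
    Given error-function values for an ensemble of candidate model solutions, the resulting canonical distribution is p_i ∝ exp(-βE_i), with β fixed by the expectation-value constraint; a minimal sketch with illustrative errors:

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(8)
E = rng.uniform(0.0, 2.0, size=500)     # error-function values of model solutions
E_bar = 0.6                             # expectation constraint from data samples

def mean_error(beta):
    w = np.exp(-beta * (E - E.min()))   # shift for numerical stability
    return (E * w).sum() / w.sum()

beta_star = brentq(lambda b: mean_error(b) - E_bar, 1e-6, 100.0)
p = np.exp(-beta_star * (E - E.min()))
p /= p.sum()                            # canonical (maximum entropy) distribution
print(beta_star, (p * E).sum())         # reproduces E_bar
```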

  17. HELP: XID+, the probabilistic de-blender for Herschel SPIRE maps

    NASA Astrophysics Data System (ADS)

    Hurley, P. D.; Oliver, S.; Betancourt, M.; Clarke, C.; Cowley, W. I.; Duivenvoorden, S.; Farrah, D.; Griffin, M.; Lacey, C.; Le Floc'h, E.; Papadopoulos, A.; Sargent, M.; Scudder, J. M.; Vaccari, M.; Valtchanov, I.; Wang, L.

    2017-01-01

    We have developed a new prior-based source extraction tool, XID+, to carry out photometry in the Herschel SPIRE (Spectral and Photometric Imaging Receiver) maps at the positions of known sources. XID+ is developed using a probabilistic Bayesian framework that provides a natural framework in which to include prior information, and uses the Bayesian inference tool Stan to obtain the full posterior probability distribution on flux estimates. In this paper, we discuss the details of XID+ and demonstrate its basic capabilities and performance by running it on simulated SPIRE maps resembling the COSMOS field, and comparing to the current prior-based source extraction tool DESPHOT. Not only do we show that XID+ performs better on metrics such as flux accuracy and flux uncertainty accuracy, but we also illustrate how obtaining the posterior probability distribution can help overcome some of the issues inherent in maximum-likelihood-based source extraction routines. We run XID+ on the COSMOS SPIRE maps from the Herschel Multi-Tiered Extragalactic Survey, using a 24-μm catalogue as a positional prior and a uniform flux prior ranging from 0.01 to 1000 mJy. We show the marginalized SPIRE colour-colour plot and the marginalized contribution to the cosmic infrared background at the SPIRE wavelengths. XID+ is a core tool arising from the Herschel Extragalactic Legacy Project (HELP), and we discuss how additional work within HELP providing prior information on fluxes can and will be utilized. The software is available at https://github.com/H-E-L-P/XID_plus. We also provide the data product for COSMOS. We believe this is the first time that the full posterior probability of galaxy photometry has been provided as a data product.

  18. A computer program for estimation from incomplete multinomial data

    NASA Technical Reports Server (NTRS)

    Credeur, K. R.

    1978-01-01

    Coding is given for maximum likelihood and Bayesian estimation of the vector p of multinomial cell probabilities from incomplete data. Also included is coding to calculate and approximate elements of the posterior mean and covariance matrices. The program is written in the FORTRAN IV language for the Control Data CYBER 170 series digital computer system with Network Operating System (NOS) 1.1. The program requires approximately 44000 octal locations of core storage. A typical case requires from 72 to 92 seconds on the CYBER 175, depending on the value of the prior parameter.
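
    The estimation step is easy to sketch in modern Python (the original FORTRAN is not reproduced here). Below, counts observed only at the level of cell subsets are handled with EM: the E-step splits each partial count across its cells in proportion to the current probabilities, and the M-step renormalizes; under a Dirichlet prior, the same loop with added pseudo-counts gives a posterior-mode estimate. The counts are hypothetical.

        # Sketch: EM for multinomial cell probabilities with incomplete counts.
        import numpy as np

        complete = np.array([30.0, 20.0, 10.0])        # fully classified counts for cells 0,1,2
        partial = [(frozenset({0, 1}), 15.0),          # 15 observations known only to lie in {0,1}
                   (frozenset({1, 2}), 25.0)]          # 25 known only to lie in {1,2}

        p = np.ones(3) / 3.0
        for _ in range(200):
            n = complete.copy()
            for cells, count in partial:               # E-step: split each partial count
                idx = sorted(cells)                    # across its cells in proportion to p
                n[idx] += count * p[idx] / p[idx].sum()
            p_new = n / n.sum()                        # M-step: renormalize expected counts
            done = np.abs(p_new - p).max() < 1e-10
            p = p_new
            if done:
                break
        print(p)                                       # maximum likelihood cell probabilities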

  19. Geotechnical parameter spatial distribution stochastic analysis based on multi-precision information assimilation

    NASA Astrophysics Data System (ADS)

    Wang, C.; Rubin, Y.

    2014-12-01

    The spatial distribution of the compression modulus Es, an important geotechnical parameter, contributes considerably to understanding the underlying geological processes and to adequately assessing the mechanical effects of Es on differential settlement of large continuous structure foundations. Such analyses should be derived using an assimilation approach that combines in-situ static cone penetration tests (CPT) with borehole experiments. To achieve this, the Es distribution of a silty clay stratum in region A of the China Expo Center (Shanghai) is studied using the Bayesian-maximum entropy method. This method rigorously and efficiently integrates geotechnical investigations of different precisions and their sources of uncertainty. Single CPT samplings are modeled as probability density curves using maximum entropy theory. Spatial prior multivariate probability density functions (PDFs) and likelihood PDFs at the CPT positions are built from the borehole experiments and the potential value of the prediction point; then, after numerical integration over the CPT probability density curves, the posterior probability density curve of the prediction point is calculated by the Bayesian reverse interpolation framework. The results of the Gaussian Sequential Stochastic Simulation and Bayesian methods were compared, and differences between normally distributed single CPT samplings and simulated probability density curves based on maximum entropy theory were also discussed. It is shown that the study of Es spatial distributions can be improved by properly incorporating CPT sampling variation into the interpolation process, and that more informative estimates are generated by considering CPT uncertainty at the estimation points. The calculations illustrate the significance of stochastic Es characterization of a stratum and identify limitations associated with inadequate geostatistical interpolation techniques. These characterization results provide a multi-precision information assimilation method for other geotechnical parameters.

  20. Nonlinear Demodulation and Channel Coding in EBPSK Scheme

    PubMed Central

    Chen, Xianqing; Wu, Lenan

    2012-01-01

    The extended binary phase shift keying (EBPSK) is an efficient modulation technique, and a special impacting filter (SIF) is used in its demodulator to improve the bit error rate (BER) performance. However, the conventional threshold decision cannot achieve the optimum performance, and the SIF makes it more difficult to obtain the posterior probability for LDPC decoding. In this paper, we concentrate not only on reducing the BER of demodulation, but also on providing accurate posterior probability estimates (PPEs). A new approach to nonlinear demodulation based on the support vector machine (SVM) classifier is introduced. The SVM method, which selects only a few sampling points from the filter output, is used to obtain PPEs. The simulation results show that accurate posterior probabilities can be obtained with this method and that the BER performance can be improved significantly by applying LDPC codes. Moreover, we analyze the effect of obtaining the posterior probability with different methods and different sampling rates, and show that the SVM method has greater advantages under poor channel conditions and is less sensitive to the sampling rate than the other methods. Thus, SVM is an effective method for EBPSK demodulation and for obtaining posterior probabilities for LDPC decoding. PMID:23213281

  1. Nonlinear demodulation and channel coding in EBPSK scheme.

    PubMed

    Chen, Xianqing; Wu, Lenan

    2012-01-01

    The extended binary phase shift keying (EBPSK) is an efficient modulation technique, and a special impacting filter (SIF) is used in its demodulator to improve the bit error rate (BER) performance. However, the conventional threshold decision cannot achieve the optimum performance, and the SIF makes it more difficult to obtain the posterior probability for LDPC decoding. In this paper, we concentrate not only on reducing the BER of demodulation, but also on providing accurate posterior probability estimates (PPEs). A new approach to nonlinear demodulation based on the support vector machine (SVM) classifier is introduced. The SVM method, which selects only a few sampling points from the filter output, is used to obtain PPEs. The simulation results show that accurate posterior probabilities can be obtained with this method and that the BER performance can be improved significantly by applying LDPC codes. Moreover, we analyze the effect of obtaining the posterior probability with different methods and different sampling rates, and show that the SVM method has greater advantages under poor channel conditions and is less sensitive to the sampling rate than the other methods. Thus, SVM is an effective method for EBPSK demodulation and for obtaining posterior probabilities for LDPC decoding.
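
    Platt-style calibration of SVM outputs into posterior probability estimates, the core of the PPE idea, is available off the shelf; the Python sketch below uses scikit-learn's SVC with probability=True on synthetic two-class data rather than actual EBPSK filter outputs.

        # Generic illustration (not the paper's demodulator): SVM posterior probability
        # estimates via Platt scaling, as provided by scikit-learn.
        import numpy as np
        from sklearn.svm import SVC

        rng = np.random.default_rng(1)
        # stand-in for sampled filter outputs: two noisy classes in 2-D
        X0 = rng.normal(-1.0, 1.0, size=(200, 2))
        X1 = rng.normal(+1.0, 1.0, size=(200, 2))
        X = np.vstack([X0, X1])
        y = np.array([0] * 200 + [1] * 200)

        clf = SVC(kernel="rbf", probability=True).fit(X, y)   # Platt scaling fit internally
        post = clf.predict_proba(X[:5])                       # posterior probability estimates
        print(post)                                           # rows sum to 1; could feed an LDPC decoder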

  2. Gaussianization for fast and accurate inference from cosmological data

    NASA Astrophysics Data System (ADS)

    Schuhmann, Robert L.; Joachimi, Benjamin; Peiris, Hiranya V.

    2016-06-01

    We present a method to transform multivariate unimodal non-Gaussian posterior probability densities into approximately Gaussian ones via non-linear mappings, such as Box-Cox transformations and generalizations thereof. This permits an analytical reconstruction of the posterior from a point sample, like a Markov chain, and simplifies the subsequent joint analysis with other experiments. In this way, a multivariate posterior density can be reported efficiently by compressing the information contained in Markov Chain Monte Carlo samples. Further, the model evidence integral (i.e. the marginal likelihood) can be computed analytically. This method is analogous to the search for normal parameters in the cosmic microwave background, but is more general. The search for the optimally Gaussianizing transformation is performed computationally through a maximum-likelihood formalism; its quality can be judged by how well the credible regions of the posterior are reproduced. We demonstrate that our method outperforms kernel density estimates in this objective. Further, we select marginal posterior samples from Planck data with several distinct strongly non-Gaussian features, and verify the reproduction of the marginal contours. To demonstrate evidence computation, we Gaussianize the joint distribution of data from weak lensing and baryon acoustic oscillations, for different cosmological models, and find a preference for flat Λ cold dark matter (ΛCDM). Comparing to values computed with the Savage-Dickey density ratio, and Population Monte Carlo, we find good agreement of our method within the spread of the other two.
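
    The central ingredient, a maximum-likelihood Box-Cox transformation, can be demonstrated in a few lines of Python; scipy estimates the transformation parameter λ by maximizing the likelihood. The skewed sample below stands in for a one-dimensional posterior marginal (note that Box-Cox requires positive data; the generalizations mentioned in the abstract relax this).

        # Minimal illustration of the core step: Gaussianizing a skewed sample.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        sample = rng.lognormal(mean=0.0, sigma=0.7, size=5000)   # stand-in for an MCMC marginal

        transformed, lam = stats.boxcox(sample)                  # MLE of the Box-Cox parameter
        print("lambda:", lam)
        print("skewness before/after:", stats.skew(sample), stats.skew(transformed))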

  3. Using Latent Class Analysis to Model Temperament Types.

    PubMed

    Loken, Eric

    2004-10-01

    Mixture models are appropriate for data that arise from a set of qualitatively different subpopulations. In this study, latent class analysis was applied to observational data from a laboratory assessment of infant temperament at four months of age. The EM algorithm was used to fit the models, and the Bayesian method of posterior predictive checks was used for model selection. Results show at least three types of infant temperament, with patterns consistent with those identified by previous researchers who classified the infants using a theoretically based system. Multiple imputation of group memberships is proposed as an alternative to assigning subjects to the latent class with maximum posterior probability in order to reflect variance due to uncertainty in the parameter estimation. Latent class membership at four months of age predicted longitudinal outcomes at four years of age. The example illustrates issues relevant to all mixture models, including estimation, multi-modality, model selection, and comparisons based on the latent group indicators.
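
    The proposed multiple-imputation alternative is simple to express in code: draw each subject's class label from its posterior class probabilities several times, rather than taking the maximum-posterior class once. The probabilities below are hypothetical.

        # Sketch: multiple imputation of latent class memberships.
        import numpy as np

        rng = np.random.default_rng(3)
        post = np.array([[0.55, 0.30, 0.15],     # hypothetical posterior class probabilities
                         [0.10, 0.80, 0.10],     # for four infants over three classes
                         [0.40, 0.35, 0.25],
                         [0.05, 0.15, 0.80]])

        modal = post.argmax(axis=1)              # hard assignment ignores classification uncertainty
        M = 20                                   # number of imputed data sets
        imputations = np.array([[rng.choice(3, p=row) for row in post] for _ in range(M)])
        print(modal, imputations[:3], sep="\n")  # downstream analyses run once per imputation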

  4. Bayesian inference based on stationary Fokker-Planck sampling.

    PubMed

    Berrones, Arturo

    2010-06-01

    A novel formalism for Bayesian learning in the context of complex inference models is proposed. The method is based on the use of the stationary Fokker-Planck (SFP) approach to sample from the posterior density. Stationary Fokker-Planck sampling generalizes the Gibbs sampler algorithm to arbitrary and unknown conditional densities. Through the SFP procedure, approximate analytical expressions for the conditionals and marginals of the posterior can be constructed. At each stage of SFP, the approximate conditionals are used to define a Gibbs sampling process, which is convergent to the full joint posterior. Using the analytical marginals, efficient learning methods in the context of artificial neural networks are outlined. Offline and incremental Bayesian inference and maximum likelihood estimation from the posterior are performed in classification and regression examples. A comparison of SFP with other Monte Carlo strategies in the general problem of sampling from arbitrary densities is also presented. It is shown that SFP is able to jump across large low-probability regions without the need for careful tuning of any step-size parameter. In fact, the SFP method requires only a small set of meaningful parameters that can be selected following clear, problem-independent guidelines. The computational cost of SFP, measured in terms of loss function evaluations, grows linearly with the given model's dimension.

  5. Stochastic static fault slip inversion from geodetic data with non-negativity and bound constraints

    NASA Astrophysics Data System (ADS)

    Nocquet, J.-M.

    2018-07-01

    Although surface displacements observed by geodesy are linear combinations of slip at faults in an elastic medium, determining the spatial distribution of fault slip remains an ill-posed inverse problem. A widely used approach to circumvent the ill-posedness of the inversion is to add regularization constraints in terms of smoothing and/or damping so that the linear system becomes invertible. However, the choice of regularization parameters is often arbitrary and sometimes leads to significantly different results. Furthermore, the resolution analysis is usually empirical and cannot be made independently of the regularization. The stochastic approach to inverse problems provides a rigorous framework where the a priori information about the searched parameters is combined with the observations in order to derive posterior probabilities of the unknown parameters. Here, I investigate an approach where the prior probability density function (pdf) is a multivariate Gaussian function, with single truncation to impose positivity of slip, or double truncation to impose positivity and upper bounds on slip for interseismic modelling. I show that the joint posterior pdf is similar to the linear untruncated Gaussian case and can be expressed as a truncated multivariate normal (TMVN) distribution. The TMVN form can then be used to obtain semi-analytical formulae for the single, 2-D or n-D marginal pdfs. The semi-analytical formula involves the product of a Gaussian by an integral term that can be evaluated using recent developments in TMVN probability calculations. Posterior mean and covariance can also be efficiently derived. I show that the maximum posterior (MAP) solution can be obtained using a non-negative least-squares algorithm for the single-truncated case or using the bounded-variable least-squares algorithm for the double-truncated case. I also show that the case of independent uniform priors can be approximated using TMVN. The numerical equivalence to Bayesian inversions using Markov chain Monte Carlo (MCMC) sampling is shown for a synthetic example and a real case of interseismic modelling in Central Peru. The TMVN method overcomes several limitations of the Bayesian approach using MCMC sampling. First, the need for computing power is largely reduced. Second, unlike the Bayesian MCMC-based approach, marginal pdfs, means, variances or covariances are obtained independently of one another. Third, the probability and cumulative density functions can be obtained with any density of points. Finally, determining the MAP is extremely fast.
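
    The MAP computation for the single-truncated case can be made concrete: with Gaussian data errors and a zero-mean Gaussian prior truncated at zero, the MAP slip solves a non-negative least-squares problem on an augmented system, for example with scipy.optimize.nnls. The Green's function matrix and noise level below are placeholders, not a real fault geometry.

        # Sketch: MAP under positivity as stacked non-negative least squares.
        import numpy as np
        from scipy.optimize import nnls

        rng = np.random.default_rng(4)
        G = rng.normal(size=(30, 8))             # stand-in Green's function matrix
        m_true = np.maximum(rng.normal(1.0, 1.0, 8), 0.0)
        d = G @ m_true + rng.normal(0.0, 0.1, 30)

        Wd = np.eye(30) / 0.1                    # Cd^(-1/2) for iid errors, sigma = 0.1
        Wm = np.eye(8) / 2.0                     # Cm^(-1/2), prior std 2.0 on each patch
        A = np.vstack([Wd @ G, Wm])              # augmented design matrix
        b = np.concatenate([Wd @ d, np.zeros(8)])
        m_map, _ = nnls(A, b)                    # MAP under the single-truncated Gaussian prior
        print(m_map)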

  6. Maximum Entropy Approach in Dynamic Contrast-Enhanced Magnetic Resonance Imaging.

    PubMed

    Farsani, Zahra Amini; Schmid, Volker J

    2017-01-01

    In the estimation of physiological kinetic parameters from Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) data, the determination of the arterial input function (AIF) plays a key role. This paper proposes a Bayesian method to estimate the physiological parameters of DCE-MRI along with the AIF in situations where no measurement of the AIF is available. In the proposed algorithm, the maximum entropy method (MEM) is combined with the maximum a posteriori (MAP) approach. To this end, MEM is used to specify a prior probability distribution of the unknown AIF. The ability of this method to estimate the AIF is validated using the Kullback-Leibler divergence. Subsequently, the kinetic parameters can be estimated with MAP. The proposed algorithm is evaluated with a data set from a breast cancer MRI study. The application shows that the AIF can reliably be determined from the DCE-MRI data using MEM, and the kinetic parameters can be estimated subsequently. The maximum entropy method is a powerful tool for reconstructing images from many types of data and is useful for generating a probability distribution based on given information. The proposed method offers an alternative way to assess the input function from the existing data; it allows a good fit of the data and therefore a better estimation of the kinetic parameters. In the end, this allows for a more reliable use of DCE-MRI. Schattauer GmbH.

  7. Item Selection and Ability Estimation Procedures for a Mixed-Format Adaptive Test

    ERIC Educational Resources Information Center

    Ho, Tsung-Han; Dodd, Barbara G.

    2012-01-01

    In this study we compared five item selection procedures using three ability estimation methods in the context of a mixed-format adaptive test based on the generalized partial credit model. The item selection procedures used were maximum posterior weighted information, maximum expected information, maximum posterior weighted Kullback-Leibler…

  8. Bayes factor and posterior probability: Complementary statistical evidence to p-value.

    PubMed

    Lin, Ruitao; Yin, Guosheng

    2015-09-01

    As a convention, a p-value is often computed in hypothesis testing and compared with the nominal level of 0.05 to determine whether to reject the null hypothesis. Although the smaller the p-value, the more significant the statistical test, it is difficult to perceive the p-value on a probability scale and to quantify it as the strength of the data against the null hypothesis. In contrast, the Bayesian posterior probability of the null hypothesis has an explicit interpretation of how strongly the data support the null. We make a comparison of the p-value and the posterior probability by considering a recent clinical trial. The results show that even when we reject the null hypothesis, there is still a substantial probability (around 20%) that the null is true. Not only should we examine whether the data would have rarely occurred under the null hypothesis, but we also need to know whether the data would be rare under the alternative. As a result, the p-value only provides one side of the information, for which the Bayes factor and posterior probability may offer complementary evidence. Copyright © 2015 Elsevier Inc. All rights reserved.
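
    The conversion involved is one line of arithmetic: a Bayes factor plus a prior probability of the null yields the posterior probability of the null. A minimal Python helper (with an illustrative Bayes factor, not the trial's actual value):

        # Converting a Bayes factor in favour of the alternative (BF10) and a prior
        # probability of the null into P(H0 | data), via Bayes' rule on the odds scale.
        def posterior_null(bf10, prior_null=0.5):
            prior_odds = prior_null / (1.0 - prior_null)   # odds of H0 against H1
            post_odds = prior_odds / bf10                  # update the odds by the Bayes factor
            return post_odds / (1.0 + post_odds)

        # e.g. a "significant" result with BF10 = 4 still leaves a sizeable null probability
        print(posterior_null(4.0))    # 0.2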

  9. Kullback-Leibler information function and the sequential selection of experiments to discriminate among several linear models

    NASA Technical Reports Server (NTRS)

    Sidik, S. M.

    1972-01-01

    The error variance of the process, the prior multivariate normal distributions of the model parameters, and the prior probabilities of the models being correct are assumed to be specified. A rule for termination of sampling is proposed. Upon termination, the model with the largest posterior probability is chosen as correct. If sampling is not terminated, posterior probabilities of the models and posterior distributions of the parameters are computed, and the next experiment is chosen to maximize the expected Kullback-Leibler information function. Monte Carlo simulation experiments were performed to investigate the large- and small-sample behavior of the sequential adaptive procedure.

  10. Bayesian soft X-ray tomography using non-stationary Gaussian Processes

    NASA Astrophysics Data System (ADS)

    Li, Dong; Svensson, J.; Thomsen, H.; Medina, F.; Werner, A.; Wolf, R.

    2013-08-01

    In this study, a Bayesian-based non-stationary Gaussian Process (GP) method for the inference of soft X-ray emissivity distribution along with its associated uncertainties has been developed. For the investigation of equilibrium conditions and fast magnetohydrodynamic behaviors in nuclear fusion plasmas, it is important to infer, especially in the plasma center, spatially resolved soft X-ray profiles from a limited number of noisy line-integral measurements. For this ill-posed inversion problem, Bayesian probability theory can provide a posterior probability distribution over all possible solutions under given model assumptions. Specifically, the use of a non-stationary GP to model the emission allows the model to adapt to the varying length scales of the underlying diffusion process. In contrast to conventional methods, the prior regularization is expressed in probabilistic form, which enhances the capability for uncertainty analysis; scientists concerned with the reliability of their results will therefore benefit from it. Under the assumption of normally distributed noise, the posterior distribution evaluated at a discrete number of points becomes a multivariate normal distribution whose mean and covariance are analytically available, making inversions and the calculation of uncertainty fast. Additionally, the hyper-parameters embedded in the model assumption can be optimized through a Bayesian Occam's razor formalism, which automatically adjusts the model complexity. This method is shown to produce convincing reconstructions and good agreement with independently calculated results from the Maximum Entropy and Equilibrium-Based Iterative Tomography Algorithm methods.

  11. Bayesian soft X-ray tomography using non-stationary Gaussian Processes.

    PubMed

    Li, Dong; Svensson, J; Thomsen, H; Medina, F; Werner, A; Wolf, R

    2013-08-01

    In this study, a Bayesian-based non-stationary Gaussian Process (GP) method for the inference of soft X-ray emissivity distribution along with its associated uncertainties has been developed. For the investigation of equilibrium conditions and fast magnetohydrodynamic behaviors in nuclear fusion plasmas, it is important to infer, especially in the plasma center, spatially resolved soft X-ray profiles from a limited number of noisy line-integral measurements. For this ill-posed inversion problem, Bayesian probability theory can provide a posterior probability distribution over all possible solutions under given model assumptions. Specifically, the use of a non-stationary GP to model the emission allows the model to adapt to the varying length scales of the underlying diffusion process. In contrast to conventional methods, the prior regularization is expressed in probabilistic form, which enhances the capability for uncertainty analysis; scientists concerned with the reliability of their results will therefore benefit from it. Under the assumption of normally distributed noise, the posterior distribution evaluated at a discrete number of points becomes a multivariate normal distribution whose mean and covariance are analytically available, making inversions and the calculation of uncertainty fast. Additionally, the hyper-parameters embedded in the model assumption can be optimized through a Bayesian Occam's razor formalism, which automatically adjusts the model complexity. This method is shown to produce convincing reconstructions and good agreement with independently calculated results from the Maximum Entropy and Equilibrium-Based Iterative Tomography Algorithm methods.
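
    The closed-form update the abstract refers to is easy to reproduce for a toy line-integral geometry, here with a stationary squared-exponential kernel for brevity (the paper's point is precisely that a non-stationary kernel adapts better): for y = Ae + noise with a GP prior on the emissivity e, the posterior mean and covariance are analytic.

        # Sketch: analytic GP posterior for linear (line-integral) observations.
        import numpy as np

        n = 50
        x = np.linspace(0.0, 1.0, n)
        K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / 0.1**2)   # GP prior covariance

        rng = np.random.default_rng(5)
        A = rng.uniform(0.0, 1.0, size=(12, n)) / n    # 12 stand-in line-of-sight weight rows
        e_true = np.exp(-0.5 * (x - 0.5) ** 2 / 0.05**2)
        sig = 1e-3
        y = A @ e_true + rng.normal(0.0, sig, 12)

        S = A @ K @ A.T + sig**2 * np.eye(12)          # data covariance
        mean = K @ A.T @ np.linalg.solve(S, y)         # posterior mean of emissivity
        cov = K - K @ A.T @ np.linalg.solve(S, A @ K)  # posterior covariance (uncertainties)
        print(mean[:5], np.sqrt(np.diag(cov))[:5])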

  12. Learning about Posterior Probability: Do Diagrams and Elaborative Interrogation Help?

    ERIC Educational Resources Information Center

    Clinton, Virginia; Alibali, Martha Wagner; Nathan, Mitchel J.

    2016-01-01

    To learn from a text, students must make meaningful connections among related ideas in that text. This study examined the effectiveness of two methods of improving connections--elaborative interrogation and diagrams--in written lessons about posterior probability. Undergraduate students (N = 198) read a lesson in one of three questioning…

  13. Midline shift and lateral guidance angle in adults with unilateral posterior crossbite.

    PubMed

    Rilo, Benito; da Silva, José Luis; Mora, María Jesús; Cadarso-Suárez, Carmen; Santana, Urbano

    2008-06-01

    Unilateral posterior crossbite is a malocclusion that, if not corrected during infancy, typically causes permanent asymmetry. Our aims in this study were to evaluate various occlusal parameters in a group of adults with uncorrected unilateral posterior crossbite and to compare findings with those obtained in a group of normal subjects. Midline shift at maximum intercuspation, midline shift at maximum aperture, and lateral guidance angle in the frontal plane were assessed in 25 adults (ages, 17-26 years; mean, 19.6 years) with crossbites. Midline shift at maximum intercuspation was zero (ie, centric midline) in 36% of the crossbite subjects; the remaining subjects had a shift toward the crossbite side. Midline shift at maximum aperture had no association with crossbite side. Lateral guidance angle was lower on the crossbite side than on the noncrossbite side. No parameter studied showed significant differences with respect to the normal subjects. Adults with unilateral posterior crossbite have adaptations that compensate for the crossbite and maintain normal function.

  14. INFERRING THE ECCENTRICITY DISTRIBUTION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogg, David W.; Bovy, Jo; Myers, Adam D., E-mail: david.hogg@nyu.ed

    2010-12-20

    Standard maximum-likelihood estimators for binary-star and exoplanet eccentricities are biased high, in the sense that the estimated eccentricity tends to be larger than the true eccentricity. As with most non-trivial observables, a simple histogram of estimated eccentricities is not a good estimate of the true eccentricity distribution. Here, we develop and test a hierarchical probabilistic method for performing the relevant meta-analysis, that is, inferring the true eccentricity distribution, taking as input the likelihood functions for the individual star eccentricities, or samplings of the posterior probability distributions for the eccentricities (under a given, uninformative prior). The method is a simple implementation of a hierarchical Bayesian model; it can also be seen as a kind of heteroscedastic deconvolution. It can be applied to any quantity measured with finite precision (other orbital parameters, or indeed any astronomical measurements of any kind, including magnitudes, distances, or photometric redshifts) so long as the measurements have been communicated as a likelihood function or a posterior sampling.
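
    The hierarchical trick can be sketched under an assumed Beta-distribution population model: given per-star posterior samplings drawn under a uniform interim prior on [0, 1], the marginal likelihood of the population parameters is a product over stars of the sample average of the population pdf. Everything below (the sample generator, the grid) is invented for illustration.

        # Sketch: hierarchical inference of a population distribution from posterior samplings.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(6)
        N, K = 80, 200
        e_true = rng.beta(1.0, 3.0, N)                       # true eccentricities
        # crude stand-in for per-star posterior samplings (noisy, clipped to [0, 1])
        samples = np.clip(e_true[:, None] + rng.normal(0.0, 0.08, (N, K)), 1e-4, 1 - 1e-4)

        def log_like(a, b):
            f = stats.beta.pdf(samples, a, b)                # population pdf at each sample
            return np.sum(np.log(f.mean(axis=1)))            # product over stars of sample means

        grid = np.linspace(0.5, 5.0, 30)
        L = np.array([[log_like(a, b) for b in grid] for a in grid])
        ia, ib = np.unravel_index(L.argmax(), L.shape)
        print("MAP population parameters:", grid[ia], grid[ib])   # should land near (1, 3)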

  15. A Comparison of Item Selection Techniques for Testlets

    ERIC Educational Resources Information Center

    Murphy, Daniel L.; Dodd, Barbara G.; Vaughn, Brandon K.

    2010-01-01

    This study examined the performance of the maximum Fisher's information, the maximum posterior weighted information, and the minimum expected posterior variance methods for selecting items in a computerized adaptive testing system when the items were grouped in testlets. A simulation study compared the efficiency of ability estimation among the…

  16. An efficient algorithm to compute marginal posterior genotype probabilities for every member of a pedigree with loops

    PubMed Central

    2009-01-01

    Background: Marginal posterior genotype probabilities need to be computed for genetic analyses such as genetic counseling in humans and selective breeding in animal and plant species. Methods: In this paper, we describe a peeling-based, deterministic, exact algorithm to compute efficiently genotype probabilities for every member of a pedigree with loops without recourse to junction-tree methods from graph theory. The efficiency in computing the likelihood by peeling comes from storing intermediate results in multidimensional tables called cutsets. Computing marginal genotype probabilities for individual i requires recomputing the likelihood for each of the possible genotypes of individual i. This can be done efficiently by storing intermediate results in two types of cutsets called anterior and posterior cutsets and reusing these intermediate results to compute the likelihood. Examples: A small example is used to illustrate the theoretical concepts discussed in this paper, and marginal genotype probabilities are computed at a monogenic disease locus for every member in a real cattle pedigree. PMID:19958551

  17. Cumulative probability of neodymium: YAG laser posterior capsulotomy after phacoemulsification.

    PubMed

    Ando, Hiroshi; Ando, Nobuyo; Oshika, Tetsuro

    2003-11-01

    To retrospectively analyze the cumulative probability of neodymium:YAG (Nd:YAG) laser posterior capsulotomy after phacoemulsification and to evaluate the risk factors. Ando Eye Clinic, Kanagawa, Japan. In 3997 eyes that had phacoemulsification with an intact continuous curvilinear capsulorhexis, the cumulative probability of posterior capsulotomy was computed by Kaplan-Meier survival analysis and risk factors were analyzed using the Cox proportional hazards regression model. The variables tested were sex; age; type of cataract; preoperative best corrected visual acuity (BCVA); presence of diabetes mellitus, diabetic retinopathy, or retinitis pigmentosa; type of intraocular lens (IOL); and the year the operation was performed. The IOLs were categorized as 3-piece poly(methyl methacrylate) (PMMA), 1-piece PMMA, 3-piece silicone, and acrylic foldable. The cumulative probability of capsulotomy after cataract surgery was 1.95%, 18.50%, and 32.70% at 1, 3, and 5 years, respectively. Positive risk factors included a better preoperative BCVA (P =.0005; risk ratio [RR], 1.7; 95% confidence interval [CI], 1.3-2.5) and the presence of retinitis pigmentosa (P<.0001; RR, 6.6; 95% CI, 3.7-11.6). Women had a significantly greater probability of Nd:YAG laser posterior capsulotomy (P =.016; RR, 1.4; 95% CI, 1.1-1.8). The type of IOL was significantly related to the probability of Nd:YAG laser capsulotomy, with the foldable acrylic IOL having a significantly lower probability of capsulotomy. The 1-piece PMMA IOL had a significantly higher risk than 3-piece PMMA and 3-piece silicone IOLs. The probability of Nd:YAG laser capsulotomy was higher in women, in eyes with a better preoperative BCVA, and in patients with retinitis pigmentosa. The foldable acrylic IOL had a significantly lower probability of capsulotomy.

  18. Reweighting Data in the Spirit of Tukey: Using Bayesian Posterior Probabilities as Rasch Residuals for Studying Misfit

    ERIC Educational Resources Information Center

    Dardick, William R.; Mislevy, Robert J.

    2016-01-01

    A new variant of the iterative "data = fit + residual" data-analytical approach described by Mosteller and Tukey is proposed and implemented in the context of item response theory psychometric models. Posterior probabilities from a Bayesian mixture model of a Rasch item response theory model and an unscalable latent class are expressed…

  19. Learn-as-you-go acceleration of cosmological parameter estimates

    NASA Astrophysics Data System (ADS)

    Aslanyan, Grigor; Easther, Richard; Price, Layne C.

    2015-09-01

    Cosmological analyses can be accelerated by approximating slow calculations using a training set, which is either precomputed or generated dynamically. However, this approach is only safe if the approximations are well understood and controlled. This paper surveys issues associated with the use of machine-learning based emulation strategies for accelerating cosmological parameter estimation. We describe a learn-as-you-go algorithm that is implemented in the Cosmo++ code and (1) trains the emulator while simultaneously estimating posterior probabilities; (2) identifies unreliable estimates, computing the exact numerical likelihoods if necessary; and (3) progressively learns and updates the error model as the calculation progresses. We explicitly describe and model the emulation error and show how this can be propagated into the posterior probabilities. We apply these techniques to the Planck likelihood and the calculation of ΛCDM posterior probabilities. The computation is significantly accelerated without a pre-defined training set and uncertainties in the posterior probabilities are subdominant to statistical fluctuations. We have obtained a speedup factor of 6.5 for Metropolis-Hastings and 3.5 for nested sampling. Finally, we discuss the general requirements for a credible error model and show how to update them on-the-fly.

  20. Learn-as-you-go acceleration of cosmological parameter estimates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aslanyan, Grigor; Easther, Richard; Price, Layne C., E-mail: g.aslanyan@auckland.ac.nz, E-mail: r.easther@auckland.ac.nz, E-mail: lpri691@aucklanduni.ac.nz

    2015-09-01

    Cosmological analyses can be accelerated by approximating slow calculations using a training set, which is either precomputed or generated dynamically. However, this approach is only safe if the approximations are well understood and controlled. This paper surveys issues associated with the use of machine-learning based emulation strategies for accelerating cosmological parameter estimation. We describe a learn-as-you-go algorithm that is implemented in the Cosmo++ code and (1) trains the emulator while simultaneously estimating posterior probabilities; (2) identifies unreliable estimates, computing the exact numerical likelihoods if necessary; and (3) progressively learns and updates the error model as the calculation progresses. We explicitly describe and model the emulation error and show how this can be propagated into the posterior probabilities. We apply these techniques to the Planck likelihood and the calculation of ΛCDM posterior probabilities. The computation is significantly accelerated without a pre-defined training set and uncertainties in the posterior probabilities are subdominant to statistical fluctuations. We have obtained a speedup factor of 6.5 for Metropolis-Hastings and 3.5 for nested sampling. Finally, we discuss the general requirements for a credible error model and show how to update them on-the-fly.

  1. Multiple model cardinalized probability hypothesis density filter

    NASA Astrophysics Data System (ADS)

    Georgescu, Ramona; Willett, Peter

    2011-09-01

    The Probability Hypothesis Density (PHD) filter propagates the first-moment approximation to the multi-target Bayesian posterior distribution while the Cardinalized PHD (CPHD) filter propagates both the posterior likelihood of (an unlabeled) target state and the posterior probability mass function of the number of targets. Extensions of the PHD filter to the multiple model (MM) framework have been published and were implemented either with a Sequential Monte Carlo or a Gaussian Mixture approach. In this work, we introduce the multiple model version of the more elaborate CPHD filter. We present the derivation of the prediction and update steps of the MMCPHD particularized for the case of two target motion models and proceed to show that in the case of a single model, the new MMCPHD equations reduce to the original CPHD equations.

  2. Asking better questions: How presentation formats influence information search.

    PubMed

    Wu, Charley M; Meder, Björn; Filimon, Flavia; Nelson, Jonathan D

    2017-08-01

    While the influence of presentation formats has been widely studied in Bayesian reasoning tasks, we present the first systematic investigation of how presentation formats influence information search decisions. Four experiments were conducted across different probabilistic environments, where subjects (N = 2,858) chose between 2 possible search queries, each with binary probabilistic outcomes, with the goal of maximizing classification accuracy. We studied 14 different numerical and visual formats for presenting information about the search environment, constructed across 6 design features that have been prominently related to improvements in Bayesian reasoning accuracy (natural frequencies, posteriors, complement, spatial extent, countability, and part-to-whole information). The posterior variants of the icon array and bar graph formats led to the highest proportion of correct responses, and were substantially better than the standard probability format. Results suggest that presenting information in terms of posterior probabilities and visualizing natural frequencies using spatial extent (a perceptual feature) were especially helpful in guiding search decisions, although environments with a mixture of probabilistic and certain outcomes were challenging across all formats. Subjects who made more accurate probability judgments did not perform better on the search task, suggesting that simple decision heuristics may be used to make search decisions without explicitly applying Bayesian inference to compute probabilities. We propose a new take-the-difference (TTD) heuristic that identifies the accuracy-maximizing query without explicit computation of posterior probabilities. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  3. Hepatitis disease detection using Bayesian theory

    NASA Astrophysics Data System (ADS)

    Maseleno, Andino; Hidayati, Rohmah Zahroh

    2017-02-01

    This paper presents hepatitis disease diagnosis using Bayesian theory, for a better understanding of the theory. In this research, we used Bayesian theory to detect hepatitis disease and display the result of the diagnosis process. Bayesian theory was rediscovered and perfected by Laplace; the basic idea is to use known prior probabilities and conditional probability densities and, based on Bayes' theorem, to calculate the corresponding posterior probability, which is then used for inference and decision making. Bayesian methods combine existing knowledge, the prior probabilities, with additional knowledge derived from new data, the likelihood function. The initial symptoms of hepatitis include malaise, fever, and headache, and the method computes the probability of hepatitis given the presence of malaise, fever, and headache. The results revealed that Bayesian theory successfully identified the existence of hepatitis disease.
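
    The underlying arithmetic is a textbook naive Bayes update; the Python sketch below uses made-up prior and symptom probabilities (the paper's actual values are not reproduced) and assumes the symptoms are conditionally independent given the disease state.

        # Illustrative arithmetic only: a naive Bayes posterior for hepatitis
        # given malaise, fever, and headache (all probabilities are assumed values).
        import math

        prior = {"hepatitis": 0.01, "no_hepatitis": 0.99}
        likelihood = {                         # P(symptom | class)
            "hepatitis":    {"malaise": 0.90, "fever": 0.80, "headache": 0.70},
            "no_hepatitis": {"malaise": 0.20, "fever": 0.10, "headache": 0.30},
        }
        symptoms = ["malaise", "fever", "headache"]
        joint = {c: prior[c] * math.prod(likelihood[c][s] for s in symptoms) for c in prior}
        Z = sum(joint.values())                # normalizing constant (Bayes' theorem denominator)
        posterior = {c: v / Z for c, v in joint.items()}
        print(posterior)                       # posterior probability of hepatitis given all three symptoms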

  4. Judging the Probability of Hypotheses Versus the Impact of Evidence: Which Form of Inductive Inference Is More Accurate and Time-Consistent?

    PubMed

    Tentori, Katya; Chater, Nick; Crupi, Vincenzo

    2016-04-01

    Inductive reasoning requires exploiting links between evidence and hypotheses. This can be done focusing either on the posterior probability of the hypothesis when updated on the new evidence or on the impact of the new evidence on the credibility of the hypothesis. But are these two cognitive representations equally reliable? This study investigates this question by comparing probability and impact judgments on the same experimental materials. The results indicate that impact judgments are more consistent in time and more accurate than probability judgments. Impact judgments also predict the direction of errors in probability judgments. These findings suggest that human inductive reasoning relies more on estimating evidential impact than on posterior probability. Copyright © 2015 Cognitive Science Society, Inc.

  5. The influence of age, sex, bulb position, visual feedback, and the order of testing on maximum anterior and posterior tongue strength and endurance in healthy Belgian adults.

    PubMed

    Vanderwegen, Jan; Guns, Cindy; Van Nuffelen, Gwen; Elen, Rik; De Bodt, Marc

    2013-06-01

    This study collected data on the maximum anterior and posterior tongue strength and endurance in 420 healthy Belgians across the adult life span to explore the influence of age, sex, bulb position, visual feedback, and order of testing. Measures were obtained using the Iowa Oral Performance Instrument (IOPI). Older participants (more than 70 years old) demonstrated significantly lower strength than younger persons at the anterior and the posterior tongue. Endurance remains stable throughout the major part of life. Gender influence remains significant but minor throughout life, with males showing higher pressures and longer endurance. The anterior part of the tongue has both higher strength and longer endurance than the posterior part. Mean maximum tongue pressures in this European population seem to be lower than American values and are closer to Asian results. The normative data can be used for objective assessment of tongue weakness and subsequent therapy planning of dysphagic patients.

  6. Classification with spatio-temporal interpixel class dependency contexts

    NASA Technical Reports Server (NTRS)

    Jeon, Byeungwoo; Landgrebe, David A.

    1992-01-01

    A contextual classifier that can utilize both spatial and temporal interpixel dependency contexts is investigated. After spatial and temporal neighbors are defined, a general form of maximum a posteriori spatiotemporal contextual classifier is derived. This contextual classifier is then simplified under several assumptions. Joint prior probabilities of the classes of each pixel and its spatial neighbors are modeled by a Gibbs random field. The classification is performed in a recursive manner to allow a computationally efficient contextual classification. Experimental results with bitemporal TM data show significant improvement of classification accuracy over noncontextual pixelwise classifiers. This spatiotemporal contextual classifier should find use in many applications of remote sensing, especially when classification accuracy is important.
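
    The flavor of MAP contextual classification with a Gibbs prior can be sketched with a spatial-only toy, using iterated conditional modes rather than the paper's recursive scheme and a simple Potts interaction: each pixel takes the class maximizing its log-likelihood plus β times the number of agreeing neighbors.

        # Sketch: ICM approximation to MAP classification with a Potts (Gibbs) prior.
        import numpy as np

        rng = np.random.default_rng(7)
        H = W = 32
        truth = (np.arange(W)[None, :] > W // 2).astype(int) * np.ones((H, 1), int)
        img = truth + rng.normal(0.0, 0.8, (H, W))          # noisy 1-band "image", 2 classes
        means, beta = np.array([0.0, 1.0]), 1.5

        labels = np.abs(img[:, :, None] - means).argmin(-1) # start from pixelwise ML labels
        for _ in range(5):                                  # ICM sweeps
            for i in range(H):
                for j in range(W):
                    best, best_score = labels[i, j], -np.inf
                    for c in (0, 1):
                        score = -0.5 * (img[i, j] - means[c]) ** 2 / 0.8**2
                        for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                            if 0 <= i + di < H and 0 <= j + dj < W:
                                score += beta * (labels[i + di, j + dj] == c)
                        if score > best_score:
                            best, best_score = c, score
                    labels[i, j] = best
        print("pixel accuracy:", (labels == truth).mean())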

  7. Feasibility study of direct spectra measurements for Thomson scattered signals for KSTAR fusion-grade plasmas

    NASA Astrophysics Data System (ADS)

    Park, K.-R.; Kim, K.-h.; Kwak, S.; Svensson, J.; Lee, J.; Ghim, Y.-c.

    2017-11-01

    A feasibility study of direct spectra measurements of Thomson scattered photons for fusion-grade plasmas is performed based on a forward model of the KSTAR Thomson scattering system. Expected spectra in the forward model are calculated based on the Selden function, including the relativistic polarization correction. Noise in the signal is modeled with photon noise and Gaussian electrical noise. Electron temperature and density are inferred using Bayesian probability theory. Based on the bias error, full width at half maximum, and entropy of the posterior distributions, spectral measurements are found to be feasible. Comparisons between spectrometer-based and polychromator-based Thomson scattering systems are performed with varying quantum efficiency and electrical noise levels.

  8. Unpredictability of soft tissue changes after camouflage treatment of Class II division 1 malocclusion with maximum anterior retraction using miniscrews.

    PubMed

    Kim, Kayoung; Choi, Sung-Hwan; Choi, Eun-Hee; Choi, Yoon-Jeong; Hwang, Chung-Ju; Cha, Jung-Yul

    2017-03-01

    To compare soft and hard tissue responses based on the degree of maxillary incisor retraction using maximum anchorage in patients with Class II division 1 malocclusion. This retrospective study sample was divided into moderate retraction (<8.0 mm; n = 28) and maximum retraction (≥8.0 mm; n = 29) groups based on the amount of maxillary incisor retraction after extraction of the maxillary and mandibular first premolars for camouflage treatment. Pre- and posttreatment lateral cephalograms were analyzed. There were 2.3 mm and 3.0 mm of upper and lower lip retraction, respectively, in the moderate group; and 4.0 mm and 5.3 mm, respectively, in the maximum group. In the moderate group, the upper lip was most influenced by posterior movement of the cervical point of the maxillary incisor (β = 0.94). The lower lip was most influenced by posterior movement of B-point (β = 0.84) and the cervical point of the mandibular incisor (β = 0.83). Prediction was difficult in the maximum group; no variable showed a significant influence on upper lip changes. The lower lip was highly influenced by posterior movement of the cervical point of the maxillary incisor (β = 0.50), but this correlation was weak in the maximum group. Posterior movement of the cervical point of the anterior teeth is necessary for increased lip retraction. However, periodic evaluation of the lip profile is needed during maximum retraction of the anterior teeth because of limitations in predicting soft tissue responses.

  9. Estimating the Probability of Traditional Copying, Conditional on Answer-Copying Statistics.

    PubMed

    Allen, Jeff; Ghattas, Andrew

    2016-06-01

    Statistics for detecting copying on multiple-choice tests produce p values measuring the probability of a value at least as large as that observed, under the null hypothesis of no copying. The posterior probability of copying is arguably more relevant than the p value, but cannot be derived from Bayes' theorem unless the population probability of copying and probability distribution of the answer-copying statistic under copying are known. In this article, the authors develop an estimator for the posterior probability of copying that is based on estimable quantities and can be used with any answer-copying statistic. The performance of the estimator is evaluated via simulation, and the authors demonstrate how to apply the formula using actual data. Potential uses, generalizability to other types of cheating, and limitations of the approach are discussed.

  10. Preserving the PCL during the tibial cut in total knee arthroplasty.

    PubMed

    Cinotti, G; Sessa, P; Amato, M; Ripani, F R; Giannicola, G

    2017-08-01

    Previous studies have shown that the PCL insertion may be damaged during the tibial cut performed in total knee arthroplasty. We investigated the maximum thickness of a tibial cut that preserves the PCL insertion and to what extent the posterior slope of the tibial cut and that of the patient's tibial plateaus affect the outcome. MR images of 83 knees were analysed. The maximum thickness of a tibial cut that preserves the PCL using a posterior slope of 0°, 3°, 5° and parallel to the patient's slope of the tibial plateau, was evaluated. Correlations between the results and the degrees of the posterior slope of the patient's tibial plateaus were also investigated. The maximum thickness of a tibial cut that preserves the entire PCL insertion was, on average, 5.5, 4.7, 4.2 and 3.1 mm when a posterior slope of 0°, 3°, 5° and parallel to the patients' tibial plateaus was used, respectively. When the 25th percentile was considered, the maximum thickness of a tibial cut that preserved the PCL was 4 and 3 mm with a tibial cut of 0° and 5° of posterior slope, respectively. The maximum thickness of a tibial cut that preserved the PCL was significantly greater in patients with a sagittal slope of the tibial plateaus more than 8° than in those with a sagittal slope less than 8°. In cruciate retaining implants, the PCL insertion may be spared in the majority of patients by performing a tibial cut of 4 mm, or even less when a posterior slope of 3°-5° is used. The clinical relevance of our study is that the execution of a conservative tibial cut, followed by a second tibial resection to achieve the thickness required for the tibial component to be implanted, may be an alternative technique to spare the PCL in CR TKA. Level of evidence: II.

  11. Naive Probability: A Mental Model Theory of Extensional Reasoning.

    ERIC Educational Resources Information Center

    Johnson-Laird, P. N.; Legrenzi, Paolo; Girotto, Vittorio; Legrenzi, Maria Sonino; Caverni, Jean-Paul

    1999-01-01

    Outlines a theory of naive probability in which individuals who are unfamiliar with the probability calculus can infer the probabilities of events in an "extensional" way. The theory accommodates reasoning based on numerical premises, and explains how naive reasoners can infer posterior probabilities without relying on Bayes's theorem.…

  12. Inference of reaction rate parameters based on summary statistics from experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khalil, Mohammad; Chowdhary, Kamaljit Singh; Safta, Cosmin

    Here, we present the results of an application of Bayesian inference and maximum entropy methods for the estimation of the joint probability density for the Arrhenius rate parameters of the rate coefficient of the H2/O2-mechanism chain branching reaction H + O2 → OH + O. Available published data is in the form of summary statistics in terms of nominal values and error bars of the rate coefficient of this reaction at a number of temperature values obtained from shock-tube experiments. Our approach relies on generating data, in this case OH concentration profiles, consistent with the given summary statistics, using Approximate Bayesian Computation methods and a Markov Chain Monte Carlo procedure. The approach permits the forward propagation of parametric uncertainty through the computational model in a manner that is consistent with the published statistics. A consensus joint posterior on the parameters is obtained by pooling the posterior parameter densities given each consistent data set. To expedite this process, we construct efficient surrogates for the OH concentration using a combination of Padé and polynomial approximants. These surrogate models adequately represent forward model observables and their dependence on input parameters and are computationally efficient to allow their use in the Bayesian inference procedure. We also utilize Gauss-Hermite quadrature with Gaussian proposal probability density functions for moment computation resulting in orders of magnitude speedup in data likelihood evaluation. Despite the strong non-linearity in the model, the consistent data sets all result in nearly Gaussian conditional parameter probability density functions. The technique also accounts for nuisance parameters in the form of Arrhenius parameters of other rate coefficients with prescribed uncertainty. The resulting pooled parameter probability density function is propagated through stoichiometric hydrogen-air auto-ignition computations to illustrate the need to account for correlation among the Arrhenius rate parameters of one reaction and across rate parameters of different reactions.

  13. Inference of reaction rate parameters based on summary statistics from experiments

    DOE PAGES

    Khalil, Mohammad; Chowdhary, Kamaljit Singh; Safta, Cosmin; ...

    2016-10-15

    Here, we present the results of an application of Bayesian inference and maximum entropy methods for the estimation of the joint probability density for the Arrhenius rate parameters of the rate coefficient of the H2/O2-mechanism chain branching reaction H + O2 → OH + O. Available published data is in the form of summary statistics in terms of nominal values and error bars of the rate coefficient of this reaction at a number of temperature values obtained from shock-tube experiments. Our approach relies on generating data, in this case OH concentration profiles, consistent with the given summary statistics, using Approximate Bayesian Computation methods and a Markov Chain Monte Carlo procedure. The approach permits the forward propagation of parametric uncertainty through the computational model in a manner that is consistent with the published statistics. A consensus joint posterior on the parameters is obtained by pooling the posterior parameter densities given each consistent data set. To expedite this process, we construct efficient surrogates for the OH concentration using a combination of Padé and polynomial approximants. These surrogate models adequately represent forward model observables and their dependence on input parameters and are computationally efficient to allow their use in the Bayesian inference procedure. We also utilize Gauss-Hermite quadrature with Gaussian proposal probability density functions for moment computation resulting in orders of magnitude speedup in data likelihood evaluation. Despite the strong non-linearity in the model, the consistent data sets all result in nearly Gaussian conditional parameter probability density functions. The technique also accounts for nuisance parameters in the form of Arrhenius parameters of other rate coefficients with prescribed uncertainty. The resulting pooled parameter probability density function is propagated through stoichiometric hydrogen-air auto-ignition computations to illustrate the need to account for correlation among the Arrhenius rate parameters of one reaction and across rate parameters of different reactions.

  14. Generalized Wishart Mixtures for Unsupervised Classification of PolSAR Data

    NASA Astrophysics Data System (ADS)

    Li, Lan; Chen, Erxue; Li, Zengyuan

    2013-01-01

    This paper presents an unsupervised clustering algorithm based upon the expectation maximization (EM) algorithm for finite mixture modelling, using the complex Wishart probability density function (PDF) for the class-conditional probabilities. The mixture model makes it possible to represent heterogeneous thematic classes that cannot be well fitted by a unimodal Wishart distribution. To make the computation fast and robust, we use the recently proposed generalized gamma distribution (GΓD) for the single-polarization intensity data to make the initial partition. We then use the Wishart probability density function for the corresponding sample covariance matrix to calculate the posterior class probabilities for each pixel. The posterior class probabilities are used for the prior probability estimates of each class and as weights for all class parameter updates. The proposed method is evaluated and compared with the Wishart H-Alpha-A classification. Preliminary results show that the proposed method has better performance.
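
    The E-step at the heart of the algorithm can be sketched with scipy's real-valued Wishart density standing in for the complex Wishart used with PolSAR covariance data: given current class scale matrices and class priors, each pixel's sample covariance matrix receives posterior class probabilities (responsibilities). The number of looks and the scale matrices below are assumed values.

        # Sketch of the E-step only (real Wishart as a stand-in for complex Wishart).
        import numpy as np
        from scipy.stats import wishart

        rng = np.random.default_rng(8)
        df = 9                                           # number of looks (assumed)
        scales = [np.eye(3) * 0.5, np.eye(3) * 2.0]      # current class scale matrices
        weights = np.array([0.5, 0.5])                   # current class priors

        # a pixel's multilook sample covariance matrix (here simulated from class 1);
        # if S averages df outer products with covariance V, then S ~ Wishart(df, V/df)
        S = wishart.rvs(df=df, scale=scales[1] / df, random_state=rng)

        like = np.array([wishart.pdf(S, df=df, scale=V / df) for V in scales])
        post = weights * like / (weights * like).sum()   # responsibilities used in the M-step
        print(post)                                      # posterior class probabilities for the pixel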

  15. The lod score method.

    PubMed

    Rice, J P; Saccone, N L; Corbett, J

    2001-01-01

    The lod score method originated in a seminal article by Newton Morton in 1955. The method is broadly concerned with issues of power and the posterior probability of linkage, ensuring that a reported linkage has a high probability of being a true linkage. In addition, the method is sequential, so that pedigrees or lod curves may be combined from published reports to pool data for analysis. This approach has been remarkably successful for 50 years in identifying disease genes for Mendelian disorders. After discussing these issues, we consider the situation for complex disorders, where the maximum lod score (MLS) statistic shares some of the advantages of the traditional lod score approach but is limited by unknown power and the lack of sharing of the primary data needed to optimally combine analytic results. We may still learn from the lod score method as we explore new methods in molecular biology and genetic analysis to utilize the complete human DNA sequence and the cataloging of all human genes.
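
    For the Mendelian two-point case the lod score is short enough to state in full: for n phase-known informative meioses with r recombinants, LOD(θ) = log10[θ^r (1-θ)^(n-r) / 0.5^n], and the maximum lod score is found by maximizing over θ. A minimal Python sketch with illustrative counts (the classic evidence threshold is a maximum lod of 3):

        # Two-point lod score for phase-known meioses, maximized over theta.
        import numpy as np

        def lod(theta, r, n):
            return (r * np.log10(theta) + (n - r) * np.log10(1.0 - theta)
                    - n * np.log10(0.5))

        r, n = 2, 20                                    # hypothetical recombinant counts
        thetas = np.linspace(0.001, 0.499, 499)
        scores = lod(thetas, r, n)
        i = scores.argmax()
        print("MLS = %.2f at theta = %.3f" % (scores[i], thetas[i]))   # theta_hat ~ r/n = 0.1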

  16. Calibration of micromechanical parameters for DEM simulations by using the particle filter

    NASA Astrophysics Data System (ADS)

    Cheng, Hongyang; Shuku, Takayuki; Thoeni, Klaus; Yamamoto, Haruyuki

    2017-06-01

    The calibration of DEM models is typically accomplished by trial and error. However, this procedure lacks objectivity and involves several uncertainties. To deal with these issues, the particle filter is employed as a novel approach to calibrate DEM models of granular soils. The posterior probability distribution of the micro-parameters that give numerical results in good agreement with the experimental response of a Toyoura sand specimen is approximated by independent model trajectories, referred to as 'particles', based on Monte Carlo sampling. The soil specimen is modeled by polydisperse packings with different numbers of spherical grains. Prepared in 'stress-free' states, the packings are subjected to triaxial quasistatic loading. Given the experimental data, the posterior probability distribution is incrementally updated until convergence is reached. The resulting 'particles' with higher weights are identified as the calibration results. The evolutions of the weighted averages and the posterior probability distribution of the micro-parameters are plotted to show the advantage of using a particle filter, i.e., multiple solutions are identified for each parameter with known probabilities of reproducing the experimental response.
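
    One update cycle of a generic sequential-importance-resampling particle filter is sketched below, with a cheap analytic stand-in for the DEM run (the real cost of the method is that each particle requires a DEM simulation): weights are updated by the misfit to a newly observed response value, and particles are resampled when the effective sample size degenerates.

        # Generic particle-filter update (not the authors' DEM coupling).
        import numpy as np

        rng = np.random.default_rng(9)
        n_p = 500
        particles = rng.uniform([1e7, 0.2], [1e9, 0.6], size=(n_p, 2))  # e.g. stiffness, friction
        weights = np.ones(n_p) / n_p

        def forward(theta):                      # stand-in for a DEM run returning a response value
            return 1e-4 * theta[:, 0] ** 0.5 * theta[:, 1]

        y_obs, sigma = forward(np.array([[4e8, 0.4]]))[0], 0.05
        pred = forward(particles)
        weights *= np.exp(-0.5 * (pred - y_obs) ** 2 / sigma**2)        # likelihood update
        weights /= weights.sum()

        if 1.0 / (weights**2).sum() < n_p / 2:                          # effective sample size check
            idx = rng.choice(n_p, size=n_p, p=weights)                  # resample
            particles, weights = particles[idx], np.ones(n_p) / n_p
        print("posterior mean:", (weights[:, None] * particles).sum(0))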

  17. Mineral in skeletal elements of the terrestrial crustacean Porcellio scaber: SRμCT of function related distribution and changes during the moult cycle.

    PubMed

    Ziegler, Andreas; Neues, Frank; Janáček, Jiří; Beckmann, Felix; Epple, Matthias

    2017-01-01

    Terrestrial isopods moult first the posterior and then the anterior half of the body, allowing for storage and recycling of CaCO3. We used synchrotron-radiation microtomography to estimate mineral content within skeletal segments in sequential moulting stages of Porcellio scaber. The results suggest that all examined cuticular segments contribute to storage and recycling, however, to varying extents. The mineral within the hepatopancreas after moult suggests an uptake of mineral from the ingested exuviae. The total maximum loss of mineral was 46% for the anterior and 43% for the posterior cuticle. The time course of resorption of mineral and mineralisation of the new cuticle suggests storage and recycling of mineral in the posterior and anterior cuticle. The mineral in the anterior pereiopods decreases by 25% only. P. scaber has long legs and can run fast; therefore, a less mineralised and thus lightweight cuticle in pereiopods likely serves to lower energy consumption during escape behaviour. Differential demineralisation occurs in the head cuticle, in which the cornea of the complex eyes remains completely mineralised. The partes incisivae of the mandibles are mineralised before the old cuticle is demineralised and shed. Probably, this enables the animal to ingest the old exuviae after each half moult. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. Information and Entropy

    NASA Astrophysics Data System (ADS)

    Caticha, Ariel

    2007-11-01

    What is information? Is it physical? We argue that in a Bayesian theory the notion of information must be defined in terms of its effects on the beliefs of rational agents. Information is whatever constrains rational beliefs and therefore it is the force that induces us to change our minds. This problem of updating from a prior to a posterior probability distribution is tackled through an eliminative induction process that singles out the logarithmic relative entropy as the unique tool for inference. The resulting method of Maximum relative Entropy (ME), which is designed for updating from arbitrary priors given information in the form of arbitrary constraints, includes as special cases both MaxEnt (which allows arbitrary constraints) and Bayes' rule (which allows arbitrary priors). Thus, ME unifies the two themes of these workshops—the Maximum Entropy and the Bayesian methods—into a single general inference scheme that allows us to handle problems that lie beyond the reach of either of the two methods separately. I conclude with a couple of simple illustrative examples.

  19. Factors affecting the impingement angle of fixed- and mobile-bearing total knee replacements: a laboratory study.

    PubMed

    Walker, Peter S; Yildirim, Gokce; Sussman-Fort, Jon; Roth, Jonathan; White, Brian; Klein, Gregg R

    2007-08-01

    Maximum flexion, or impingement angle, is defined as the angle of flexion at which the posterior femoral cortex impacts the posterior edge of the tibial insert. We examined the effects of femoral component placement on the femur, the slope angle of the tibial component, the location of the femoral-tibial contact point, and the amount of internal or external rotation. Posterior and proximal femoral placement, a more posterior femoral-tibial contact point, and a greater tibial slope all increased maximum flexion, whereas rotation reduced it. A mobile-bearing knee gave results similar to those of the fixed-bearing knee, but there was no loss of flexion in internal or external rotation if the mobile bearing moved with the femur. In the absence of negative factors, a flexion angle of 150 degrees can be reached before impingement.

  20. Bayesian model checking: A comparison of tests

    NASA Astrophysics Data System (ADS)

    Lucy, L. B.

    2018-06-01

    Two procedures for checking Bayesian models are compared using a simple test problem based on the local Hubble expansion. Over four orders of magnitude, p-values derived from a global goodness-of-fit criterion for posterior probability density functions agree closely with posterior predictive p-values. The former can therefore serve as an effective proxy for the difficult-to-calculate posterior predictive p-values.
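
    The posterior predictive p-value that serves as the benchmark here can be computed by simulation. A toy sketch with a Gaussian mean (not the paper's Hubble-expansion problem; the discrepancy measure T is an assumption):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy model: unknown mean mu with a flat prior, Gaussian data, known sigma.
    sigma, n_obs = 1.0, 30
    y = rng.normal(0.7, sigma, n_obs)               # observed data

    # Under a flat prior, the posterior for mu is N(ybar, sigma^2 / n).
    post_mu = rng.normal(y.mean(), sigma / np.sqrt(n_obs), size=5000)

    def T(data, mu):
        """Chi-square discrepancy between data and a parameter draw."""
        return np.sum((data - mu) ** 2) / sigma ** 2

    # Posterior predictive p-value: simulate a replicate dataset for each
    # posterior draw and compare discrepancies.
    t_obs = np.array([T(y, mu) for mu in post_mu])
    t_rep = np.array([T(rng.normal(mu, sigma, n_obs), mu) for mu in post_mu])
    ppp = np.mean(t_rep >= t_obs)
    print(f"posterior predictive p-value: {ppp:.3f}")
    ```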

  1. Effect of posterior crown margin placement on gingival health.

    PubMed

    Reitemeier, Bernd; Hänsel, Kristina; Walter, Michael H; Kastner, Christian; Toutenburg, Helge

    2002-02-01

    The clinical impact of posterior crown margin placement on gingival health has not been thoroughly quantified. This study evaluated the effect of posterior crown margin placement with multivariate analysis. Ten general dentists reviewed 240 patients with 480 metal-ceramic crowns in a prospective clinical trial. The alloy was randomly selected from 2 high gold, 1 low gold, and 1 palladium alloy. Variables were the alloy used, oral hygiene index score before treatment, location of crown margins at baseline, and plaque index and sulcus bleeding index scores recorded for restored and control teeth after 1 year. The effect of crown margin placement on sulcular bleeding and plaque accumulation was analyzed with regression models (P<.05). The probability of plaque at 1 year increased with increasing oral hygiene index score before treatment. The lingual surfaces demonstrated the highest probability of plaque. The risk of bleeding at intrasulcular posterior crown margins was approximately twice that at supragingival margins. Poor oral hygiene before treatment and plaque also were associated with sulcular bleeding. Facial sites exhibited a lower probability of sulcular bleeding than lingual surfaces. Type of alloy did not influence sulcular bleeding. In this study, placement of crown margins was one of several parameters that affected gingival health.

  2. Model-based decoding, information estimation, and change-point detection techniques for multineuron spike trains.

    PubMed

    Pillow, Jonathan W; Ahmadian, Yashar; Paninski, Liam

    2011-01-01

    One of the central problems in systems neuroscience is to understand how neural spike trains convey sensory information. Decoding methods, which provide an explicit means for reading out the information contained in neural spike responses, offer a powerful set of tools for studying the neural coding problem. Here we develop several decoding methods based on point-process neural encoding models, or forward models that predict spike responses to stimuli. These models have concave log-likelihood functions, which allow efficient maximum-likelihood model fitting and stimulus decoding. We present several applications of the encoding model framework to the problem of decoding stimulus information from population spike responses: (1) a tractable algorithm for computing the maximum a posteriori (MAP) estimate of the stimulus, the most probable stimulus to have generated an observed single- or multiple-neuron spike train response, given some prior distribution over the stimulus; (2) a gaussian approximation to the posterior stimulus distribution that can be used to quantify the fidelity with which various stimulus features are encoded; (3) an efficient method for estimating the mutual information between the stimulus and the spike trains emitted by a neural population; and (4) a framework for the detection of change-point times (the time at which the stimulus undergoes a change in mean or variance) by marginalizing over the posterior stimulus distribution. We provide several examples illustrating the performance of these estimators with simulated and real neural data.
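
    Point (1) rests on the log-posterior being concave, so a generic optimiser finds the MAP stimulus. A sketch with a linear-Poisson encoding model standing in for the paper's point-process models (the filters K, baseline b, and standard-normal stimulus prior are illustrative):

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(2)

    # Encoding model: neuron i fires with rate exp(k_i . s + b_i).
    n_neurons, dim_s = 20, 5
    K = rng.normal(0, 0.3, (n_neurons, dim_s))   # filters (assumed known)
    b = np.full(n_neurons, 1.0)

    s_true = rng.normal(0, 1, dim_s)
    counts = rng.poisson(np.exp(K @ s_true + b)) # observed spike counts

    def neg_log_posterior(s):
        # Poisson log-likelihood (constants dropped) plus standard-normal prior.
        return np.sum(np.exp(K @ s + b) - counts * (K @ s + b)) + 0.5 * s @ s

    # The log posterior is concave in s, so a local optimiser finds the MAP.
    s_map = minimize(neg_log_posterior, np.zeros(dim_s), method="BFGS").x
    print("true stimulus:", np.round(s_true, 2))
    print("MAP estimate :", np.round(s_map, 2))
    ```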

  3. Pig Data and Bayesian Inference on Multinomial Probabilities

    ERIC Educational Resources Information Center

    Kern, John C.

    2006-01-01

    Bayesian inference on multinomial probabilities is conducted based on data collected from the game Pass the Pigs[R]. Prior information on these probabilities is readily available from the instruction manual, and is easily incorporated in a Dirichlet prior. Posterior analysis of the scoring probabilities quantifies the discrepancy between empirical…
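
    The conjugate update at the heart of such an analysis is one line. A sketch with made-up counts (the prior below is illustrative, not the manual's):

    ```python
    import numpy as np

    # Dirichlet prior over six multinomial scoring outcomes
    # (illustrative pseudo-counts, not taken from the instruction manual).
    alpha_prior = np.array([35.0, 30.0, 20.0, 10.0, 4.0, 1.0])

    # Observed multinomial counts from repeated rolls.
    counts = np.array([120, 95, 70, 28, 10, 2])

    # Conjugacy: Dirichlet(alpha) prior + multinomial counts
    # gives a Dirichlet(alpha + counts) posterior.
    alpha_post = alpha_prior + counts
    post_mean = alpha_post / alpha_post.sum()
    print("posterior mean probabilities:", np.round(post_mean, 3))
    ```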

  4. Replicate phylogenies and post-glacial range expansion of the pitcher-plant mosquito, Wyeomyia smithii, in North America.

    PubMed

    Merz, Clayton; Catchen, Julian M; Hanson-Smith, Victor; Emerson, Kevin J; Bradshaw, William E; Holzapfel, Christina M

    2013-01-01

    Herein we tested the repeatability of phylogenetic inference based on high-throughput sequencing by increasing taxon sampling, using our previously published techniques, in the pitcher-plant mosquito, Wyeomyia smithii, in North America. We sampled 25 natural populations drawn from different localities near 21 previous collection localities and used these new data to construct a second, independent phylogeny, expressly to test the reproducibility of phylogenetic patterns. Comparison of trees between the two data sets based on both maximum parsimony and maximum likelihood with Bayesian posterior probabilities showed close correspondence in the grouping of the most southern populations into clear clades. However, discrepancies emerged, particularly in the middle of W. smithii's current range near the previous maximum extent of the Laurentide Ice Sheet, especially concerning the most recent common ancestor to mountain and northern populations. Combining all 46 populations from both studies into a single maximum parsimony tree and taking into account the post-glacial historical biogeography of associated flora provided an improved picture of W. smithii's range expansion in North America. In a more general sense, we propose that extensive taxon sampling, especially in areas of known geological disruption, is key to a comprehensive approach to phylogenetics that leads to biologically meaningful phylogenetic inference.

  5. Stan : A Probabilistic Programming Language

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carpenter, Bob; Gelman, Andrew; Hoffman, Matthew D.

    Stan is a probabilistic programming language for specifying statistical models. A Stan program imperatively defines a log probability function over parameters conditioned on specified data and constants. As of version 2.14.0, Stan provides full Bayesian inference for continuous-variable models through Markov chain Monte Carlo methods such as the No-U-Turn sampler, an adaptive form of Hamiltonian Monte Carlo sampling. Penalized maximum likelihood estimates are calculated using optimization methods such as the limited memory Broyden-Fletcher-Goldfarb-Shanno algorithm. Stan is also a platform for computing log densities and their gradients and Hessians, which can be used in alternative algorithms such as variational Bayes, expectation propagation, and marginal inference using approximate integration. To this end, Stan is set up so that the densities, gradients, and Hessians, along with intermediate quantities of the algorithm such as acceptance probabilities, are easily accessible. Stan can also be called from the command line using the cmdstan package, through R using the rstan package, and through Python using the pystan package. All three interfaces support sampling and optimization-based inference with diagnostics and posterior analysis. rstan and pystan also provide access to log probabilities, gradients, Hessians, parameter transforms, and specialized plotting.

  6. Traffic Video Image Segmentation Model Based on Bayesian and Spatio-Temporal Markov Random Field

    NASA Astrophysics Data System (ADS)

    Zhou, Jun; Bao, Xu; Li, Dawei; Yin, Yongwen

    2017-10-01

    Traffic video is a dynamic image sequence whose background and foreground change over time, which gives rise to occlusion. In such cases, general-purpose methods struggle to produce accurate image segmentation. A segmentation algorithm based on Bayes' rule and a Spatio-Temporal Markov Random Field (ST-MRF) is put forward. It builds energy-function models of the observation field and the label field for a motion image sequence with the Markov property. Then, following Bayes' rule, the interaction between the label field and the observation field, i.e., the relationship between the label field's prior probability and the observation field's likelihood, yields the maximum posterior probability estimate of the label field's parameters; the ICM model is used to extract the moving objects, completing the segmentation. Finally, plain ST-MRF segmentation and Bayesian segmentation combined with ST-MRF were compared. Experimental results show that segmentation with the Bayesian method combined with ST-MRF is faster than with ST-MRF alone and has a smaller computational workload, and it achieves better segmentation, especially in heavy-traffic dynamic scenes.
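
    The MAP-by-ICM step can be sketched compactly. Below is a parallel variant of ICM on a two-label Potts model with Gaussian likelihoods (the energies, beta, and the synthetic frame are assumptions for illustration):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    H, W = 40, 40
    truth = np.zeros((H, W), int)
    truth[10:30, 15:35] = 1                       # synthetic moving object
    obs = rng.normal(truth.astype(float), 0.6)    # noisy observation field

    means, sigma, beta = np.array([0.0, 1.0]), 0.6, 1.5

    def unary(label):
        """Negative log-likelihood of the observations under one label."""
        return 0.5 * ((obs - means[label]) / sigma) ** 2

    def pairwise_disagreement(labels, lab):
        """Count of 4-neighbours whose label differs from `lab` at each pixel."""
        padded = np.pad(labels, 1, mode="edge")
        return sum(padded[1 + dy: H + 1 + dy, 1 + dx: W + 1 + dx] != lab
                   for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)))

    labels = (obs > 0.5).astype(int)              # initial guess
    for _ in range(10):
        # Greedy (parallel) ICM: pick the lower-energy label at every pixel.
        e0 = unary(0) + beta * pairwise_disagreement(labels, 0)
        e1 = unary(1) + beta * pairwise_disagreement(labels, 1)
        labels = (e1 < e0).astype(int)

    print("pixel accuracy:", (labels == truth).mean())
    ```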

  7. Bayesian source tracking via focalization and marginalization in an uncertain Mediterranean Sea environment.

    PubMed

    Dosso, Stan E; Wilmut, Michael J; Nielsen, Peter L

    2010-07-01

    This paper applies Bayesian source tracking in an uncertain environment to Mediterranean Sea data, and investigates the resulting tracks and track uncertainties as a function of data information content (number of data time-segments, number of frequencies, and signal-to-noise ratio) and of prior information (environmental uncertainties and source-velocity constraints). To track low-level sources, acoustic data recorded for multiple time segments (corresponding to multiple source positions along the track) are inverted simultaneously. Environmental uncertainty is addressed by including unknown water-column and seabed properties as nuisance parameters in an augmented inversion. Two approaches are considered: Focalization-tracking maximizes the posterior probability density (PPD) over the unknown source and environmental parameters. Marginalization-tracking integrates the PPD over environmental parameters to obtain a sequence of joint marginal probability distributions over source coordinates, from which the most-probable track and track uncertainties can be extracted. Both approaches apply track constraints on the maximum allowable vertical and radial source velocity. The two approaches are applied for towed-source acoustic data recorded at a vertical line array at a shallow-water test site in the Mediterranean Sea where previous geoacoustic studies have been carried out.

  8. Stan : A Probabilistic Programming Language

    DOE PAGES

    Carpenter, Bob; Gelman, Andrew; Hoffman, Matthew D.; ...

    2017-01-01

    Stan is a probabilistic programming language for specifying statistical models. A Stan program imperatively defines a log probability function over parameters conditioned on specified data and constants. As of version 2.14.0, Stan provides full Bayesian inference for continuous-variable models through Markov chain Monte Carlo methods such as the No-U-Turn sampler, an adaptive form of Hamiltonian Monte Carlo sampling. Penalized maximum likelihood estimates are calculated using optimization methods such as the limited memory Broyden-Fletcher-Goldfarb-Shanno algorithm. Stan is also a platform for computing log densities and their gradients and Hessians, which can be used in alternative algorithms such as variational Bayes, expectation propagation, and marginal inference using approximate integration. To this end, Stan is set up so that the densities, gradients, and Hessians, along with intermediate quantities of the algorithm such as acceptance probabilities, are easily accessible. Stan can also be called from the command line using the cmdstan package, through R using the rstan package, and through Python using the pystan package. All three interfaces support sampling and optimization-based inference with diagnostics and posterior analysis. rstan and pystan also provide access to log probabilities, gradients, Hessians, parameter transforms, and specialized plotting.

  9. Lod scores for gene mapping in the presence of marker map uncertainty.

    PubMed

    Stringham, H M; Boehnke, M

    2001-07-01

    Multipoint lod scores are typically calculated for a grid of locus positions, moving the putative disease locus across a fixed map of genetic markers. Changing the order of a set of markers and/or the distances between the markers can make a substantial difference in the resulting lod score curve and the location and height of its maximum. The typical approach of using the best maximum likelihood marker map is not easily justified if other marker orders are nearly as likely and give substantially different lod score curves. To deal with this problem, we propose three weighted multipoint lod score statistics that make use of information from all plausible marker orders. In each of these statistics, the information conditional on a particular marker order is included in a weighted sum, with weight equal to the posterior probability of that order. We evaluate the type 1 error rate and power of these three statistics on the basis of results from simulated data, and compare these results to those obtained using the best maximum likelihood map and the map with the true marker order. We find that the lod score based on a weighted sum of maximum likelihoods improves on using only the best maximum likelihood map, having a type 1 error rate and power closest to that of using the true marker order in the simulation scenarios we considered. Copyright 2001 Wiley-Liss, Inc.
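
    The weighted statistic is essentially a posterior-weighted mixture over marker orders. A sketch with invented numbers (the posterior order probabilities and per-order likelihood ratios are illustrative; the paper defines three related variants):

    ```python
    import numpy as np

    # Information conditional on each plausible marker order is combined with
    # weight equal to that order's posterior probability.
    orders = ["M1-M2-M3", "M1-M3-M2", "M2-M1-M3"]
    post_order = np.array([0.70, 0.25, 0.05])   # P(order | marker data)

    # Maximised likelihood ratios L(theta_hat) / L(no linkage) at one test
    # position, computed separately under each marker order.
    lr_max = np.array([180.0, 40.0, 8.0])

    # Weighted sum of maximum likelihoods, reported on the log10 (lod) scale,
    # versus using only the single best maximum likelihood map.
    lod_weighted = np.log10(post_order @ lr_max)
    lod_best_map = np.log10(lr_max[np.argmax(post_order)])
    print(f"weighted lod: {lod_weighted:.2f}, best-map lod: {lod_best_map:.2f}")
    ```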

  10. Effect of Tibial Posterior Slope on Knee Kinematics, Quadriceps Force, and Patellofemoral Contact Force After Posterior-Stabilized Total Knee Arthroplasty.

    PubMed

    Okamoto, Shigetoshi; Mizu-uchi, Hideki; Okazaki, Ken; Hamai, Satoshi; Nakahara, Hiroyuki; Iwamoto, Yukihide

    2015-08-01

    We used a musculoskeletal model validated with in vivo data to evaluate the effect of tibial posterior slope on knee kinematics, quadriceps force, and patellofemoral contact force after posterior-stabilized total knee arthroplasty. The maximum quadriceps force and patellofemoral contact force decreased with increasing posterior slope. Anterior sliding of the tibial component and anterior impingement of the anterior aspect of the tibial post were observed with tibial posterior slopes of at least 5° and 10°, respectively. Increased tibial posterior slope contributes to improved exercise efficiency during knee extension; however, excessive tibial posterior slope should be avoided to prevent knee instability. Based on our computer simulation, we recommend tibial posterior slopes of less than 5° in posterior-stabilized total knee arthroplasty. Copyright © 2015 Elsevier Inc. All rights reserved.

  11. National, regional, and global trends in systolic blood pressure since 1980: systematic analysis of health examination surveys and epidemiological studies with 786 country-years and 5·4 million participants.

    PubMed

    Danaei, Goodarz; Finucane, Mariel M; Lin, John K; Singh, Gitanjali M; Paciorek, Christopher J; Cowan, Melanie J; Farzadfar, Farshad; Stevens, Gretchen A; Lim, Stephen S; Riley, Leanne M; Ezzati, Majid

    2011-02-12

    Data for trends in blood pressure are needed to understand the effects of its dietary, lifestyle, and pharmacological determinants; set intervention priorities; and evaluate national programmes. However, few worldwide analyses of trends in blood pressure have been done. We estimated worldwide trends in population mean systolic blood pressure (SBP). We estimated trends and their uncertainties in mean SBP for adults 25 years and older in 199 countries and territories. We obtained data from published and unpublished health examination surveys and epidemiological studies (786 country-years and 5·4 million participants). For each sex, we used a Bayesian hierarchical model to estimate mean SBP by age, country, and year, accounting for whether a study was nationally representative. In 2008, age-standardised mean SBP worldwide was 128·1 mm Hg (95% uncertainty interval 126·7-129·4) in men and 124·4 mm Hg (123·0-125·9) in women. Globally, between 1980 and 2008, SBP decreased by 0·8 mm Hg per decade (-0·4 to 2·2, posterior probability of being a true decline=0·90) in men and 1·0 mm Hg per decade (-0·3 to 2·3, posterior probability=0·93) in women. Female SBP decreased by 3·5 mm Hg or more per decade in western Europe and Australasia (posterior probabilities ≥0·999). Male SBP fell most in high-income North America, by 2·8 mm Hg per decade (1·3-4·5, posterior probability >0·999), followed by Australasia and western Europe where it decreased by more than 2·0 mm Hg per decade (posterior probabilities >0·98). SBP rose in Oceania, east Africa, and south and southeast Asia for both sexes, and in west Africa for women, with the increases ranging 0·8-1·6 mm Hg per decade in men (posterior probabilities 0·72-0·91) and 1·0-2·7 mm Hg per decade for women (posterior probabilities 0·75-0·98). Female SBP was highest in some east and west African countries, with means of 135 mm Hg or greater. Male SBP was highest in Baltic and east and west African countries, where mean SBP reached 138 mm Hg or more. Men and women in western Europe had the highest SBP in high-income regions. On average, global population SBP decreased slightly since 1980, but trends varied significantly across regions and countries. SBP is currently highest in low-income and middle-income countries. Effective population-based and personal interventions should be targeted towards low-income and middle-income countries. Funding Bill & Melinda Gates Foundation and WHO. Copyright © 2011 Elsevier Ltd. All rights reserved.

  12. A robust method to forecast volcanic ash clouds

    USGS Publications Warehouse

    Denlinger, Roger P.; Pavolonis, Mike; Sieglaff, Justin

    2012-01-01

    Ash clouds emanating from volcanic eruption columns often form trails of ash extending thousands of kilometers through the Earth's atmosphere, disrupting air traffic and posing a significant hazard to air travel. To mitigate such hazards, the community charged with reducing flight risk must accurately assess risk of ash ingestion for any flight path and provide robust forecasts of volcanic ash dispersal. In response to this need, a number of different transport models have been developed for this purpose and applied to recent eruptions, providing a means to assess uncertainty in forecasts. Here we provide a framework for optimal forecasts and their uncertainties given any model and any observational data. This involves random sampling of the probability distributions of input (source) parameters to a transport model and iteratively running the model with different inputs, each time assessing the predictions that the model makes about ash dispersal by direct comparison with satellite data. The results of these comparisons are embodied in a likelihood function whose maximum corresponds to the minimum misfit between model output and observations. Bayes theorem is then used to determine a normalized posterior probability distribution and from that a forecast of future uncertainty in ash dispersal. The nature of ash clouds in heterogeneous wind fields creates a strong maximum likelihood estimate in which most of the probability is localized to narrow ranges of model source parameters. This property is used here to accelerate probability assessment, producing a method to rapidly generate a prediction of future ash concentrations and their distribution based upon assimilation of satellite data as well as model and data uncertainties. Applying this method to the recent eruption of Eyjafjallajökull in Iceland, we show that the 3 and 6 h forecasts of ash cloud location probability encompassed the location of observed satellite-determined ash cloud loads, providing an efficient means to assess all of the hazards associated with these ash clouds.
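
    The sampling-and-weighting loop can be sketched as follows, with a toy transport model standing in for a real dispersal code (the function transport_model, the prior ranges, and the noise level are hypothetical):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def transport_model(source_params):
        """Hypothetical stand-in for an ash-transport run: returns predicted
        column loads on a 1-D satellite grid for one (height, flux) source."""
        height, flux = source_params
        x = np.linspace(0, 1, 50)
        return flux * np.exp(-((x - height / 12.0) ** 2) / 0.02)

    # Satellite-observed ash loads (synthetic, for illustration).
    observed = transport_model((9.0, 2.0)) + rng.normal(0, 0.1, 50)

    # Randomly sample source parameters from their prior ranges, run the
    # model, and build the likelihood from the model-data misfit.
    samples = np.column_stack([rng.uniform(5, 12, 2000),     # plume height (km)
                               rng.uniform(0.5, 5, 2000)])   # mass flux
    sigma = 0.1
    misfit = np.array([np.sum((observed - transport_model(s)) ** 2) for s in samples])
    log_like = -0.5 * misfit / sigma ** 2

    # Bayes' theorem with a flat prior: normalised posterior weights.
    w = np.exp(log_like - log_like.max())
    w /= w.sum()

    # Forecast: posterior-weighted prediction of future ash concentrations.
    preds = np.array([transport_model(s) for s in samples])
    mean_forecast = w @ preds
    print("effective sample size:", 1.0 / np.sum(w ** 2))
    ```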

  13. An Alternative Version of Conditional Probabilities and Bayes' Rule: An Application of Probability Logic

    ERIC Educational Resources Information Center

    Satake, Eiki; Amato, Philip P.

    2008-01-01

    This paper presents an alternative version of formulas of conditional probabilities and Bayes' rule that demonstrate how the truth table of elementary mathematical logic applies to the derivations of the conditional probabilities of various complex, compound statements. This new approach is used to calculate the prior and posterior probabilities…

  14. A Probabilistic Strategy for Understanding Action Selection

    PubMed Central

    Kim, Byounghoon; Basso, Michele A.

    2010-01-01

    Brain regions involved in transforming sensory signals into movement commands are the likely sites where decisions are formed. Once formed, a decision must be read out from the activity of populations of neurons to produce a choice of action. How this occurs remains unresolved. We recorded from four superior colliculus (SC) neurons simultaneously while monkeys performed a target selection task. We implemented three models to gain insight into the computational principles underlying population coding of action selection. We compared the population vector average (PVA), winner-takes-all (WTA) and a Bayesian model, the maximum a posteriori (MAP) estimate, to determine which predicted choices most often. The probabilistic model predicted more trials correctly than both the WTA and the PVA: the MAP model predicted 81.88% of trials, whereas the WTA predicted 71.11% and the PVA/OLE predicted the fewest, at 55.71 and 69.47%. Recovering MAP estimates using simulated, non-uniform priors that correlated with monkeys' choice performance improved the accuracy of the model by 2.88%. A dynamic analysis revealed that the MAP estimate evolved over time and the posterior probability of the saccade choice reached a maximum at the time of the saccade. MAP estimates also scaled with choice performance accuracy. Although there was overlap in the prediction abilities of all the models, we conclude that movement choice from populations of neurons may be best understood by considering frameworks based on probability. PMID:20147560
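
    The three read-outs differ only in how they map one vector of spike counts to a choice. A sketch with four Gaussian-tuned units (tuning widths, gains, and the uniform prior are assumptions):

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Four SC-like neurons with Gaussian tuning over candidate target angles.
    angles = np.linspace(0, 180, 181)               # candidate choices (deg)
    preferred = np.array([30.0, 70.0, 110.0, 150.0])

    def rates(theta):
        return 20 * np.exp(-0.5 * ((theta - preferred) / 25.0) ** 2) + 1.0

    target = 70.0
    spikes = rng.poisson(rates(target))             # one trial of counts

    # MAP read-out: posterior over angles from Poisson likelihoods and a prior.
    prior = np.ones_like(angles) / angles.size      # uniform; could be non-uniform
    log_like = np.array([np.sum(spikes * np.log(rates(a)) - rates(a))
                         for a in angles])
    post = np.exp(log_like - log_like.max()) * prior
    post /= post.sum()
    map_choice = angles[np.argmax(post)]

    # Winner-takes-all and population vector average read-outs for comparison.
    wta_choice = preferred[np.argmax(spikes)]
    pva_choice = np.sum(spikes * preferred) / np.sum(spikes)
    print(map_choice, wta_choice, pva_choice)
    ```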

  15. Diagnostics for insufficiencies of posterior calculations in Bayesian signal inference.

    PubMed

    Dorn, Sebastian; Oppermann, Niels; Ensslin, Torsten A

    2013-11-01

    We present an error-diagnostic validation method for posterior distributions in Bayesian signal inference, an advancement of a previous work. It transfers deviations from the correct posterior into characteristic deviations from a uniform distribution of a quantity constructed for this purpose. We show that this method is able to reveal and discriminate several kinds of numerical and approximation errors, as well as their impact on the posterior distribution. For this we present four typical analytical examples of posteriors with incorrect variance, skewness, position of the maximum, or normalization. We show further how this test can be applied to multidimensional signals.
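
    The underlying idea, checking that a posterior-derived quantity is uniformly distributed when the posterior is correct, can be demonstrated in a few lines. A toy Gaussian version (not the paper's construction in detail):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(6)

    # If the posterior is computed correctly, the quantile u = P(s <= s_true | d),
    # accumulated over many simulated signal/data pairs, is uniform on [0, 1].
    sigma_s, sigma_n, n_sim = 1.0, 0.5, 2000
    u = np.empty(n_sim)
    for i in range(n_sim):
        s = rng.normal(0, sigma_s)                 # draw signal from the prior
        d = s + rng.normal(0, sigma_n)             # noisy datum
        # Correct Gaussian posterior: variance v, mean v * d / sigma_n^2.
        v = 1.0 / (1.0 / sigma_s ** 2 + 1.0 / sigma_n ** 2)
        mean = v * d / sigma_n ** 2
        u[i] = stats.norm.cdf(s, loc=mean, scale=np.sqrt(v))

    # An incorrect posterior (e.g. wrong variance or shifted maximum) produces
    # a characteristic non-uniform distribution of u.
    print("correct posterior, KS p-value:", stats.kstest(u, "uniform").pvalue)
    ```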

  16. Forward and inverse uncertainty quantification using multilevel Monte Carlo algorithms for an elliptic non-local equation

    DOE PAGES

    Jasra, Ajay; Law, Kody J. H.; Zhou, Yan

    2016-01-01

    Our paper considers uncertainty quantification for an elliptic nonlocal equation. In particular, it is assumed that the parameters which define the kernel in the nonlocal operator are uncertain and a priori distributed according to a probability measure. It is shown that the induced probability measure on some quantities of interest arising from functionals of the solution to the equation with random inputs is well-defined, as is the posterior distribution on parameters given observations. As the elliptic nonlocal equation cannot be solved exactly, approximate posteriors are constructed. The multilevel Monte Carlo (MLMC) and multilevel sequential Monte Carlo (MLSMC) sampling algorithms are used for a priori and a posteriori estimation, respectively, of quantities of interest. Furthermore, these algorithms reduce the amount of work to estimate posterior expectations, for a given level of error, relative to Monte Carlo and i.i.d. sampling from the posterior at a given level of approximation of the solution of the elliptic nonlocal equation.

  17. Forward and inverse uncertainty quantification using multilevel Monte Carlo algorithms for an elliptic non-local equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jasra, Ajay; Law, Kody J. H.; Zhou, Yan

    Our paper considers uncertainty quantification for an elliptic nonlocal equation. In particular, it is assumed that the parameters which define the kernel in the nonlocal operator are uncertain and a priori distributed according to a probability measure. It is shown that the induced probability measure on some quantities of interest arising from functionals of the solution to the equation with random inputs is well-defined, as is the posterior distribution on parameters given observations. As the elliptic nonlocal equation cannot be solved exactly, approximate posteriors are constructed. The multilevel Monte Carlo (MLMC) and multilevel sequential Monte Carlo (MLSMC) sampling algorithms are used for a priori and a posteriori estimation, respectively, of quantities of interest. Furthermore, these algorithms reduce the amount of work to estimate posterior expectations, for a given level of error, relative to Monte Carlo and i.i.d. sampling from the posterior at a given level of approximation of the solution of the elliptic nonlocal equation.

  18. Applications of quantum entropy to statistics

    NASA Astrophysics Data System (ADS)

    Silver, R. N.; Martz, H. F.

    This paper develops two generalizations of the maximum entropy (ME) principle. First, Shannon classical entropy is replaced by von Neumann quantum entropy to yield a broader class of information divergences (or penalty functions) for statistics applications. Negative relative quantum entropy enforces convexity, positivity, non-local extensivity and prior correlations such as smoothness. This enables the extension of ME methods from their traditional domain of ill-posed inverse problems to new applications such as non-parametric density estimation. Second, given a choice of information divergence, a combination of ME and Bayes' rule is used to assign both prior and posterior probabilities. Hyperparameters are interpreted as Lagrange multipliers enforcing constraints. Conservation principles, such as conservation of information and smoothness, are proposed to set the statistical regularization and other hyperparameters. ME provides an alternative to hierarchical Bayes methods.

  19. Reduction of Poisson noise in measured time-resolved data for time-domain diffuse optical tomography.

    PubMed

    Okawa, S; Endo, Y; Hoshi, Y; Yamada, Y

    2012-01-01

    A method to reduce noise for time-domain diffuse optical tomography (DOT) is proposed. Poisson noise, which contaminates time-resolved photon counting data, is reduced by use of maximum a posteriori estimation. The noise-free data are modeled as a Markov random process, and the measured time-resolved data are assumed to be Poisson distributed random variables. The posterior probability of the occurrence of the noise-free data is formulated. By maximizing the probability, the noise-free data are estimated, and the Poisson noise is reduced as a result. The performance of the Poisson noise reduction is demonstrated in experiments on image reconstruction for time-domain DOT. In simulations, the proposed method reduces the relative error between the noise-free and noisy data to about one thirtieth, and the reconstructed DOT image is smoothed by the proposed noise reduction. The variance of the reconstructed absorption coefficients decreased by 22% in a phantom experiment. The quality of DOT, which can be applied to breast cancer screening etc., is improved by the proposed noise reduction.
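
    A minimal sketch of such a MAP estimate, assuming a Poisson likelihood and a simple Markov smoothness prior on the log-intensity (the prior strength beta and the synthetic decay curve are illustrative, not the paper's model):

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(7)

    # Synthetic time-resolved photon-count curve.
    t = np.linspace(0, 1, 100)
    truth = 400 * np.exp(-0.5 * ((t - 0.4) / 0.1) ** 2) + 5
    y = rng.poisson(truth)                          # measured counts

    beta = 50.0  # assumed prior strength (controls smoothness)

    def neg_log_posterior(log_lam):
        lam = np.exp(log_lam)
        nll = np.sum(lam - y * log_lam)              # Poisson likelihood term
        prior = beta * np.sum(np.diff(log_lam) ** 2) # Markov smoothness prior
        return nll + prior

    res = minimize(neg_log_posterior, np.log(y + 1.0), method="L-BFGS-B")
    denoised = np.exp(res.x)
    print("relative error before:", np.linalg.norm(y - truth) / np.linalg.norm(truth))
    print("relative error after :", np.linalg.norm(denoised - truth) / np.linalg.norm(truth))
    ```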

  20. Knee point search using cascading top-k sorting with minimized time complexity.

    PubMed

    Wang, Zheng; Tseng, Shian-Shyong

    2013-01-01

    Anomaly detection systems and many other applications are frequently confronted with the problem of finding the largest knee point in the sorted curve for a set of unsorted points. This paper proposes an efficient knee point search algorithm with minimized time complexity using cascading top-k sorting when the a priori probability distribution of the knee point is known. First, a top-k sort algorithm is proposed based on a quicksort variation. We divide the knee point search problem into multiple steps, and in each step an optimization problem for the selection number k is solved, where the objective function is defined as the expected time cost. Because the expected time cost in one step depends on that of the subsequent steps, we simplify the optimization problem by minimizing the maximum expected time cost. The posterior probability of the largest knee point distribution and the other parameters are updated before solving the optimization problem in each step. An example of source detection of DNS DoS flooding attacks is provided to illustrate the applications of the proposed algorithm.

  1. Screening for SNPs with Allele-Specific Methylation based on Next-Generation Sequencing Data.

    PubMed

    Hu, Bo; Ji, Yuan; Xu, Yaomin; Ting, Angela H

    2013-05-01

    Allele-specific methylation (ASM) has long been studied but mainly documented in the context of genomic imprinting and X chromosome inactivation. Taking advantage of the next-generation sequencing technology, we conduct a high-throughput sequencing experiment with four prostate cell lines to survey the whole genome and identify single nucleotide polymorphisms (SNPs) with ASM. A Bayesian approach is proposed to model the counts of short reads for each SNP conditional on its genotypes of multiple subjects, leading to a posterior probability of ASM. We flag SNPs with high posterior probabilities of ASM by accounting for multiple comparisons based on posterior false discovery rates. Applying the Bayesian approach to the in-house prostate cell line data, we identify 269 SNPs as candidates of ASM. A simulation study is carried out to demonstrate the quantitative performance of the proposed approach.
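
    Flagging by posterior false discovery rate reduces to a cumulative average of local fdr values. A sketch (the threshold and posterior values are invented):

    ```python
    import numpy as np

    def flag_by_posterior_fdr(post_prob, alpha=0.05):
        """Sort by posterior probability of ASM and grow the flagged set while
        the expected FDR (mean of 1 - posterior within the set) stays below
        the target alpha."""
        order = np.argsort(-post_prob)               # most confident first
        local_fdr = 1.0 - post_prob[order]
        expected_fdr = np.cumsum(local_fdr) / np.arange(1, len(order) + 1)
        n_flag = np.sum(expected_fdr <= alpha)
        flags = np.zeros(len(post_prob), bool)
        flags[order[:n_flag]] = True
        return flags

    post = np.array([0.99, 0.97, 0.90, 0.60, 0.30, 0.05])
    print(flag_by_posterior_fdr(post, alpha=0.05))   # only the strongest SNPs
    ```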

  2. Multinomial Logistic Regression & Bootstrapping for Bayesian Estimation of Vertical Facies Prediction in Heterogeneous Sandstone Reservoirs

    NASA Astrophysics Data System (ADS)

    Al-Mudhafar, W. J.

    2013-12-01

    Precise prediction of rock facies leads to adequate reservoir characterization by improving the porosity-permeability relationships used to estimate properties in non-cored intervals. It also helps to accurately identify the spatial facies distribution, yielding an accurate reservoir model for optimal future reservoir performance. In this paper, facies estimation has been done through multinomial logistic regression (MLR) with respect to the well logs and core data in a well in the upper sandstone formation of the South Rumaila oil field. The independent variables are gamma ray, formation density, water saturation, shale volume, log porosity, core porosity, and core permeability. First, a robust sequential imputation algorithm was used to impute the missing data. This algorithm starts from a complete subset of the dataset and sequentially estimates the missing values in an incomplete observation by minimizing the determinant of the covariance of the augmented data matrix; the observation is then added to the complete data matrix, and the algorithm continues with the next observation with missing values. MLR was chosen to estimate the maximum likelihood and minimize the standard error for the nonlinear relationships between facies and the core and log data. MLR predicts the probabilities of the different possible facies given each independent variable by constructing a linear predictor function with a set of weights that are linearly combined with the independent variables using a dot product. A beta distribution of facies was considered as prior knowledge, and the resulting predicted probability (posterior) was estimated from MLR based on Bayes' theorem, which relates the predicted probability (posterior) to the conditional probability and the prior knowledge. To assess the statistical accuracy of the model, the bootstrap was carried out to estimate extra-sample prediction error by randomly drawing datasets with replacement from the training data. Each sample has the same size as the original training set, and N bootstrap datasets can be produced to re-fit the model, decreasing the squared difference between the estimated and observed categorical variables (facies) and thereby reducing the degree of uncertainty.

  3. An ensemble-based dynamic Bayesian averaging approach for discharge simulations using multiple global precipitation products and hydrological models

    NASA Astrophysics Data System (ADS)

    Qi, Wei; Liu, Junguo; Yang, Hong; Sweetapple, Chris

    2018-03-01

    Global precipitation products are very important datasets in flow simulations, especially in poorly gauged regions. Uncertainties resulting from precipitation products, hydrological models and their combinations vary with time and data magnitude, and undermine their application to flow simulations. However, previous studies have not quantified these uncertainties individually and explicitly. This study developed an ensemble-based dynamic Bayesian averaging approach (e-Bay) for deterministic discharge simulations using multiple global precipitation products and hydrological models. In this approach, the joint probability of precipitation products and hydrological models being correct is quantified based on uncertainties in maximum and mean estimation, posterior probability is quantified as functions of the magnitude and timing of discharges, and the law of total probability is implemented to calculate expected discharges. Six global fine-resolution precipitation products and two hydrological models of different complexities are included in an illustrative application. e-Bay can effectively quantify uncertainties and therefore generate better deterministic discharges than traditional approaches (weighted average methods with equal and varying weights and maximum likelihood approach). The mean Nash-Sutcliffe Efficiency values of e-Bay are up to 0.97 and 0.85 in training and validation periods respectively, which are at least 0.06 and 0.13 higher than traditional approaches. In addition, with increased training data, assessment criteria values of e-Bay show smaller fluctuations than traditional approaches and its performance becomes outstanding. The proposed e-Bay approach bridges the gap between global precipitation products and their pragmatic applications to discharge simulations, and is beneficial to water resources management in ungauged or poorly gauged regions across the world.
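
    The law-of-total-probability step reduces to a posterior-weighted average once the weights are in hand. A heavily simplified sketch with static Gaussian weights (e-Bay's weights additionally vary with discharge magnitude and timing; all numbers are invented):

    ```python
    import numpy as np

    # Expected discharge from several precipitation-product / hydrological-model
    # combinations via the law of total probability:
    #   E[Q] = sum_i P(combination_i correct | data) * Q_i
    sim_flows = np.array([112.0, 95.0, 130.0, 101.0])   # m^3/s, four combinations
    observed = 105.0
    sigma = 10.0

    # Posterior weight of each combination from its fit to the observation.
    log_like = -0.5 * ((observed - sim_flows) / sigma) ** 2
    w = np.exp(log_like - log_like.max())
    w /= w.sum()
    print("weights:", np.round(w, 3), "expected discharge:", w @ sim_flows)
    ```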

  4. Bayesian multiple-source localization in an uncertain ocean environment.

    PubMed

    Dosso, Stan E; Wilmut, Michael J

    2011-06-01

    This paper considers simultaneous localization of multiple acoustic sources when properties of the ocean environment (water column and seabed) are poorly known. A Bayesian formulation is developed in which the environmental parameters, noise statistics, and locations and complex strengths (amplitudes and phases) of multiple sources are considered to be unknown random variables constrained by acoustic data and prior information. Two approaches are considered for estimating source parameters. Focalization maximizes the posterior probability density (PPD) over all parameters using adaptive hybrid optimization. Marginalization integrates the PPD using efficient Markov-chain Monte Carlo methods to produce joint marginal probability distributions for source ranges and depths, from which source locations are obtained. This approach also provides quantitative uncertainty analysis for all parameters, which can aid in understanding of the inverse problem and may be of practical interest (e.g., source-strength probability distributions). In both approaches, closed-form maximum-likelihood expressions for source strengths and noise variance at each frequency allow these parameters to be sampled implicitly, substantially reducing the dimensionality and difficulty of the inversion. Examples are presented of both approaches applied to single- and multi-frequency localization of multiple sources in an uncertain shallow-water environment, and a Monte Carlo performance evaluation study is carried out. © 2011 Acoustical Society of America

  5. A sit-ski design aimed at controlling centre of mass and inertia.

    PubMed

    Langelier, Eve; Martel, Stéphane; Millot, Anne; Lessard, Jean-Luc; Smeesters, Cécile; Rancourt, Denis

    2013-01-01

    This article introduces a sit-ski developed for the Canadian Alpine Ski Team in view of the Vancouver 2010 Paralympic games. The design is predominantly based on controlling the mass distribution of the sit-ski, a critical factor in skiing performance and control. Both the antero-posterior location of the centre of mass and the sit-ski moment of inertia were addressed in our design. Our design provides means to adjust the antero-posterior centre of mass location of a sit-ski to compensate for masses that would tend to move the antero-posterior centre of mass location away from the midline of the binding area along the ski axis. The adjustment range provided is as large as 140 mm, thereby providing sufficient adaptability for most situations. The suspension mechanism selected is a four-bar linkage optimised to limit antero-posterior seat movement, due to suspension compression, to 7 mm maximum. This is about 5% of the maximum antero-posterior centre of mass control capacity (151 mm) of a human participant. Foot rest inclination was included in the design to modify the sit-ski inertia by as much as 11%. Together, these mass adjustment features were shown to drastically help athletes' skiing performance.

  6. An agglomerative hierarchical clustering approach to visualisation in Bayesian clustering problems

    PubMed Central

    Dawson, Kevin J.; Belkhir, Khalid

    2009-01-01

    Clustering problems (including the clustering of individuals into outcrossing populations, hybrid generations, full-sib families and selfing lines) have recently received much attention in population genetics. In these clustering problems, the parameter of interest is a partition of the set of sampled individuals: the sample partition. In a fully Bayesian approach to clustering problems of this type, our knowledge about the sample partition is represented by a probability distribution on the space of possible sample partitions. Since the number of possible partitions grows very rapidly with the sample size, we cannot visualise this probability distribution in its entirety unless the sample is very small. As a solution to this visualisation problem, we recommend using an agglomerative hierarchical clustering algorithm, which we call the exact linkage algorithm. This algorithm is a special case of the maximin clustering algorithm that we introduced previously. The exact linkage algorithm is now implemented in our software package Partition View. The exact linkage algorithm takes the posterior co-assignment probabilities as input, and yields as output a rooted binary tree or, more generally, a forest of such trees. Each node of this forest defines a set of individuals, and the node height is the posterior co-assignment probability of this set. This provides a useful visual representation of the uncertainty associated with the assignment of individuals to categories. It is also a useful starting point for a more detailed exploration of the posterior distribution in terms of the co-assignment probabilities. PMID:19337306
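
    A generic stand-in for this pipeline: convert posterior co-assignment probabilities to distances and build the tree with an off-the-shelf agglomerative method (scipy's complete linkage here is only an illustration; it is not the authors' exact linkage algorithm):

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import linkage
    from scipy.spatial.distance import squareform

    # Posterior co-assignment probabilities for four individuals (invented).
    coassign = np.array([[1.00, 0.95, 0.90, 0.10],
                         [0.95, 1.00, 0.92, 0.12],
                         [0.90, 0.92, 1.00, 0.08],
                         [0.10, 0.12, 0.08, 1.00]])

    # Distance = 1 - co-assignment probability, condensed for scipy.
    dist = squareform(1.0 - coassign, checks=False)
    tree = linkage(dist, method="complete")

    # Each merge height (1 - co-assignment probability) visualises the
    # uncertainty of assigning individuals to a common cluster.
    print(tree)
    ```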

  7. Central posterior capsule pigmentation in a patient with pigment dispersion and previous ocular trauma: a case report.

    PubMed

    Al-Mezaine, Hani S

    2010-01-01

    We report a 55-year-old man with unusually dense, unilateral central posterior capsule pigmentation associated with the characteristic clinical features of pigment dispersion syndrome, including a Krukenberg's spindle and dense trabecular pigmentation in both eyes. A history of an old blunt ocular trauma probably caused separation of the anterior hyaloid from the back of the lens, thereby creating an avenue by which pigment could reach the potential space of Berger's from the posterior chamber.

  8. Central posterior capsule pigmentation in a patient with pigment dispersion and previous ocular trauma: A case report

    PubMed Central

    Al-Mezaine, Hani S

    2010-01-01

    We report a 55-year-old man with unusually dense, unilateral central posterior capsule pigmentation associated with the characteristic clinical features of pigment dispersion syndrome, including a Krukenberg's spindle and dense trabecular pigmentation in both eyes. A history of an old blunt ocular trauma probably caused separation of the anterior hyaloid from the back of the lens, thereby creating an avenue by which pigment could reach the potential space of Berger's from the posterior chamber. PMID:20534930

  9. Screening for SNPs with Allele-Specific Methylation based on Next-Generation Sequencing Data

    PubMed Central

    Hu, Bo; Xu, Yaomin

    2013-01-01

    Allele-specific methylation (ASM) has long been studied but mainly documented in the context of genomic imprinting and X chromosome inactivation. Taking advantage of the next-generation sequencing technology, we conduct a high-throughput sequencing experiment with four prostate cell lines to survey the whole genome and identify single nucleotide polymorphisms (SNPs) with ASM. A Bayesian approach is proposed to model the counts of short reads for each SNP conditional on its genotypes of multiple subjects, leading to a posterior probability of ASM. We flag SNPs with high posterior probabilities of ASM by accounting for multiple comparisons based on posterior false discovery rates. Applying the Bayesian approach to the in-house prostate cell line data, we identify 269 SNPs as candidates of ASM. A simulation study is carried out to demonstrate the quantitative performance of the proposed approach. PMID:23710259

  10. Morphological and molecular data reveal a new species of Neoechinorhynchus (Acanthocephala: Neoechinorhynchidae) from Dormitator maculatus in the Gulf of Mexico.

    PubMed

    Pinacho-Pinacho, Carlos Daniel; Sereno-Uribe, Ana L; García-Varela, Martín

    2014-12-01

    Neoechinorhynchus (Neoechinorhynchus) mexicoensis sp. n. is described from the intestine of Dormitator maculatus (Bloch 1792) collected in 5 coastal localities from the Gulf of Mexico. The new species is mainly distinguished from the other 33 described species of Neoechinorhynchus from the Americas associated with freshwater, marine and brackish fishes by having smaller middle and posterior hooks and possessing a small proboscis with three rows of six hooks each, apical hooks longer than other hooks and extending to the same level as the posterior hooks, 1 giant nucleus in the ventral body wall and females with eggs longer than other congeneric species. Sequences of the internal transcribed spacer (ITS) and the large subunit (LSU) of ribosomal DNA including the domain D2+D3 were used independently to corroborate the morphological distinction among the new species and other congeneric species associated with freshwater and brackish water fish from Mexico. The genetic divergence estimated among congeneric species ranged from 7.34 to 44% for ITS and from 1.65 to 32.9% for LSU. Maximum likelihood and Bayesian inference analyses with each dataset showed that the 25 specimens analyzed from 5 localities of the coast of the Gulf of Mexico parasitizing D. maculatus represent an independent clade with strong bootstrap support and posterior probabilities. The morphological evidence, plus the monophyly in the phylogenetic analyses, indicates that the acanthocephalans collected from intestine of D. maculatus from the Gulf of Mexico represent a new species, herein named N. (N.) mexicoensis sp. n. Copyright © 2014. Published by Elsevier Ireland Ltd.

  11. Birth/birth-death processes and their computable transition probabilities with biological applications.

    PubMed

    Ho, Lam Si Tung; Xu, Jason; Crawford, Forrest W; Minin, Vladimir N; Suchard, Marc A

    2018-03-01

    Birth-death processes track the size of a univariate population, but many biological systems involve interaction between populations, necessitating models for two or more populations simultaneously. A lack of efficient methods for evaluating finite-time transition probabilities of bivariate processes, however, has restricted statistical inference in these models. Researchers rely on computationally expensive methods such as matrix exponentiation or Monte Carlo approximation, restricting likelihood-based inference to small systems, or indirect methods such as approximate Bayesian computation. In this paper, we introduce the birth/birth-death process, a tractable bivariate extension of the birth-death process, where rates are allowed to be nonlinear. We develop an efficient algorithm to calculate its transition probabilities using a continued fraction representation of their Laplace transforms. Next, we identify several exemplary models arising in molecular epidemiology, macro-parasite evolution, and infectious disease modeling that fall within this class, and demonstrate advantages of our proposed method over existing approaches to inference in these models. Notably, the ubiquitous stochastic susceptible-infectious-removed (SIR) model falls within this class, and we emphasize that computable transition probabilities newly enable direct inference of parameters in the SIR model. We also propose a very fast method for approximating the transition probabilities under the SIR model via a novel branching process simplification, and compare it to the continued fraction representation method with application to the 17th century plague in Eyam. Although the two methods produce similar maximum a posteriori estimates, the branching process approximation fails to capture the correlation structure in the joint posterior distribution.
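
    For very small populations, the expensive matrix-exponentiation baseline that the continued-fraction method is designed to replace fits in a few lines. A sketch for a stochastic SIR generator (rates and population size invented):

    ```python
    import numpy as np
    from scipy.linalg import expm

    # Finite-time SIR transition probabilities via the matrix exponential of
    # the generator; feasible only for tiny populations.
    N = 6                        # total population (kept tiny on purpose)
    beta_rate, gamma = 1.2, 0.4

    # Enumerate states (S, I); R = N - S - I is implied.
    states = [(s, i) for s in range(N + 1) for i in range(N + 1 - s)]
    index = {st: k for k, st in enumerate(states)}

    Q = np.zeros((len(states), len(states)))
    for (s, i), k in index.items():
        if s > 0 and i > 0:                          # infection event
            Q[k, index[(s - 1, i + 1)]] = beta_rate * s * i / N
        if i > 0:                                    # removal event
            Q[k, index[(s, i - 1)]] = gamma * i
        Q[k, k] = -Q[k].sum()                        # rows sum to zero

    P_t = expm(Q * 2.0)                              # transition matrix at t = 2
    start = index[(N - 1, 1)]                        # one initial infective
    print("P(no infectives at t=2):",
          sum(P_t[start, index[(s, 0)]] for s in range(N + 1)))
    ```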

  12. A Comparison of a Bayesian and a Maximum Likelihood Tailored Testing Procedure.

    ERIC Educational Resources Information Center

    McKinley, Robert L.; Reckase, Mark D.

    A study was conducted to compare tailored testing procedures based on a Bayesian ability estimation technique and on a maximum likelihood ability estimation technique. The Bayesian tailored testing procedure selected items so as to minimize the posterior variance of the ability estimate distribution, while the maximum likelihood tailored testing…

  13. Generative adversarial networks for brain lesion detection

    NASA Astrophysics Data System (ADS)

    Alex, Varghese; Safwan, K. P. Mohammed; Chennamsetty, Sai Saketh; Krishnamurthi, Ganapathy

    2017-02-01

    Manual segmentation of brain lesions from Magnetic Resonance Images (MRI) is cumbersome and introduces errors due to inter-rater variability. This paper introduces a semi-supervised technique for detection of brain lesions from MRI using Generative Adversarial Networks (GANs). A GAN comprises a Generator network and a Discriminator network that are trained simultaneously, each with the objective of bettering the other. The networks were trained using non-lesion patches (n=13,000) from 4 different MR sequences, extracted from the BraTS dataset from regions excluding the tumor. The Generator generates data by modeling the underlying probability distribution of the training data, P(Data). The Discriminator learns the posterior probability P(Label | Data) by classifying training data and generated data as "Real" or "Fake", respectively. Upon learning the joint distribution, the Generator produces images/patches on which the performance of the Discriminator is random, i.e. P(Label | Data = Generated Data) = 0.5. During testing, the Discriminator assigns posterior probability values close to 0.5 to patches from non-lesion regions, while patches centered on a lesion arise from a different distribution, P(Lesion), and hence are assigned lower posterior probability values by the Discriminator. On the test set (n=14), the proposed technique achieves a whole tumor dice score of 0.69, a sensitivity of 91% and a specificity of 59%. Additionally, the Generator network was capable of generating non-lesion patches from various MR sequences.
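
    A compact sketch of the idea: train a GAN on normal patches only, then read the Discriminator's output as a posterior-style anomaly score. The architectures, patch size, and stand-in data below are placeholders, not the paper's networks:

    ```python
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    patch_dim, z_dim = 16 * 16, 64

    # Tiny MLP Generator and Discriminator (placeholders for the real models).
    G = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(),
                      nn.Linear(256, patch_dim), nn.Tanh())
    D = nn.Sequential(nn.Linear(patch_dim, 256), nn.LeakyReLU(0.2),
                      nn.Linear(256, 1), nn.Sigmoid())
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    bce = nn.BCELoss()

    normal_patches = torch.randn(1024, patch_dim).clamp(-1, 1)  # stand-in data

    for step in range(200):
        real = normal_patches[torch.randint(0, 1024, (64,))]
        fake = G(torch.randn(64, z_dim))
        # Discriminator learns P(label = 'real' | patch).
        loss_d = (bce(D(real), torch.ones(64, 1))
                  + bce(D(fake.detach()), torch.zeros(64, 1)))
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()
        # Generator tries to push D(fake) toward 1.
        loss_g = bce(D(fake), torch.ones(64, 1))
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()

    # Test time: once training converges, in-distribution patches score near
    # 0.5, while out-of-distribution (lesion-centred) patches score lower.
    with torch.no_grad():
        print(D(normal_patches[:4]).squeeze(),
              D(torch.full((1, patch_dim), 3.0)).item())
    ```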

  14. Graphical methods for the sensitivity analysis in discriminant analysis

    DOE PAGES

    Kim, Youngil; Anderson-Cook, Christine M.; Dae-Heung, Jang

    2015-09-30

    Similar to regression, many measures to detect influential data points in discriminant analysis have been developed. Many follow similar principles as the diagnostic measures used in linear regression in the context of discriminant analysis. Here we focus on the impact on the predicted classification posterior probability when a data point is omitted. The new method is intuitive and easily interpretative compared to existing methods. We also propose a graphical display to show the individual movement of the posterior probability of other data points when a specific data point is omitted. This enables the summaries to capture the overall pattern of the change.

  15. Bayesian selection of misspecified models is overconfident and may cause spurious posterior probabilities for phylogenetic trees.

    PubMed

    Yang, Ziheng; Zhu, Tianqi

    2018-02-20

    The Bayesian method is noted to produce spuriously high posterior probabilities for phylogenetic trees in analysis of large datasets, but the precise reasons for this overconfidence are unknown. In general, the performance of Bayesian selection of misspecified models is poorly understood, even though this is of great scientific interest since models are never true in real data analysis. Here we characterize the asymptotic behavior of Bayesian model selection and show that when the competing models are equally wrong, Bayesian model selection exhibits surprising and polarized behaviors in large datasets, supporting one model with full force while rejecting the others. If one model is slightly less wrong than the other, the less wrong model will eventually win when the amount of data increases, but the method may become overconfident before it becomes reliable. We suggest that this extreme behavior may be a major factor for the spuriously high posterior probabilities for evolutionary trees. The philosophical implications of our results to the application of Bayesian model selection to evaluate opposing scientific hypotheses are yet to be explored, as are the behaviors of non-Bayesian methods in similar situations.

  16. A Bayesian Approach to Evaluating Consistency between Climate Model Output and Observations

    NASA Astrophysics Data System (ADS)

    Braverman, A. J.; Cressie, N.; Teixeira, J.

    2010-12-01

    Like other scientific and engineering problems that involve physical modeling of complex systems, climate models can be evaluated and diagnosed by comparing their output to observations of similar quantities. Though the global remote sensing data record is relatively short by climate research standards, these data offer opportunities to evaluate model predictions in new ways. For example, remote sensing data are spatially and temporally dense enough to provide distributional information that goes beyond simple moments to allow quantification of temporal and spatial dependence structures. In this talk, we propose a new method for exploiting these rich data sets using a Bayesian paradigm. For a collection of climate models, we calculate the posterior probability that each member best represents the physical system it seeks to reproduce. The posterior probability is based on the likelihood that a chosen summary statistic, computed from observations, would be obtained when the model's output is considered as a realization from a stochastic process. By exploring how posterior probabilities change with different statistics, we may paint a more quantitative and complete picture of the strengths and weaknesses of the models relative to the observations. We demonstrate our method using model output from the CMIP archive, and observations from NASA's Atmospheric Infrared Sounder.
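
    One way to realise this, assuming a Gaussian approximation for the sampling distribution of the summary statistic under each model (the statistic, model runs, and priors below are invented):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(8)

    # Summary statistic computed from remote sensing observations.
    T_obs = 2.1

    # The same statistic sampled from each model's output, treated as draws
    # from a stochastic process (synthetic stand-ins for real model runs).
    model_runs = {"modelA": rng.normal(2.0, 0.3, 500),
                  "modelB": rng.normal(1.2, 0.4, 500)}
    prior = {"modelA": 0.5, "modelB": 0.5}

    # Posterior probability that each model best represents the observations:
    # likelihood of T_obs under a Gaussian fit to the model's statistic.
    posteriors = {}
    for name, sample in model_runs.items():
        like = stats.norm.pdf(T_obs, loc=sample.mean(), scale=sample.std())
        posteriors[name] = like * prior[name]
    Z = sum(posteriors.values())
    print({k: v / Z for k, v in posteriors.items()})
    ```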

  17. Topics in inference and decision-making with partial knowledge

    NASA Technical Reports Server (NTRS)

    Safavian, S. Rasoul; Landgrebe, David

    1990-01-01

    Two essential elements needed in the process of inference and decision-making are prior probabilities and likelihood functions. When both of these components are known accurately and precisely, the Bayesian approach provides a consistent and coherent solution to the problems of inference and decision-making. In many situations, however, either one or both of the above components may not be known, or at least may not be known precisely. This problem of partial knowledge about prior probabilities and likelihood functions is addressed. There are at least two ways to cope with this lack of precise knowledge: robust methods, and interval-valued methods. First, ways of modeling imprecision and indeterminacies in prior probabilities and likelihood functions are examined; then how imprecision in the above components carries over to the posterior probabilities is examined. Finally, the problem of decision making with imprecise posterior probabilities and the consequences of such actions are addressed. Application areas where the above problems may occur are in statistical pattern recognition problems, for example, the problem of classification of high-dimensional multispectral remote sensing image data.

  18. A Bayesian predictive two-stage design for phase II clinical trials.

    PubMed

    Sambucini, Valeria

    2008-04-15

    In this paper, we propose a Bayesian two-stage design for phase II clinical trials, which represents a predictive version of the single threshold design (STD) recently introduced by Tan and Machin. The STD two-stage sample sizes are determined specifying a minimum threshold for the posterior probability that the true response rate exceeds a pre-specified target value and assuming that the observed response rate is slightly higher than the target. Unlike the STD, we do not refer to a fixed experimental outcome, but take into account the uncertainty about future data. In both stages, the design aims to control the probability of getting a large posterior probability that the true response rate exceeds the target value. Such a probability is expressed in terms of prior predictive distributions of the data. The performance of the design is based on the distinction between analysis and design priors, recently introduced in the literature. The properties of the method are studied when all the design parameters vary.
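
    The predictive ingredient is a beta-binomial average of posterior tail probabilities over future outcomes. A sketch (prior parameters, stage size, target rate, and threshold are illustrative):

    ```python
    import numpy as np
    from scipy.special import betaln, gammaln
    from scipy.stats import beta as beta_dist

    # With a Beta(a, b) design prior on the response rate, future data are
    # beta-binomial; the design controls the prior predictive probability of
    # ending with a large posterior probability that the rate exceeds p0.
    a, b = 1.0, 1.0                 # prior parameters (flat, assumed here)
    n, p0, lam = 25, 0.20, 0.90     # stage size, target rate, posterior threshold

    x = np.arange(n + 1)
    # Prior predictive (beta-binomial) P(X = x) for x responses in n patients.
    log_pred = (gammaln(n + 1) - gammaln(x + 1) - gammaln(n - x + 1)
                + betaln(x + a, n - x + b) - betaln(a, b))
    pred_pmf = np.exp(log_pred)

    # Posterior P(p > p0 | x) for each possible outcome x.
    post_exceed = beta_dist.sf(p0, a + x, b + n - x)

    # Predictive probability of a 'positive' result at the end of the stage.
    positive = post_exceed > lam
    print("P(posterior prob > lam):", np.sum(pred_pmf[positive]))
    ```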

  19. Extremely long posterior communicating artery diagnosed by MR angiography: report of two cases.

    PubMed

    Uchino, Akira; Suzuki, Chihiro; Tanaka, Masahiko

    2015-07-01

    We report two cases of an extremely long left posterior communicating artery (PCoA) diagnosed by magnetic resonance (MR) angiography. The PCoA arose from the normal point of the supraclinoid internal carotid artery and fused with the posterior cerebral artery (PCA) at its posterior ambient segment, forming an extremely long PCoA and extremely long precommunicating segment of the PCA. To our knowledge, this is the first report of such variation. Careful observation of MR angiographic images is important for detecting rare arterial variations. To identify these anomalous arteries on MR angiography, partial maximum-intensity-projection images are useful.

  20. Effects of astigmatic axis orientation on postural stabilization with stationary equilibrium

    NASA Astrophysics Data System (ADS)

    Kanazawa, Masatsugu; Uozato, Hiroshi; Asakawa, Ken; Kawamorita, Takushi

    2018-02-01

    We evaluated 15 healthy participants by assessing their maintenance of postural control while standing on a platform stabilometer for 1 min under the following conditions: eyes open; eyes open with +3.00 D cylinders on both eyes at the same axis orientation (45, 90, 135, and 180 degree axes); right eye at the 45 degree axis and left eye at the 135 degree axis (inverted V-pattern); and right eye at the 135 degree axis and left eye at the 45 degree axis (V-pattern). The differences in the linear length, area and maximum velocity of the center of pressure during postural control before and after the six types of positive cylinder-oriented axes were analyzed. Comparing the antero-posterior lengths and antero-posterior maximum velocities, there were significant differences between the V-pattern condition and the six other conditions. Astigmatic defocus in the antagonistic axes conditions, particularly the V-pattern condition, affects postural control of antero-posterior sway.

  1. Bayesian operational modal analysis with asynchronous data, Part II: Posterior uncertainty

    NASA Astrophysics Data System (ADS)

    Zhu, Yi-Chen; Au, Siu-Kui

    2018-01-01

    A Bayesian modal identification method has been proposed in the companion paper that allows the most probable values of modal parameters to be determined using asynchronous ambient vibration data. This paper investigates the identification uncertainty of modal parameters in terms of their posterior covariance matrix. Computational issues are addressed. Analytical expressions are derived to allow the posterior covariance matrix to be evaluated accurately and efficiently. Synthetic, laboratory and field data examples are presented to verify the consistency, investigate potential modelling error and demonstrate practical applications.
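
    The paper's analytical expressions are not reproduced here, but a generic stand-in for a posterior covariance matrix is the Laplace approximation: the inverse Hessian of the negative log-posterior evaluated at the most probable values. A minimal numerical sketch on an invented two-parameter Gaussian posterior, whose exact covariance is diag(0.25, 1):

    ```python
    import numpy as np

    def numerical_hessian(f, x, eps=1e-5):
        """Central-difference Hessian of a scalar function f at point x."""
        n = len(x)
        H = np.zeros((n, n))
        for i in range(n):
            for j in range(n):
                e_i, e_j = np.zeros(n), np.zeros(n)
                e_i[i], e_j[j] = eps, eps
                H[i, j] = (f(x + e_i + e_j) - f(x + e_i - e_j)
                           - f(x - e_i + e_j) + f(x - e_i - e_j)) / (4 * eps**2)
        return H

    # Toy negative log-posterior with MAP at (1, 2); for a Gaussian posterior
    # the covariance equals the inverse Hessian at the MAP.
    neg_log_post = lambda th: 2 * (th[0] - 1)**2 + 0.5 * (th[1] - 2)**2
    H = numerical_hessian(neg_log_post, np.array([1.0, 2.0]))
    print(np.linalg.inv(H))   # approximates diag(0.25, 1)
    ```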

  2. PMP Documents-HDSC/OWP

    Science.gov Websites

    Hydrometeorological Report No. 39: Probable Maximum Precipitation in the Hawaiian Islands (1963)
    Hydrometeorological Report No. 41: Probable Maximum and TVA Precipitation over the Tennessee River Basin above Chattanooga (1965)
    Hydrometeorological Report No. 46: Probable Maximum Precipitation, Mekong River Basin (1970)

  3. Statistical Inference in Hidden Markov Models Using k-Segment Constraints

    PubMed Central

    Titsias, Michalis K.; Holmes, Christopher C.; Yau, Christopher

    2016-01-01

    Hidden Markov models (HMMs) are one of the most widely used statistical methods for analyzing sequence data. However, the reporting of output from HMMs has largely been restricted to the presentation of the most-probable (MAP) hidden state sequence, found via the Viterbi algorithm, or the sequence of most probable marginals using the forward–backward algorithm. In this article, we expand the amount of information we could obtain from the posterior distribution of an HMM by introducing linear-time dynamic programming recursions that, conditional on a user-specified constraint in the number of segments, allow us to (i) find MAP sequences, (ii) compute posterior probabilities, and (iii) simulate sample paths. We collectively call these recursions k-segment algorithms and illustrate their utility using simulated and real examples. We also highlight the prospective and retrospective use of k-segment constraints for fitting HMMs or exploring existing model fits. Supplementary materials for this article are available online. PMID:27226674
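
    For background, the unconstrained MAP sequence mentioned above is found by the standard Viterbi recursion; a minimal sketch with a toy two-state, two-symbol model (invented probabilities), not the paper's k-segment algorithms:

    ```python
    import numpy as np

    def viterbi(log_pi, log_A, log_B, obs):
        """MAP hidden-state sequence of an HMM via the Viterbi algorithm.

        log_pi: (K,) log initial probabilities; log_A: (K, K) log transitions;
        log_B: (K, M) log emission probabilities; obs: list of symbol indices.
        """
        T, K = len(obs), len(log_pi)
        delta = np.zeros((T, K))           # best log-probability ending in each state
        psi = np.zeros((T, K), dtype=int)  # back-pointers
        delta[0] = log_pi + log_B[:, obs[0]]
        for t in range(1, T):
            scores = delta[t - 1][:, None] + log_A   # (from-state, to-state)
            psi[t] = scores.argmax(axis=0)
            delta[t] = scores.max(axis=0) + log_B[:, obs[t]]
        path = [int(delta[-1].argmax())]
        for t in range(T - 1, 0, -1):
            path.append(int(psi[t][path[-1]]))
        return path[::-1]

    log_pi = np.log([0.6, 0.4])
    log_A = np.log([[0.7, 0.3], [0.4, 0.6]])
    log_B = np.log([[0.9, 0.1], [0.2, 0.8]])
    print(viterbi(log_pi, log_A, log_B, [0, 0, 1, 1, 1]))
    ```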

  4. Elastic K-means using posterior probability.

    PubMed

    Zheng, Aihua; Jiang, Bo; Li, Yan; Zhang, Xuehan; Ding, Chris

    2017-01-01

    The widely used K-means clustering is a hard clustering algorithm. Here we propose an Elastic K-means clustering model (EKM) using posterior probability with a soft assignment capability, where each data point can belong to multiple clusters fractionally, and show the benefit of the proposed Elastic K-means. Furthermore, in many applications, besides vector attribute information, pairwise relations (graph information) are also available. Thus we integrate EKM with Normalized Cut graph clustering into a single clustering formulation. Finally, we provide several matrix inequalities that are useful for matrix formulations of learning models. Based on these results, we prove the correctness and the convergence of the EKM algorithms. Experimental results on six benchmark datasets demonstrate the effectiveness of the proposed EKM and its integrated model.
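
    The paper's exact objective is not reproduced here; as a flavor of posterior-probability-based soft assignment, the generic soft K-means below computes fractional cluster memberships (responsibilities), with an invented softness parameter beta:

    ```python
    import numpy as np

    def soft_kmeans(X, k, beta=2.0, iters=50, seed=0):
        """Soft clustering: each point gets a posterior membership in every cluster."""
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), k, replace=False)]
        for _ in range(iters):
            # E-step: responsibility of cluster j for each point, proportional
            # to exp(-beta * squared distance); shifted for numerical stability.
            d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
            R = np.exp(-beta * (d2 - d2.min(axis=1, keepdims=True)))
            R /= R.sum(axis=1, keepdims=True)
            # M-step: centers are responsibility-weighted means.
            centers = (R.T @ X) / R.sum(axis=0)[:, None]
        return R, centers

    X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 4])
    R, centers = soft_kmeans(X, k=2)
    print(R[:3].round(3))   # fractional memberships of the first three points
    ```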

  5. Use of Bayesian Inference in Crystallographic Structure Refinement via Full Diffraction Profile Analysis

    PubMed Central

    Fancher, Chris M.; Han, Zhen; Levin, Igor; Page, Katharine; Reich, Brian J.; Smith, Ralph C.; Wilson, Alyson G.; Jones, Jacob L.

    2016-01-01

    A Bayesian inference method for refining crystallographic structures is presented. The distribution of model parameters is stochastically sampled using Markov chain Monte Carlo. Posterior probability distributions are constructed for all model parameters to properly quantify uncertainty by appropriately modeling the heteroskedasticity and correlation of the error structure. The proposed method is demonstrated by analyzing a National Institute of Standards and Technology silicon standard reference material. The results obtained by Bayesian inference are compared with those determined by Rietveld refinement. Posterior probability distributions of model parameters provide both estimates and uncertainties. The new method better estimates the true uncertainties in the model as compared to the Rietveld method. PMID:27550221

  6. Quantifying uncertainty in soot volume fraction estimates using Bayesian inference of auto-correlated laser-induced incandescence measurements

    NASA Astrophysics Data System (ADS)

    Hadwin, Paul J.; Sipkens, T. A.; Thomson, K. A.; Liu, F.; Daun, K. J.

    2016-01-01

    Auto-correlated laser-induced incandescence (AC-LII) infers the soot volume fraction (SVF) of soot particles by comparing the spectral incandescence from laser-energized particles to the pyrometrically inferred peak soot temperature. This calculation requires detailed knowledge of model parameters such as the absorption function of soot, which may vary with combustion chemistry, soot age, and the internal structure of the soot. This work presents a Bayesian methodology to quantify such uncertainties. This technique treats the additional "nuisance" model parameters, including the soot absorption function, as stochastic variables and incorporates the current state of knowledge of these parameters into the inference process through maximum entropy priors. While standard AC-LII analysis provides a point estimate of the SVF, Bayesian techniques infer the posterior probability density, which will allow scientists and engineers to better assess the reliability of AC-LII inferred SVFs in the context of environmental regulations and competing diagnostics.

  7. Early and late mammalian responses to heavy charged particles

    NASA Technical Reports Server (NTRS)

    Ainsworth, E. J.

    1986-01-01

    This overview summarizes murine results on acute lethality responses, inactivation of marrow CFU-S and intestinal microcolonies, testes weight loss, life span shortening, and posterior lens opacification in mice irradiated with heavy charged particles. RBE-LET relationships for these mammalian responses are compared with results from in vitro studies. The trend is that the maximum RBE for in vivo responses tends to be lower and occurs at a lower LET than for inactivation of V79 and T-1 cells in culture. Based on inactivation cross sections, the response of CFU-S in vivo conforms to expectations from earlier studies with prokaryotic systems and mammalian cells in culture. Effects of heavy ions are compared with fission spectrum neutrons, and the results are consistent with the interpretation that RBEs are lower than for fission neutrons at about the same LET, probably due to differences in track structure.

  8. A new method for locating changes in a tree reveals distinct nucleotide polymorphism vs. divergence patterns in mouse mitochondrial control region.

    PubMed

    Galtier, N; Boursot, P

    2000-03-01

    A new, model-based method was devised to locate nucleotide changes in a given phylogenetic tree. For each site, the posterior probability of any possible change in each branch of the tree is computed. This probabilistic method is a valuable alternative to the maximum parsimony method when base composition is skewed (i.e., different from 25% A, 25% C, 25% G, 25% T): computer simulations showed that parsimony misses more rare --> common than common --> rare changes, resulting in biased inferred change matrices, whereas the new method appeared unbiased. The probabilistic method was applied to the analysis of the mutation and substitution processes in the mitochondrial control region of mouse. Distinct change patterns were found at the polymorphism (within species) and divergence (between species) levels, rejecting the hypothesis of a neutral evolution of base composition in mitochondrial DNA.

  9. Detection and recognition of targets by using signal polarization properties

    NASA Astrophysics Data System (ADS)

    Ponomaryov, Volodymyr I.; Peralta-Fabi, Ricardo; Popov, Anatoly V.; Babakov, Mikhail F.

    1999-08-01

    The quality of radar target recognition can be enhanced by exploiting its polarization signatures. A specialized X-band polarimetric radar was used for target recognition in experimental investigations. The following polarization characteristics connected to the object's geometrical properties were investigated: the amplitudes of the polarization matrix elements; an anisotropy coefficient; a depolarization coefficient; an asymmetry coefficient; the energy of the backscattered signal; and an object shape factor. A large quantity of polarimetric radar data was measured and processed to form a database of different objects under different weather conditions. The histograms of polarization signatures were approximated by a Nakagami distribution and then used for real-time target recognition. The Neyman-Pearson criterion was used for target detection, and the maximum a posteriori probability criterion was used for the recognition problem. Some results of experimental verification of pattern recognition and detection of objects with different electrophysical and geometrical characteristics in urban clutter are presented in this paper.
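
    A minimal sketch of MAP recognition with Nakagami-distributed polarization features, in the spirit of the criterion named above; the classes, priors and distribution parameters are invented for illustration:

    ```python
    import numpy as np
    from scipy.stats import nakagami

    # Hypothetical per-class Nakagami fits (shape nu, scale) for one
    # polarization feature, e.g. an anisotropy coefficient.
    classes = {"target A": (2.0, 1.0), "target B": (1.0, 1.5), "clutter": (0.6, 2.5)}
    priors = {"target A": 0.3, "target B": 0.3, "clutter": 0.4}

    def map_decision(x):
        """Maximum a posteriori class for a measured feature value x."""
        post = {c: priors[c] * nakagami.pdf(x, nu, scale=s)
                for c, (nu, s) in classes.items()}
        z = sum(post.values())
        return max(post, key=post.get), {c: p / z for c, p in post.items()}

    label, posterior = map_decision(0.9)
    print(label, posterior)
    ```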

  10. On the use of Bayesian Monte-Carlo in evaluation of nuclear data

    NASA Astrophysics Data System (ADS)

    De Saint Jean, Cyrille; Archier, Pascal; Privas, Edwin; Noguere, Gilles

    2017-09-01

    As model parameters, necessary ingredients of theoretical models, are not always predicted by theory, a formal mathematical framework associated with the evaluation work is needed to obtain the best set of parameters (resonance parameters, optical models, fission barriers, average widths, multigroup cross sections) with Bayesian statistical inference by comparing theory to experiment. The formal rule related to this methodology is to estimate the posterior probability density function of a set of parameters by solving an equation of the following type: pdf(posterior) ∝ pdf(prior) × likelihood. A fitting procedure can be seen as an estimation of the posterior probability density of a set of parameters x, knowing prior information on these parameters and a likelihood which gives the probability density function of observing a data set knowing x. To solve this problem, two major paths can be taken: add approximations and hypotheses and obtain an equation to be solved numerically (the minimum of a cost function, or the Generalized Least Squares method, referred to as GLS), or use Monte-Carlo sampling of all prior distributions and estimate the final posterior distribution. Monte Carlo methods are a natural solution for Bayesian inference problems. They avoid approximations (existing in the traditional adjustment procedure based on chi-square minimization) and propose alternatives in the choice of probability density distributions for priors and likelihoods. This paper will propose the use of what we call Bayesian Monte Carlo (referred to as BMC in the rest of the manuscript) over the whole energy range, from the thermal and resonance ranges to the continuum, for all nuclear reaction models at these energies. Algorithms will be presented based on Monte-Carlo sampling and Markov chains. The objectives of BMC are to propose a reference calculation for validating the GLS calculations and approximations, to test the effects of probability density distributions and to provide a framework for finding the global minimum if several local minima exist. Applications to resolved resonance, unresolved resonance and continuum evaluation, as well as multigroup cross section data assimilation, will be presented.
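
    The relation pdf(posterior) ∝ pdf(prior) × likelihood can be sampled directly by a random-walk Metropolis algorithm, the simplest Markov-chain variant of this idea; a self-contained one-parameter sketch with an invented prior, likelihood and data set:

    ```python
    import numpy as np

    def log_prior(x):
        # Illustrative prior: standard normal on the single parameter.
        return -0.5 * x**2

    def log_likelihood(x, data):
        # Illustrative likelihood: data are normal with mean x, unit variance.
        return -0.5 * np.sum((data - x) ** 2)

    def metropolis(data, n_steps=5000, step=0.5, seed=1):
        """Sample pdf(posterior) ∝ pdf(prior) × likelihood with a random walk."""
        rng = np.random.default_rng(seed)
        x, chain = 0.0, []
        lp = log_prior(x) + log_likelihood(x, data)
        for _ in range(n_steps):
            x_new = x + step * rng.standard_normal()
            lp_new = log_prior(x_new) + log_likelihood(x_new, data)
            if np.log(rng.random()) < lp_new - lp:   # accept/reject
                x, lp = x_new, lp_new
            chain.append(x)
        return np.array(chain)

    data = np.random.default_rng(0).normal(1.2, 1.0, size=20)
    print(metropolis(data)[1000:].mean())   # posterior mean after burn-in
    ```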

  11. Attentional Demands Predict Short-Term Memory Load Response in Posterior Parietal Cortex

    ERIC Educational Resources Information Center

    Magen, Hagit; Emmanouil, Tatiana-Aloi; McMains, Stephanie A.; Kastner, Sabine; Treisman, Anne

    2009-01-01

    Limits to the capacity of visual short-term memory (VSTM) indicate a maximum storage of only 3 or 4 items. Recently, it has been suggested that activity in a specific part of the brain, the posterior parietal cortex (PPC), is correlated with behavioral estimates of VSTM capacity and might reflect a capacity-limited store. In three experiments that…

  12. Bayesian statistical inference enhances the interpretation of contemporary randomized controlled trials.

    PubMed

    Wijeysundera, Duminda N; Austin, Peter C; Hux, Janet E; Beattie, W Scott; Laupacis, Andreas

    2009-01-01

    Randomized trials generally use "frequentist" statistics based on P-values and 95% confidence intervals. Frequentist methods have limitations that might be overcome, in part, by Bayesian inference. To illustrate these advantages, we re-analyzed randomized trials published in four general medical journals during 2004. We used Medline to identify randomized superiority trials with two parallel arms, individual-level randomization and dichotomous or time-to-event primary outcomes. Studies with P<0.05 in favor of the intervention were deemed "positive"; otherwise, they were "negative." We used several prior distributions and exact conjugate analyses to calculate Bayesian posterior probabilities for clinically relevant effects. Of 88 included studies, 39 were positive using a frequentist analysis. Although the Bayesian posterior probabilities of any benefit (relative risk or hazard ratio<1) were high in positive studies, these probabilities were lower and variable for larger benefits. The positive studies had only moderate probabilities for exceeding the effects that were assumed for calculating the sample size. By comparison, there were moderate probabilities of any benefit in negative studies. Bayesian and frequentist analyses complement each other when interpreting the results of randomized trials. Future reports of randomized trials should include both.
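
    As an illustration of the exact conjugate machinery (not the study's priors, outcomes or data), posterior probabilities of "any benefit" and of a larger benefit can be read off two Beta posteriors by Monte Carlo:

    ```python
    import numpy as np
    from scipy.stats import beta

    # Illustrative trial data: events/total in control and intervention arms.
    e_c, n_c = 30, 100
    e_i, n_i = 20, 100

    # Conjugate analysis with uniform Beta(1, 1) priors on each arm's event rate.
    rng = np.random.default_rng(0)
    p_c = beta.rvs(1 + e_c, 1 + n_c - e_c, size=100_000, random_state=rng)
    p_i = beta.rvs(1 + e_i, 1 + n_i - e_i, size=100_000, random_state=rng)

    rr = p_i / p_c
    print("P(any benefit, RR < 1)    =", (rr < 1).mean())
    print("P(large benefit, RR < 0.75) =", (rr < 0.75).mean())
    ```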

  13. Assessment of accident severity in the construction industry using the Bayesian theorem.

    PubMed

    Alizadeh, Seyed Shamseddin; Mortazavi, Seyed Bagher; Mehdi Sepehri, Mohammad

    2015-01-01

    Construction is a major source of employment in many countries. In construction, workers perform a great diversity of activities, each one with a specific associated risk. The aim of this paper is to identify workers who are at risk of accidents with severe consequences and classify these workers to determine appropriate control measures. We defined 48 groups of workers and used the Bayesian theorem to estimate posterior probabilities about the severity of accidents at the level of individuals in construction sector. First, the posterior probabilities of injuries based on four variables were provided. Then the probabilities of injury for 48 groups of workers were determined. With regard to marginal frequency of injury, slight injury (0.856), fatal injury (0.086) and severe injury (0.058) had the highest probability of occurrence. It was observed that workers with <1 year's work experience (0.168) had the highest probability of injury occurrence. The first group of workers, who were extensively exposed to risk of severe and fatal accidents, involved workers ≥ 50 years old, married, with 1-5 years' work experience, who had no past accident experience. The findings provide a direction for more effective safety strategies and occupational accident prevention and emergency programmes.

  14. Detection of mastitis in dairy cattle by use of mixture models for repeated somatic cell scores: a Bayesian approach via Gibbs sampling.

    PubMed

    Odegård, J; Jensen, J; Madsen, P; Gianola, D; Klemetsdal, G; Heringstad, B

    2003-11-01

    The distribution of somatic cell scores could be regarded as a mixture of at least two components depending on a cow's udder health status. A heteroscedastic two-component Bayesian normal mixture model with random effects was developed and implemented via Gibbs sampling. The model was evaluated using datasets consisting of simulated somatic cell score records. Somatic cell score was simulated as a mixture representing two alternative udder health statuses ("healthy" or "diseased"). Animals were assigned randomly to the two components according to the probability of group membership (Pm). Random effects (additive genetic and permanent environment), when included, had identical distributions across mixture components. Posterior probabilities of putative mastitis were estimated for all observations, and model adequacy was evaluated using measures of sensitivity, specificity, and posterior probability of misclassification. Fitting different residual variances in the two mixture components caused some bias in estimation of parameters. When the components were difficult to disentangle, so were their residual variances, causing bias in estimation of Pm and of location parameters of the two underlying distributions. When all variance components were identical across mixture components, the mixture model analyses returned parameter estimates essentially without bias and with a high degree of precision. Including random effects in the model increased the probability of correct classification substantially. No sizable differences in probability of correct classification were found between models in which a single cow effect (ignoring relationships) was fitted and models where this effect was split into genetic and permanent environmental components, utilizing relationship information. When genetic and permanent environmental effects were fitted, the between-replicate variance of estimates of posterior means was smaller because the model accounted for random genetic drift.
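
    A heavily stripped-down version of such a sampler, with a single fixed residual standard deviation, no random effects, and flat priors on the component means, might look as follows (synthetic data, illustrative only):

    ```python
    import numpy as np
    rng = np.random.default_rng(0)

    # Synthetic somatic-cell-score-like data: a mixture of "healthy" and
    # "diseased" components (values invented, not from the paper).
    y = np.concatenate([rng.normal(2.5, 0.8, 300), rng.normal(5.0, 0.8, 100)])

    mu, pm, sigma = np.array([1.0, 6.0]), 0.5, 0.8   # initial values; sigma fixed
    for it in range(2000):
        # 1. Posterior probability that each record belongs to component 1.
        like0 = (1 - pm) * np.exp(-0.5 * ((y - mu[0]) / sigma) ** 2)
        like1 = pm * np.exp(-0.5 * ((y - mu[1]) / sigma) ** 2)
        p1 = like1 / (like0 + like1)
        z = rng.random(len(y)) < p1          # sampled memberships
        # 2. Sample the mixing proportion from its Beta full conditional.
        pm = rng.beta(1 + z.sum(), 1 + (~z).sum())
        # 3. Sample component means from their normal full conditionals
        #    (flat priors on the means, known sigma).
        for j, idx in enumerate([~z, z]):
            n_j = idx.sum()
            if n_j:
                mu[j] = rng.normal(y[idx].mean(), sigma / np.sqrt(n_j))

    print(pm, mu)   # final draws of the mixing proportion and component means
    ```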

  15. On the use of posterior predictive probabilities and prediction uncertainty to tailor informative sampling for parasitological surveillance in livestock.

    PubMed

    Musella, Vincenzo; Rinaldi, Laura; Lagazio, Corrado; Cringoli, Giuseppe; Biggeri, Annibale; Catelan, Dolores

    2014-09-15

    Model-based geostatistics and Bayesian approaches are appropriate in the context of Veterinary Epidemiology when point data have been collected by valid study designs. The aim is to predict a continuous infection risk surface. Little work has been done on the use of predictive infection probabilities at the farm unit level. In this paper we show how to use predictive infection probabilities and related uncertainty from a Bayesian kriging model to draw informative samples from the 8794 geo-referenced sheep farms of the Campania region (southern Italy). Parasitological data come from a first cross-sectional survey carried out to study the spatial distribution of selected helminths in sheep farms. A grid sampling was performed to select the farms for coprological examinations. Faecal samples were collected for 121 sheep farms and the presence of 21 different helminths was investigated using the FLOTAC technique. The 21 responses are very different in terms of geographical distribution and prevalence of infection. The observed prevalence ranges from 0.83% to 96.69%. The distributions of the posterior predictive probabilities for all the 21 parasites are very heterogeneous. We show how the results of the Bayesian kriging model can be used to plan a second wave survey. Several alternatives can be chosen depending on the purposes of the second survey: weighting by the posterior predictive probabilities, by their uncertainty, or by combining both sources of information. The proposed Bayesian kriging model is simple, and the proposed sampling strategy represents a useful tool to address targeted infection control treatments and surveillance campaigns. It is easily extendable to other fields of research. Copyright © 2014 Elsevier B.V. All rights reserved.
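
    A schematic of the probability- and uncertainty-weighted sampling described; the farm-level predictive probabilities p and standard deviations sd are simulated here, not taken from the paper:

    ```python
    import numpy as np
    rng = np.random.default_rng(42)

    # Hypothetical kriging output for the geo-referenced farms: posterior
    # predictive infection probability p and its standard deviation sd.
    n_farms = 8794
    p = rng.beta(2, 5, n_farms)
    sd = rng.uniform(0.02, 0.2, n_farms)

    def draw_sample(weights, size=300):
        """Sample farm indices without replacement, weighted by `weights`."""
        w = weights / weights.sum()
        return rng.choice(n_farms, size=size, replace=False, p=w)

    high_risk = draw_sample(p)        # weight by predictive probability
    uncertain = draw_sample(sd)       # weight by prediction uncertainty
    combined = draw_sample(p * sd)    # combine both criteria
    print(len(set(high_risk) & set(uncertain)))
    ```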

  16. Phylogenetic relationships of Malaysia’s long-tailed macaques, Macaca fascicularis, based on cytochrome b sequences

    PubMed Central

    Abdul-Latiff, Muhammad Abu Bakar; Ruslin, Farhani; Fui, Vun Vui; Abu, Mohd-Hashim; Rovie-Ryan, Jeffrine Japning; Abdul-Patah, Pazil; Lakim, Maklarin; Roos, Christian; Yaakop, Salmah; Md-Zain, Badrul Munir

    2014-01-01

    Phylogenetic relationships among Malaysia’s long-tailed macaques have yet to be established, despite abundant genetic studies of the species worldwide. The aims of this study are to examine the phylogenetic relationships of Macaca fascicularis in Malaysia and to test its classification as a morphological subspecies. A total of 25 genetic samples of M. fascicularis yielding 383 bp of Cytochrome b (Cyt b) sequences were used in phylogenetic analysis along with one sample each of M. nemestrina and M. arctoides used as outgroups. Sequence character analysis reveals that Cyt b locus is a highly conserved region with only 23% parsimony informative character detected among ingroups. Further analysis indicates a clear separation between populations originating from different regions; the Malay Peninsula versus Borneo Insular, the East Coast versus West Coast of the Malay Peninsula, and the island versus mainland Malay Peninsula populations. Phylogenetic trees (NJ, MP and Bayesian) portray a consistent clustering paradigm as Borneo’s population was distinguished from Peninsula’s population (99% and 100% bootstrap value in NJ and MP respectively and 1.00 posterior probability in Bayesian trees). The East coast population was separated from other Peninsula populations (64% in NJ, 66% in MP and 0.53 posterior probability in Bayesian). West coast populations were divided into 2 clades: the North-South (47%/54% in NJ, 26/26% in MP and 1.00/0.80 posterior probability in Bayesian) and Island-Mainland (93% in NJ, 90% in MP and 1.00 posterior probability in Bayesian). The results confirm the previous morphological assignment of 2 subspecies, M. f. fascicularis and M. f. argentimembris, in the Malay Peninsula. These populations should be treated as separate genetic entities in order to conserve the genetic diversity of Malaysia’s M. fascicularis. These findings are crucial in aiding the conservation management and translocation process of M. fascicularis populations in Malaysia. PMID:24899832

  17. Phylogenetic relationships of Malaysia's long-tailed macaques, Macaca fascicularis, based on cytochrome b sequences.

    PubMed

    Abdul-Latiff, Muhammad Abu Bakar; Ruslin, Farhani; Fui, Vun Vui; Abu, Mohd-Hashim; Rovie-Ryan, Jeffrine Japning; Abdul-Patah, Pazil; Lakim, Maklarin; Roos, Christian; Yaakop, Salmah; Md-Zain, Badrul Munir

    2014-01-01

    Phylogenetic relationships among Malaysia's long-tailed macaques have yet to be established, despite abundant genetic studies of the species worldwide. The aims of this study are to examine the phylogenetic relationships of Macaca fascicularis in Malaysia and to test its classification as a morphological subspecies. A total of 25 genetic samples of M. fascicularis yielding 383 bp of Cytochrome b (Cyt b) sequences were used in phylogenetic analysis along with one sample each of M. nemestrina and M. arctoides used as outgroups. Sequence character analysis reveals that Cyt b locus is a highly conserved region with only 23% parsimony informative character detected among ingroups. Further analysis indicates a clear separation between populations originating from different regions; the Malay Peninsula versus Borneo Insular, the East Coast versus West Coast of the Malay Peninsula, and the island versus mainland Malay Peninsula populations. Phylogenetic trees (NJ, MP and Bayesian) portray a consistent clustering paradigm as Borneo's population was distinguished from Peninsula's population (99% and 100% bootstrap value in NJ and MP respectively and 1.00 posterior probability in Bayesian trees). The East coast population was separated from other Peninsula populations (64% in NJ, 66% in MP and 0.53 posterior probability in Bayesian). West coast populations were divided into 2 clades: the North-South (47%/54% in NJ, 26/26% in MP and 1.00/0.80 posterior probability in Bayesian) and Island-Mainland (93% in NJ, 90% in MP and 1.00 posterior probability in Bayesian). The results confirm the previous morphological assignment of 2 subspecies, M. f. fascicularis and M. f. argentimembris, in the Malay Peninsula. These populations should be treated as separate genetic entities in order to conserve the genetic diversity of Malaysia's M. fascicularis. These findings are crucial in aiding the conservation management and translocation process of M. fascicularis populations in Malaysia.

  18. A Bayesian Method for Evaluating and Discovering Disease Loci Associations

    PubMed Central

    Jiang, Xia; Barmada, M. Michael; Cooper, Gregory F.; Becich, Michael J.

    2011-01-01

    Background A genome-wide association study (GWAS) typically involves examining representative SNPs in individuals from some population. A GWAS data set can concern a million SNPs and may soon concern billions. Researchers investigate the association of each SNP individually with a disease, and it is becoming increasingly commonplace to also analyze multi-SNP associations. Techniques for handling so many hypotheses include the Bonferroni correction and recently developed Bayesian methods. These methods can encounter problems. Most importantly, they are not applicable to a complex multi-locus hypothesis which has several competing hypotheses rather than only a null hypothesis. A method that computes the posterior probability of complex hypotheses is a pressing need. Methodology/Findings We introduce the Bayesian network posterior probability (BNPP) method which addresses the difficulties. The method represents the relationship between a disease and SNPs using a directed acyclic graph (DAG) model, and computes the likelihood of such models using a Bayesian network scoring criterion. The posterior probability of a hypothesis is computed based on the likelihoods of all competing hypotheses. The BNPP can not only be used to evaluate a hypothesis that has previously been discovered or suspected, but also to discover new disease loci associations. The results of experiments using simulated and real data sets are presented. Our results concerning simulated data sets indicate that the BNPP exhibits both better evaluation and discovery performance than does a p-value based method. For the real data sets, previous findings in the literature are confirmed and additional findings are found. Conclusions/Significance We conclude that the BNPP resolves a pressing problem by providing a way to compute the posterior probability of complex multi-locus hypotheses. A researcher can use the BNPP to determine the expected utility of investigating a hypothesis further. Furthermore, we conclude that the BNPP is a promising method for discovering disease loci associations. PMID:21853025

  19. Anatomical characteristics of greater palatine foramen: a novel point of view.

    PubMed

    Gibelli, Daniele; Borlando, Alessia; Dolci, Claudia; Pucciarelli, Valentina; Cattaneo, Cristina; Sforza, Chiarella

    2017-12-01

    Anatomy of the greater palatine foramen is important for maxillary nerve blocks, haemostatic procedures, and the treatment of neuralgia; although metrical data are available about its location, several aspects still need to be explored, such as the influence of cranium size. The position of the greater palatine foramen was assessed on 100 skulls through six measurements (distances from the intermaxillary suture, posterior palatal border, posterior nasal spine, and incisive foramen; palatal length; relative position on palatal length) and two angles (angles at the incisive foramen and greater palatine foramen). Maximum cranial length, maximum cranial breadth, cranial height and bizygomatic breadth, horizontal cephalic index, and Giardina Y-index were evaluated. Possible differences according to sex and side were assessed through two-way ANOVA (p < 0.05). Measurements showing sexual dimorphism were further assessed through one-way ANCOVA including cranial parameters as covariates (p < 0.05). Distances of the greater palatine foramen from the intermaxillary suture, incisive foramen, posterior palatal border, and posterior nasal spine, the palatal length, and the position of the greater palatine foramen on the palatal length were statistically different according to sex (p < 0.05), independently of general cranial dimensions except for the distance from the posterior palatal border. The angle at the incisive foramen and the distances from the intermaxillary suture and from the posterior nasal spine showed statistically significant differences according to side (p < 0.05). The results highlight that most of the sexually dimorphic measurements useful for pinpointing the greater palatine foramen do not depend upon cranium size. A more complete metrical assessment of the localization of the greater palatine foramen was provided.

  20. Understanding seasonal variability of uncertainty in hydrological prediction

    NASA Astrophysics Data System (ADS)

    Li, M.; Wang, Q. J.

    2012-04-01

    Understanding uncertainty in hydrological prediction can be highly valuable for improving the reliability of streamflow prediction. In this study, a monthly water balance model, WAPABA, is combined within a Bayesian joint probability framework with several error models to investigate the seasonal dependency of the prediction error structure. A seasonally invariant error model, analogous to traditional time series analysis, uses constant parameters for the model error and accounts for no seasonal variation. In contrast, a seasonally variant error model uses a different set of bias, variance and autocorrelation parameters for each individual calendar month. Potential connections among model parameters from similar months are not considered within the seasonally variant model, which could result in over-fitting and over-parameterization. A hierarchical error model further applies distributional restrictions on the model parameters within a Bayesian hierarchical framework. An iterative algorithm is implemented to expedite the maximum a posteriori (MAP) estimation of the hierarchical error model. The three error models are applied to forecasting streamflow at a catchment in southeast Australia in a cross-validation analysis. This study also presents a number of statistical measures and graphical tools to compare the predictive skills of the different error models. From probability integral transform histograms and other diagnostic graphs, the hierarchical error model conforms better to reliability than the seasonally invariant error model. The hierarchical error model also generally provides the most accurate mean prediction in terms of the Nash-Sutcliffe model efficiency coefficient and the best probabilistic prediction in terms of the continuous ranked probability score (CRPS). The model parameters of the seasonally variant error model are very sensitive to each cross-validation, while the hierarchical error model produces much more robust and reliable model parameters. Furthermore, the results of the hierarchical error model show that most of the model parameters are not seasonally variant, except for the error bias. The seasonally variant error model is likely to use more parameters than necessary to maximize the posterior likelihood. This flexibility and robustness indicate that the hierarchical error model has great potential for future streamflow predictions.

  1. Efficient Bayesian parameter estimation with implicit sampling and surrogate modeling for a vadose zone hydrological problem

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Pau, G. S. H.; Finsterle, S.

    2015-12-01

    Parameter inversion involves inferring the model parameter values based on sparse observations of some observables. To infer the posterior probability distributions of the parameters, Markov chain Monte Carlo (MCMC) methods are typically used. However, the large number of forward simulations needed and limited computational resources limit the complexity of the hydrological model we can use in these methods. In view of this, we studied the implicit sampling (IS) method, an efficient importance sampling technique that generates samples in the high-probability region of the posterior distribution and thus reduces the number of forward simulations that we need to run. For a pilot-point inversion of a heterogeneous permeability field based on a synthetic ponded infiltration experiment simulated with TOUGH2 (a subsurface modeling code), we showed that IS with a linear map provides an accurate Bayesian description of the parameterized permeability field at the pilot points with just approximately 500 forward simulations. We further studied the use of surrogate models to improve the computational efficiency of parameter inversion. We implemented two reduced-order models (ROMs) for the TOUGH2 forward model. One is based on polynomial chaos expansion (PCE), of which the coefficients are obtained using the sparse Bayesian learning technique to mitigate the "curse of dimensionality" of the PCE terms. The other model is Gaussian process regression (GPR), for which different covariance, likelihood and inference models are considered. Preliminary results indicate that ROMs constructed based on the prior parameter space perform poorly. It is thus impractical to replace this hydrological model by a ROM directly in an MCMC method. However, the IS method can work with a ROM constructed for parameters in the close vicinity of the maximum a posteriori probability (MAP) estimate. We will discuss the accuracy and computational efficiency of using ROMs in the implicit sampling procedure for the hydrological problem considered. This work was supported, in part, by the U.S. Dept. of Energy under Contract No. DE-AC02-05CH11231

  2. Supernova Cosmology Inference with Probabilistic Photometric Redshifts (SCIPPR)

    NASA Astrophysics Data System (ADS)

    Peters, Christina; Malz, Alex; Hlozek, Renée

    2018-01-01

    The Bayesian Estimation Applied to Multiple Species (BEAMS) framework employs probabilistic supernova type classifications to do photometric SN cosmology. This work extends BEAMS to replace high-confidence spectroscopic redshifts with photometric redshift probability density functions, a capability that will be essential in the era of the Large Synoptic Survey Telescope and other next-generation photometric surveys, where it will not be possible to perform spectroscopic follow-up on every SN. We present the Supernova Cosmology Inference with Probabilistic Photometric Redshifts (SCIPPR) Bayesian hierarchical model for constraining the cosmological parameters from photometric lightcurves and host galaxy photometry, which includes selection effects and is extensible to uncertainty in the redshift-dependent supernova type proportions. We create a pair of realistic mock catalogs of joint posteriors over supernova type, redshift, and distance modulus informed by photometric supernova lightcurves and over redshift from simulated host galaxy photometry. We perform inference under our model to obtain a joint posterior probability distribution over the cosmological parameters and compare our results with other methods, namely: a spectroscopic subset, a subset of high-probability photometrically classified supernovae, and reducing the photometric redshift probability to a single measurement and error bar.

  3. Assessing Goodness of Fit in Item Response Theory with Nonparametric Models: A Comparison of Posterior Probabilities and Kernel-Smoothing Approaches

    ERIC Educational Resources Information Center

    Sueiro, Manuel J.; Abad, Francisco J.

    2011-01-01

    The distance between nonparametric and parametric item characteristic curves has been proposed as an index of goodness of fit in item response theory in the form of a root integrated squared error index. This article proposes to use the posterior distribution of the latent trait as the nonparametric model and compares the performance of an index…

  4. Bayesian inference of nonlinear unsteady aerodynamics from aeroelastic limit cycle oscillations

    NASA Astrophysics Data System (ADS)

    Sandhu, Rimple; Poirel, Dominique; Pettit, Chris; Khalil, Mohammad; Sarkar, Abhijit

    2016-07-01

    A Bayesian model selection and parameter estimation algorithm is applied to investigate the influence of nonlinear and unsteady aerodynamic loads on the limit cycle oscillation (LCO) of a pitching airfoil in the transitional Reynolds number regime. At small angles of attack, laminar boundary layer trailing edge separation causes negative aerodynamic damping leading to the LCO. The fluid-structure interaction of the rigid, but elastically mounted, airfoil and nonlinear unsteady aerodynamics is represented by two coupled nonlinear stochastic ordinary differential equations containing uncertain parameters and model approximation errors. Several plausible aerodynamic models with increasing complexity are proposed to describe the aeroelastic system leading to LCO. The likelihood in the posterior parameter probability density function (pdf) is available semi-analytically using the extended Kalman filter for the state estimation of the coupled nonlinear structural and unsteady aerodynamic model. The posterior parameter pdf is sampled using a parallel and adaptive Markov Chain Monte Carlo (MCMC) algorithm. The posterior probability of each model is estimated using the Chib-Jeliazkov method that directly uses the posterior MCMC samples for evidence (marginal likelihood) computation. The Bayesian algorithm is validated through a numerical study and then applied to model the nonlinear unsteady aerodynamic loads using wind-tunnel test data at various Reynolds numbers.

  5. Bayesian inference of nonlinear unsteady aerodynamics from aeroelastic limit cycle oscillations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sandhu, Rimple; Poirel, Dominique; Pettit, Chris

    2016-07-01

    A Bayesian model selection and parameter estimation algorithm is applied to investigate the influence of nonlinear and unsteady aerodynamic loads on the limit cycle oscillation (LCO) of a pitching airfoil in the transitional Reynolds number regime. At small angles of attack, laminar boundary layer trailing edge separation causes negative aerodynamic damping leading to the LCO. The fluid–structure interaction of the rigid, but elastically mounted, airfoil and nonlinear unsteady aerodynamics is represented by two coupled nonlinear stochastic ordinary differential equations containing uncertain parameters and model approximation errors. Several plausible aerodynamic models with increasing complexity are proposed to describe the aeroelastic system leading to LCO. The likelihood in the posterior parameter probability density function (pdf) is available semi-analytically using the extended Kalman filter for the state estimation of the coupled nonlinear structural and unsteady aerodynamic model. The posterior parameter pdf is sampled using a parallel and adaptive Markov Chain Monte Carlo (MCMC) algorithm. The posterior probability of each model is estimated using the Chib–Jeliazkov method that directly uses the posterior MCMC samples for evidence (marginal likelihood) computation. The Bayesian algorithm is validated through a numerical study and then applied to model the nonlinear unsteady aerodynamic loads using wind-tunnel test data at various Reynolds numbers.

  6. Elastic K-means using posterior probability

    PubMed Central

    Zheng, Aihua; Jiang, Bo; Li, Yan; Zhang, Xuehan; Ding, Chris

    2017-01-01

    The widely used K-means clustering is a hard clustering algorithm. Here we propose an Elastic K-means clustering model (EKM) using posterior probability with a soft assignment capability, where each data point can belong to multiple clusters fractionally, and show the benefit of the proposed Elastic K-means. Furthermore, in many applications, besides vector attribute information, pairwise relations (graph information) are also available. Thus we integrate EKM with Normalized Cut graph clustering into a single clustering formulation. Finally, we provide several matrix inequalities that are useful for matrix formulations of learning models. Based on these results, we prove the correctness and the convergence of the EKM algorithms. Experimental results on six benchmark datasets demonstrate the effectiveness of the proposed EKM and its integrated model. PMID:29240756

  7. The use of spatial dose gradients and probability density function to evaluate the effect of internal organ motion for prostate IMRT treatment planning

    NASA Astrophysics Data System (ADS)

    Jiang, Runqing; Barnett, Rob B.; Chow, James C. L.; Chen, Jeff Z. Y.

    2007-03-01

    The aim of this study is to investigate the effects of internal organ motion on IMRT treatment planning of prostate patients using a spatial dose gradient and probability density function. Spatial dose distributions were generated from a Pinnacle3 planning system using a co-planar, five-field intensity modulated radiation therapy (IMRT) technique. Five plans were created for each patient using equally spaced beams but shifting the angular displacement of the beam by 15° increments. Dose profiles taken through the isocentre in anterior-posterior (A-P), right-left (R-L) and superior-inferior (S-I) directions for IMRT plans were analysed by exporting RTOG file data from Pinnacle. The convolution of the 'static' dose distribution D0(x, y, z) and probability density function (PDF), denoted as P(x, y, z), was used to analyse the combined effect of repositioning error and internal organ motion. Organ motion leads to an enlarged beam penumbra. The amount of percentage mean dose deviation (PMDD) depends on the dose gradient and organ motion probability density function. Organ motion dose sensitivity was defined by the rate of change in PMDD with standard deviation of motion PDF and was found to increase with the maximum dose gradient in anterior, posterior, left and right directions. Due to common inferior and superior field borders of the field segments, the sharpest dose gradient will occur in the inferior or both superior and inferior penumbrae. Thus, prostate motion in the S-I direction produces the highest dose difference. The PMDD is within 2.5% when standard deviation is less than 5 mm, but the PMDD is over 2.5% in the inferior direction when standard deviation is higher than 5 mm in the inferior direction. Verification of prostate organ motion in the inferior directions is essential. The margin of the planning target volume (PTV) significantly impacts on the confidence of tumour control probability (TCP) and level of normal tissue complication probability (NTCP). Smaller margins help to reduce the dose to normal tissues, but may compromise the dose coverage of the PTV. Lower rectal NTCP can be achieved by either a smaller margin or a steeper dose gradient between PTV and rectum. With the same DVH control points, the rectum has lower complication in the seven-beam technique used in this study because of the steeper dose gradient between the target volume and rectum. The relationship between dose gradient and rectal complication can be used to evaluate IMRT treatment planning. The dose gradient analysis is a powerful tool to improve IMRT treatment plans and can be used for QA checking of treatment plans for prostate patients.
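
    The convolution at the heart of this analysis is easy to illustrate in one dimension: blurring a static dose profile with a Gaussian motion PDF broadens the penumbra, as described above. A toy sketch with an idealized flat field and invented numbers:

    ```python
    import numpy as np

    # 1-D illustration: blur a static dose profile D0(x) with a Gaussian motion
    # PDF to obtain the expected delivered dose under organ motion.
    x = np.linspace(-50, 50, 1001)               # position in mm
    dx = x[1] - x[0]
    D0 = np.where(np.abs(x) <= 25, 100.0, 0.0)   # idealized 50 mm flat field

    sigma = 5.0                                  # mm, motion standard deviation
    pdf = np.exp(-0.5 * (x / sigma) ** 2)
    pdf /= pdf.sum() * dx                        # normalize to unit area

    D_blurred = np.convolve(D0, pdf, mode="same") * dx

    # The penumbra broadens: dose near the field edge drops below the static value.
    i = np.argmin(np.abs(x - 20.0))              # a point 5 mm inside the edge
    print(D0[i], round(D_blurred[i], 1))
    ```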

  8. The use of spatial dose gradients and probability density function to evaluate the effect of internal organ motion for prostate IMRT treatment planning.

    PubMed

    Jiang, Runqing; Barnett, Rob B; Chow, James C L; Chen, Jeff Z Y

    2007-03-07

    The aim of this study is to investigate the effects of internal organ motion on IMRT treatment planning of prostate patients using a spatial dose gradient and probability density function. Spatial dose distributions were generated from a Pinnacle3 planning system using a co-planar, five-field intensity modulated radiation therapy (IMRT) technique. Five plans were created for each patient using equally spaced beams but shifting the angular displacement of the beam by 15 degree increments. Dose profiles taken through the isocentre in anterior-posterior (A-P), right-left (R-L) and superior-inferior (S-I) directions for IMRT plans were analysed by exporting RTOG file data from Pinnacle. The convolution of the 'static' dose distribution D0(x, y, z) and probability density function (PDF), denoted as P(x, y, z), was used to analyse the combined effect of repositioning error and internal organ motion. Organ motion leads to an enlarged beam penumbra. The amount of percentage mean dose deviation (PMDD) depends on the dose gradient and organ motion probability density function. Organ motion dose sensitivity was defined by the rate of change in PMDD with standard deviation of motion PDF and was found to increase with the maximum dose gradient in anterior, posterior, left and right directions. Due to common inferior and superior field borders of the field segments, the sharpest dose gradient will occur in the inferior or both superior and inferior penumbrae. Thus, prostate motion in the S-I direction produces the highest dose difference. The PMDD is within 2.5% when standard deviation is less than 5 mm, but the PMDD is over 2.5% in the inferior direction when standard deviation is higher than 5 mm in the inferior direction. Verification of prostate organ motion in the inferior directions is essential. The margin of the planning target volume (PTV) significantly impacts on the confidence of tumour control probability (TCP) and level of normal tissue complication probability (NTCP). Smaller margins help to reduce the dose to normal tissues, but may compromise the dose coverage of the PTV. Lower rectal NTCP can be achieved by either a smaller margin or a steeper dose gradient between PTV and rectum. With the same DVH control points, the rectum has lower complication in the seven-beam technique used in this study because of the steeper dose gradient between the target volume and rectum. The relationship between dose gradient and rectal complication can be used to evaluate IMRT treatment planning. The dose gradient analysis is a powerful tool to improve IMRT treatment plans and can be used for QA checking of treatment plans for prostate patients.

  9. Inverse Modeling Using Markov Chain Monte Carlo Aided by Adaptive Stochastic Collocation Method with Transformation

    NASA Astrophysics Data System (ADS)

    Zhang, D.; Liao, Q.

    2016-12-01

    Bayesian inference provides a convenient framework to solve statistical inverse problems. In this method, the parameters to be identified are treated as random variables. The prior knowledge, the system nonlinearity, and the measurement errors can be directly incorporated in the posterior probability density function (PDF) of the parameters. The Markov chain Monte Carlo (MCMC) method is a powerful tool to generate samples from the posterior PDF. However, since MCMC usually requires thousands or even millions of forward simulations, it can be a computationally intensive endeavor, particularly when faced with large-scale flow and transport models. To address this issue, we construct a surrogate system for the model responses in the form of polynomials by the stochastic collocation method. In addition, we employ interpolation based on nested sparse grids that takes into account the differing importance of the parameters, which is essential when the stochastic space is high-dimensional. Furthermore, in cases of low regularity, such as a discontinuous or unsmooth relation between the input parameters and the output responses, we introduce an additional transformation to improve the accuracy of the surrogate model. Once we build the surrogate system, we may evaluate the likelihood with very little computational cost. We analyzed the convergence rate of the forward solution and the surrogate posterior by the Kullback-Leibler divergence, which quantifies the difference between probability distributions. The fast convergence of the forward solution implies fast convergence of the surrogate posterior to the true posterior. We also tested the proposed algorithm on water-flooding two-phase flow reservoir examples. The posterior PDF calculated from a very long chain with direct forward simulation is assumed to be accurate. The posterior PDF calculated using the surrogate model is in reasonable agreement with the reference, revealing a great improvement in terms of computational efficiency.

  10. Maximum predictive power and the superposition principle

    NASA Technical Reports Server (NTRS)

    Summhammer, Johann

    1994-01-01

    In quantum physics the direct observables are probabilities of events. We ask how observed probabilities must be combined to achieve what we call maximum predictive power. According to this concept the accuracy of a prediction must only depend on the number of runs whose data serve as input for the prediction. We transform each probability to an associated variable whose uncertainty interval depends only on the amount of data and strictly decreases with it. We find that for a probability which is a function of two other probabilities maximum predictive power is achieved when linearly summing their associated variables and transforming back to a probability. This recovers the quantum mechanical superposition principle.

  11. Little Bayesians or Little Einsteins? Probability and Explanatory Virtue in Children's Inferences

    ERIC Educational Resources Information Center

    Johnston, Angie M.; Johnson, Samuel G. B.; Koven, Marissa L.; Keil, Frank C.

    2017-01-01

    Like scientists, children seek ways to explain causal systems in the world. But are children scientists in the strict Bayesian tradition of maximizing posterior probability? Or do they attend to other explanatory considerations, as laypeople and scientists--such as Einstein--do? Four experiments support the latter possibility. In particular, we…

  12. Creation of the BMA ensemble for SST using a parallel processing technique

    NASA Astrophysics Data System (ADS)

    Kim, Kwangjin; Lee, Yang Won

    2013-10-01

    Although they serve the same purpose, satellite products differ in value because of their inescapable uncertainties. The products have also accumulated over long periods, and their kinds are numerous and their volumes enormous, so efforts to reduce uncertainty and to handle very large data sets are necessary. In this paper, we create an ensemble Sea Surface Temperature (SST) using MODIS Aqua, MODIS Terra and COMS (Communication Ocean and Meteorological Satellite). We used Bayesian Model Averaging (BMA) as the ensemble method. The principle of BMA is to synthesize the conditional probability density functions (PDFs) using the posterior probabilities as weights. The posterior probabilities are estimated using the EM algorithm, and the BMA PDF is obtained as the weighted average. As a result, the ensemble SST showed the lowest RMSE and MAE, which demonstrates the applicability of BMA for satellite data ensembles. As future work, parallel processing techniques using the Hadoop framework will be adopted for more efficient computation of very big satellite data.
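
    A minimal sketch of the BMA combination step at a single pixel: the predictive density is the weighted average of the member PDFs, with the EM-estimated posterior probabilities as weights. The member means, spreads and weights below are invented:

    ```python
    import numpy as np

    # Illustrative BMA combination of three satellite SST retrievals at one pixel.
    # Each member contributes a normal PDF centred on its retrieval; the weights
    # w are the posterior model probabilities estimated by EM.
    means = np.array([18.2, 18.6, 17.9])   # degrees C: Aqua, Terra, COMS (made up)
    sigma = np.array([0.4, 0.3, 0.6])
    w = np.array([0.35, 0.45, 0.20])       # EM-estimated weights, sum to 1

    def bma_pdf(t):
        """BMA predictive density: the weighted average of member PDFs."""
        comp = np.exp(-0.5 * ((t - means[:, None]) / sigma[:, None]) ** 2) \
               / (sigma[:, None] * np.sqrt(2 * np.pi))
        return w @ comp

    t = np.linspace(16, 21, 501)
    density = bma_pdf(t)
    print("BMA mean:", w @ means, "mode:", t[density.argmax()])
    ```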

  13. Comparison of sampling techniques for Bayesian parameter estimation

    NASA Astrophysics Data System (ADS)

    Allison, Rupert; Dunkley, Joanna

    2014-02-01

    The posterior probability distribution for a set of model parameters encodes all that the data have to tell us in the context of a given model; it is the fundamental quantity for Bayesian parameter estimation. In order to infer the posterior probability distribution we have to decide how to explore parameter space. Here we compare three prescriptions for how parameter space is navigated, discussing their relative merits. We consider Metropolis-Hastings sampling, nested sampling and affine-invariant ensemble Markov chain Monte Carlo (MCMC) sampling. We focus on their performance on toy-model Gaussian likelihoods and on a real-world cosmological data set. We outline the sampling algorithms themselves and elaborate on performance diagnostics such as convergence time, scope for parallelization, dimensional scaling, requisite tunings and suitability for non-Gaussian distributions. We find that nested sampling delivers high-fidelity estimates for posterior statistics at low computational cost, and should be adopted in favour of Metropolis-Hastings in many cases. Affine-invariant MCMC is competitive when computing clusters can be utilized for massive parallelization. Affine-invariant MCMC and existing extensions to nested sampling naturally probe multimodal and curving distributions.

  14. Efficient Posterior Probability Mapping Using Savage-Dickey Ratios

    PubMed Central

    Penny, William D.; Ridgway, Gerard R.

    2013-01-01

    Statistical Parametric Mapping (SPM) is the dominant paradigm for mass-univariate analysis of neuroimaging data. More recently, a Bayesian approach termed Posterior Probability Mapping (PPM) has been proposed as an alternative. PPM offers two advantages: (i) inferences can be made about effect size thus lending a precise physiological meaning to activated regions, (ii) regions can be declared inactive. This latter facility is most parsimoniously provided by PPMs based on Bayesian model comparisons. To date these comparisons have been implemented by an Independent Model Optimization (IMO) procedure which separately fits null and alternative models. This paper proposes a more computationally efficient procedure based on Savage-Dickey approximations to the Bayes factor, and Taylor-series approximations to the voxel-wise posterior covariance matrices. Simulations show the accuracy of this Savage-Dickey-Taylor (SDT) method to be comparable to that of IMO. Results on fMRI data show excellent agreement between SDT and IMO for second-level models, and reasonable agreement for first-level models. This Savage-Dickey test is a Bayesian analogue of the classical SPM-F and allows users to implement model comparison in a truly interactive manner. PMID:23533640
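
    The Savage-Dickey ratio itself is simple to evaluate once posterior samples are available: for a point null theta = 0 nested in the full model, the Bayes factor in favour of the null is the posterior density at zero divided by the prior density at zero. A hedged sketch, with an invented prior and synthetic draws standing in for MCMC output:

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde, norm

    rng = np.random.default_rng(0)

    prior_sd = 2.0                                    # assumed N(0, 2^2) prior on theta
    posterior_draws = rng.normal(0.8, 0.35, 20_000)   # stand-in for MCMC samples

    # Savage-Dickey: BF01 = p(theta = 0 | data) / p(theta = 0), both under
    # the full model; the posterior density at zero is estimated by a KDE.
    post_at_zero = gaussian_kde(posterior_draws)(0.0)[0]
    prior_at_zero = norm.pdf(0.0, 0.0, prior_sd)

    bf01 = post_at_zero / prior_at_zero   # evidence for the null vs the alternative
    print(bf01)
    ```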

  15. Estimation of Model's Marginal likelihood Using Adaptive Sparse Grid Surrogates in Bayesian Model Averaging

    NASA Astrophysics Data System (ADS)

    Zeng, X.

    2015-12-01

    A large number of model executions are required to obtain alternative conceptual models' predictions and their posterior probabilities in Bayesian model averaging (BMA). The posterior model probability is estimated through the model's marginal likelihood and prior probability. The heavy computational burden hinders the implementation of BMA prediction, especially for elaborate marginal likelihood estimators. To overcome this burden, an adaptive sparse grid (SG) stochastic collocation method is used to build surrogates for alternative conceptual models through a numerical experiment on a synthetic groundwater model. BMA predictions depend on the posterior model weights (or marginal likelihoods), and this study also evaluated four marginal likelihood estimators: the arithmetic mean estimator (AME), the harmonic mean estimator (HME), the stabilized harmonic mean estimator (SHME), and the thermodynamic integration estimator (TIE). The results demonstrate that TIE is accurate in estimating the conceptual models' marginal likelihoods, and BMA-TIE has better predictive performance than the other BMA predictions. TIE is also highly stable: marginal likelihoods repeatedly estimated by TIE show significantly less variability than those estimated by the other estimators. In addition, the SG surrogates are efficient in facilitating BMA predictions, especially for BMA-TIE. The number of model executions needed to build the surrogates is 4.13%, 6.89%, 3.44%, and 0.43% of the model executions required by BMA-AME, BMA-HME, BMA-SHME, and BMA-TIE, respectively.
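
    For a toy conjugate model, the arithmetic mean (AME) and harmonic mean (HME) estimators named above can be written in a few lines (SHME and TIE are omitted for brevity); the prior, likelihood and data are invented:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.normal(0.5, 1.0, 50)

    def log_like(theta):
        # Normal likelihood with mean theta and unit variance.
        return -0.5 * np.sum((data - theta) ** 2) - len(data) * 0.5 * np.log(2 * np.pi)

    # AME: average likelihood over draws from the N(0, 3^2) prior.
    prior_draws = rng.normal(0.0, 3.0, 20_000)
    ll_prior = np.array([log_like(t) for t in prior_draws])
    log_ame = np.logaddexp.reduce(ll_prior) - np.log(len(ll_prior))

    # HME: harmonic mean of likelihoods over posterior draws (known to be
    # unstable in practice). The exact conjugate posterior is used here.
    n = len(data)
    post_var = 1.0 / (n + 1.0 / 9.0)          # prior variance 9
    post_mean = post_var * data.sum()
    post_draws = rng.normal(post_mean, np.sqrt(post_var), 20_000)
    ll_post = np.array([log_like(t) for t in post_draws])
    log_hme = np.log(len(ll_post)) - np.logaddexp.reduce(-ll_post)

    print(log_ame, log_hme)   # two estimates of the log marginal likelihood
    ```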

  16. The Valgus Inclination of the Tibial Component Increases the Risk of Medial Tibial Condylar Fractures in Unicompartmental Knee Arthroplasty.

    PubMed

    Inoue, Shinji; Akagi, Masao; Asada, Shigeki; Mori, Shigeshi; Zaima, Hironori; Hashida, Masahiko

    2016-09-01

    Medial tibial condylar fractures (MTCFs) are a rare but serious complication after unicompartmental knee arthroplasty. Although some surgical pitfalls have been reported for MTCFs, it is not clear whether the varus/valgus tibial inclination contributes to the risk of MTCFs. We constructed a 3-dimensional finite element model of the tibia with a medial component and assessed stress concentrations by changing the inclination from 6° varus to 6° valgus. Subsequently, we repeated the same procedure adding extended sagittal bone cuts of 2° and 10° in the posterior tibial cortex. Furthermore, we calculated the bone volume that supported the tibial component, which is considered to affect stress distribution in the medial tibial condyle. Stress concentrations were observed on the medial tibial metaphyseal cortices and on the anterior and posterior tibial cortices in the corner of the cut surfaces in all models; moreover, the maximum principal stresses on the posterior cortex were larger than those on the anterior cortex. The extended sagittal bone cuts in the posterior tibial cortex further increased the stresses at these 3 sites. In the models with a 10° extended sagittal bone cut, the maximum principal stress on the posterior cortex increased as the tibial inclination changed from 6° varus to 6° valgus. The bone volume decreased as the inclination changed from varus to valgus. In this finite element analysis, the risk of MTCFs increases with increasing valgus inclination of the tibial component and with increased extension of the sagittal cut in the posterior tibial cortex. Copyright © 2016 Elsevier Inc. All rights reserved.

  17. Cephalometric risk factors of obstructive sleep apnea.

    PubMed

    Bayat, Mohamad; Shariati, Mahsa; Rakhshan, Vahid; Abbasi, Mohsen; Fateh, Ali; Sobouti, Farhad; Davoudmanesh, Zeinab

    2017-09-01

    Previous studies on risk factors of obstructive sleep apnea (OSA) are highly controversial and have mostly identified only a few cephalometric risk factors. OSA diagnosis was made according to the patients' apnea-hypopnea index (AHI). Included were 74 OSA patients (AHI > 10) and 52 control subjects (AHI ≤ 10 and free of other OSA symptoms). In both groups, 18 cephalometric parameters were traced: SNA, SNB, ANB, soft palate length (PNS-P), inferior airway space, distance from the mandibular plane to the hyoid (MP-H), lengths of the mandible (Go-Gn) and maxilla (PNS-ANS), vertical airway height (VAL), vertical height of the posterior maxilla (S-PNS), superior posterior airway space (SPAS), middle airway space, distances from hyoid to third cervical vertebra and retrognathion (HH1), C3 (C3H), and RGN (HRGN), maximum soft palate thickness (MPT), tongue length (TGL), and maximum tongue height. These parameters were compared using t-tests. Significant variables were SPAS (p = 0.027), MPT, TGL, HH1, C3H, HRGN, PNS-P, S-PNS, MP-H, VAL, and Go-Gn (all p values ≤ 0.006). OSA patients exhibited thicker and longer soft palates; hyoid bones more distant from the vertebrae, retrognathion, and mandibular plane; higher posterior maxillae; longer mandibles; and smaller superior posterior airways.

  18. Mechanism of continence after repair of posterior urethral disruption: evidence of rhabdosphincter activity.

    PubMed

    Whitson, Jared M; McAninch, Jack W; Tanagho, Emil A; Metro, Michael J; Rahman, Nadeem U

    2008-03-01

    Controversy exists regarding continence mechanisms in patients who undergo posterior urethral reconstruction after pelvic fracture. Some evidence suggests that continence after posterior urethroplasty is maintained by the bladder neck or proximal urethral mechanism without a functioning distal mechanism. We studied distal urethral sphincter activity in patients who have undergone posterior urethroplasty for pelvic fracture. A total of 12 patients who had undergone surgical repair of urethral disruption involving the prostatomembranous region underwent videourodynamics with urethral pressure profiles at rest, and during stress and hold maneuvers. Bladder pressure and urethral pressure, including proximal and distal urethral sphincter activity and pressure, were assessed in each patient. All 12 patients had daytime continence of urine postoperatively, with a follow-up of 12 to 242 months (mean 76) after anastomotic urethroplasty. Average maximum urethral pressure was 71 cm H2O. Average maximum urethral closure pressure was 61 cm H2O. The average urethral pressure seen during a brief hold maneuver was 111 cm H2O. Average functional sphincteric length was 2.5 cm. Six of the 12 patients had clear evidence of distal urethral sphincter function, as demonstrated by the profile. Continence after anastomotic urethroplasty for posttraumatic urethral strictures is maintained primarily by the proximal bladder neck. However, there is a significant contribution of the rhabdosphincter in many patients.

  19. Bayesian analysis of the astrobiological implications of life’s early emergence on Earth

    PubMed Central

    Spiegel, David S.; Turner, Edwin L.

    2012-01-01

    Life arose on Earth sometime in the first few hundred million years after the young planet had cooled to the point that it could support water-based organisms on its surface. The early emergence of life on Earth has been taken as evidence that the probability of abiogenesis is high, if starting from young Earth-like conditions. We revisit this argument quantitatively in a Bayesian statistical framework. By constructing a simple model of the probability of abiogenesis, we calculate a Bayesian estimate of its posterior probability, given the data that life emerged fairly early in Earth’s history and that, billions of years later, curious creatures noted this fact and considered its implications. We find that, given only this very limited empirical information, the choice of Bayesian prior for the abiogenesis probability parameter has a dominant influence on the computed posterior probability. Although terrestrial life's early emergence provides evidence that life might be abundant in the universe if early-Earth-like conditions are common, the evidence is inconclusive and indeed is consistent with an arbitrarily low intrinsic probability of abiogenesis for plausible uninformative priors. Finding a single case of life arising independently of our lineage (on Earth, elsewhere in the solar system, or on an extrasolar planet) would provide much stronger evidence that abiogenesis is not extremely rare in the universe. PMID:22198766

  20. Bayesian analysis of the astrobiological implications of life's early emergence on Earth.

    PubMed

    Spiegel, David S; Turner, Edwin L

    2012-01-10

    Life arose on Earth sometime in the first few hundred million years after the young planet had cooled to the point that it could support water-based organisms on its surface. The early emergence of life on Earth has been taken as evidence that the probability of abiogenesis is high, if starting from young Earth-like conditions. We revisit this argument quantitatively in a Bayesian statistical framework. By constructing a simple model of the probability of abiogenesis, we calculate a Bayesian estimate of its posterior probability, given the data that life emerged fairly early in Earth's history and that, billions of years later, curious creatures noted this fact and considered its implications. We find that, given only this very limited empirical information, the choice of Bayesian prior for the abiogenesis probability parameter has a dominant influence on the computed posterior probability. Although terrestrial life's early emergence provides evidence that life might be abundant in the universe if early-Earth-like conditions are common, the evidence is inconclusive and indeed is consistent with an arbitrarily low intrinsic probability of abiogenesis for plausible uninformative priors. Finding a single case of life arising independently of our lineage (on Earth, elsewhere in the solar system, or on an extrasolar planet) would provide much stronger evidence that abiogenesis is not extremely rare in the universe.
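
    The prior sensitivity described above can be reproduced with a deliberately simplified one-parameter toy model: treat abiogenesis as a Poisson process with rate lambda, so the probability that life appears by time t is 1 - exp(-lambda*t). All numbers below are illustrative, and this is not the paper's full likelihood.

        import numpy as np
        from scipy.integrate import cumulative_trapezoid

        lam = np.logspace(-3, 3, 2000)     # abiogenesis rate per Gyr (bounds assumed)
        t = 0.5                            # life appeared within ~0.5 Gyr (illustrative)
        like = 1.0 - np.exp(-lam * t)      # P(at least one origin by t | lambda)

        for name, prior in [("uniform", np.ones_like(lam)),
                            ("log-uniform", 1.0 / lam)]:
            post = like * prior
            post /= np.trapz(post, lam)    # normalize the posterior on the grid
            cdf = cumulative_trapezoid(post, lam, initial=0.0)
            median = lam[np.searchsorted(cdf, 0.5)]
            print(f"{name} prior -> posterior median rate {median:.3g} per Gyr")

    The two priors give posterior medians that differ by more than an order of magnitude, mirroring the paper's point that the posterior is dominated by the prior rather than by the data.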

  1. Applying Bayesian Item Selection Approaches to Adaptive Tests Using Polytomous Items

    ERIC Educational Resources Information Center

    Penfield, Randall D.

    2006-01-01

    This study applied the maximum expected information (MEI) and the maximum posterior-weighted information (MPI) approaches of computer adaptive testing item selection to the case of a test using polytomous items following the partial credit model. The MEI and MPI approaches are described. A simulation study compared the efficiency of ability…

  2. Effects of variability in probable maximum precipitation patterns on flood losses

    NASA Astrophysics Data System (ADS)

    Zischg, Andreas Paul; Felder, Guido; Weingartner, Rolf; Quinn, Niall; Coxon, Gemma; Neal, Jeffrey; Freer, Jim; Bates, Paul

    2018-05-01

    The assessment of the impacts of extreme floods is important for dealing with residual risk, particularly for critical infrastructure management and for insurance purposes. Thus, modelling of the probable maximum flood (PMF) from probable maximum precipitation (PMP) by coupling hydrological and hydraulic models has gained interest in recent years. Herein, we examine whether variability in precipitation patterns exceeds or falls below selected uncertainty factors in flood loss estimation, and whether the flood losses within a river basin are related to the probable maximum discharge at the basin outlet. We developed a model experiment with an ensemble of probable maximum precipitation scenarios created by Monte Carlo simulations. For each rainfall pattern, we computed the flood losses with a model chain and benchmarked the effects of variability in rainfall distribution against other model uncertainties. The results show that flood losses vary considerably within the river basin and depend on the timing and superimposition of the flood peaks from the basin's sub-catchments. In addition to the flood hazard component, the other components of flood risk, exposure and vulnerability, contribute remarkably to the overall variability. This leads to the conclusion that the estimation of the probable maximum expectable flood losses in a river basin should not be based exclusively on the PMF. Consequently, the basin-specific sensitivities to different precipitation patterns and the spatial organization of the settlements within the river basin need to be considered in the analyses of probable maximum flood losses.

  3. Bayesian inference of Earth's radial seismic structure from body-wave traveltimes using neural networks

    NASA Astrophysics Data System (ADS)

    de Wit, Ralph W. L.; Valentine, Andrew P.; Trampert, Jeannot

    2013-10-01

    How do body-wave traveltimes constrain the Earth's radial (1-D) seismic structure? Existing 1-D seismological models underpin 3-D seismic tomography and earthquake location algorithms. It is therefore crucial to assess the quality of such 1-D models, yet quantifying uncertainties in seismological models is challenging and thus often ignored. Ideally, quality assessment should be an integral part of the inverse method. Our aim in this study is twofold: (i) we show how to solve a general Bayesian non-linear inverse problem and quantify model uncertainties, and (ii) we investigate the constraint on spherically symmetric P-wave velocity (VP) structure provided by body-wave traveltimes from the EHB bulletin (phases Pn, P, PP and PKP). Our approach is based on artificial neural networks, which are very common in pattern recognition problems and can be used to approximate an arbitrary function. We use a Mixture Density Network to obtain 1-D marginal posterior probability density functions (pdfs), which provide a quantitative description of our knowledge on the individual Earth parameters. No linearization or model damping is required, which allows us to infer a model which is constrained purely by the data. We present 1-D marginal posterior pdfs for the 22 VP parameters and seven discontinuity depths in our model. P-wave velocities in the inner core, outer core and lower mantle are resolved well, with standard deviations of ~0.2 to 1 per cent with respect to the mean of the posterior pdfs. The maximum likelihoods of VP are in general similar to the corresponding ak135 values, which lie within one or two standard deviations from the posterior means, thus providing an independent validation of ak135 in this part of the radial model. Conversely, the data contain little or no information on P-wave velocity in the D'' layer, the upper mantle and the homogeneous crustal layers. Further, the data do not constrain the depth of the discontinuities in our model. Using additional phases available in the ISC bulletin, such as PcP, PKKP and the converted phases SP and ScP, may enhance the resolvability of these parameters. Finally, we show how the method can be extended to obtain a posterior pdf for a multidimensional model space. This enables us to investigate correlations between model parameters.
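
    The output stage of such a network is simply a Gaussian mixture over one Earth parameter, which is read off as the 1-D marginal posterior pdf. A sketch of that stage, with mixture parameters standing in for what a trained network would emit:

        import numpy as np

        def mdn_pdf(v, logits, means, sigmas):
            """Evaluate the Gaussian-mixture pdf that an MDN outputs."""
            w = np.exp(logits) / np.exp(logits).sum()     # softmax mixing weights
            comps = np.exp(-0.5 * ((v[:, None] - means) / sigmas) ** 2) \
                    / (np.sqrt(2.0 * np.pi) * sigmas)
            return comps @ w

        v = np.linspace(10.0, 12.0, 500)   # P-wave speed grid in km/s (illustrative)
        pdf = mdn_pdf(v, logits=np.array([0.1, 1.2]),
                      means=np.array([11.0, 11.2]), sigmas=np.array([0.05, 0.15]))
        mean = np.trapz(v * pdf, v)        # posterior mean of this marginal
        sd = np.sqrt(np.trapz((v - mean) ** 2 * pdf, v))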

  4. Variable selection models for genomic selection using whole-genome sequence data and singular value decomposition.

    PubMed

    Meuwissen, Theo H E; Indahl, Ulf G; Ødegård, Jørgen

    2017-12-27

    Non-linear Bayesian genomic prediction models such as BayesA/B/C/R involve iteration and mostly Markov chain Monte Carlo (MCMC) algorithms, which are computationally expensive, especially when whole-genome sequence (WGS) data are analyzed. Singular value decomposition (SVD) of the genotype matrix can facilitate genomic prediction in large datasets, and can be used to estimate marker effects and their prediction error variances (PEV) in a computationally efficient manner. Here, we developed, implemented, and evaluated a direct, non-iterative method for the estimation of marker effects for the BayesC genomic prediction model. The BayesC model assumes a priori that markers have normally distributed effects with probability π and no effect with probability (1 − π). Marker effects and their PEV are estimated by using SVD and the posterior probability of the marker having a non-zero effect is calculated. These posterior probabilities are used to obtain marker-specific effect variances, which are subsequently used to approximate BayesC estimates of marker effects in a linear model. A computer simulation study was conducted to compare alternative genomic prediction methods, where a single reference generation was used to estimate marker effects, which were subsequently used for 10 generations of forward prediction, for which accuracies were evaluated. SVD-based posterior probabilities of markers having non-zero effects were generally lower than MCMC-based posterior probabilities, but for some regions the opposite occurred, resulting in clear signals for QTL-rich regions. The accuracies of breeding values estimated using SVD- and MCMC-based BayesC analyses were similar across the 10 generations of forward prediction. For an intermediate number of generations (2 to 5) of forward prediction, accuracies obtained with the BayesC model tended to be slightly higher than accuracies obtained using the best linear unbiased prediction of SNP effects (SNP-BLUP model). When reducing marker density from WGS data to 30 K, SNP-BLUP tended to yield the highest accuracies, at least in the short term. Based on SVD of the genotype matrix, we developed a direct method for the calculation of BayesC estimates of marker effects. Although SVD- and MCMC-based marker effects differed slightly, their prediction accuracies were similar. Assuming that the SVD of the marker genotype matrix is already performed for other reasons (e.g. for SNP-BLUP), computation times for the BayesC predictions were comparable to those of SNP-BLUP.
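
    The SVD backbone of such a scheme can be sketched as follows: with centred genotypes X = U S V', ridge-type (SNP-BLUP) marker effects follow from a single decomposition with no iteration; the paper's method then reweights markers by their posterior probabilities of having non-zero effects. This toy covers only the SVD/ridge step, with sizes and the variance ratio invented:

        import numpy as np

        rng = np.random.default_rng(0)
        n, m = 200, 1000                        # individuals, markers (toy sizes)
        X = rng.choice([0.0, 1.0, 2.0], size=(n, m))
        X -= X.mean(axis=0)                     # centre the genotype matrix
        beta = np.zeros(m)
        beta[rng.choice(m, 10, replace=False)] = rng.normal(0.0, 0.5, 10)
        y = X @ beta + rng.normal(0.0, 1.0, n)  # simulated phenotypes

        # SNP-BLUP (ridge) marker effects via one thin SVD of X.
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        lam = 100.0                             # residual-to-marker variance ratio (assumed)
        beta_hat = Vt.T @ (s / (s**2 + lam) * (U.T @ y))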

  5. Optimal spatio-temporal design of water quality monitoring networks for reservoirs: Application of the concept of value of information

    NASA Astrophysics Data System (ADS)

    Maymandi, Nahal; Kerachian, Reza; Nikoo, Mohammad Reza

    2018-03-01

    This paper presents a new methodology for optimizing Water Quality Monitoring (WQM) networks of reservoirs and lakes using the concept of the value of information (VOI) and utilizing results of a calibrated numerical water quality simulation model. With reference to the value of information theory, the water quality at each checkpoint, with a specific prior probability, varies over time. After analyzing water quality samples taken from potential monitoring points, the posterior probabilities are updated using Bayes' theorem, and the VOI of the samples is calculated. In the next step, the stations with the maximum VOI are selected as optimal stations. This process is repeated for each sampling interval to obtain optimal monitoring network locations for each interval. The results of the proposed VOI-based methodology are compared with those obtained using an entropy-theoretic approach. As the results of the two methodologies would be partially different, in the next step, the results are combined using a weighting method. Finally, the optimal sampling interval and locations of WQM stations are chosen using the Evidential Reasoning (ER) decision-making method. The efficiency and applicability of the methodology are evaluated using available water quantity and quality data of the Karkheh Reservoir in the southwestern part of Iran.
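
    At a single checkpoint, the VOI computation reduces to a preposterior analysis: update the prior with Bayes' theorem for every possible sample outcome and compare the expected best-action utility with and without sampling. A small numerical sketch, with states, likelihoods and utilities invented:

        import numpy as np

        prior = np.array([0.3, 0.7])            # P(state): [polluted, clean]
        like = np.array([[0.8, 0.1],            # P(obs | state); rows: [alarm, clear]
                         [0.2, 0.9]])
        util = np.array([[-1.0, -4.0],          # utility[action, state]; rows: [treat, ignore]
                         [-9.0,  0.0]])

        v_prior = (util @ prior).max()          # best action on prior information alone
        p_obs = like @ prior                    # marginal probability of each observation
        post = like * prior / p_obs[:, None]    # Bayes' theorem, one row per observation
        v_post = (post @ util.T).max(axis=1)    # best expected utility after each outcome
        voi = p_obs @ v_post - v_prior          # expected value of sampling this point
        print("VOI =", voi)

    Checkpoints are then ranked by this quantity, and those with the largest VOI enter the network for the sampling interval in question.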

  6. Age-Related Variability in Tongue Pressure Patterns for Maximum Isometric and Saliva Swallowing Tasks

    PubMed Central

    Peladeau-Pigeon, Melanie

    2017-01-01

    Purpose The ability to generate tongue pressure plays a major role in bolus transport in swallowing. In studies of motor control, stability or variability of movement is a feature that changes with age, disease, task complexity, and perturbation. In this study, we explored whether age and tongue strength influence the stability of the tongue pressure generation pattern during isometric and swallowing tasks in healthy volunteers. Method Tongue pressure data, collected using the Iowa Oral Performance Instrument, were analyzed from 84 participants in sex-balanced and decade age-group strata. Tasks included maximum anterior and posterior isometric pressures and regular-effort saliva swallows. The cyclic spatiotemporal index (cSTI) was used to capture stability (vs. variability) in patterns of pressure generation. Mixed-model repeated measures analyses of covariance were performed separately for each task (anterior and posterior isometric pressures, saliva swallows) with between-participant factors of age group and sex, a within-participant factor of task repetition, and a continuous covariate of tongue strength. Results Neither age group nor sex effects were found. There was no significant relationship between tongue strength and the cSTI on the anterior isometric tongue pressure task (r = −.11). For the posterior isometric tongue pressure task, a significant negative correlation (r = −.395) was found between tongue strength and the cSTI. The opposite pattern of a significant positive correlation (r = .29) between tongue strength and the cSTI was seen for the saliva swallow task. Conclusions Tongue pressure generation patterns appear highly stable across repeated maximum isometric and saliva swallow tasks, despite advancing age. Greater pattern variability is seen with weaker posterior isometric pressures. Overall, saliva swallows had the lowest pressure amplitudes and highest pressure pattern variability as measured by the cSTI. PMID:29114767

  7. The known unknowns: neural representation of second-order uncertainty, and ambiguity

    PubMed Central

    Bach, Dominik R.; Hulme, Oliver; Penny, William D.; Dolan, Raymond J.

    2011-01-01

    Predictions provided by action-outcome probabilities entail a degree of (first-order) uncertainty. However, these probabilities themselves can be imprecise and embody second-order uncertainty. Tracking second-order uncertainty is important for optimal decision making and reinforcement learning. Previous functional magnetic resonance imaging investigations of second-order uncertainty in humans have drawn on an economic concept of ambiguity, where action-outcome associations in a gamble are either known (unambiguous) or completely unknown (ambiguous). Here, we relaxed the constraints associated with a purely categorical concept of ambiguity and varied the second-order uncertainty of gambles continuously, quantified as entropy over second-order probabilities. We show that second-order uncertainty influences decisions in a pessimistic way by biasing second-order probabilities, and that second-order uncertainty is negatively correlated with posterior cingulate cortex activity. The category of ambiguous (compared to non-ambiguous) gambles also biased choice in a similar direction, but was associated with distinct activation of a posterior parietal cortical area; an activation that we show reflects a different computational mechanism. Our findings indicate that behavioural and neural responses to second-order uncertainty are distinct from those associated with ambiguity and may call for a reappraisal of previous data. PMID:21451019

  8. Fractional Gaussian model in global optimization

    NASA Astrophysics Data System (ADS)

    Dimri, V. P.; Srivastava, R. P.

    2009-12-01

    The earth system is inherently non-linear, and it can be characterized well only if we incorporate non-linearity in the formulation and solution of the problem. A general tool often used for characterizing the earth system is inversion. Traditionally, inverse problems are solved using least-squares-based inversion after linearizing the formulation. The initial model in such inversion schemes is often assumed to follow a Gaussian posterior probability distribution. It is now well established that most of the physical properties of the earth follow a power law (fractal distribution). Thus, selecting the initial model from a power-law probability distribution will provide a more realistic solution. We present a new method which can draw samples from the posterior probability density function very efficiently using fractal-based statistics. The application of the method is demonstrated by inverting band-limited seismic data with well control. We used a fractal-based probability density function, which uses the mean, variance and Hurst coefficient of the model space, to draw the initial model. This initial model is then used in a global optimization inversion scheme. Inversion results using initial models generated by our method give higher-resolution estimates of the model parameters than the hitherto used gradient-based linear inversion method.
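
    A standard way to draw such power-law (fractal) initial models is spectral synthesis: filter random phases so the power spectrum falls off as f^(-beta), with beta tied to the Hurst coefficient (beta = 2H + 1 for fractional-Brownian-type profiles). A sketch using the mean, variance and Hurst coefficient of the model space, as the abstract describes; the final rescaling is an illustrative shortcut:

        import numpy as np

        def fractal_sample(n, hurst, mean=0.0, std=1.0, seed=None):
            """One realization of power-law-correlated noise (spectral synthesis)."""
            rng = np.random.default_rng(seed)
            beta = 2.0 * hurst + 1.0               # fBm-type spectral exponent
            f = np.fft.rfftfreq(n)
            amp = np.zeros_like(f)
            amp[1:] = f[1:] ** (-beta / 2.0)       # |F(f)| ~ f^(-beta/2) => P(f) ~ f^-beta
            phase = rng.uniform(0.0, 2.0 * np.pi, f.size)
            x = np.fft.irfft(amp * np.exp(1j * phase), n)
            x = (x - x.mean()) / x.std()           # impose the requested moments
            return mean + std * x

        initial_model = fractal_sample(256, hurst=0.7, mean=3.0, std=0.2, seed=42)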

  9. Application of maximum entropy to statistical inference for inversion of data from a single track segment.

    PubMed

    Stotts, Steven A; Koch, Robert A

    2017-08-01

    In this paper an approach is presented to estimate the constraint required to apply maximum entropy (ME) for statistical inference with underwater acoustic data from a single track segment. Previous algorithms for estimating the ME constraint require multiple source track segments to determine the constraint. The approach is relevant for addressing model mismatch effects, i.e., inaccuracies in parameter values determined from inversions because the propagation model does not account for all acoustic processes that contribute to the measured data. One effect of model mismatch is that the lowest cost inversion solution may be well outside a relatively well-known parameter value's uncertainty interval (prior), e.g., source speed from track reconstruction or towed source levels. The approach requires, for some particular parameter value, the ME constraint to produce an inferred uncertainty interval that encompasses the prior. Motivating this approach is the hypothesis that the proposed constraint determination procedure would produce a posterior probability density that accounts for the effect of model mismatch on inferred values of other inversion parameters for which the priors might be quite broad. Applications to both measured and simulated data are presented for model mismatch that produces minimum cost solutions either inside or outside some priors.

  10. Probability analysis for consecutive-day maximum rainfall for Tiruchirapalli City (south India, Asia)

    NASA Astrophysics Data System (ADS)

    Sabarish, R. Mani; Narasimhan, R.; Chandhru, A. R.; Suribabu, C. R.; Sudharsan, J.; Nithiyanantham, S.

    2017-05-01

    In the design of irrigation and other hydraulic structures, evaluating the magnitude of extreme rainfall for a specific probability of occurrence is of much importance. The capacity of such structures is usually designed to cater to the probability of occurrence of extreme rainfall during its lifetime. In this study, an extreme value analysis of rainfall for Tiruchirapalli City in Tamil Nadu was carried out using 100 years of rainfall data. Statistical methods were used in the analysis. The best-fit probability distribution was evaluated for 1, 2, 3, 4 and 5 days of continuous maximum rainfall. The goodness of fit was evaluated using Chi-square test. The results of the goodness-of-fit tests indicate that log-Pearson type III method is the overall best-fit probability distribution for 1-day maximum rainfall and consecutive 2-, 3-, 4-, 5- and 6-day maximum rainfall series of Tiruchirapalli. To be reliable, the forecasted maximum rainfalls for the selected return periods are evaluated in comparison with the results of the plotting position.

  11. Aerosol-type retrieval and uncertainty quantification from OMI data

    NASA Astrophysics Data System (ADS)

    Kauppi, Anu; Kolmonen, Pekka; Laine, Marko; Tamminen, Johanna

    2017-11-01

    We discuss uncertainty quantification for aerosol-type selection in satellite-based atmospheric aerosol retrieval. The retrieval procedure uses precalculated aerosol microphysical models stored in look-up tables (LUTs) and top-of-atmosphere (TOA) spectral reflectance measurements to solve for the aerosol characteristics. The forward model approximations cause systematic differences between the modelled and observed reflectance. Acknowledging this model discrepancy as a source of uncertainty allows us to produce more realistic uncertainty estimates and assists the selection of the most appropriate LUTs for each individual retrieval. This paper focuses on the aerosol microphysical model selection and characterisation of uncertainty in the retrieved aerosol type and aerosol optical depth (AOD). The concept of model evidence is used as a tool for model comparison. The method is based on a Bayesian inference approach, in which all uncertainties are described as a posterior probability distribution. When there is no single best-matching aerosol microphysical model, we use a statistical technique based on Bayesian model averaging to combine the AOD posterior probability densities of the best-fitting models to obtain an averaged AOD estimate. We also determine the shared evidence of the best-matching models of a certain main aerosol type in order to quantify how plausible it is that it represents the underlying atmospheric aerosol conditions. The developed method is applied to Ozone Monitoring Instrument (OMI) measurements using a multiwavelength approach for retrieving the aerosol type and AOD estimate with uncertainty quantification for cloud-free over-land pixels. Several larger pixel set areas were studied in order to investigate the robustness of the developed method. We evaluated the retrieved AOD by comparison with ground-based measurements at example sites. We found that the uncertainty of AOD expressed by the posterior probability distribution reflects the difficulty in model selection. The posterior probability distribution can provide a comprehensive characterisation of the uncertainty in this kind of aerosol-type selection problem. As a result, the proposed method can account for the model error and also include the model selection uncertainty in the total uncertainty budget.
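
    The averaging step is a mixture of the per-model AOD posteriors weighted by posterior model probabilities, which are proportional to the model evidences. A schematic with Gaussian stand-ins for the per-model posteriors (all numbers invented):

        import numpy as np

        tau = np.linspace(0.0, 1.5, 600)                 # AOD grid
        log_ev = np.array([-10.2, -10.5, -13.0])         # per-model log evidence (assumed)
        means = np.array([0.40, 0.48, 0.30])             # per-model AOD posterior means
        sds = np.array([0.05, 0.06, 0.04])

        w = np.exp(log_ev - log_ev.max())
        w /= w.sum()                                     # posterior model probabilities
        pdfs = np.exp(-0.5 * ((tau[:, None] - means) / sds) ** 2) \
               / (np.sqrt(2.0 * np.pi) * sds)
        bma_pdf = pdfs @ w                               # model-averaged AOD posterior
        aod_mean = np.trapz(tau * bma_pdf, tau)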

  12. Fast Bayesian approach for modal identification using free vibration data, Part I - Most probable value

    NASA Astrophysics Data System (ADS)

    Zhang, Feng-Liang; Ni, Yan-Chun; Au, Siu-Kui; Lam, Heung-Fai

    2016-03-01

    The identification of modal properties from field testing of civil engineering structures is becoming economically viable, thanks to the advent of modern sensor and data acquisition technology. Its demand is driven by innovative structural designs and increased performance requirements of dynamic-prone structures that call for a close cross-checking or monitoring of their dynamic properties and responses. Existing instrumentation capabilities and modal identification techniques allow structures to be tested under free vibration, forced vibration (known input) or ambient vibration (unknown broadband loading). These tests can be considered complementary rather than competing as they are based on different modeling assumptions in the identification model and have different implications on costs and benefits. Uncertainty arises naturally in the dynamic testing of structures due to measurement noise, sensor alignment error, modeling error, etc. This is especially relevant in field vibration tests because the test condition in the field environment can hardly be controlled. In this work, a Bayesian statistical approach is developed for modal identification using the free vibration response of structures. A frequency domain formulation is proposed that makes statistical inference based on the Fast Fourier Transform (FFT) of the data in a selected frequency band. This significantly simplifies the identification model because only the modes dominating the frequency band need to be included. It also legitimately ignores the information in the excluded frequency bands that are either irrelevant or difficult to model, thereby significantly reducing modeling error risk. The posterior probability density function (PDF) of the modal parameters is derived rigorously from modeling assumptions and Bayesian probability logic. Computational difficulties associated with calculating the posterior statistics, including the most probable value (MPV) and the posterior covariance matrix, are addressed. Fast computational algorithms for determining the MPV are proposed so that the method can be practically implemented. In the companion paper (Part II), analytical formulae are derived for the posterior covariance matrix so that it can be evaluated without resorting to the finite difference method. The proposed method is verified using synthetic data. It is also applied to modal identification of full-scale field structures.

  13. The maximum growth rate of life on Earth

    NASA Astrophysics Data System (ADS)

    Corkrey, Ross; McMeekin, Tom A.; Bowman, John P.; Olley, June; Ratkowsky, David

    2018-01-01

    Life on Earth spans a range of temperatures and exhibits biological growth rates that are temperature dependent. While the observation that growth rates are temperature dependent is well known, we have recently shown that the statistical distribution of specific growth rates for life on Earth is a function of temperature (Corkrey et al., 2016). The maximum growth rates of all life have a distinct limit, even under optimal conditions, and this limit varies predictably with temperature. We term this distribution of growth rates the biokinetic spectrum for temperature (BKST). The BKST possibly arises from a trade-off between catalytic activity and stability of enzymes involved in a rate-limiting Master Reaction System (MRS) within the cell. We develop a method to extrapolate quantile curves for the BKST to obtain the posterior probability of the maximum rate of growth of any form of life on Earth. The maximum rate curve conforms to the observed data except below 0°C and above 100°C, where the predicted value may be positively biased. The deviation below 0°C may arise from the bulk properties of water, while the degradation of biomolecules may be important above 100°C. The BKST has potential application in astrobiology by providing an estimate of the maximum possible growth rate attainable by terrestrial life and perhaps life elsewhere. We suggest that the area under the maximum growth rate curve and the peak rate may be useful characteristics in considerations of habitability. The BKST can serve as a diagnostic for unusual life, such as a second biogenesis or non-terrestrial life. Since the MRS must have been heavily conserved, the BKST may contain evolutionary relics. The BKST can serve as a signature summarizing the nature of life in environments beyond Earth, or to characterize species arising from a second biogenesis on Earth.

  14. Efficient pairwise RNA structure prediction using probabilistic alignment constraints in Dynalign

    PubMed Central

    2007-01-01

    Background Joint alignment and secondary structure prediction of two RNA sequences can significantly improve the accuracy of the structural predictions. Methods addressing this problem, however, are forced to employ constraints that reduce computation by restricting the alignments and/or structures (i.e. folds) that are permissible. In this paper, a new methodology is presented for the purpose of establishing alignment constraints based on nucleotide alignment and insertion posterior probabilities. Using a hidden Markov model, posterior probabilities of alignment and insertion are computed for all possible pairings of nucleotide positions from the two sequences. These alignment and insertion posterior probabilities are additively combined to obtain probabilities of co-incidence for nucleotide position pairs. A suitable alignment constraint is obtained by thresholding the co-incidence probabilities. The constraint is integrated with Dynalign, a free energy minimization algorithm for joint alignment and secondary structure prediction. The resulting method is benchmarked against the previous version of Dynalign and against other programs for pairwise RNA structure prediction. Results The proposed technique eliminates manual parameter selection in Dynalign and provides significant computational time savings in comparison to prior constraints in Dynalign while simultaneously providing a small improvement in the structural prediction accuracy. Savings are also realized in memory. In experiments over a 5S RNA dataset with average sequence length of approximately 120 nucleotides, the method reduces computation by a factor of 2. The method performs favorably in comparison to other programs for pairwise RNA structure prediction: yielding better accuracy, on average, and requiring significantly fewer computational resources. Conclusion Probabilistic analysis can be utilized in order to automate the determination of alignment constraints for pairwise RNA structure prediction methods in a principled fashion. These constraints can reduce the computational and memory requirements of these methods while maintaining or improving their accuracy of structural prediction. This extends the practical reach of these methods to longer length sequences. The revised Dynalign code is freely available for download. PMID:17445273
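
    The constraint construction amounts to thresholding a matrix of co-incidence probabilities. In the sketch below, random stand-ins replace the HMM posteriors, since computing those requires a full forward-backward pass over the sequence pair:

        import numpy as np

        rng = np.random.default_rng(3)
        L1, L2 = 120, 118                        # lengths of the two RNA sequences
        p_align = rng.random((L1, L2)) * 0.02    # stand-in alignment posteriors
        p_insert = rng.random((L1, L2)) * 0.01   # stand-in insertion posteriors
        np.fill_diagonal(p_align, 0.8)           # pretend the near-diagonal is favoured

        co_incidence = p_align + p_insert        # additive combination, as described
        allowed = co_incidence >= 0.05           # thresholding yields the constraint
        print("fraction of (i, j) pairs retained:", allowed.mean())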

  15. Influence of Femoral Component Design on Retrograde Femoral Nail Starting Point.

    PubMed

    Service, Benjamin C; Kang, William; Turnbull, Nathan; Langford, Joshua; Haidukewych, George; Koval, Kenneth J

    2015-10-01

    Our experience with retrograde femoral nailing after periprosthetic distal femur fractures was that femoral components with deep trochlear grooves posteriorly displace the nail entry point, resulting in recurvatum deformity. This study evaluated the influence of distal femoral prosthetic design on the starting point. One hundred lateral knee images were examined. The distal edge of Blumensaat's line was used to create a ratio of its location relative to the maximum anteroposterior condylar width, called the starting point ratio (SPR). Femoral trials from 6 manufacturers were analyzed to determine the location of the simulated nail position in the sagittal plane compared with the maximum anteroposterior prosthetic width. These measurements were used to create a ratio, the femoral component ratio (FCR). The FCR was compared with the SPR to determine if a femoral component would be at risk for a retrograde nail starting point posterior to Blumensaat's line. The mean SPR was 0.392 ± 0.03, and the mean FCR was 0.416 ± 0.05, which was significantly greater (P = 0.003). The mean FCR was 0.444 ± 0.06 for the cruciate retaining (CR) trials and 0.393 ± 0.04 for the posterior stabilized trials; this difference was significant (P < 0.001). The FCR for the femoral trials studied was significantly greater than the SPR for native knees and was significantly greater for CR femoral components compared with posterior stabilized components. These findings demonstrate that many total knee prostheses, particularly CR designs, are at risk for a starting point posterior to Blumensaat's line.

  16. A general formula for computing maximum proportion correct scores in various psychophysical paradigms with arbitrary probability distributions of stimulus observations.

    PubMed

    Dai, Huanping; Micheyl, Christophe

    2015-05-01

    Proportion correct (Pc) is a fundamental measure of task performance in psychophysics. The maximum Pc score that can be achieved by an optimal (maximum-likelihood) observer in a given task is of both theoretical and practical importance, because it sets an upper limit on human performance. Within the framework of signal detection theory, analytical solutions for computing the maximum Pc score have been established for several common experimental paradigms under the assumption of Gaussian additive internal noise. However, as the scope of applications of psychophysical signal detection theory expands, the need is growing for psychophysicists to compute maximum Pc scores for situations involving non-Gaussian (internal or stimulus-induced) noise. In this article, we provide a general formula for computing the maximum Pc in various psychophysical experimental paradigms for arbitrary probability distributions of sensory activity. Moreover, easy-to-use MATLAB code implementing the formula is provided. Practical applications of the formula are illustrated, and its accuracy is evaluated, for two paradigms and two types of probability distributions (uniform and Gaussian). The results demonstrate that Pc scores computed using the formula remain accurate even for continuous probability distributions, as long as the conversion from continuous probability density functions to discrete probability mass functions is supported by a sufficiently high sampling resolution. We hope that the exposition in this article, and the freely available MATLAB code, facilitates calculations of maximum performance for a wider range of experimental situations, as well as explorations of the impact of different assumptions concerning internal-noise distributions on maximum performance in psychophysical experiments.
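
    The quantity in question is also easy to estimate by Monte Carlo for any pair of observation distributions: in two-interval forced choice, the optimal observer picks the interval with the larger likelihood ratio, so the maximum Pc is the probability that the signal interval wins, with ties split evenly. A sketch for uniform distributions, one of the cases evaluated in the article (the specific supports are invented):

        import numpy as np
        from scipy.stats import uniform

        rng = np.random.default_rng(7)
        N = 200000
        fn, fs = uniform(0.0, 1.0), uniform(0.3, 1.0)   # noise on [0,1], signal on [0.3,1.3]
        xn, xs = fn.rvs(N, random_state=rng), fs.rvs(N, random_state=rng)

        def llr(x):
            return fs.logpdf(x) - fn.logpdf(x)          # log-likelihood ratio

        a, b = llr(xs), llr(xn)                         # 2AFC: compare the two intervals
        pc_max = np.mean(a > b) + 0.5 * np.mean(a == b)
        print("maximum Pc ~", pc_max)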

  17. Image segmentation using hidden Markov Gauss mixture models.

    PubMed

    Pyun, Kyungsuk; Lim, Johan; Won, Chee Sun; Gray, Robert M

    2007-07-01

    Image segmentation is an important tool in image processing and can serve as an efficient front end to sophisticated algorithms and thereby simplify subsequent processing. We develop a multiclass image segmentation method using hidden Markov Gauss mixture models (HMGMMs) and provide examples of segmentation of aerial images and textures. HMGMMs incorporate supervised learning, fitting the observation probability distribution given each class by a Gauss mixture estimated using vector quantization with a minimum discrimination information (MDI) distortion. We formulate the image segmentation problem using a maximum a posteriori criterion and find the hidden states that maximize the posterior density given the observation. We estimate both the hidden Markov parameters and hidden states using a stochastic expectation-maximization algorithm. Our results demonstrate that HMGMM provides better classification in terms of Bayes risk and spatial homogeneity of the classified objects than do several popular methods, including classification and regression trees, learning vector quantization, causal hidden Markov models (HMMs), and multiresolution HMMs. The computational load of HMGMM is similar to that of the causal HMM.

  18. The phylogenetic relationships of known mosquito (Diptera: Culicidae) mitogenomes.

    PubMed

    Chu, Hongliang; Li, Chunxiao; Guo, Xiaoxia; Zhang, Hengduan; Luo, Peng; Wu, Zhonghua; Wang, Gang; Zhao, Tongyan

    2018-01-01

    The known mosquito mitogenomes, comprising a total of 34 species belonging to five genera, were collected from GenBank, and the practicality and effectiveness of variation in the complete mitochondrial DNA genome and in portions of the mitochondrial COI gene were assessed for reconstructing the phylogeny of mosquitoes. Phylogenetic trees were reconstructed on the basis of parsimony, maximum likelihood, and Bayesian (BI) methods. It is concluded that: (1) both mitogenomes and the COI gene support the monophyly of the following taxa: subgenus Nyssorhynchus, subgenus Cellia, the Anopheles albitarsis complex, the Anopheles gambiae complex, and the Anopheles punctulatus group; (2) genus Aedes is not monophyletic relative to Ochlerotatus vigilax; (3) the mitogenome results indicate a close relationship between Anopheles epiroticus and the Anopheles gambiae complex, and between the Anopheles dirus complex and the Anopheles punctulatus group; (4) the Bayesian posterior probabilities (BPP) within the phylogenetic tree reconstructed from mitogenomes are higher than those in the COI tree. The results show that phylogenetic relationships reconstructed using the mitogenomes were more similar to those based on morphological data.

  19. Inferring probabilistic stellar rotation periods using Gaussian processes

    NASA Astrophysics Data System (ADS)

    Angus, Ruth; Morton, Timothy; Aigrain, Suzanne; Foreman-Mackey, Daniel; Rajpaul, Vinesh

    2018-02-01

    Variability in the light curves of spotted, rotating stars is often non-sinusoidal and quasi-periodic - spots move on the stellar surface and have finite lifetimes, causing stellar flux variations to slowly shift in phase. A strictly periodic sinusoid therefore cannot accurately model a rotationally modulated stellar light curve. Physical models of stellar surfaces have many drawbacks preventing effective inference, such as highly degenerate or high-dimensional parameter spaces. In this work, we test an appropriate effective model: a Gaussian Process with a quasi-periodic covariance kernel function. This highly flexible model allows sampling of the posterior probability density function of the periodic parameter, marginalizing over the other kernel hyperparameters using a Markov Chain Monte Carlo approach. To test the effectiveness of this method, we infer rotation periods from 333 simulated stellar light curves, demonstrating that the Gaussian process method produces periods that are more accurate than both a sine-fitting periodogram and an autocorrelation function method. We also demonstrate that it works well on real data, by inferring rotation periods for 275 Kepler stars with previously measured periods. We provide a table of rotation periods for these and many more, altogether 1102 Kepler objects of interest, and their posterior probability density function samples. Because this method delivers posterior probability density functions, it will enable hierarchical studies involving stellar rotation, particularly those involving population modelling, such as inferring stellar ages, obliquities in exoplanet systems, or characterizing star-planet interactions. The code used to implement this method is available online.
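
    The covariance function at the heart of this approach multiplies a squared-exponential envelope (spot evolution) by a strictly periodic term (rotation). A sketch of the kernel, and of drawing one synthetic light curve from it; the hyperparameter values are illustrative:

        import numpy as np

        def quasi_periodic_kernel(t1, t2, amp, ell, gamma, period):
            """Quasi-periodic covariance for starspot-modulated light curves."""
            tau = np.abs(t1[:, None] - t2[None, :])
            return amp * np.exp(-tau**2 / (2.0 * ell**2)
                                - gamma * np.sin(np.pi * tau / period) ** 2)

        t = np.linspace(0.0, 90.0, 300)             # days of hypothetical photometry
        K = quasi_periodic_kernel(t, t, amp=1.0, ell=20.0, gamma=10.0, period=13.7)
        K += 1e-6 * np.eye(t.size)                  # white-noise jitter for stability
        y = np.random.default_rng(0).multivariate_normal(np.zeros(t.size), K)

    In the inference itself, the period enters as one hyperparameter of this kernel, and its posterior is sampled by MCMC while the remaining hyperparameters are marginalized over.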

  20. Bayesian enhancement two-stage design for single-arm phase II clinical trials with binary and time-to-event endpoints.

    PubMed

    Shi, Haolun; Yin, Guosheng

    2018-02-21

    Simon's two-stage design is one of the most commonly used methods in phase II clinical trials with binary endpoints. The design tests the null hypothesis that the response rate is less than an uninteresting level, versus the alternative hypothesis that the response rate is greater than a desirable target level. From a Bayesian perspective, we compute the posterior probabilities of the null and alternative hypotheses given that a promising result is declared in Simon's design. Our study reveals that because the frequentist hypothesis testing framework places its focus on the null hypothesis, a potentially efficacious treatment identified by rejecting the null under Simon's design could have only less than 10% posterior probability of attaining the desirable target level. Due to the indifference region between the null and alternative, rejecting the null does not necessarily mean that the drug achieves the desirable response level. To clarify such ambiguity, we propose a Bayesian enhancement two-stage (BET) design, which guarantees a high posterior probability of the response rate reaching the target level, while allowing for early termination and sample size saving when the drug's response rate is smaller than the clinically uninteresting level. Moreover, the BET design can be naturally adapted to accommodate survival endpoints. We conduct extensive simulation studies to examine the empirical performance of our design and present two trial examples as applications. © 2018, The International Biometric Society.
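
    The posterior probability that the design controls is straightforward with a conjugate Beta prior on the response rate; the prior, target level and stage data below are invented for illustration:

        from scipy.stats import beta

        n, x = 25, 9                 # stage outcome: x responses in n patients
        a0, b0 = 0.5, 0.5            # Jeffreys Beta prior on the response rate
        p_target = 0.30              # desirable target response level

        post_prob = 1.0 - beta.cdf(p_target, a0 + x, b0 + n - x)
        print("P(response rate > target | data) =", post_prob)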

  1. Reversible posterior leucoencephalopathy syndrome associated with bone marrow transplantation.

    PubMed

    Teive, H A; Brandi, I V; Camargo, C H; Bittencourt, M A; Bonfim, C M; Friedrich, M L; de Medeiros, C R; Werneck, L C; Pasquini, R

    2001-09-01

    Reversible posterior leucoencephalopathy syndrome (RPLS) has previously been described in patients with renal insufficiency, eclampsia or hypertensive encephalopathy, and in patients receiving immunosuppressive therapy. The mechanism by which immunosuppressive agents can cause this syndrome is not clear, but it is probably related to the cytotoxic effects of these agents on the vascular endothelium. We report eight patients who received cyclosporine A (CSA) after allogeneic bone marrow transplantation or as treatment for severe aplastic anemia (SAA) and who developed posterior leucoencephalopathy. The most common signs and symptoms were seizures and headache. Neurological dysfunction occurred preceded by or concomitant with high blood pressure and some degree of acute renal failure in six patients. Computerized tomography studies showed low-density white matter lesions involving the posterior areas of the cerebral hemispheres. Symptoms and neuroimaging abnormalities were reversible, and improvement occurred in all patients when given lower doses of CSA or when the drug was withdrawn. RPLS may be considered an expression of CSA neurotoxicity.

  2. Posterior semicircular canal dehiscence: value of VEMP and multidetector CT.

    PubMed

    Vanspauwen, R; Salembier, L; Van den Hauwe, L; Parizel, P; Wuyts, F L; Van de Heyning, P H

    2006-01-01

    To illustrate that posterior semicircular canal dehiscence can present similarly to superior semicircular canal dehiscence. The symptomatology initially presented as probable Menière's disease evolving into a mixed conductive hearing loss with a Carhart notch-type perceptive component suggestive of otosclerosis-type stapes fixation. A small hole stapedotomy resulted in a dead ear and a horizontal semicircular canal hypofunction. Recurrent incapacitating vertigo attacks developed. Vestibular evoked myogenic potential (VEMP) testing demonstrated intact vestibulocollic reflexes. Additional evaluation with high resolution multidetector computed tomography (MDCT) of the temporal bone showed a dehiscence of the left posterior semicircular canal. Besides superior semicircular canal dehiscence, posterior semicircular canal dehiscence has to be included in the differential diagnosis of atypical Menière's disease and/or low tone conductive hearing loss. The value of performing MDCT before otosclerosis-type surgery is stressed. VEMP might contribute to establishing the differential diagnosis.

  3. Reliability-based design optimization using a generalized subset simulation method and posterior approximation

    NASA Astrophysics Data System (ADS)

    Ma, Yuan-Zhuo; Li, Hong-Shuang; Yao, Wei-Xing

    2018-05-01

    The evaluation of the probabilistic constraints in reliability-based design optimization (RBDO) problems has always been significant and challenging work, which strongly affects the performance of RBDO methods. This article deals with RBDO problems using a recently developed generalized subset simulation (GSS) method and a posterior approximation approach. The posterior approximation approach is used to transform all the probabilistic constraints into ordinary constraints as in deterministic optimization. The assessment of multiple failure probabilities required by the posterior approximation approach is achieved by GSS in a single run at all supporting points, which are selected by a proper experimental design scheme combining Sobol' sequences and Bucher's design. Sequentially, the transformed deterministic design optimization problem can be solved by optimization algorithms, for example, the sequential quadratic programming method. Three optimization problems are used to demonstrate the efficiency and accuracy of the proposed method.

  4. Modulation of cognitive control levels via manipulation of saccade trial-type probability assessed with event-related BOLD fMRI.

    PubMed

    Pierce, Jordan E; McDowell, Jennifer E

    2016-02-01

    Cognitive control supports flexible behavior adapted to meet current goals and can be modeled through investigation of saccade tasks with varying cognitive demands. Basic prosaccades (rapid glances toward a newly appearing stimulus) are supported by neural circuitry, including occipital and posterior parietal cortex, frontal and supplementary eye fields, and basal ganglia. These trials can be contrasted with complex antisaccades (glances toward the mirror image location of a stimulus), which are characterized by greater functional magnetic resonance imaging (fMRI) blood oxygenation level-dependent (BOLD) signal in the aforementioned regions and recruitment of additional regions such as dorsolateral prefrontal cortex. The current study manipulated the cognitive demands of these saccade tasks by presenting three rapid event-related runs of mixed saccades with a varying probability of antisaccade vs. prosaccade trials (25, 50, or 75%). Behavioral results showed an effect of trial-type probability on reaction time, with slower responses in runs with a high antisaccade probability. Imaging results exhibited an effect of probability in bilateral pre- and postcentral gyrus, bilateral superior temporal gyrus, and medial frontal gyrus. Additionally, the interaction between saccade trial type and probability revealed a strong probability effect for prosaccade trials, showing a linear increase in activation parallel to antisaccade probability in bilateral temporal/occipital, posterior parietal, medial frontal, and lateral prefrontal cortex. In contrast, antisaccade trials showed elevated activation across all runs. Overall, this study demonstrated that improbable performance of a typically simple prosaccade task led to augmented BOLD signal to support changing cognitive control demands, resulting in activation levels similar to the more complex antisaccade task. Copyright © 2016 the American Physiological Society.

  5. Effect of partial meniscectomy at the medial posterior horn on tibiofemoral contact mechanics and meniscal hoop strains in human knees.

    PubMed

    Seitz, Andreas Martin; Lubomierski, Anja; Friemert, Benedikt; Ignatius, Anita; Dürselen, Lutz

    2012-06-01

    We examined, in 10 human cadaveric knee joints, the influence of a 10-mm-wide partial meniscectomy, as performed during the treatment of radial tears in the posterior horn of the medial meniscus, on maximum contact pressure, contact area (CA), and meniscal hoop strain in the lateral and medial knee compartments. At 0° and 30° flexion angles, 20% and 50% partial meniscectomy did not influence maximum contact pressure or area. Only at 60° of knee flexion did 50% partial resection significantly increase the medial maximum contact pressure and decrease the medial CA. However, 100% partial resection increased maximum contact pressure and decreased CA significantly in the meniscectomized medial knee compartment in all tested knee positions. No significant differences were noted for meniscal hoop strain. From a biomechanical point of view, our in vitro study suggests that the medial joint compartment is not in danger of accelerated cartilage degeneration up to a resection limit of 20% of meniscal depth and 10 mm width. Contact mechanics are likely to be more sensitive to partial meniscectomy at higher flexion angles, which has to be further investigated. Copyright © 2011 Orthopaedic Research Society.

  6. The estimation of lower refractivity uncertainty from radar sea clutter using the Bayesian-MCMC method

    NASA Astrophysics Data System (ADS)

    Sheng, Zheng

    2013-02-01

    The estimation of lower atmospheric refractivity from radar sea clutter (RFC) is a complicated nonlinear optimization problem. This paper deals with the RFC problem in a Bayesian framework. It uses the unbiased Markov Chain Monte Carlo (MCMC) sampling technique, which can provide accurate posterior probability distributions of the estimated refractivity parameters, by using an electromagnetic split-step fast Fourier transform terrain parabolic equation propagation model within a Bayesian inversion framework. In contrast to a global optimization algorithm, the Bayesian-MCMC approach can obtain not only approximate solutions, but also the probability distributions of the solutions, that is, uncertainty analyses of the solutions. The Bayesian-MCMC algorithm is applied to both simulated and real radar sea-clutter data. Reference data are the assumed simulation profiles and refractivity profiles obtained by helicopter sounding. The inversion algorithm is assessed (i) by comparing the estimated refractivity profiles with the assumed simulation and the helicopter sounding data, and (ii) by examining the one-dimensional (1D) and two-dimensional (2D) posterior probability distributions of the solutions.

  7. Downregulation of the posterior medial frontal cortex prevents social conformity.

    PubMed

    Klucharev, Vasily; Munneke, Moniek A M; Smidts, Ale; Fernández, Guillén

    2011-08-17

    We often change our behavior to conform to real or imagined group pressure. Social influence on our behavior has been extensively studied in social psychology, but its neural mechanisms have remained largely unknown. Here we demonstrate that the transient downregulation of the posterior medial frontal cortex by theta-burst transcranial magnetic stimulation reduces conformity, as indicated by reduced conformal adjustments in line with group opinion. Both the extent and probability of conformal behavioral adjustments decreased significantly relative to a sham and a control stimulation over another brain area. The posterior part of the medial frontal cortex has previously been implicated in behavioral and attitudinal adjustments. Here, we provide the first interventional evidence of its critical role in social influence on human behavior.

  8. Spectral likelihood expansions for Bayesian inference

    NASA Astrophysics Data System (ADS)

    Nagel, Joseph B.; Sudret, Bruno

    2016-03-01

    A spectral approach to Bayesian inference is presented. It pursues the emulation of the posterior probability density. The starting point is a series expansion of the likelihood function in terms of orthogonal polynomials. From this spectral likelihood expansion all statistical quantities of interest can be calculated semi-analytically. The posterior is formally represented as the product of a reference density and a linear combination of polynomial basis functions. Both the model evidence and the posterior moments are related to the expansion coefficients. This formulation avoids Markov chain Monte Carlo simulation and allows one to make use of linear least squares instead. The pros and cons of spectral Bayesian inference are discussed and demonstrated on the basis of simple applications from classical statistics and inverse modeling.
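
    A 1-D toy version of the idea: fit the likelihood with Legendre polynomials by least squares under a uniform reference density on [-1, 1]; the evidence and the posterior mean then follow from the first two coefficients alone. The observation and noise scale are invented:

        import numpy as np
        from numpy.polynomial import legendre
        from scipy.stats import norm

        lik = lambda x: norm.pdf(0.3, loc=x, scale=0.4)   # toy likelihood, y = 0.3

        xs = np.linspace(-1.0, 1.0, 2001)
        coef = legendre.legfit(xs, lik(xs), deg=20)       # spectral likelihood expansion

        Z = coef[0]                                       # evidence: only P0 survives
        post_mean = coef[1] / (3.0 * coef[0])             # from P1 orthogonality
        Z_check = np.trapz(0.5 * lik(xs), xs)             # quadrature cross-check
        print(Z, Z_check, post_mean)

    With the reference density 1/2 on [-1, 1], orthogonality of the Legendre basis gives Z = c0 and posterior mean c1/(3 c0), which is the kind of semi-analytic shortcut the abstract refers to.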

  9. Digital map of posterior cerebral artery infarcts associated with posterior cerebral artery trunk and branch occlusion.

    PubMed

    Phan, Thanh G; Fong, Ashley C; Donnan, Geoffrey; Reutens, David C

    2007-06-01

    Knowledge of the extent and distribution of infarcts of the posterior cerebral artery (PCA) may give insight into the limits of the arterial territory and infarct mechanism. We describe the creation of a digital atlas of PCA infarcts associated with PCA branch and trunk occlusion by magnetic resonance imaging techniques. Infarcts were manually segmented on T2-weighted magnetic resonance images obtained >24 hours after stroke onset. The images were linearly registered into a common stereotaxic coordinate space. The segmented images were averaged to yield the probability of involvement by infarction at each voxel. Comparisons were made with existing maps of the PCA territory. Thirty patients with a median age of 61 years (range, 22 to 86 years) were studied. In the digital atlas of the PCA, the highest frequency of infarction was within the medial temporal lobe and lingual gyrus (probability=0.60 to 0.70). The mean and maximal PCA infarct volumes were 55.1 and 128.9 cm³, respectively. Comparison with published maps showed greater agreement in the anterior and medial boundaries of the PCA territory compared with its posterior and lateral boundaries. We have created a probabilistic digital atlas of the PCA based on subacute magnetic resonance scans. This approach is useful for establishing the spatial distribution of strokes in a given cerebral arterial territory and determining the regions within the arterial territory that are at greatest risk of infarction.

  10. [Effect of the number and inclination of implant on stress distribution for mandibular full-arch fixed prosthesis].

    PubMed

    Zheng, Xiaoying; Li, Xiaomei; Tang, Zhen; Gong, Lulu; Wang, Dalin

    2014-06-01

    To study the effect of implant number and inclination on stress distribution in implants and the surrounding bone with three-dimensional finite element analysis. A special denture was made for an edentulous mandible cast to collect three-dimensional finite element data. Three three-dimensional finite element models were established as follows. Model 1: 6 parallel implants; model 2: 4 parallel implants; model 3: 4 implants, with the two anterior implants parallel and the two distal implants tilted 30° distally. Among the three models, the maximum stress values found in anterior implants, posterior implants, and peri-implant bone were highest in model 3

  11. Uncertainty quantification of voice signal production mechanical model and experimental updating

    NASA Astrophysics Data System (ADS)

    Cataldo, E.; Soize, C.; Sampaio, R.

    2013-11-01

    The aim of this paper is to analyze the uncertainty quantification in a voice production mechanical model and to update the probability density function corresponding to the tension parameter using the Bayes method and experimental data. Three parameters are considered uncertain in the voice production mechanical model used: the tension parameter, the neutral glottal area and the subglottal pressure. The tension parameter of the vocal folds is mainly responsible for changing the fundamental frequency of a voice signal, generated by a mechanical/mathematical model for producing voiced sounds. The three uncertain parameters are modeled by random variables. The probability density function related to the tension parameter is considered uniform and the probability density functions related to the neutral glottal area and the subglottal pressure are constructed using the Maximum Entropy Principle. The output of the stochastic computational model is the random voice signal, and the Monte Carlo method is used to solve the stochastic equations, allowing realizations of the random voice signals to be generated. For each realization of the random voice signal, the corresponding realization of the random fundamental frequency is calculated and the prior pdf of this random fundamental frequency is then estimated. Experimental data are available for the fundamental frequency, and the posterior probability density function of the random tension parameter is then estimated using the Bayes method. In addition, an application is performed considering a case with a pathology in the vocal folds. The strategy developed here is important mainly for two reasons. The first is the possibility of updating the probability density function of a parameter, the tension parameter of the vocal folds, which cannot be measured directly. The second is the construction of the likelihood function: in general it is predefined using a known pdf, whereas here it is constructed in a new and different manner, using the system itself.
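
    A grid-based sketch of this kind of update, with a made-up stand-in for the voice model (the function, numbers and observed frequencies below are illustrative assumptions, not the paper's model): the likelihood at each tension value is estimated from realizations produced by the stochastic model itself, then combined with the uniform prior.

      import numpy as np
      from scipy.stats import gaussian_kde

      def simulate_f0(q, rng, n=200):
          # Hypothetical surrogate for the voice model: maps the tension
          # parameter q (plus random inputs) to fundamental frequencies.
          return 100.0 + 60.0 * q + 5.0 * rng.standard_normal(n)

      rng = np.random.default_rng(2)
      q_grid = np.linspace(0.0, 1.0, 101)       # support of the uniform prior
      f0_obs = np.array([138.0, 142.5, 140.2])  # made-up experimental data

      log_post = np.zeros_like(q_grid)
      for i, q in enumerate(q_grid):
          # Likelihood constructed from the system itself: a kernel density
          # estimate of f0 realizations generated by the model at this q.
          kde = gaussian_kde(simulate_f0(q, rng))
          log_post[i] = np.sum(np.log(kde(f0_obs) + 1e-300))

      post = np.exp(log_post - log_post.max())
      post /= post.sum() * (q_grid[1] - q_grid[0])  # normalized posterior pdf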

  12. Bayesian seismic tomography by parallel interacting Markov chains

    NASA Astrophysics Data System (ADS)

    Gesret, Alexandrine; Bottero, Alexis; Romary, Thomas; Noble, Mark; Desassis, Nicolas

    2014-05-01

    The velocity field estimated by first arrival traveltime tomography is commonly used as a starting point for further seismological, mineralogical, tectonic or similar analysis. In order to interpret the results quantitatively, the tomography uncertainty values as well as their spatial distribution are required. The estimated velocity model is obtained through inverse modeling by minimizing an objective function that compares observed and computed traveltimes. This step is often performed by gradient-based optimization algorithms. The major drawback of such local optimization schemes, beyond the possibility of being trapped in a local minimum, is that they do not account for the multiple possible solutions of the inverse problem. They are therefore unable to assess the uncertainties linked to the solution. Within a Bayesian (probabilistic) framework, solving the tomography inverse problem amounts to estimating the posterior probability density function of the velocity model using a global sampling algorithm. Markov chain Monte Carlo (MCMC) methods are known to produce samples of virtually any distribution. In such a Bayesian inversion, the total number of simulations we can afford is strongly constrained by the computational cost of the forward model. Although fast algorithms have recently been developed for computing first arrival traveltimes of seismic waves, a complete exploration of the posterior distribution of the velocity model is hardly feasible, especially when it is high dimensional and/or multimodal. In the latter case, the chain may even stay stuck in one of the modes. In order to improve the mixing properties of classical single-chain MCMC, we propose to let several Markov chains at different temperatures interact. This method can make efficient use of large CPU clusters without increasing the global computational cost with respect to classical MCMC and is therefore particularly suited for Bayesian inversion. The exchanges between the chains allow a precise sampling of the high probability zones of the model space while preventing the chains from ending up stuck in a probability maximum. This approach thus supplies a robust way to analyze the tomography imaging uncertainties. The interacting MCMC approach is illustrated on two synthetic examples of tomography of calibration shots such as encountered in induced microseismic studies. In the second application, a wavelet-based model parameterization is presented that significantly reduces the dimension of the problem, making the algorithm efficient even for a complex velocity model.
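
    The interaction scheme is essentially parallel tempering: each chain targets the posterior raised to the power 1/T, and adjacent chains occasionally propose to swap states. A minimal sketch under that assumption (toy bimodal target, illustrative tuning constants), not the authors' implementation:

      import numpy as np

      def tempered_mcmc(log_post, n_chains=4, n_steps=5000, step=0.5, seed=3):
          rng = np.random.default_rng(seed)
          temps = 2.0 ** np.arange(n_chains)       # temperatures 1, 2, 4, 8
          x = rng.standard_normal(n_chains)        # one state per chain
          lp = np.array([log_post(xi) for xi in x])
          cold = []
          for _ in range(n_steps):
              for k in range(n_chains):            # within-chain moves
                  prop = x[k] + step * np.sqrt(temps[k]) * rng.standard_normal()
                  lp_prop = log_post(prop)
                  if np.log(rng.random()) < (lp_prop - lp[k]) / temps[k]:
                      x[k], lp[k] = prop, lp_prop
              j = rng.integers(n_chains - 1)       # propose swap j <-> j+1
              a = (1 / temps[j] - 1 / temps[j + 1]) * (lp[j + 1] - lp[j])
              if np.log(rng.random()) < a:
                  x[j], x[j + 1] = x[j + 1], x[j]
                  lp[j], lp[j + 1] = lp[j + 1], lp[j]
              cold.append(x[0])                    # chain at T=1 is the sample
          return np.array(cold)

      # Bimodal toy posterior where a single chain tends to get stuck:
      log_post = lambda v: np.logaddexp(-0.5 * (v - 3.0) ** 2,
                                        -0.5 * (v + 3.0) ** 2)
      samples = tempered_mcmc(log_post)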

  13. In vivo determination of total knee arthroplasty kinematics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Komistek, Richard D; Mahfouz, Mohamed R; Bertin, Kim

    2008-01-01

    The objective of this study was to determine if consistent posterior femoral rollback of an asymmetrical posterior cruciate retaining (PCR) total knee arthroplasty was mostly influenced by the implant design, surgical technique, or presence of a well-functioning posterior cruciate ligament (PCL). Three-dimensional femorotibial kinematics was determined for 80 subjects implanted by 3 surgeons, and each subject was evaluated under fluoroscopic surveillance during a deep knee bend. All subjects in this present study having an intact PCL had a well-functioning PCR knee and experienced normal kinematic patterns, although less in magnitude than the normal knee. In addition, a surprising finding was that, on average, subjects without a PCL still achieved posterior femoral rollback from full extension to maximum knee flexion. The findings in this study revealed that implant design did contribute to the normal kinematics demonstrated by subjects having this asymmetrical PCR total knee arthroplasty.

  14. Neural Mechanisms for Integrating Prior Knowledge and Likelihood in Value-Based Probabilistic Inference

    PubMed Central

    Ting, Chih-Chung; Yu, Chia-Chen; Maloney, Laurence T.

    2015-01-01

    In Bayesian decision theory, knowledge about the probabilities of possible outcomes is captured by a prior distribution and a likelihood function. The prior reflects past knowledge and the likelihood summarizes current sensory information. The two combined (integrated) form a posterior distribution that allows estimation of the probability of different possible outcomes. In this study, we investigated the neural mechanisms underlying Bayesian integration using a novel lottery decision task in which both prior knowledge and likelihood information about reward probability were systematically manipulated on a trial-by-trial basis. Consistent with Bayesian integration, as sample size increased, subjects tended to weigh likelihood information more compared with prior information. Using fMRI in humans, we found that the medial prefrontal cortex (mPFC) correlated with the mean of the posterior distribution, a statistic that reflects the integration of prior knowledge and likelihood of reward probability. Subsequent analysis revealed that both prior and likelihood information were represented in mPFC and that the neural representations of prior and likelihood in mPFC reflected changes in the behaviorally estimated weights assigned to these different sources of information in response to changes in the environment. Together, these results establish the role of mPFC in prior-likelihood integration and highlight its involvement in representing and integrating these distinct sources of information. PMID:25632152
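
    The weighting behavior described above follows standard conjugate-update arithmetic. A hedged toy version with a Beta prior on reward probability and binomial evidence (all numbers invented for illustration) shows the posterior mean shifting from the prior toward the likelihood as sample size grows:

      import numpy as np

      a0, b0 = 6.0, 4.0             # Beta prior on reward probability, mean 0.6
      for n in (5, 20, 80):         # increasing likelihood sample size
          wins = round(0.3 * n)     # observed reward rate of 0.3
          a, b = a0 + wins, b0 + (n - wins)
          print(n, round(a / (a + b), 3))   # posterior mean
      # The posterior mean drifts from the prior mean (0.6) toward the
      # observed rate (0.3) as n grows; this is the signature of increasing
      # likelihood weight in Bayesian integration.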

  15. Kinematic analysis of anterior cruciate ligament reconstruction in total knee arthroplasty

    PubMed Central

    Liu, Hua-Wei; Ni, Ming; Zhang, Guo-Qiang; Li, Xiang; Chen, Hui; Zhang, Qiang; Chai, Wei; Zhou, Yong-Gang; Chen, Ji-Ying; Liu, Yu-Liang; Cheng, Cheng-Kung; Wang, Yan

    2016-01-01

    Background: This study aims to retain normal knee kinematics after knee replacement surgery by reconstructing the anterior cruciate ligament during total knee arthroplasty. Method: We used computational simulation tools to establish four dynamic knee models: a normal knee model, a posterior cruciate ligament retaining knee model, a posterior cruciate ligament substituting knee model, and an anterior cruciate ligament reconstructing knee model. Our proposed method utilizes magnetic resonance images to reconstruct solid bones and attachments of ligaments, and assembles femoral and tibial components according to representative literature and operational specifications. Dynamic data of axial tibial rotation and femoral translation from full extension to 135° of flexion were measured for analyzing the motion of the knee models. Findings: The computational simulation results show that, compared with the posterior cruciate ligament retained and posterior cruciate ligament substituted knee models, reconstructing the anterior cruciate ligament improves the posterior movement of the lateral condyle and medial condyle and tibial internal rotation through a full range of flexion. The maximum posterior translations of the lateral condyle and medial condyle and the maximum tibial internal rotation of the anterior cruciate ligament reconstructed knee are 15.3 mm, 4.6 mm and 20.6° at 135° of flexion. Interpretation: Reconstructing the anterior cruciate ligament in total knee arthroplasty is shown to be a more effective way of maintaining normal knee kinematics than posterior cruciate ligament retained and posterior cruciate ligament substituted total knee arthroplasty. PMID:27347334

  16. Bayesian Estimation of Small Effects in Exercise and Sports Science.

    PubMed

    Mengersen, Kerrie L; Drovandi, Christopher C; Robert, Christian P; Pyne, David B; Gore, Christopher J

    2016-01-01

    The aim of this paper is to provide a Bayesian formulation of the so-called magnitude-based inference approach to quantifying and interpreting effects, and in a case study example provide accurate probabilistic statements that correspond to the intended magnitude-based inferences. The model is described in the context of a published small-scale athlete study which employed a magnitude-based inference approach to compare the effect of two altitude training regimens (live high-train low (LHTL) and intermittent hypoxic exposure (IHE)) on running performance and blood measurements of elite triathletes. The posterior distributions, and corresponding point and interval estimates, for the parameters and associated effects and comparisons of interest were estimated using Markov chain Monte Carlo simulations. The Bayesian analysis was shown to provide more direct probabilistic comparisons of treatments and to be able to identify small effects of interest. The approach avoided asymptotic assumptions and overcame issues such as multiple testing. Bayesian analysis of unscaled effects showed a probability of 0.96 that LHTL yields a substantially greater increase in hemoglobin mass than IHE, a 0.93 probability of a substantially greater improvement in running economy, and a greater than 0.96 probability that both IHE and LHTL yield a substantially greater improvement in maximum blood lactate concentration compared to a placebo. The conclusions are consistent with those obtained using a 'magnitude-based inference' approach that has been promoted in the field. The paper demonstrates that a fully Bayesian analysis is a simple and effective way of analysing small effects, providing a rich set of results that are straightforward to interpret in terms of probabilistic statements.
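
    Statements such as "a probability of 0.96 that LHTL yields a substantially greater increase" are simple tail areas of the posterior. A sketch of the computation, with placeholder posterior draws and an assumed smallest-substantial-effect threshold (both invented here):

      import numpy as np

      rng = np.random.default_rng(4)
      # Placeholder for MCMC draws of the LHTL-minus-IHE effect on
      # hemoglobin mass; a real analysis would use the sampler's output.
      draws = rng.normal(2.1, 0.6, 20000)
      threshold = 1.0          # assumed smallest substantial effect

      p_substantial = np.mean(draws > threshold)
      print(f"P(effect substantially positive) = {p_substantial:.2f}")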

  17. Retinoic acid signaling acts via Hox1 to establish the posterior limit of the pharynx in the chordate amphioxus.

    PubMed

    Schubert, Michael; Yu, Jr-Kai; Holland, Nicholas D; Escriva, Hector; Laudet, Vincent; Holland, Linda Z

    2005-01-01

    In the invertebrate chordate amphioxus, as in vertebrates, retinoic acid (RA) specifies position along the anterior/posterior axis with elevated RA signaling in the middle third of the endoderm setting the posterior limit of the pharynx. Here we show that AmphiHox1 is also expressed in the middle third of the developing amphioxus endoderm and is activated by RA signaling. Knockdown of AmphiHox1 function with an antisense morpholino oligonucleotide shows that AmphiHox1 mediates the role of RA signaling in setting the posterior limit of the pharynx by repressing expression of pharyngeal markers in the posterior foregut/midgut endoderm. The spatiotemporal expression of these endodermal genes in embryos treated with RA or the RA antagonist BMS009 indicates that Pax1/9, Pitx and Notch are probably more upstream than Otx and Nodal in the hierarchy of genes repressed by RA signaling. This work highlights the potential of amphioxus, a genomically simple, vertebrate-like invertebrate chordate, as a paradigm for understanding gene hierarchies similar to the more complex ones of vertebrates.

  18. Annular and central heavy pigment deposition on the posterior lens capsule in the pigment dispersion syndrome: pigment deposition on the posterior lens capsule in the pigment dispersion syndrome.

    PubMed

    Turgut, Burak; Türkçüoğlu, Peykan; Deniz, Nurettin; Catak, Onur

    2008-12-01

    To report annular and central heavy pigment deposition on the posterior lens capsule in a case of pigment dispersion syndrome. Case report. A 36-year-old female with bilateral pigment dispersion syndrome presented with a progressive decrease in visual acuity in the right eye over the past 1-2 years. Clinical examination revealed the typical findings of pigment dispersion syndrome, including bilateral Krukenberg spindles, iris transillumination defects, and dense trabecular meshwork pigmentation. Remarkably, annular and central dense pigmentation of the posterior lens capsule was noted in the right eye. Annular pigment deposition on the posterior lens capsule may be a rare finding associated with pigment dispersion syndrome. Such a finding suggests that there may be aqueous flow into the retrolental space in some patients with this condition. The route for the central pigmentation is the entry of aqueous into Berger's space; in our case, it is probable that spontaneous detachment of the anterior hyaloid membrane aided this entry.

  19. Preservation of the articular capsule and short lateral rotator in direct anterior approach to total hip arthroplasty.

    PubMed

    Kanda, Akio; Kaneko, Kazuo; Obayashi, Osamu; Mogami, Atsuhiko; Morohashi, Itaru

    2018-03-09

    In total hip arthroplasty via a direct anterior approach, the femur must be elevated at the time of femoral implant placement. For adequate elevation, division of the posterior soft tissues is necessary. However, if we damage and separate the posterior muscle tissue, we lose the benefits of the intermuscular approach. Furthermore, damage to the posterior soft tissue can result in posterior dislocation. We investigated whether protecting the posterior soft tissue increases joint stability in the early postoperative period and results in a lower dislocation rate. We evaluated muscle strength recovery by measuring the maximum width of the internal obturator muscle on CT images (GE-Healthcare Discovery CT 750HD). We compared the maximum width of the muscle belly preoperatively versus 10 days and 6 months postoperatively. As clinical evaluations, we also investigated the range of motion of the hip joint, hip joint function based on the Japanese Orthopaedic Association hip score (JOA score), and the dislocation rate 6 months after surgery. The width of the internal obturator muscle increased significantly from 15.1 ± 3.1 mm before surgery to 16.4 ± 2.8 mm 6 months after surgery. The JOA score improved significantly from 50.8 ± 15.1 points to 95.6 ± 7.6 points. No dislocations occurred in this study. We cut only the posterosuperior articular capsule and protected the internal obturator muscle to preserve muscle strength. We repaired the entire posterosuperior and anterior articular capsule. These treatments increase joint stability in the early postoperative period, thus reducing the dislocation rate. Therapeutic, Level IV.

  20. Bayesian Inference for Generalized Linear Models for Spiking Neurons

    PubMed Central

    Gerwinn, Sebastian; Macke, Jakob H.; Bethge, Matthias

    2010-01-01

    Generalized Linear Models (GLMs) are commonly used statistical methods for modelling the relationship between neural population activity and presented stimuli. When the dimension of the parameter space is large, strong regularization has to be used in order to fit GLMs to datasets of realistic size without overfitting. By imposing properly chosen priors over parameters, Bayesian inference provides an effective and principled approach for achieving regularization. Here we show how the posterior distribution over model parameters of GLMs can be approximated by a Gaussian using the Expectation Propagation algorithm. In this way, we obtain an estimate of the posterior mean and posterior covariance, allowing us to calculate Bayesian confidence intervals that characterize the uncertainty about the optimal solution. From the posterior we also obtain a different point estimate, namely the posterior mean as opposed to the commonly used maximum a posteriori estimate. We systematically compare the different inference techniques on simulated as well as on multi-electrode recordings of retinal ganglion cells, and explore the effects of the chosen prior and the performance measure used. We find that good performance can be achieved by choosing a Laplace prior together with the posterior mean estimate. PMID:20577627

  1. Joint Segmentation and Deformable Registration of Brain Scans Guided by a Tumor Growth Model

    PubMed Central

    Gooya, Ali; Pohl, Kilian M.; Bilello, Michel; Biros, George; Davatzikos, Christos

    2011-01-01

    This paper presents an approach for joint segmentation and deformable registration of brain scans of glioma patients to a normal atlas. The proposed method is based on the Expectation Maximization (EM) algorithm that incorporates a glioma growth model for atlas seeding, a process which modifies the normal atlas into one with a tumor and edema. The modified atlas is registered into the patient space and utilized for the posterior probability estimation of various tissue labels. EM iteratively refines the estimates of the registration parameters, the posterior probabilities of tissue labels and the tumor growth model parameters. We have applied this approach to 10 glioma scans acquired with four Magnetic Resonance (MR) modalities (T1, T1-CE, T2 and FLAIR) and validated the result by comparing them to manual segmentations by clinical experts. The resulting segmentations look promising and quantitatively match well with the expert provided ground truth. PMID:21995070

  2. Empty sella syndrome secondary to intrasellar cyst in adolescence.

    PubMed

    Raiti, S; Albrink, M J; Maclaren, N K; Chadduck, W M; Gabriele, O F; Chou, S M

    1976-09-01

    A 15-year-old boy had growth failure and failure of sexual development. The probable onset was at age 10. Endocrine studies showed hypopituitarism with deficiency of growth hormone and follicle-stimulating hormone, an abnormal response to metyrapone, and deficiency of thyroid function. Luteinizing hormone level was in the low-normal range. Posterior pituitary function was normal. Roentgenogram showed a large sella with some destruction of the posterior clinoids. Transsphenoidal exploration was carried out. The sella was empty except for a whitish membrane; no pituitary tissue was seen. The sella was packed with muscle. Recovery was uneventful, and the patient was given replacement therapy. On histologic examination, the cyst wall showed low pseudostratified cuboidal epithelium and occasional squamous metaplasia. Hemosiderin-filled phagocytes and acinar structures were also seen. The diagnosis was probable rupture of an intrasellar epithelial cyst, leading to empty sella syndrome.

  3. Log-Linear Models for Gene Association

    PubMed Central

    Hu, Jianhua; Joshi, Adarsh; Johnson, Valen E.

    2009-01-01

    We describe a class of log-linear models for the detection of interactions in high-dimensional genomic data. This class of models leads to a Bayesian model selection algorithm that can be applied to data that have been reduced to contingency tables using ranks of observations within subjects, and discretization of these ranks within gene/network components. Many normalization issues associated with the analysis of genomic data are thereby avoided. A prior density based on Ewens’ sampling distribution is used to restrict the number of interacting components assigned high posterior probability, and the calculation of posterior model probabilities is expedited by approximations based on the likelihood ratio statistic. Simulation studies are used to evaluate the efficiency of the resulting algorithm for known interaction structures. Finally, the algorithm is validated in a microarray study for which it was possible to obtain biological confirmation of detected interactions. PMID:19655032

  4. Posterior Shift of Contact Point between Femoral Component and Polyethylene in the LCS Rotating Platform Implant under Weight Bearing Condition.

    PubMed

    Oh, Won Seok; Lee, Yong Seuk; Kim, Byung Kak; Sim, Jae Ang; Lee, Beom Koo

    2016-06-01

    To analyze the contact mechanics of the femoral component and polyethylene of the Low Contact Stress rotating platform (LCS-RP) in nonweight bearing and weight bearing conditions using full flexion lateral radiographs. From May 2009 to December 2013, 58 knees in 41 patients diagnosed with osteoarthritis and treated with total knee arthroplasty (TKA) were included in this study. TKA was performed using an LCS-RP knee prosthesis. Full flexion lateral radiographs in both weight bearing and nonweight bearing conditions were taken at least one month postoperatively (average, 28.8 months). Translation of the femoral component was determined by the contact point between the femoral component and polyethylene. Maximum flexion was measured as the angle between the lines drawn at the midpoints of the femur and tibia. Posterior shift of the contact point in LCS-RP TKA was observed under the weight bearing condition, which resulted in deeper flexion compared to the nonweight bearing condition. In the LCS-RP TKA, the contact point between the femoral component and polyethylene moved posteriorly under weight bearing, the joint was more congruent, and maximum flexion increased with weight bearing.

  5. Validation and reliability of the sex estimation of the human os coxae using freely available DSP2 software for bioarchaeology and forensic anthropology.

    PubMed

    Brůžek, Jaroslav; Santos, Frédéric; Dutailly, Bruno; Murail, Pascal; Cunha, Eugenia

    2017-10-01

    A new tool for skeletal sex estimation based on measurements of the human os coxae is presented using skeletons from a metapopulation of identified adult individuals from twelve independent population samples. For reliable sex estimation, a posterior probability greater than 0.95 was considered to be the classification threshold: below this value, estimates are considered indeterminate. By providing free software, we aim to develop an even more disseminated method for sex estimation. Ten metric variables collected from 2,040 ossa coxa of adult subjects of known sex were recorded between 1986 and 2002 (reference sample). To test both the validity and reliability, a target sample consisting of two series of adult ossa coxa of known sex (n = 623) was used. The DSP2 software (Diagnose Sexuelle Probabiliste v2) is based on Linear Discriminant Analysis, and the posterior probabilities are calculated using an R script. For the reference sample, any combination of four dimensions provides a correct sex estimate in at least 99% of cases. The percentage of individuals for whom sex can be estimated depends on the number of dimensions; for all ten variables it is higher than 90%. Those results are confirmed in the target sample. Our posterior probability threshold of 0.95 for sex estimate corresponds to the traditional sectioning point used in osteological studies. DSP2 software is replacing the former version that should not be used anymore. DSP2 is a robust and reliable technique for sexing adult os coxae, and is also user friendly. © 2017 Wiley Periodicals, Inc.
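
    The decision rule described here (discriminant scores converted to posterior probabilities, with estimates below 0.95 left indeterminate) can be sketched as follows. The measurements are simulated placeholders, and scikit-learn's LDA stands in for the DSP2/R implementation:

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      rng = np.random.default_rng(5)
      # Placeholder os coxae measurements for two sexes (4 variables each).
      X = np.vstack([rng.normal(0.0, 1.0, (100, 4)),
                     rng.normal(1.2, 1.0, (100, 4))])
      y = np.repeat(["F", "M"], 100)

      lda = LinearDiscriminantAnalysis().fit(X, y)
      prob = lda.predict_proba(X)            # posterior probabilities per class
      best = prob.max(axis=1)
      estimate = np.where(best >= 0.95,      # 0.95 classification threshold
                          lda.classes_[prob.argmax(axis=1)],
                          "indeterminate")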

  6. Bayesian feature selection for high-dimensional linear regression via the Ising approximation with applications to genomics.

    PubMed

    Fisher, Charles K; Mehta, Pankaj

    2015-06-01

    Feature selection, identifying a subset of variables that are relevant for predicting a response, is an important and challenging component of many methods in statistics and machine learning. Feature selection is especially difficult and computationally intensive when the number of variables approaches or exceeds the number of samples, as is often the case for many genomic datasets. Here, we introduce a new approach, the Bayesian Ising Approximation (BIA), to rapidly calculate posterior probabilities for feature relevance in L2 penalized linear regression. In the regime where the regression problem is strongly regularized by the prior, we show that computing the marginal posterior probabilities for features is equivalent to computing the magnetizations of an Ising model with weak couplings. Using a mean field approximation, we show it is possible to rapidly compute the feature selection path described by the posterior probabilities as a function of the L2 penalty. We present simulations and analytical results illustrating the accuracy of the BIA on some simple regression problems. Finally, we demonstrate the applicability of the BIA to high-dimensional regression by analyzing a gene expression dataset with nearly 30 000 features. These results also highlight the impact of correlations between features on Bayesian feature selection. An implementation of the BIA in C++, along with data for reproducing our gene expression analyses, are freely available at http://physics.bu.edu/∼pankajm/BIACode. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
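
    The mean-field step reduces to iterating the self-consistency equations m_i = tanh(h_i + Σ_j J_ij m_j) and mapping magnetizations to marginal relevance probabilities via p_i = (1 + m_i)/2. The fields and couplings below are random toys; in the BIA they would be computed from the regression design matrix and the L2 penalty:

      import numpy as np

      def mean_field_magnetizations(h, J, n_iter=200):
          # Iterate m_i = tanh(h_i + sum_j J_ij m_j) to a fixed point.
          m = np.zeros_like(h)
          for _ in range(n_iter):
              m = np.tanh(h + J @ m)
          return m

      rng = np.random.default_rng(6)
      p = 8
      h = rng.normal(0.0, 0.5, p)             # toy external fields
      J = 0.05 * rng.normal(0.0, 1.0, (p, p))
      J = (J + J.T) / 2                       # weak symmetric couplings
      np.fill_diagonal(J, 0.0)

      m = mean_field_magnetizations(h, J)
      relevance = (1 + m) / 2                 # marginal P(feature is relevant)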

  7. Three-dimensional finite-element analysis of functional stresses in different bone locations produced by implants placed in the maxillary posterior region of the sinus floor.

    PubMed

    Koca, Omer Lutfi; Eskitascioglu, Gurcan; Usumez, Aslihan

    2005-01-01

    Implants placed in the posterior maxilla have lower success rates compared to implants placed in other oral regions. Inadequate bone levels have been suggested as a reason for this differential success rate. The purpose of this study was to determine the amount and localization of functional stresses in implants and adjacent bone locations when the implants were placed in the posterior maxilla in proximity to the sinus, using finite element analysis (FEA). A 3-dimensional finite element model of a maxillary posterior section of bone (Type 3) was used in this study. Different bony dimensions were generated to perform nonlinear calculations. A single-piece 4.1x10-mm screw-shaped dental implant system (ITI solid implant) was modeled and inserted into atrophic maxillary models with crestal bone heights of 4, 5, 7, 10, or 13 mm. In some models the implant penetrated the sinus floor. Cobalt-chromium (Wiron 99) was used as the crown framework material placed onto the implant, and porcelain was used for the occlusal surface of the crown. A total average occlusal force (vertical load) of 300 N was applied at the palatal cusp (150 N) and mesial fossa (150 N) of the crown. The implant and superstructure were simulated in finite element software (Pro/Engineer 2000i program). For the porcelain superstructure, the maximum von Mises stress values at all bone levels were observed at the mesial fossa and palatal cusp. For the bone structure, the maximum von Mises stress values were observed in the palatal cortical bone adjacent to the implant neck. There was no stress within the spongy bone. High stresses occurred within the implants for all bone levels. The maximum von Mises stresses in the implants were localized at the implant neck for the 4- and 5-mm bone levels, but for the 7-, 10-, and 13-mm bone levels the stresses within the implants were distributed more evenly.

  8. Non-stationary hydrologic frequency analysis using B-spline quantile regression

    NASA Astrophysics Data System (ADS)

    Nasri, B.; Bouezmarni, T.; St-Hilaire, A.; Ouarda, T. B. M. J.

    2017-11-01

    Hydrologic frequency analysis is commonly used by engineers and hydrologists to provide the basic information for planning, design and management of hydraulic and water resources systems under the assumption of stationarity. However, with increasing evidence of climate change, the assumption of stationarity, which is a prerequisite for traditional frequency analysis, may no longer hold, and hence the results of conventional analyses become questionable. In this study, we consider a framework for frequency analysis of extremes based on B-spline quantile regression, which allows data to be modeled in the presence of non-stationarity and/or linear and non-linear dependence on covariates. A Markov chain Monte Carlo (MCMC) algorithm was used to estimate the quantiles and their posterior distributions. A coefficient of determination and the Bayesian information criterion (BIC) for quantile regression are used to select the best model, i.e., for each quantile we choose the degree and number of knots of the adequate B-spline quantile regression model. The method is applied to annual maximum and minimum streamflow records in Ontario, Canada. Climate indices are considered to describe the non-stationarity in the variable of interest and to estimate the quantiles in this case. The results show large differences between the non-stationary quantiles and their stationary equivalents for annual maximum and minimum discharges with high annual non-exceedance probabilities.

  9. Surgical options for lumbosacral fusion: biomechanical stability, advantage, disadvantage and affecting factors in selecting options.

    PubMed

    Yoshihara, Hiroyuki

    2014-07-01

    Numerous surgical procedures and instrumentation techniques for lumbosacral fusion (LSF) have been developed, probably because of its high mechanical demand and unique anatomy. Surgical options include anterior column support (ACS) and posterior stabilization procedures. Biomechanical studies have been performed to verify the stability of those options. Each option has its own advantages but also disadvantages. This review article reports the surgical options for lumbosacral fusion, their biomechanical stability, their advantages/disadvantages, and the factors affecting option selection. Review of literature. LSF offers many options for both ACS and posterior stabilization procedures. Combinations of posterior stabilization procedures are an option, as are combinations of ACS and posterior stabilization procedures. It is difficult to make a recommendation or treatment algorithm for LSF from the current literature. However, it is important to know all aspects of the options, and decision-making on surgical options for LSF needs to be tailored for each patient, considering factors such as biomechanical stress and osteoporosis.

  10. Posterior error probability in the Mu-2 Sequential Ranging System

    NASA Technical Reports Server (NTRS)

    Coyle, C. W.

    1981-01-01

    An expression is derived for the posterior error probability in the Mu-2 Sequential Ranging System. An algorithm is developed which closely bounds the exact answer and can be implemented in the machine software. A computer simulation is provided to illustrate the improved level of confidence in a ranging acquisition using this figure of merit as compared to that using only the prior probabilities. In a simulation of 20,000 acquisitions with an experimentally determined threshold setting, the algorithm detected 90% of the actual errors and falsely indicated errors in 0.2% of the acquisitions.

  11. Vertebral artery ostium atherosclerotic plaque as a potential source of posterior circulation ischemic stroke: results from the Borgess Medical Center Vertebral Artery Ostium Stenting Registry.

    PubMed

    Al-Ali, Firas; Barrow, Tom; Duan, Li; Jefferson, Anne; Louis, Susan; Luke, Kim; Major, Kevin; Smoker, Sandy; Walker, Sarah; Yacobozzi, Margaret

    2011-09-01

    Although atherosclerotic plaque in the carotid and coronary arteries is accepted as a cause of ischemia, vertebral artery ostium (VAO) atherosclerotic plaque is not widely recognized as a source of ischemic stroke. We seek to demonstrate its implication in some posterior circulation ischemia. This is a nonrandomized, prospective, single-center registry on consecutive patients presenting with posterior circulation ischemia who underwent VAO stenting for significant atherosclerotic stenosis. Diagnostic evaluation and imaging studies determined the likelihood of this lesion as the symptom source (highly likely, probable, or highly unlikely). Patients were divided into 4 groups in decreasing order of severity of clinical presentation (ischemic stroke, TIA then stroke, TIA, asymptomatic), which were compared with the morphological and hemodynamic characteristics of the VAO plaque. Clinical follow-up 1 year after stenting assessed symptom recurrence. One hundred fourteen patients underwent stenting of 127 lesions; 35% of the lesions were highly likely the source of symptoms, 53% were probable, and 12% were highly unlikely. Clinical presentation correlated directly with plaque irregularity and presence of clot at the VAO, as did bilateral lesions and presence of tandem lesions. Symptom recurrence at 1 year was 2%. Thirty-five percent of the lesions were highly likely the source of the symptoms. A direct relationship between some morphological/hemodynamic characteristics and the severity of clinical presentation was also found. Finally, patients had a very low rate of symptom recurrence after treatment. These 3 observations point strongly to VAO plaque as a potential source of some posterior circulation stroke.

  12. A probability space for quantum models

    NASA Astrophysics Data System (ADS)

    Lemmens, L. F.

    2017-06-01

    A probability space contains a set of outcomes, a collection of events formed by subsets of the set of outcomes, and probabilities defined for all events. A reformulation in terms of propositions allows the maximum entropy method to be used to assign the probabilities, taking some constraints into account. The construction of a probability space for quantum models is determined by the choice of propositions, the choice of constraints, and the probability assignment by the maximum entropy method. This approach shows how typical quantum distributions such as Maxwell-Boltzmann, Fermi-Dirac and Bose-Einstein are partly related to well-known classical distributions. The relation between the conditional probability density, given some averages as constraints, and the appropriate ensemble is elucidated.
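
    As a concrete instance of the assignment step, maximizing entropy over discrete outcomes subject to a mean constraint yields Boltzmann-type weights p_k ∝ exp(-λE_k), with λ fixed by the constraint. A small sketch with illustrative energies and target mean:

      import numpy as np
      from scipy.optimize import brentq

      E = np.array([0.0, 1.0, 2.0, 3.0])    # illustrative outcome "energies"
      target = 1.2                          # assumed mean-value constraint

      def mean_E(lam):
          w = np.exp(-lam * E)
          return (E * w).sum() / w.sum()

      # Solve for the Lagrange multiplier that matches the constraint.
      lam = brentq(lambda l: mean_E(l) - target, -10.0, 10.0)
      p = np.exp(-lam * E)
      p /= p.sum()                          # maximum-entropy distribution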

  13. Urgorri complanatus gen. et sp. nov. (Cryptophyceae), a red-tide-forming species in brackish waters.

    PubMed

    Laza-Martínez, Aitor

    2012-04-01

    The morphology, ultrastructure, phylogeny, and ecology of a new red-tide-forming cryptomonad, Urgorri complanatus Laza-Martínez gen. et sp. nov., are described. U. complanatus has been collected in southwestern European estuaries, blooming in the inner reaches of several of them. The estuarine character of the species is also supported by its in vitro salinity preferences, showing a maximum growth rate at 10 psu. U. complanatus is a distinctive species and can be easily distinguished by LM from other known brackish and marine species. Cells are dorsoventrally flattened. The plastid has two anterior lobes. One pyrenoid is located in each of the lobes, and a third one on the posterior part. Thylakoids are arranged in pairs and do not penetrate the pyrenoids. The plastid is reddish due to the presence of the phycoerythrin Cr-PE545. An orange discoidal eyespot lies beneath the nucleus, in the posterior ventral face of the plastid. A long furrow runs from the vestibulum, and a gullet is lacking. The periplast is composed of an inner sheet. Molecular analysis based on nuclear 18S rDNA reveals that U. complanatus is not related to any of the main cryptomonad lineages. Based on ultrastructural and pigment data, its most probable relatives are those merged under the family Geminigeraceae. Its lack of derived characters, together with the presence of characters proposed in previous studies to be primitive, suggests that Urgorri could be considered representative of the cryptophycean ancestral character state. © 2012 Phycological Society of America.

  14. Chronological changes in functional cup position at 10 years after total hip arthroplasty.

    PubMed

    Okanoue, Yusuke; Ikeuchi, Masahiko; Takaya, Shogo; Izumi, Masashi; Aso, Koji; Kawakami, Teruhiko

    2017-09-19

    This study aims to clarify the chronological changes in functional cup position at a minimum follow-up of 10 years after total hip arthroplasty (THA), and to identify the risk factors influencing a significant difference in functional cup position during the postoperative follow-up period. We evaluated the chronological changes in functional cup position at a minimum follow-up of 10 years after THA in 58 patients with unilateral hip osteoarthritis. Radiographic cup position was measured on anteroposterior pelvic radiographs with the patient in the supine position, whereas functional cup position was recorded in the standing position. Radiographs were obtained before, 3 weeks after, and every 1 year after surgery. Functional cup anteversion (F-Ant) increased over time, and was found to have significantly increased at final follow-up compared to that at 3 weeks after surgery (p<0.01). The maximum postoperative change in F-Ant was 17.0° anteriorly; 12 cases (21%) showed a postoperative change in F-Ant by >10° anteriorly. Preoperative posterior pelvic tilt in the standing position and vertebral fractures after THA were significant predictors of increasing functional cup anteversion. Although chronological changes in functional cup position do occur after THA, their magnitude is relatively low. However, posterior impingement is likely to occur, which may cause edge loading, wear of the polyethylene liner, and anterior dislocation of the hip. We believe that, for the combined anteversion technique, the safe zone should probably be 5°-10° narrower in patients predicted to show considerable changes in functional cup position compared with standard cases.

  15. Development of a program for toric intraocular lens calculation considering posterior corneal astigmatism, incision-induced posterior corneal astigmatism, and effective lens position.

    PubMed

    Eom, Youngsub; Ryu, Dongok; Kim, Dae Wook; Yang, Seul Ki; Song, Jong Suk; Kim, Sug-Whan; Kim, Hyo Myung

    2016-10-01

    To evaluate a toric intraocular lens (IOL) calculation that considers posterior corneal astigmatism, incision-induced posterior corneal astigmatism, and effective lens position (ELP). Two thousand samples of corneal parameters with keratometric astigmatism ≥ 1.0 D were obtained using bootstrap methods. The probability distributions for incision-induced keratometric and posterior corneal astigmatisms, as well as ELP, were estimated from a literature review. The predicted residual astigmatism error using method D with an IOL add power calculator (IAPC) was compared with those derived using methods A, B, and C through Monte-Carlo simulation. Method A considered keratometric astigmatism and incision-induced keratometric astigmatism; method B considered posterior corneal astigmatism in addition to method A; method C considered incision-induced posterior corneal astigmatism in addition to method B; and method D considered ELP in addition to method C. To verify the IAPC used in this study, the predicted toric IOL cylinder power and its axis using the IAPC were compared with ray-tracing simulation results. The median magnitude of the predicted residual astigmatism error using method D (0.25 diopters [D]) was smaller than that derived using methods A (0.42 D), B (0.38 D), and C (0.28 D). Linear regression analysis indicated that the predicted toric IOL cylinder power and its axis had excellent goodness-of-fit between the IAPC and the ray-tracing simulation. The IAPC is a simple but accurate method for predicting the toric IOL cylinder power and its axis considering posterior corneal astigmatism, incision-induced posterior corneal astigmatism, and ELP.

  16. Role of posterior-anterior vertebral mobilization versus thermotherapy in non specific lower back pain.

    PubMed

    Baig, Aftab Ahmed Mirza; Ahmed, Syed Imran; Ali, Syed Shahzad; Rahmani, Asim; Siddiqui, Faizan

    2018-01-01

    Low back pain (LBP) is the foremost cause of impaired functional activities for individuals in Pakistan. Its impact on quality of life and work routine makes it a major reason for therapeutic consultations. About 90% of cases of LBP are non-specific. Various options are available for the treatment of LBP. Posterior-anterior vertebral mobilization (PAVM), a manual therapy technique, and thermotherapy are both used in clinical practice; however, evidence to gauge their relative efficacy is yet to be synthesised. This study aimed to compare the effectiveness of posterior-anterior vertebral mobilization versus thermotherapy in the management of non-specific low back pain, along with general stretching exercises. A randomised controlled trial with a two-group pretest-posttest design was conducted at IPM&R, Dow University of Health Sciences (DUHS). A total of 60 non-specific low back pain (NSLBP) patients aged 18 to 35 years were inducted through a non-probability, purposive sampling technique. Baseline screening was done using an assessment form (Appendix-I). Subjects were allocated into two groups through systematic random sampling. Group A (experimental group) received posterior-anterior vertebral mobilization with general stretching exercises, while group B (control group) received thermotherapy with general stretching exercises. Pain and functional disability were assessed using the NPRS and RMDQ, respectively. Pre- and post-treatment scores were documented. A maximum drop-out rate of 20% was assumed. Recorded data were entered into SPSS V-19. Frequencies and percentages were calculated for categorical variables. Intragroup and intergroup analyses were done using the Wilcoxon signed-rank test and the Mann-Whitney test, respectively. A P-value of less than 0.05 was considered statistically significant. Pre- and post-treatment analysis revealed that P-values for both pain and disability were less than 0.05, indicating significant differences in NPRS and RMDQ scores, while median scores for both pain and disability decreased by 75% in the experimental group and 50% in the control group. For the intergroup analysis, P-values for both pain and disability were also less than 0.05. Both physiotherapeutic interventions, PAVM and thermotherapy, have significant effects on NSLBP in terms of relieving pain and improving functional disability. However, PAVM appeared to be more effective than thermotherapy.

  17. Deep convolutional networks for automated detection of posterior-element fractures on spine CT

    NASA Astrophysics Data System (ADS)

    Roth, Holger R.; Wang, Yinong; Yao, Jianhua; Lu, Le; Burns, Joseph E.; Summers, Ronald M.

    2016-03-01

    Injuries of the spine, and its posterior elements in particular, are a common occurrence in trauma patients, with potentially devastating consequences. Computer-aided detection (CADe) could assist in the detection and classification of spine fractures. Furthermore, CADe could help assess the stability and chronicity of fractures, as well as facilitate research into optimization of treatment paradigms. In this work, we apply deep convolutional networks (ConvNets) for the automated detection of posterior-element fractures of the spine. First, the vertebral bodies of the spine with their posterior elements are segmented in spine CT using multi-atlas label fusion. Then, edge maps of the posterior elements are computed. These edge maps serve as candidate regions for predicting a set of probabilities for fractures along the image edges using ConvNets in a 2.5D fashion (three orthogonal patches in the axial, coronal and sagittal planes). We explore three different methods for training the ConvNet using 2.5D patches along the edge maps of 'positive', i.e. fractured, and 'negative', i.e. non-fractured, posterior elements. An experienced radiologist retrospectively marked the locations of 55 displaced posterior-element fractures in 18 trauma patients. We randomly split the data into training and testing cases. In testing, we achieve an area under the curve of 0.857, corresponding to 71% or 81% sensitivity at 5 or 10 false positives per patient, respectively. Analysis of our set of trauma patients demonstrates the feasibility of detecting posterior-element fractures in spine CT images using computer vision techniques such as deep convolutional networks.
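
    The 2.5D input can be sketched as three orthogonal slices through a candidate voxel stacked into channels. A minimal version, assuming the voxel lies far enough from the volume borders (real code would pad):

      import numpy as np

      def patches_2_5d(volume, center, size=32):
          # Extract axial, coronal and sagittal patches around a candidate
          # edge voxel; the stacked result is the 2.5D ConvNet input.
          z, y, x = center
          h = size // 2
          axial    = volume[z, y - h:y + h, x - h:x + h]
          coronal  = volume[z - h:z + h, y, x - h:x + h]
          sagittal = volume[z - h:z + h, y - h:y + h, x]
          return np.stack([axial, coronal, sagittal])   # shape (3, size, size)

      ct = np.zeros((128, 256, 256), dtype=np.float32)  # placeholder CT volume
      patch = patches_2_5d(ct, center=(64, 128, 128))   # fed to the ConvNet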

  18. Sexual dimorphism of the tibia in contemporary Greek-Cypriots and Cretans: Forensic applications.

    PubMed

    Kranioti, E K; García-Donas, J G; Almeida Prado, P S; Kyriakou, X P; Langstaff, H C

    2017-02-01

    Sex estimation is an essential step in the identification process of unknown heavily decomposed human remains as it eliminates all possible matches of the opposite sex from the missing person's database. Osteometric methods constitute a reliable approach for sex estimation and considering the variation of sexual dimorphism between and within populations; standards for specific populations are required to ensure accurate results. The current study aspires to contribute osteometric data on the tibia from contemporary Greek-Cypriots to assist the identification process. A secondary goal involves osteometric comparison with data from Crete, a Greek island with similar cultural and dietary customs and environmental conditions. Left tibiae from one hundred and thirty-two skeletons (70 males and 62 females) of Greek-Cypriots and one hundred and fifty-seven skeletons (85 males, 72 females) of Cretans were measured. Seven standard metric variables including Maximum length (ML), Upper epiphyseal breadth (UB), Nutrient foramen anteroposterior diameter (NFap), Nutrient Foramen transverse diameter (NFtrsv), Nutrient foramen circumference (NFCirc), Minimum circumference (MinCirc) and Lower epiphyseal breadth (LB) were compared between sexes and populations. Univariate and multivariate discriminant functions were developed and posterior probabilities were calculated for each sample. Results confirmed the existence of sexual dimorphism of the tibia in both samples as well as the pooled sample. Classification accuracy for univariate functions ranged from 78% to 85% for Greek-Cypriots and from 69% to 83% for Cretans. The best multivariate equations after cross-validation resulted in 87% for Greek-Cypriots and 90% accuracy for Cretans. When the samples were pooled accuracy reached 87% with over 95% confidence for about one third of the population. Estimates with over 95% of posterior probability can be considered reliable while any less than 80% should be treated with caution. This work constitutes the initial step towards the creation of an osteometric database for Greek-Cypriots and we hope it can contribute to the biological profiling and identification of the missing and to potential forensic cases of unknown skeletal remains both in Cyprus and Crete. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  19. Bayesian seismic inversion based on rock-physics prior modeling for the joint estimation of acoustic impedance, porosity and lithofacies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Passos de Figueiredo, Leandro, E-mail: leandrop.fgr@gmail.com; Grana, Dario; Santos, Marcio

    We propose a Bayesian approach for seismic inversion to estimate acoustic impedance, porosity and lithofacies within the reservoir conditioned to post-stack seismic and well data. The link between elastic and petrophysical properties is given by a joint prior distribution for the logarithm of impedance and porosity, based on a rock-physics model. The well conditioning is performed through a background model obtained by well log interpolation. Two different approaches are presented: in the first approach, the prior is defined by a single Gaussian distribution, whereas in the second approach it is defined by a Gaussian mixture to represent the multimodal distribution of the well data and link the Gaussian components to different geological lithofacies. The forward model is based on a linearized convolutional model. For the single Gaussian case, we obtain an analytical expression for the posterior distribution, resulting in a fast algorithm to compute the solution of the inverse problem, i.e. the posterior distribution of acoustic impedance and porosity as well as the facies probability given the observed data. For the Gaussian mixture prior, it is not possible to obtain the distributions analytically, hence we propose a Gibbs algorithm to perform the posterior sampling and obtain several reservoir model realizations, allowing an uncertainty analysis of the estimated properties and lithofacies. Both methodologies are applied to a real seismic dataset with three wells to obtain 3D models of acoustic impedance, porosity and lithofacies. The methodologies are validated through a blind well test and compared to a standard Bayesian inversion approach. Using the probability of the reservoir lithofacies, we also compute a 3D isosurface probability model of the main oil reservoir in the studied field.
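
    In the single-Gaussian case the closed form is the standard linear-Gaussian update. A toy sketch with random matrices standing in for the convolutional operator and the prior built from well logs (all sizes and values illustrative, not the paper's dataset):

      import numpy as np

      # Linearized forward model d = G m + e, prior m ~ N(mu, C), noise
      # e ~ N(0, Cd): the posterior is Gaussian and available analytically.
      rng = np.random.default_rng(7)
      n, k = 60, 20
      G = rng.normal(0.0, 1.0, (n, k))          # stand-in convolution operator
      mu = np.zeros(k)                          # prior mean (background model)
      C = np.eye(k)                             # prior covariance
      Cd = 0.1 * np.eye(n)                      # data noise covariance
      d = G @ rng.normal(0.0, 1.0, k) + rng.multivariate_normal(np.zeros(n), Cd)

      K = C @ G.T @ np.linalg.inv(G @ C @ G.T + Cd)
      mu_post = mu + K @ (d - G @ mu)           # posterior mean
      C_post = C - K @ G @ C                    # posterior covariance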

  1. Identification of treatment responders based on multiple longitudinal outcomes with applications to multiple sclerosis patients.

    PubMed

    Kondo, Yumi; Zhao, Yinshan; Petkau, John

    2017-05-30

    Identification of treatment responders is a challenge in comparative studies where treatment efficacy is measured by multiple longitudinally collected continuous and count outcomes. Existing procedures often identify responders on the basis of only a single outcome. We propose a novel multiple longitudinal outcome mixture model that assumes that, conditionally on a cluster label, each longitudinal outcome is from a generalized linear mixed effect model. We utilize a Monte Carlo expectation-maximization algorithm to obtain the maximum likelihood estimates of our high-dimensional model and classify patients according to their estimated posterior probability of being a responder. We demonstrate the flexibility of our novel procedure on two multiple sclerosis clinical trial datasets with distinct data structures. Our simulation study shows that incorporating multiple outcomes improves the responder identification performance; this can occur even if some of the outcomes are ineffective. Our general procedure facilitates the identification of responders who are comprehensively defined by multiple outcomes from various distributions. Copyright © 2017 John Wiley & Sons, Ltd.

  2. An object correlation and maneuver detection approach for space surveillance

    NASA Astrophysics Data System (ADS)

    Huang, Jian; Hu, Wei-Dong; Xin, Qin; Du, Xiao-Yong

    2012-10-01

    Object correlation and maneuver detection are persistent problems in space surveillance and the maintenance of a space object catalog. We integrate these two problems into one interrelated problem and consider them simultaneously under a scenario where space objects perform only a single in-track orbital maneuver during the time intervals between observations. We mathematically formulate this integrated scenario as a maximum a posteriori (MAP) estimation problem. In this work, we propose a novel approach to solve the MAP estimation problem. More precisely, the posterior probability of an orbital maneuver and a joint association event is approximated by the Joint Probabilistic Data Association (JPDA) algorithm. Subsequently, the maneuvering parameters are estimated by solving a constrained non-linear least squares problem iteratively with the second-order cone programming (SOCP) algorithm. The desired solution is derived according to the MAP criterion. The performance and advantages of the proposed approach have been shown by both theoretical analysis and simulation results. We hope that our work will stimulate future work on space surveillance and maintenance of a space object catalog.

  3. Comparison of cosmology and seabed acoustics measurements using statistical inference from maximum entropy

    NASA Astrophysics Data System (ADS)

    Knobles, David; Stotts, Steven; Sagers, Jason

    2012-03-01

    Why can one obtain from similar measurements a greater amount of information about cosmological parameters than seabed parameters in ocean waveguides? The cosmological measurements are in the form of a power spectrum constructed from spatial correlations of temperature fluctuations within the microwave background radiation. The seabed acoustic measurements are in the form of spatial correlations along the length of a spatial aperture. This study explores the above question from the perspective of posterior probability distributions obtained from maximizing a relative entropy functional. An answer is in part that the seabed in shallow ocean environments generally has large temporal and spatial inhomogeneities, whereas the early universe was a nearly homogeneous cosmological soup with small but important fluctuations. Acoustic propagation models used in shallow water acoustics generally do not capture spatial and temporal variability sufficiently well, which leads to model error dominating the statistical inference problem. This is not the case in cosmology. Further, the physics of the acoustic modes in cosmology is that of a standing wave with simple initial conditions, whereas for underwater acoustics it is a traveling wave in a strongly inhomogeneous bounded medium.
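
    In standard notation (not the authors' specific functional), the maximum relative entropy machinery invoked here maximizes

      S[p \mid q] = -\int p(\theta)\,\ln\frac{p(\theta)}{q(\theta)}\,d\theta
      \quad\text{subject to}\quad
      \int p(\theta)\, f_j(\theta)\, d\theta = F_j ,

    whose stationary solution is the canonical (posterior-like) form

      p(\theta) = \frac{q(\theta)}{Z}\,\exp\!\Big(-\sum_j \lambda_j f_j(\theta)\Big),
      \qquad
      Z = \int q(\theta)\, e^{-\sum_j \lambda_j f_j(\theta)}\, d\theta ,

    so that model error in the constraint functions f_j propagates directly into the inferred posterior — the mechanism the abstract identifies as dominating the seabed problem.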

  4. A Dirichlet-Multinomial Bayes Classifier for Disease Diagnosis with Microbial Compositions.

    PubMed

    Gao, Xiang; Lin, Huaiying; Dong, Qunfeng

    2017-01-01

    Dysbiosis of microbial communities is associated with various human diseases, raising the possibility of using microbial compositions as biomarkers for disease diagnosis. We have developed a Bayes classifier by modeling microbial compositions with Dirichlet-multinomial distributions, which are widely used to model multicategorical count data with extra variation. The parameters of the Dirichlet-multinomial distributions are estimated from training microbiome data sets based on maximum likelihood. The posterior probability of a microbiome sample belonging to a disease or healthy category is calculated based on Bayes' theorem, using the likelihood values computed from the estimated Dirichlet-multinomial distribution, as well as a prior probability estimated from the training microbiome data set or previously published information on disease prevalence. When tested on real-world microbiome data sets, our method, called DMBC (for Dirichlet-multinomial Bayes classifier), shows better classification accuracy than the only existing Bayesian microbiome classifier based on a Dirichlet-multinomial mixture model and the popular random forest method. The advantage of DMBC is its built-in automatic feature selection, capable of identifying a subset of microbial taxa with the best classification accuracy between different classes of samples based on cross-validation. This unique ability enables DMBC to maintain and even improve its accuracy at modeling species-level taxa. The R package for DMBC is freely available at https://github.com/qunfengdong/DMBC. IMPORTANCE By incorporating prior information on disease prevalence, Bayes classifiers have the potential to estimate disease probability better than other common machine-learning methods. Thus, it is important to develop Bayes classifiers specifically tailored for microbiome data. Our method shows higher classification accuracy than the only existing Bayesian classifier and the popular random forest method, and thus provides an alternative option for using microbial compositions for disease diagnosis.
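
    A minimal sketch of the classification step, assuming the Dirichlet-multinomial parameters have already been fitted by maximum likelihood; the alpha vectors and prevalence below are invented placeholders, not DMBC's fitted values, and the multinomial coefficient is omitted because it cancels between classes:

      import numpy as np
      from scipy.special import gammaln

      def dm_loglik(x, alpha):
          """Log Dirichlet-multinomial likelihood of count vector x given alpha,
          omitting the multinomial coefficient (identical across classes)."""
          A, N = alpha.sum(), x.sum()
          return (gammaln(A) - gammaln(N + A)
                  + np.sum(gammaln(x + alpha) - gammaln(alpha)))

      # Hypothetical ML-fitted parameters for 4 taxa in two classes
      alpha_disease = np.array([2.0, 8.0, 1.0, 4.0])
      alpha_healthy = np.array([5.0, 3.0, 4.0, 3.0])
      prior_disease = 0.2            # e.g., assumed disease prevalence

      x = np.array([12, 30, 2, 16])  # taxon counts for a new microbiome sample
      log_post = np.array([np.log(prior_disease)     + dm_loglik(x, alpha_disease),
                           np.log(1 - prior_disease) + dm_loglik(x, alpha_healthy)])
      post = np.exp(log_post - np.logaddexp(*log_post))
      print(f"P(disease | sample) = {post[0]:.3f}")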

  5. Relationship between screw sagittal angle and stress on endplate of adjacent segments after anterior cervical corpectomy and fusion with internal fixation: a Chinese finite element study.

    PubMed

    Zhang, Yu; Tang, Yibo; Shen, Hongxing

    2017-12-01

    In order to reduce the incidence of adjacent segment disease (ASD), the current study was designed to establish Chinese finite element models of the normal 3rd~7th cervical vertebrae (C3-C7) and of anterior cervical corpectomy and fusion (ACCF) with internal fixation, and to analyze the influence of screw sagittal angle (SSA) on the stress on the endplates of adjacent cervical segments. Mimics 8.1 and Abaqus/CAE 6.10 software were used to establish the finite element models. For the C4 superior endplate and the C6 inferior endplate, the anterior areas had the maximum stress in the anteflexion position, and the posterior areas had the maximum stress in the posterior extension position. As SSA increased, the stress reduced. With an increase of 10° in SSA, the stress on the anterior areas of the C4 superior endplate and the C6 inferior endplate reduced by 12.67% and 7.99% in the anteflexion position, respectively. With an increase of 10° in SSA, the stress on the posterior areas of the C4 superior endplate and the C6 inferior endplate reduced by 9.68% and 10.22% in the posterior extension position, respectively. The current study established Chinese finite element models of the normal C3-C7 and of ACCF with internal fixation, and demonstrated that as SSA increased, the stress on the endplates of adjacent cervical segments decreased. In clinical surgery, an increased SSA can play an important role in protecting the adjacent cervical segments and reducing the incidence of ASD.

  6. Foramen arcuale: a rare morphological variation located in atlas vertebrae.

    PubMed

    Cirpan, Sibel; Yonguc, Goksin Nilufer; Edizer, Mete; Mas, Nuket Gocmen; Magden, A Orhan

    2017-08-01

    To investigate the incidence of foramen arcuale in dry atlas vertebrae, which may cause clinical problems. Eighty-one dry human cervical vertebrae were examined. The parameters evaluated for the two atlas vertebrae with a foramen arcuale were as follows: maximum antero-posterior and transverse diameters and areas of the right and left superior articular facets and transverse foramina; and maximum antero-posterior diameters, heights, areas and central sagittal thickness of the bony arch forming the roof of the foramen arcuale. All parameters were measured with a caliper in millimeters. Thirteen of the eighty-one cervical vertebrae (13/81, 16.05%) were atlas vertebrae, and two of the thirteen (2/13, 15.38%) had a macroscopically complete foramen arcuale. Each of these two atlas vertebrae included one foramen arcuale (one on the left and one on the right side). There was a statistically significant difference (p = 0.04) between the mean antero-posterior diameters of the superior articular facets on the two sides of the atlas vertebrae, but not (p = 0.51) between the mean antero-posterior diameters of the transverse foramina. There were no significant differences between the mean transverse diameters and areas of the superior articular facets and transverse foramina on the two sides, respectively. In the two atlas vertebrae with a foramen arcuale, the area of the transverse foramen on the same side as the foramen arcuale was smaller than the mean area of the ipsilateral transverse foramina across the thirteen atlas vertebrae. The present study provides additional information about the incidence and topography of atlas vertebrae with a foramen arcuale.

  7. Why does Japan use the probability method to set design flood?

    NASA Astrophysics Data System (ADS)

    Nakamura, S.; Oki, T.

    2015-12-01

    A design flood is a hypothetical flood used to make a flood prevention plan. In Japan, a probability method based on precipitation data is used to define the scale of the design flood: for the Tone River, the largest river in Japan, it is a 1-in-200-year event; for the Shinano River, 1 in 150 years; and so on. How to set a reasonable and acceptable design flood in a changing world is an important socio-hydrological issue. The method used to set design floods varies among countries. The probability method is also used in the Netherlands, but there the base data are water levels or discharge data and the probability is 1 in 1250 years (in the fresh water section). By contrast, the USA and China apply the maximum flood method, which sets the design flood based on the historical or probable maximum flood. These cases lead to the questions of why the method varies among countries and why Japan uses the probability method. The purpose of this study is to clarify, based on the literature, the historical process by which the probability method was developed in Japan. In the late 19th century, the concept of "discharge" and modern river engineering were introduced by Dutch engineers, and modern flood prevention plans were developed in Japan. In these plans, the design floods were set based on the historical maximum method. The historical maximum method was used until World War II, but it was changed to the probability method after the war because of its limitations under the specific socio-economic situation: (1) budget limitations due to the war and the GHQ occupation, and (2) the historical floods (the Makurazaki typhoon in 1945, the Kathleen typhoon in 1947, the Ione typhoon in 1948, and so on) that struck Japan, broke the records of historical maximum discharge in the main rivers, and made the flood prevention projects difficult to complete. Japanese hydrologists then imported hydrological probability statistics from the West to take account of the socio-economic situation in setting design floods, and applied them to Japanese rivers in 1958. The probability method was adopted in Japan to suit the specific socio-economic and natural conditions during the post-war confusion.

  8. Ancestral sequence reconstruction in primate mitochondrial DNA: compositional bias and effect on functional inference.

    PubMed

    Krishnan, Neeraja M; Seligmann, Hervé; Stewart, Caro-Beth; De Koning, A P Jason; Pollock, David D

    2004-10-01

    Reconstruction of ancestral DNA and amino acid sequences is an important means of inferring information about past evolutionary events. Such reconstructions suggest changes in molecular function and evolutionary processes over the course of evolution and are used to infer adaptation and convergence. Maximum likelihood (ML) is generally thought to provide relatively accurate reconstructed sequences compared to parsimony, but both methods lead to the inference of multiple directional changes in nucleotide frequencies in primate mitochondrial DNA (mtDNA). To better understand this surprising result, as well as to better understand how parsimony and ML differ, we constructed a series of computationally simple "conditional pathway" methods that differed in the number of substitutions allowed per site along each branch, and we also evaluated the entire Bayesian posterior frequency distribution of reconstructed ancestral states. We analyzed primate mitochondrial cytochrome b (Cyt-b) and cytochrome oxidase subunit I (COI) genes and found that ML reconstructs ancestral frequencies that are often more different from tip sequences than are parsimony reconstructions. In contrast, frequency reconstructions based on the posterior ensemble more closely resemble extant nucleotide frequencies. Simulations indicate that these differences in ancestral sequence inference are probably due to deterministic bias caused by high uncertainty in the optimization-based ancestral reconstruction methods (parsimony, ML, Bayesian maximum a posteriori). In contrast, ancestral nucleotide frequencies based on an average of the Bayesian set of credible ancestral sequences are much less biased. The methods involving simpler conditional pathway calculations have slightly reduced likelihood values compared to full likelihood calculations, but they can provide fairly unbiased nucleotide reconstructions and may be useful in more complex phylogenetic analyses than considered here due to their speed and flexibility. To determine whether biased reconstructions using optimization methods might affect inferences of functional properties, ancestral primate mitochondrial tRNA sequences were inferred and helix-forming propensities for conserved pairs were evaluated in silico. For ambiguously reconstructed nucleotides at sites with high base composition variability, ancestral tRNA sequences from Bayesian analyses were more compatible with canonical base pairing than were those inferred by other methods. Thus, nucleotide bias in reconstructed sequences apparently can lead to serious bias and inaccuracies in functional predictions.
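
    The bias mechanism described above can be seen in a short numeric experiment: if every site's posterior slightly favors one nucleotide, an optimization-based (MAP-style) reconstruction assigns that nucleotide everywhere, while averaging over the Bayesian set of credible sequences preserves the frequencies. A hedged toy version with invented numbers:

      import numpy as np

      rng = np.random.default_rng(1)
      n_sites = 10000
      p_A = 0.6   # suppose at every site the posterior is P(A)=0.6, P(G)=0.4

      map_freq_A = 1.0 if p_A > 0.5 else 0.0        # MAP picks A at every site
      ensemble = rng.random((1000, n_sites)) < p_A  # sample credible sequences
      ens_freq_A = ensemble.mean()

      print("MAP-reconstructed frequency of A: ", map_freq_A)          # 1.0, biased
      print("posterior-ensemble frequency of A:", round(ens_freq_A, 3))  # ~0.6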

  9. Photographic mark-recapture analysis of local dynamics within an open population of dolphins.

    PubMed

    Fearnbach, H; Durban, J; Parsons, K; Claridge, D

    2012-07-01

    Identifying demographic changes is important for understanding population dynamics. However, this requires long-term studies of definable populations of distinct individuals, which can be particularly challenging when studying mobile cetaceans in the marine environment. We collected photo-identification data from 19 years (1992-2010) to assess the dynamics of a population of bottlenose dolphins (Tursiops truncatus) restricted to the shallow (<7 m) waters of Little Bahama Bank, northern Bahamas. This population was known to range beyond our study area, so we adopted a Bayesian mixture modeling approach to mark-recapture to identify clusters of individuals that used the area to different extents, and we specifically estimated trends in survival, recruitment, and abundance of a "resident" population with high probabilities of identification. There was a high probability (p= 0.97) of a long-term decrease in the size of this resident population from a maximum of 47 dolphins (95% highest posterior density intervals, HPDI = 29-61) in 1996 to a minimum of just 24 dolphins (95% HPDI = 14-37) in 2009, a decline of 49% (95% HPDI = approximately 5% to approximately 75%). This was driven by low per capita recruitment (average approximately 0.02) that could not compensate for relatively low apparent survival rates (average approximately 0.94). Notably, there was a significant increase in apparent mortality (approximately 5 apparent mortalities vs. approximately 2 on average) in 1999 when two intense hurricanes passed over the study area, with a high probability (p = 0.83) of a drop below the average survival probability (approximately 0.91 in 1999; approximately 0.94, on average). As such, our mark-recapture approach enabled us to make useful inference about local dynamics within an open population of bottlenose dolphins; this should be applicable to other studies challenged by sampling highly mobile individuals with heterogeneous space use.

  10. The effect of business improvement districts on the incidence of violent crimes

    PubMed Central

    Golinelli, Daniela; Stokes, Robert J; Bluthenthal, Ricky

    2010-01-01

    Objective To examine whether business improvement districts (BID) contributed to greater than expected declines in the incidence of violent crimes in affected neighbourhoods. Method A Bayesian hierarchical model was used to assess the changes in the incidence of violent crimes between 1994 and 2005 and the implementation of 30 BID in Los Angeles neighbourhoods. Results The implementation of BID was associated with a 12% reduction in the incidence of robbery (95% posterior probability interval −2 to 24) and an 8% reduction in the total incidence of violent crimes (95% posterior probability interval −5 to 21). The strength of the effect of BID on robbery crimes varied by location. Conclusion These findings indicate that the implementation of BID can reduce the incidence of violent crimes likely to result in injury to individuals. The findings also indicate that the establishment of a BID by itself is not a panacea, and highlight the importance of targeting BID efforts to crime prevention interventions that reduce violence exposure associated with criminal behaviours. PMID:20587814

  11. The effect of business improvement districts on the incidence of violent crimes.

    PubMed

    MacDonald, John; Golinelli, Daniela; Stokes, Robert J; Bluthenthal, Ricky

    2010-10-01

    To examine whether business improvement districts (BID) contributed to greater than expected declines in the incidence of violent crimes in affected neighbourhoods. A Bayesian hierarchical model was used to assess the changes in the incidence of violent crimes between 1994 and 2005 and the implementation of 30 BID in Los Angeles neighbourhoods. The implementation of BID was associated with a 12% reduction in the incidence of robbery (95% posterior probability interval -2 to 24) and an 8% reduction in the total incidence of violent crimes (95% posterior probability interval -5 to 21). The strength of the effect of BID on robbery crimes varied by location. These findings indicate that the implementation of BID can reduce the incidence of violent crimes likely to result in injury to individuals. The findings also indicate that the establishment of a BID by itself is not a panacea, and highlight the importance of targeting BID efforts to crime prevention interventions that reduce violence exposure associated with criminal behaviours.

  12. Multinomial mixture model with heterogeneous classification probabilities

    USGS Publications Warehouse

    Holland, M.D.; Gray, B.R.

    2011-01-01

    Royle and Link (Ecology 86(9):2505-2512, 2005) proposed an analytical method that allowed estimation of multinomial distribution parameters and classification probabilities from categorical data measured with error. While useful, we demonstrate algebraically and by simulations that this method yields biased multinomial parameter estimates when the probabilities of correct category classifications vary among sampling units. We address this shortcoming by treating these probabilities as logit-normal random variables within a Bayesian framework. We use Markov chain Monte Carlo to compute Bayes estimates from a simulated sample from the posterior distribution. Based on simulations, this elaborated Royle-Link model yields nearly unbiased estimates of the multinomial parameters and correct classification probabilities when classification probabilities are allowed to vary according to the normal distribution on the logit scale or according to the Beta distribution. The method is illustrated using categorical submersed aquatic vegetation data. © 2010 Springer Science+Business Media, LLC.

  13. Stationary properties of maximum-entropy random walks.

    PubMed

    Dixit, Purushottam D

    2015-10-01

    Maximum-entropy (ME) inference of state probabilities using state-dependent constraints is popular in the study of complex systems. In stochastic systems, how state space topology and path-dependent constraints affect ME-inferred state probabilities remains unknown. To that end, we derive the transition probabilities and the stationary distribution of a maximum path entropy Markov process subject to state- and path-dependent constraints. A main finding is that the stationary distribution over states differs significantly from the Boltzmann distribution and reflects a competition between path multiplicity and imposed constraints. We illustrate our results with particle diffusion on a two-dimensional landscape. Connections with the path integral approach to diffusion are discussed.
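
    For the classic unconstrained special case (a maximal-entropy random walk on a graph), the transition matrix and stationary distribution follow from the leading eigenpair of the adjacency matrix; the sketch below verifies this well-known result numerically. The paper's state- and path-dependent constraints generalize this construction:

      import numpy as np

      # Adjacency matrix of a small undirected graph
      A = np.array([[0, 1, 1, 0],
                    [1, 0, 1, 1],
                    [1, 1, 0, 1],
                    [0, 1, 1, 0]], dtype=float)

      # Maximal-entropy random walk: P_ij = A_ij * psi_j / (lam * psi_i),
      # with (lam, psi) the leading eigenvalue/eigenvector of A.
      vals, vecs = np.linalg.eigh(A)
      lam, psi = vals[-1], np.abs(vecs[:, -1])
      P = A * psi[None, :] / (lam * psi[:, None])

      pi = psi ** 2 / np.sum(psi ** 2)     # stationary distribution ~ psi_i^2
      print("rows sum to 1:", np.allclose(P.sum(axis=1), 1))
      print("stationarity :", np.allclose(pi @ P, pi))
      # pi differs from the degree-proportional (Boltzmann-like) distribution
      print(pi, A.sum(1) / A.sum())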

  14. Integrating probabilistic models of perception and interactive neural networks: a historical and tutorial review

    PubMed Central

    McClelland, James L.

    2013-01-01

    This article seeks to establish a rapprochement between explicitly Bayesian models of contextual effects in perception and neural network models of such effects, particularly the connectionist interactive activation (IA) model of perception. The article is in part an historical review and in part a tutorial, reviewing the probabilistic Bayesian approach to understanding perception and how it may be shaped by context, and also reviewing ideas about how such probabilistic computations may be carried out in neural networks, focusing on the role of context in interactive neural networks, in which both bottom-up and top-down signals affect the interpretation of sensory inputs. It is pointed out that connectionist units that use the logistic or softmax activation functions can exactly compute Bayesian posterior probabilities when the bias terms and connection weights affecting such units are set to the logarithms of appropriate probabilistic quantities. Bayesian concepts such as the prior, likelihood, (joint and marginal) posterior, probability matching and maximizing, and calculating vs. sampling from the posterior are all reviewed and linked to neural network computations. Probabilistic and neural network models are explicitly linked to the concept of a probabilistic generative model that describes the relationship between the underlying target of perception (e.g., the word intended by a speaker or other source of sensory stimuli) and the sensory input that reaches the perceiver for use in inferring the underlying target. It is shown how a new version of the IA model called the multinomial interactive activation (MIA) model can sample correctly from the joint posterior of a proposed generative model for perception of letters in words, indicating that interactive processing is fully consistent with principled probabilistic computation. Ways in which these computations might be realized in real neural systems are also considered. PMID:23970868
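
    The stated equivalence is easy to check numerically: a softmax unit whose bias terms are log priors and whose net input adds log likelihoods reproduces Bayes' rule exactly. A minimal sketch with invented numbers:

      import numpy as np

      def softmax(z):
          z = z - z.max()
          e = np.exp(z)
          return e / e.sum()

      prior = np.array([0.7, 0.2, 0.1])     # P(h) for three hypotheses
      lik   = np.array([0.05, 0.60, 0.35])  # P(data | h)

      # Direct Bayes' rule
      post_bayes = prior * lik / np.sum(prior * lik)

      # Softmax unit: bias = log prior, net input contribution = log likelihood
      post_softmax = softmax(np.log(prior) + np.log(lik))

      print(np.allclose(post_bayes, post_softmax))   # True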

  15. Integrating probabilistic models of perception and interactive neural networks: a historical and tutorial review.

    PubMed

    McClelland, James L

    2013-01-01

    This article seeks to establish a rapprochement between explicitly Bayesian models of contextual effects in perception and neural network models of such effects, particularly the connectionist interactive activation (IA) model of perception. The article is in part an historical review and in part a tutorial, reviewing the probabilistic Bayesian approach to understanding perception and how it may be shaped by context, and also reviewing ideas about how such probabilistic computations may be carried out in neural networks, focusing on the role of context in interactive neural networks, in which both bottom-up and top-down signals affect the interpretation of sensory inputs. It is pointed out that connectionist units that use the logistic or softmax activation functions can exactly compute Bayesian posterior probabilities when the bias terms and connection weights affecting such units are set to the logarithms of appropriate probabilistic quantities. Bayesian concepts such as the prior, likelihood, (joint and marginal) posterior, probability matching and maximizing, and calculating vs. sampling from the posterior are all reviewed and linked to neural network computations. Probabilistic and neural network models are explicitly linked to the concept of a probabilistic generative model that describes the relationship between the underlying target of perception (e.g., the word intended by a speaker or other source of sensory stimuli) and the sensory input that reaches the perceiver for use in inferring the underlying target. It is shown how a new version of the IA model called the multinomial interactive activation (MIA) model can sample correctly from the joint posterior of a proposed generative model for perception of letters in words, indicating that interactive processing is fully consistent with principled probabilistic computation. Ways in which these computations might be realized in real neural systems are also considered.

  16. Application of the quantum spin glass theory to image restoration.

    PubMed

    Inoue, J I

    2001-04-01

    Quantum fluctuation is introduced into the Markov random-field model for image restoration in the context of a Bayesian approach. We investigate the dependence of the quantum fluctuation on the quality of black-and-white image restoration by making use of statistical mechanics. We find that the maximum posterior marginal (MPM) estimate based on the quantum fluctuation gives a fine restoration in comparison with the maximum a posteriori estimate or the thermal-fluctuation-based MPM estimate.
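
    For comparison, a sketch of the classical (thermal) MPM estimate: restore a binary image under an Ising Markov random-field prior by Gibbs sampling and take a majority vote of the sampled marginals. The quantum-fluctuation version studied in the paper replaces the thermal term with a transverse field and is not reproduced here; all settings are illustrative:

      import numpy as np

      rng = np.random.default_rng(0)
      n, beta, h = 32, 0.8, 1.0        # grid size, coupling, data weight

      truth = np.ones((n, n)); truth[:, : n // 2] = -1          # two-region image
      noisy = np.where(rng.random((n, n)) < 0.1, -truth, truth) # 10% flip noise

      s = noisy.copy()
      marg = np.zeros((n, n))          # running count of s == +1 per pixel
      for sweep in range(300):
          for i in range(n):
              for j in range(n):
                  nb = (s[(i + 1) % n, j] + s[(i - 1) % n, j]
                        + s[i, (j + 1) % n] + s[i, (j - 1) % n])
                  field = beta * nb + h * noisy[i, j]
                  p_plus = 1.0 / (1.0 + np.exp(-2.0 * field))
                  s[i, j] = 1 if rng.random() < p_plus else -1
          if sweep >= 100:             # discard burn-in
              marg += (s == 1)

      mpm = np.where(marg / 200 > 0.5, 1, -1)   # marginal majority vote = MPM
      print("noisy error:", np.mean(noisy != truth),
            " MPM error:", np.mean(mpm != truth))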

  17. Progression of Brain Network Alterations in Cerebral Amyloid Angiopathy.

    PubMed

    Reijmer, Yael D; Fotiadis, Panagiotis; Riley, Grace A; Xiong, Li; Charidimou, Andreas; Boulouis, Gregoire; Ayres, Alison M; Schwab, Kristin; Rosand, Jonathan; Gurol, M Edip; Viswanathan, Anand; Greenberg, Steven M

    2016-10-01

    We recently showed that cerebral amyloid angiopathy (CAA) is associated with functionally relevant brain network impairments, in particular affecting posterior white matter connections. Here we examined how these brain network impairments progress over time. Thirty-three patients with probable CAA underwent multimodal brain magnetic resonance imaging at 2 time points (mean follow-up time: 1.3±0.4 years). Brain networks of the hemisphere free of intracerebral hemorrhages were reconstructed using fiber tractography and graph theory. The global efficiency of the network and the mean fractional anisotropies of posterior-posterior, frontal-frontal, and posterior-frontal network connections were calculated. Patients with moderate versus severe CAA were defined based on microbleed count, dichotomized at the median (median=35). Global efficiency of the intracerebral hemorrhage-free hemispheric network declined from baseline to follow-up (-0.008±0.003; P=0.029). The decline in global efficiency was most pronounced for patients with severe CAA (group×time interaction P=0.03). The decline in global network efficiency was associated with worse executive functioning (β=0.46; P=0.03). Examination of subgroups of network connections revealed a decline in the fractional anisotropies of posterior-posterior connections at both levels of CAA severity (-0.006±0.002; P=0.017; group×time interaction P=0.16). The fractional anisotropies of posterior-frontal and frontal-frontal connections declined in patients with severe but not moderate CAA (group×time interaction P=0.007 and P=0.005). Associations were independent of change in white matter hyperintensity volume. Brain network impairment in patients with CAA worsens measurably over a follow-up of just 1.3 years and seems to progress from posterior to frontal connections with increasing disease severity. © 2016 American Heart Association, Inc.

  18. Differential alpha coherence hemispheric patterns in men and women during pleasant and unpleasant musical emotions.

    PubMed

    Flores-Gutiérrez, Enrique O; Díaz, José-Luis; Barrios, Fernando A; Guevara, Miguel Angel; Del Río-Portilla, Yolanda; Corsi-Cabrera, María

    2009-01-01

    Potential sex differences in EEG coherent activity during pleasant and unpleasant musical emotions were investigated. Musical excerpts by Mahler, Bach, and Prodromidès were played to seven men and seven women and their subjective emotions were evaluated in relation to alpha band intracortical coherence. Different brain links in specific frequencies were associated to pleasant and unpleasant emotions. Pleasant emotions (Mahler, Bach) increased upper alpha couplings linking left anterior and posterior regions. Unpleasant emotions (Prodromidès) were sustained by posterior midline coherence exclusively in the right hemisphere in men and bilateral in women. Combined music induced bilateral oscillations among posterior sensory and predominantly left association areas in women. Consistent with their greater positive attributions to music, the coherent network is larger in women, both for musical emotion and for unspecific musical effects. Musical emotion entails specific coupling among cortical regions and involves coherent upper alpha activity between posterior association areas and frontal regions probably mediating emotional and perceptual integration. Linked regions by combined music suggest more working memory contribution in women and attention in men.

  19. VizieR Online Data Catalog: Wide binaries in Tycho-Gaia: search method (Andrews+, 2017)

    NASA Astrophysics Data System (ADS)

    Andrews, J. J.; Chaname, J.; Agueros, M. A.

    2017-11-01

    Our catalogue of wide binaries identified in the Tycho-Gaia Astrometric Solution catalogue. The Gaia source IDs, Tycho IDs, astrometry, posterior probabilities for both the log-flat prior and power-law prior models, and angular separation are presented. (1 data file).

  20. 14 CFR 440.7 - Determination of maximum probable loss.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... determine the maximum probable loss (MPL) from covered claims by a third party for bodily injury or property... licensee, or permittee, if interagency consultation may delay issuance of the MPL determination. (c... after the MPL determination is issued. Any change in financial responsibility requirements as a result...

  1. The effect of deacetylated gellan gum on aesculin distribution in the posterior segment of the eye after topical administration.

    PubMed

    Chen, Qiuhong; Zheng, Yu; Li, Ye; Zeng, Ying; Kuang, Jianchao; Hou, Shixiang; Li, Xiaohui

    2012-05-01

    The aim of the present work was to evaluate the effect of deacetylated gellan gum on delivering a hydrophilic drug to the posterior segment of the eye. An aesculin-containing in situ gel based on deacetylated gellan gum (AG) was prepared and characterized. In vitro corneal permeation of aesculin across isolated rabbit cornea was compared between AG and an aesculin solution (AS). The results showed that deacetylated gellan gum promotes corneal penetration of aesculin. Pharmacokinetics and ocular tissue distribution of aesculin after topical administration in the rabbit eye showed that AG greatly improved aesculin accumulation in the posterior segments relative to AS, which was probably attributable to the conjunctival/scleral pathway. The areas under the curve (AUC) for AG in the aqueous humor, choroid-retina, sclera and iris-ciliary body were significantly larger than those of AS. AG can be used as a potential carrier for broadening the application of aesculin.

  2. CKS knee prosthesis: biomechanics and clinical results in 42 cases.

    PubMed

    Martucci, E; Verni, E; Del Prete, G; Stulberg, S D

    1996-01-01

    From 1991 to 1993 a total of 42 CKS prostheses were implanted for the following reasons: osteoarthrosis (34 cases), rheumatoid arthritis (7 cases), and tibial necrosis (1 case). At follow-up after 17 to 41 months, 41 results were excellent or good; the only poor result was probably related to excessive tension of the posterior cruciate ligament. 94% of the patients reported complete regression of pain, and 85% were capable of going up and down stairs without support. Mean joint flexion was 105 degrees. Radiologically, the anatomical axis of the knee had a mean valgus of 6 degrees. The prosthetic components were always cemented. The posterior cruciate ligament was removed in 7 knees, in which the prosthesis with "posterior stability" was used. The patella was never prosthetized. One patient complained of peri-patellar pain two months after surgery, which then regressed completely.

  3. Hierarchical Bayes approach for subgroup analysis.

    PubMed

    Hsu, Yu-Yi; Zalkikar, Jyoti; Tiwari, Ram C

    2017-01-01

    In clinical data analysis, both treatment effect estimation and consistency assessment are important for a better understanding of the drug efficacy for the benefit of subjects in individual subgroups. The linear mixed-effects model has been used for subgroup analysis to describe treatment differences among subgroups with great flexibility. The hierarchical Bayes approach has been applied to linear mixed-effects model to derive the posterior distributions of overall and subgroup treatment effects. In this article, we discuss the prior selection for variance components in hierarchical Bayes, estimation and decision making of the overall treatment effect, as well as consistency assessment of the treatment effects across the subgroups based on the posterior predictive p-value. Decision procedures are suggested using either the posterior probability or the Bayes factor. These decision procedures and their properties are illustrated using a simulated example with normally distributed response and repeated measurements.

  4. Characterization of Selenaion koniopes n. gen., n. sp., an amoeba that represents a new major lineage within heterolobosea, isolated from the Wieliczka salt mine.

    PubMed

    Park, Jong Soo; De Jonckheere, Johan F; Simpson, Alastair G B

    2012-01-01

    A new heterolobosean amoeba, Selenaion koniopes n. gen., n. sp., was isolated from 73‰ saline water in the Wieliczka salt mine, Poland. The amoeba had eruptive pseudopodia, a prominent uroid, and a nucleus without central nucleolus. Cysts had multiple crater-like pore plugs. No flagellates were observed. Transmission electron microscopy revealed several typical heterolobosean features: flattened mitochondrial cristae, mitochondria associated with endoplasmic reticulum, and an absence of obvious Golgi dictyosomes. Two types of larger and smaller granules were sometimes abundant in the cytoplasm--these may be involved in cyst formation. Mature cysts had a fibrous endocyst that could be thick, plus an ectocyst that was covered with small granules. Pore plugs had a flattened dome shape, were bipartite, and penetrated only the endocyst. Phylogenies based on the 18S rRNA gene and the presence of 18S rRNA helix 17_1 strongly confirmed assignment to Heterolobosea. The organism was not closely related to any described genus, and instead formed the deepest branch within the Heterolobosea clade after Pharyngomonas, with support for this deep-branching position being moderate (i.e. maximum likelihood bootstrap support--67%; posterior probability--0.98). Cells grew at 15-150‰ salinity. Thus, S. koniopes is a halotolerant, probably moderately halophilic heterolobosean, with a potentially pivotal evolutionary position within this large eukaryote group. © 2012 The Author(s) Journal of Eukaryotic Microbiology © 2012 International Society of Protistologists.

  5. Quantifying uncertainty in geoacoustic inversion. II. Application to broadband, shallow-water data.

    PubMed

    Dosso, Stan E; Nielsen, Peter L

    2002-01-01

    This paper applies the new method of fast Gibbs sampling (FGS) to estimate the uncertainties of seabed geoacoustic parameters in a broadband, shallow-water acoustic survey, with the goal of interpreting the survey results and validating the method for experimental data. FGS applies a Bayesian approach to geoacoustic inversion based on sampling the posterior probability density to estimate marginal probability distributions and parameter covariances. This requires knowledge of the statistical distribution of the data errors, including both measurement and theory errors, which is generally not available. Invoking the simplifying assumption of independent, identically distributed Gaussian errors allows a maximum-likelihood estimate of the data variance and leads to a practical inversion algorithm. However, it is necessary to validate these assumptions, i.e., to verify that the parameter uncertainties obtained represent meaningful estimates. To this end, FGS is applied to a geoacoustic experiment carried out at a site off the west coast of Italy where previous acoustic and geophysical studies have been performed. The parameter uncertainties estimated via FGS are validated by comparison with: (i) the variability in the results of inverting multiple independent data sets collected during the experiment; (ii) the results of FGS inversion of synthetic test cases designed to simulate the experiment and data errors; and (iii) the available geophysical ground truth. Comparisons are carried out for a number of different source bandwidths, ranges, and levels of prior information, and indicate that FGS provides reliable and stable uncertainty estimates for the geoacoustic inverse problem.
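
    Under the stated simplifying assumption of independent, identically distributed Gaussian errors, the likelihood and the closed-form maximum-likelihood variance estimate take the standard form (notation assumed here, not copied from the paper):

      L(\mathbf{m}) = (2\pi\hat{\sigma}^2)^{-N/2}
      \exp\!\left(-\,\frac{\lVert \mathbf{d} - \mathbf{d}(\mathbf{m})\rVert^2}{2\hat{\sigma}^2}\right),
      \qquad
      \hat{\sigma}^2 = \frac{1}{N}\,\lVert \mathbf{d} - \mathbf{d}(\mathbf{m})\rVert^2 ,

    where d is the measured data vector, d(m) the modeled data for geoacoustic parameters m, and N the number of data. It is exactly this assumption that the comparisons with independent data sets, synthetic test cases, and ground truth are designed to validate.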

  6. Star Cluster Properties in Two LEGUS Galaxies Computed with Stochastic Stellar Population Synthesis Models

    NASA Astrophysics Data System (ADS)

    Krumholz, Mark R.; Adamo, Angela; Fumagalli, Michele; Wofford, Aida; Calzetti, Daniela; Lee, Janice C.; Whitmore, Bradley C.; Bright, Stacey N.; Grasha, Kathryn; Gouliermis, Dimitrios A.; Kim, Hwihyun; Nair, Preethi; Ryon, Jenna E.; Smith, Linda J.; Thilker, David; Ubeda, Leonardo; Zackrisson, Erik

    2015-10-01

    We investigate a novel Bayesian analysis method, based on the Stochastically Lighting Up Galaxies (slug) code, to derive the masses, ages, and extinctions of star clusters from integrated light photometry. Unlike many analysis methods, slug correctly accounts for incomplete initial mass function (IMF) sampling, and returns full posterior probability distributions rather than simply probability maxima. We apply our technique to 621 visually confirmed clusters in two nearby galaxies, NGC 628 and NGC 7793, that are part of the Legacy Extragalactic UV Survey (LEGUS). LEGUS provides Hubble Space Telescope photometry in the NUV, U, B, V, and I bands. We analyze the sensitivity of the derived cluster properties to choices of prior probability distribution, evolutionary tracks, IMF, metallicity, treatment of nebular emission, and extinction curve. We find that slug's results for individual clusters are insensitive to most of these choices, but that the posterior probability distributions we derive are often quite broad, and sometimes multi-peaked and quite sensitive to the choice of priors. In contrast, the properties of the cluster population as a whole are relatively robust against all of these choices. We also compare our results from slug to those derived with a conventional non-stochastic fitting code, Yggdrasil. We show that slug's stochastic models are generally a better fit to the observations than the deterministic ones used by Yggdrasil. However, the overall properties of the cluster populations recovered by both codes are qualitatively similar.

  7. Probability modeling of the number of positive cores in a prostate cancer biopsy session, with applications.

    PubMed

    Serfling, Robert; Ogola, Gerald

    2016-02-10

    Among men, prostate cancer (CaP) is the most common newly diagnosed cancer and the second leading cause of death from cancer. A major issue of very large scale is avoiding both over-treatment and under-treatment of CaP cases. The central challenge is deciding clinical significance or insignificance when the CaP biopsy results are positive but only marginally so. A related concern is deciding how to increase the number of biopsy cores for larger prostates. As a foundation for improved choice of number of cores and improved interpretation of biopsy results, we develop a probability model for the number of positive cores found in a biopsy, given the total number of cores, the volumes of the tumor nodules, and - very importantly - the prostate volume. Also, three applications are carried out: guidelines for the number of cores as a function of prostate volume, decision rules for insignificant versus significant CaP using number of positive cores, and, using prior distributions on total tumor size, Bayesian posterior probabilities for insignificant CaP and posterior median CaP. The model-based results have generality of application, take prostate volume into account, and provide attractive tradeoffs of specificity versus sensitivity. Copyright © 2015 John Wiley & Sons, Ltd.
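
    A deliberately crude sketch of the core-count idea — each core independently hits tumor tissue with probability given by the tumor-to-prostate volume fraction — showing why prostate volume matters; the paper's model is considerably more detailed:

      from scipy.stats import binom

      def positive_core_probs(n_cores, tumor_vol_cc, prostate_vol_cc):
          """Toy model: each core independently samples tumor tissue with
          probability equal to the tumor volume fraction (illustrative only)."""
          p_hit = tumor_vol_cc / prostate_vol_cc
          return {k: binom.pmf(k, n_cores, p_hit) for k in range(n_cores + 1)}

      # Same tumor in a small vs. large prostate: fewer expected positive cores
      for vol in (30.0, 80.0):
          probs = positive_core_probs(12, tumor_vol_cc=2.0, prostate_vol_cc=vol)
          print(vol, "cc prostate, P(0 positive cores) =", round(probs[0], 3))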

  8. Neural substrates of the impaired effort expenditure decision making in schizophrenia.

    PubMed

    Huang, Jia; Yang, Xin-Hua; Lan, Yong; Zhu, Cui-Ying; Liu, Xiao-Qun; Wang, Ye-Fei; Cheung, Eric F C; Xie, Guang-Rong; Chan, Raymond C K

    2016-09-01

    Unwillingness to expend more effort to pursue high value rewards has been associated with motivational anhedonia in schizophrenia (SCZ) and abnormal dopamine activity in the nucleus accumbens (NAcc). The authors hypothesized that dysfunction of the NAcc and the associated forebrain regions are involved in the impaired effort expenditure decision-making of SCZ. A 2 (reward magnitude: low vs. high) × 3 (probability: 20% vs. 50% vs. 80%) event-related fMRI design in the effort-expenditure for reward task (EEfRT) was used to examine the neural response of 23 SCZ patients and 23 demographically matched control participants when the participants made effort expenditure decisions to pursue uncertain rewards. SCZ patients were significantly less likely to expend high level of effort in the medium (50%) and high (80%) probability conditions than healthy controls. The neural response in the NAcc, the posterior cingulate gyrus and the left medial frontal gyrus in SCZ patients were weaker than healthy controls and did not linearly increase with an increase in reward magnitude and probability. Moreover, NAcc activity was positively correlated with the willingness to expend high-level effort and concrete consummatory pleasure experience. NAcc and posterior cingulate dysfunctions in SCZ patients may be involved in their impaired effort expenditure decision-making. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  9. Unification of field theory and maximum entropy methods for learning probability densities

    NASA Astrophysics Data System (ADS)

    Kinney, Justin B.

    2015-09-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.
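
    The maximum entropy half of this comparison can be sketched directly: fit p(x) proportional to exp(-lambda_1 x - lambda_2 x^2) to the first two sample moments by minimizing the convex dual on a grid. This is a generic illustration, not the author's field-theory software:

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(0)
      data = rng.normal(1.0, 0.5, 500)
      mu = np.array([data.mean(), (data ** 2).mean()])   # first two sample moments

      x = np.linspace(-3.0, 5.0, 2001)                   # integration grid
      dx = x[1] - x[0]
      F = np.vstack([x, x ** 2])                         # constraint functions f_j(x)

      def dual(lam):
          """Convex dual log Z(lam) + lam . mu; its minimizer matches the moments."""
          logp = -lam @ F
          m = logp.max()
          logZ = m + np.log(np.sum(np.exp(logp - m)) * dx)
          return logZ + lam @ mu

      lam = minimize(dual, x0=np.zeros(2)).x
      p = np.exp(-lam @ F)
      p /= p.sum() * dx
      print("fitted moments:", np.sum(x * p) * dx, np.sum(x ** 2 * p) * dx)
      print("target moments:", mu)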

  10. Unification of field theory and maximum entropy methods for learning probability densities.

    PubMed

    Kinney, Justin B

    2015-09-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.

  11. Methods for estimating drought streamflow probabilities for Virginia streams

    USGS Publications Warehouse

    Austin, Samuel H.

    2014-01-01

    Maximum likelihood logistic regression model equations used to estimate drought flow probabilities for Virginia streams are presented for 259 hydrologic basins in Virginia. Winter streamflows were used to estimate the likelihood of streamflows during the subsequent drought-prone summer months. The maximum likelihood logistic regression models identify probable streamflows from 5 to 8 months in advance. More than 5 million daily streamflow values collected over the period of record (January 1, 1900 through May 16, 2012) were compiled and analyzed over a minimum 10-year (maximum 112-year) period of record. The analysis yielded 46,704 equations with statistically significant fit statistics and parameter ranges, published in two tables in this report. These model equations produce summer month (July, August, and September) drought flow threshold probabilities as a function of streamflows during the previous winter months (November, December, January, and February). Example calculations are provided, demonstrating how to use the equations to estimate probable streamflows as much as 8 months in advance.
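
    Each fitted equation is a standard logistic regression, so it yields a drought probability of the familiar form sketched below; the coefficients shown are invented placeholders, since the basin-specific values live in the report's tables:

      import math

      def drought_flow_probability(winter_flow_cfs, b0=4.2, b1=-0.015):
          """P(summer flow falls below the drought threshold | winter flow).
          b0, b1 are hypothetical stand-ins for a basin-specific fitted pair."""
          z = b0 + b1 * winter_flow_cfs
          return 1.0 / (1.0 + math.exp(-z))

      # Wetter winters -> lower probability of summer drought flows
      for q in (100, 300, 600):
          print(q, "cfs ->", round(drought_flow_probability(q), 3))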

  12. The digestive system of the "stick bug" Cladomorphus phyllinus (Phasmida, Phasmatidae): a morphological, physiological and biochemical analysis.

    PubMed

    Monteiro, Emiliano C; Tamaki, Fábio K; Terra, Walter R; Ribeiro, Alberto F

    2014-03-01

    This work presents a detailed morphofunctional study of the digestive system of a phasmid representative, Cladomorphus phyllinus. Cells from anterior midgut exhibit a merocrine secretion, whereas posterior midgut cells show a microapocrine secretion. A complex system of midgut tubules is observed in the posterior midgut which is probably related to the luminal alkalization of this region. Amaranth dye injection into the haemolymph and orally feeding insects with dye indicated that the anterior midgut is water-absorbing, whereas the Malpighian tubules are the main site of water secretion. Thus, a putative counter-current flux of fluid from posterior to anterior midgut may propel enzyme digestive recycling, confirmed by the low rate of enzyme excretion. The foregut and anterior midgut present an acidic pH (5.3 and 5.6, respectively), whereas the posterior midgut is highly alkaline (9.1) which may be related to the digestion of hemicelluloses. Most amylase, trypsin and chymotrypsin activities occur in the foregut and anterior midgut. Maltase is found along the midgut associated with the microvillar glycocalix, while aminopeptidase occurs in the middle and posterior midgut in membrane bound forms. Both amylase and trypsin are secreted mainly by the anterior midgut through an exocytic process as revealed by immunocytochemical data. Copyright © 2013 Elsevier Ltd. All rights reserved.

  13. Spalled, aerodynamically modified moldavite from Slavice, Moravia, Czechoslovakia

    USGS Publications Warehouse

    Chao, E.C.T.

    1964-01-01

    A Czechoslovakian tektite or moldavite shows clear, indirect evidence of aerodynamic ablation. This large tektite has the shape of a teardrop, with a strongly convex, deeply corroded, but clearly identifiable front and a planoconvex, relatively smooth, posterior surface. In spite of much erosion and corrosion, the demarcation between the posterior and the anterior part of the specimen (the keel) is clearly preserved locally. This specimen provides the first tangible evidence that moldavites entered the atmosphere cold, probably at a velocity exceeding 5 kilometers per second; the result was selective heating of the anterior face and perhaps ablation during the second melting. This provides evidence of the extraterrestrial origin of moldavites.

  14. Some Simple Formulas for Posterior Convergence Rates

    PubMed Central

    2014-01-01

    We derive some simple relations that demonstrate how the posterior convergence rate is related to two driving factors: a “penalized divergence” of the prior, which measures the ability of the prior distribution to propose a nonnegligible set of working models to approximate the true model, and a “norm complexity” of the prior, which measures the complexity of the prior support, weighted by the prior probability masses. These formulas are explicit, involve no essential assumptions, and are easy to apply. We apply this approach to the case with model averaging and derive some useful oracle inequalities that can optimize the performance adaptively without knowing the true model. PMID:27379278

  15. A new prior for bayesian anomaly detection: application to biosurveillance.

    PubMed

    Shen, Y; Cooper, G F

    2010-01-01

    Bayesian anomaly detection computes posterior probabilities of anomalous events by combining prior beliefs and evidence from data. However, the specification of prior probabilities can be challenging. This paper describes a Bayesian prior in the context of disease outbreak detection. The goal is to provide a meaningful, easy-to-use prior that yields a posterior probability of an outbreak that performs at least as well as a standard frequentist approach. If this goal is achieved, the resulting posterior could be usefully incorporated into a decision analysis about how to act in light of a possible disease outbreak. This paper describes a Bayesian method for anomaly detection that combines learning from data with a semi-informative prior probability over patterns of anomalous events. A univariate version of the algorithm is presented here for ease of illustration of the essential ideas. The paper describes the algorithm in the context of disease-outbreak detection, but it is general and can be used in other anomaly detection applications. For this application, the semi-informative prior specifies that an increased count over baseline is expected for the variable being monitored, such as the number of respiratory chief complaints per day at a given emergency department. The semi-informative prior is derived from the baseline prior, which is estimated using historical data. The evaluation reported here used semi-synthetic data to assess the detection performance of the proposed Bayesian method and a control chart method, which is a standard frequentist algorithm that is closest to the Bayesian method in terms of the type of data it uses. The disease-outbreak detection performance of the Bayesian method was statistically significantly better than that of the control chart method when proper baseline periods were used to estimate the baseline behavior to avoid seasonal effects. When using longer baseline periods, the Bayesian method performed as well as the control chart method. The time complexity of the Bayesian algorithm is linear in the number of observed events being monitored, due to a novel, closed-form derivation that is introduced in the paper. This paper introduces a novel prior probability for Bayesian outbreak detection that is expressive, easy to apply, computationally efficient, and performs as well as or better than a standard frequentist method.

  16. Value of Weather Information in Cranberry Marketing Decisions.

    NASA Astrophysics Data System (ADS)

    Morzuch, Bernard J.; Willis, Cleve E.

    1982-04-01

    Econometric techniques are used to establish a functional relationship between cranberry yields and important precipitation, temperature, and sunshine variables. Crop forecasts are derived from the model and are used to establish posterior probabilities to be used in a Bayesian decision context pertaining to leasing space for the storage of the berries.

  17. Bayesian Estimation of the DINA Model with Gibbs Sampling

    ERIC Educational Resources Information Center

    Culpepper, Steven Andrew

    2015-01-01

    A Bayesian model formulation of the deterministic inputs, noisy "and" gate (DINA) model is presented. Gibbs sampling is employed to simulate from the joint posterior distribution of item guessing and slipping parameters, subject attribute parameters, and latent class probabilities. The procedure extends concepts in Béguin and Glas,…

  18. BAT - The Bayesian analysis toolkit

    NASA Astrophysics Data System (ADS)

    Caldwell, Allen; Kollár, Daniel; Kröninger, Kevin

    2009-11-01

    We describe the development of a new toolkit for data analysis. The analysis package is based on Bayes' Theorem, and is realized with the use of Markov Chain Monte Carlo. This gives access to the full posterior probability distribution. Parameter estimation, limit setting and uncertainty propagation are implemented in a straightforward manner.

  19. Bayesian Posterior Odds Ratios: Statistical Tools for Collaborative Evaluations

    ERIC Educational Resources Information Center

    Hicks, Tyler; Rodríguez-Campos, Liliana; Choi, Jeong Hoon

    2018-01-01

    To begin statistical analysis, Bayesians quantify their confidence in modeling hypotheses with priors. A prior describes the probability of a certain modeling hypothesis apart from the data. Bayesians should be able to defend their choice of prior to a skeptical audience. Collaboration between evaluators and stakeholders could make their choices…

  20. Robust regression and posterior predictive simulation increase power to detect early bursts of trait evolution.

    PubMed

    Slater, Graham J; Pennell, Matthew W

    2014-05-01

    A central prediction of much theory on adaptive radiations is that traits should evolve rapidly during the early stages of a clade's history and subsequently slow down in rate as niches become saturated--a so-called "Early Burst." Although a common pattern in the fossil record, evidence for early bursts of trait evolution in phylogenetic comparative data has been equivocal at best. We show here that this may not necessarily be due to the absence of this pattern in nature. Rather, commonly used methods to infer its presence perform poorly when the strength of the burst--the rate at which phenotypic evolution declines--is small, and when some morphological convergence is present within the clade. We present two modifications to existing comparative methods that allow greater power to detect early bursts in simulated datasets. First, we develop posterior predictive simulation approaches and show that they outperform maximum likelihood approaches at identifying early bursts of moderate strength. Second, we use a robust regression procedure that allows for the identification and down-weighting of convergent taxa, leading to moderate increases in method performance. We demonstrate the utility and power of these approaches by investigating the evolution of body size in cetaceans. Model fitting using maximum likelihood is equivocal with regard to the mode of cetacean body size evolution. However, posterior predictive simulation combined with a robust node height test returns low support for Brownian motion or rate shift models, but not for the early burst model. While the jury is still out on whether early bursts are actually common in nature, our approach will hopefully facilitate more robust testing of this hypothesis. We advocate the adoption of similar posterior predictive approaches to improve the fit and assess the adequacy of macroevolutionary models in general.

  1. Full-thickness tears of the supraspinatus tendon: A three-dimensional finite element analysis.

    PubMed

    Quental, C; Folgado, J; Monteiro, J; Sarmento, M

    2016-12-08

    Knowledge regarding the likelihood of propagation of supraspinatus tears is important to allow an early identification of patients for whom a conservative treatment is more likely to fail, and consequently, to improve their clinical outcome. The aim of this study was to investigate the potential for propagation of posterior, central, and anterior full-thickness tears of different sizes using the finite element method. A three-dimensional finite element model of the supraspinatus tendon was generated from the Visible Human Project data. The mechanical behaviour of the tendon was fitted from experimental data using a transversely isotropic hyperelastic constitutive model. The full-thickness tears were simulated at the supraspinatus tendon insertion by decreasing the interface area. Tear sizes from 10% to 90%, in 10% increments, of the anteroposterior length of the supraspinatus footprint were considered in the posterior, central, and anterior regions of the tendon. For each tear, three finite element analyses were performed for a supraspinatus force of 100N, 200N, and 400N. Considering a correlation between tendon strain and the risk of tear propagation, the simulated tears were compared qualitatively and quantitatively by evaluating the volume of tendon for which a maximum strain criterion was not satisfied. The finite element analyses showed a significant impact of tear size and location not only on the magnitude, but also on the patterns of the maximum principal strains. The mechanical outcome of the anterior full-thickness tears was consistently, and significantly, more severe than that of the central or posterior full-thickness tears, which suggests that the anterior tears are at greater risk of propagating than the central or posterior tears. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. Two-speed phacoemulsification for soft cataracts using optimized parameters and procedure step toolbar with the CENTURION Vision System and Balanced Tip.

    PubMed

    Davison, James A

    2015-01-01

    To present a cause of posterior capsule aspiration and a technique using optimized parameters to prevent it from happening when operating soft cataracts. A prospective list of posterior capsule aspiration cases was kept over 4,062 consecutive cases operated with the Alcon CENTURION machine and Balanced Tip. Video analysis of one case of posterior capsule aspiration was accomplished. A surgical technique was developed using empirically derived machine parameters and customized setting-selection procedure step toolbar to reduce the pace of aspiration of soft nuclear quadrants in order to prevent capsule aspiration. Two cases out of 3,238 experienced posterior capsule aspiration before use of the soft quadrant technique. Video analysis showed an attractive vortex effect with capsule aspiration occurring in 1/5 of a second. A soft quadrant removal setting was empirically derived which had a slower pace and seemed more controlled with no capsule aspiration occurring in the subsequent 824 cases. The setting featured simultaneous linear control from zero to preset maximums for: aspiration flow, 20 mL/min; and vacuum, 400 mmHg, with the addition of torsional tip amplitude up to 20% after the fluidic maximums were achieved. A new setting selection procedure step toolbar was created to increase intraoperative flexibility by providing instantaneous shifting between the soft and normal settings. A technique incorporating a reduced pace for soft quadrant acquisition and aspiration can be accomplished through the use of a dedicated setting of integrated machine parameters. Toolbar placement of the procedure button next to the normal setting procedure button provides the opportunity to instantaneously alternate between the two settings. Simultaneous surgeon control over vacuum, aspiration flow, and torsional tip motion may make removal of soft nuclear quadrants more efficient and safer.

  3. Information-Based Analysis of Data Assimilation (Invited)

    NASA Astrophysics Data System (ADS)

    Nearing, G. S.; Gupta, H. V.; Crow, W. T.; Gong, W.

    2013-12-01

    Data assimilation is defined as the Bayesian conditioning of uncertain model simulations on observations for the purpose of reducing uncertainty about model states. Practical data assimilation methods make the application of Bayes' law tractable either by employing assumptions about the prior, posterior and likelihood distributions (e.g., the Kalman family of filters) or by using resampling methods (e.g., bootstrap filter). We propose to quantify the efficiency of these approximations in an OSSE (observing system simulation experiment) setting using information theory and, in an OSSE or real-world validation setting, to measure the amount - and more importantly, the quality - of information extracted from observations during data assimilation. To analyze DA assumptions, uncertainty is quantified as the Shannon-type entropy of a discretized probability distribution. The maximum amount of information that can be extracted from observations about model states is the mutual information between states and observations, which is equal to the reduction in entropy in our estimate of the state due to Bayesian filtering. The difference between this potential and the actual reduction in entropy due to Kalman (or other type of) filtering measures the inefficiency of the filter assumptions. Residual uncertainty in DA posterior state estimates can be attributed to three sources: (i) non-injectivity of the observation operator, (ii) noise in the observations, and (iii) filter approximations. The contribution of each of these sources is measurable in an OSSE setting. The amount of information extracted from observations by data assimilation (or system identification, including parameter estimation) can also be measured by Shannon's theory. Since practical filters are approximations of Bayes' law, it is important to know whether the information that is extracted from observations by a filter is reliable. We define information as either good or bad, and propose to measure these two types of information using partial Kullback-Leibler divergences. Defined this way, good and bad information sum to total information. This segregation of information into good and bad components requires a validation target distribution; in a DA OSSE setting, this can be the true Bayesian posterior, but in a real-world setting the validation target might be determined by a set of in situ observations.
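
    A minimal sketch of the entropy bookkeeping described above, on a discretized toy problem (the prior, likelihood table, and observation are all assumptions):

      # Entropy reduction from exact Bayesian conditioning on a grid.
      import numpy as np

      def entropy(p):
          """Shannon entropy (bits) of a discrete distribution."""
          p = p[p > 0]
          return -np.sum(p * np.log2(p))

      # Hypothetical discretized prior over a model state, and a likelihood
      # table p(obs | state) for three possible observations.
      prior = np.array([0.25, 0.50, 0.25])
      likelihood = np.array([[0.7, 0.2, 0.1],
                             [0.2, 0.6, 0.2],
                             [0.1, 0.2, 0.7]])  # rows: states, cols: obs

      obs = 2
      joint = prior * likelihood[:, obs]
      posterior = joint / joint.sum()            # Bayes' law on the grid

      # Entropy reduction for this observation; averaged over observations
      # it equals the mutual information, the ceiling for any filter.
      print("prior entropy     :", entropy(prior))
      print("posterior entropy :", entropy(posterior))
      print("information gained:", entropy(prior) - entropy(posterior), "bits")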

  4. Source localization of small sharp spikes: low resolution electromagnetic tomography (LORETA) reveals two distinct cortical sources.

    PubMed

    Zumsteg, Dominik; Andrade, Danielle M; Wennberg, Richard A

    2006-06-01

    We have investigated the cortical sources and electroencephalographic (EEG) characteristics of small sharp spikes (SSS) by using statistical non-parametric mapping (SNPM) of low resolution electromagnetic tomography (LORETA). We analyzed 7 SSS patterns (501 individual SSS) in 6 patients who underwent sleep EEG studies with 29 or 23 scalp electrodes. The scalp signals were averaged time-locked to the SSS peak activity and subjected to SNPM of LORETA values. All 7 SSS patterns (mean 72 individual SSS, range 11-200) revealed a very similar and highly characteristic transhemispheric oblique scalp voltage distribution comprising a first negative field maximum over ipsilateral lateral temporal areas, followed by a second negative field maximum over the contralateral subtemporal region approximately 30 ms later. SNPM-LORETA consistently localized the first component into the ipsilateral posterior insular region, and the second component into ipsilateral posterior mesial temporo-occipital structures. SSS comprise an amalgam of two sequential, distinct cortical components, showing a very uniform and peculiar EEG pattern and cortical source solutions. As such, they must be clearly distinguished from interictal epileptiform discharges in patients with epilepsy. The awareness of these peculiar EEG characteristics may increase our ability to differentiate SSS from interictal epileptiform activity. The finding of a posterior insular source might serve as an inspiration for new physiological considerations regarding these enigmatic waveforms.

  5. Estimation and prediction of maximum daily rainfall at Sagar Island using best fit probability models

    NASA Astrophysics Data System (ADS)

    Mandal, S.; Choudhury, B. U.

    2015-07-01

    Sagar Island, situated on the continental shelf of the Bay of Bengal, is one of the deltas most vulnerable to the occurrence of extreme rainfall-driven climatic hazards. Information on the probability of occurrence of maximum daily rainfall will be useful in devising risk management for sustaining the rainfed agrarian economy vis-a-vis food and livelihood security. Using six probability distribution models and long-term (1982-2010) daily rainfall data, we studied the probability of occurrence of annual, seasonal and monthly maximum daily rainfall (MDR) in the island. To select the best fit distribution models for annual, seasonal and monthly time series based on maximum rank with minimum value of test statistics, three statistical goodness of fit tests, viz. the Kolmogorov-Smirnov test (K-S), Anderson-Darling test (A²) and Chi-square test (χ²), were employed. The best fit probability distribution was identified from the highest overall score obtained from the three goodness of fit tests. Results revealed that the normal probability distribution was the best fit for annual, post-monsoon and summer season MDR, while the lognormal, Weibull and Pearson 5 distributions were the best fits for the pre-monsoon, monsoon and winter seasons, respectively. The estimated annual MDR were 50, 69, 86, 106 and 114 mm for return periods of 2, 5, 10, 20 and 25 years, respectively. The probabilities of getting an annual MDR of >50, >100, >150, >200 and >250 mm were estimated as 99, 85, 40, 12 and 3% levels of exceedance, respectively. The monsoon, summer and winter seasons exhibited comparatively higher probabilities (78 to 85%) for MDR of >100 mm and moderate probabilities (37 to 46%) for >150 mm. For different recurrence intervals, the percent probability of MDR varied widely across intra- and inter-annual periods. In the island, rainfall anomaly can pose a climatic threat to the sustainability of agricultural production and thus needs adequate adaptation and mitigation measures.
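
    A minimal sketch of the fit-and-rank workflow with scipy, using a synthetic rainfall series rather than the Sagar Island record, and only the K-S statistic rather than all three tests:

      # Fit candidate distributions, rank by KS statistic, read return levels.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(2)
      mdr = rng.lognormal(mean=4.0, sigma=0.4, size=29)  # 29 years of MDR (mm)

      candidates = {"normal": stats.norm,
                    "lognormal": stats.lognorm,
                    "weibull": stats.weibull_min}
      fits = {}
      for name, dist in candidates.items():
          params = dist.fit(mdr)
          ks = stats.kstest(mdr, dist.cdf, args=params).statistic
          fits[name] = (ks, dist, params)

      best = min(fits, key=lambda n: fits[n][0])
      ks, dist, params = fits[best]
      print("best fit:", best, "KS =", round(ks, 3))

      # Return level for return period T years: the (1 - 1/T) quantile.
      for T in (2, 5, 10, 20, 25):
          print(f"T = {T:>2} yr: MDR = {dist.ppf(1 - 1 / T, *params):.0f} mm")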

  6. On the applicability of surrogate-based Markov chain Monte Carlo-Bayesian inversion to the Community Land Model: Case studies at flux tower sites: SURROGATE-BASED MCMC FOR CLM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Maoyi; Ray, Jaideep; Hou, Zhangshuan

    2016-07-04

    The Community Land Model (CLM) has been widely used in climate and Earth system modeling. Accurate estimation of model parameters is needed for reliable model simulations and predictions under current and future conditions, respectively. In our previous work, a subset of hydrological parameters has been identified to have significant impact on surface energy fluxes at selected flux tower sites based on parameter screening and sensitivity analysis, which indicate that the parameters could potentially be estimated from surface flux observations at the towers. To date, such estimates do not exist. In this paper, we assess the feasibility of applying a Bayesian model calibration technique to estimate CLM parameters at selected flux tower sites under various site conditions. The parameters are estimated as a joint probability density function (PDF) that provides estimates of uncertainty of the parameters being inverted, conditional on climatologically-average latent heat fluxes derived from observations. We find that the simulated mean latent heat fluxes from CLM using the calibrated parameters are generally improved at all sites when compared to those obtained with CLM simulations using default parameter sets. Further, our calibration method also results in credibility bounds around the simulated mean fluxes which bracket the measured data. The modes (or maximum a posteriori values) and 95% credibility intervals of the site-specific posterior PDFs are tabulated as suggested parameter values for each site. Analysis of relationships between the posterior PDFs and site conditions suggests that the parameter values are likely correlated with the plant functional type, which needs to be confirmed in future studies by extending the approach to more sites.
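
    A minimal sketch of the calibration idea, with a hypothetical one-parameter surrogate standing in for CLM and a random-walk Metropolis sampler standing in for the full MCMC machinery:

      # Toy Bayesian calibration of one parameter against an observed flux.
      import numpy as np

      rng = np.random.default_rng(3)

      def surrogate_flux(theta):
          """Hypothetical surrogate mapping a parameter to latent heat flux."""
          return 80.0 + 40.0 * np.tanh(theta)

      obs_flux, obs_sigma = 100.0, 5.0   # climatological mean, W m^-2

      def log_post(theta):
          log_prior = -0.5 * theta**2    # standard normal prior
          resid = (obs_flux - surrogate_flux(theta)) / obs_sigma
          return log_prior - 0.5 * resid**2

      theta, lp, samples = 0.0, log_post(0.0), []
      for _ in range(20000):
          prop = theta + rng.normal(0.0, 0.3)        # random-walk proposal
          lp_prop = log_post(prop)
          if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept/reject
              theta, lp = prop, lp_prop
          samples.append(theta)

      post = np.array(samples[5000:])                # discard burn-in
      print("posterior median:", np.median(post))
      print("95% credibility interval:", np.percentile(post, [2.5, 97.5]))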

  7. Bayesian model selection: Evidence estimation based on DREAM simulation and bridge sampling

    NASA Astrophysics Data System (ADS)

    Volpi, Elena; Schoups, Gerrit; Firmani, Giovanni; Vrugt, Jasper A.

    2017-04-01

    Bayesian inference has found widespread application in Earth and Environmental Systems Modeling, providing an effective tool for prediction, data assimilation, parameter estimation, uncertainty analysis and hypothesis testing. Under multiple competing hypotheses, the Bayesian approach also provides an attractive alternative to traditional information criteria (e.g. AIC, BIC) for model selection. The key variable for Bayesian model selection is the evidence (or marginal likelihood), which is the normalizing constant in the denominator of Bayes theorem; while it is fundamental for model selection, the evidence is not required for Bayesian inference. It is computed for each hypothesis (model) by averaging the likelihood function over the prior parameter distribution, rather than maximizing it as information criteria do; the larger a model's evidence, the more support it receives among a collection of hypotheses, as the simulated values assign relatively high probability density to the observed data. Hence, the evidence naturally acts as an Occam's razor, preferring simpler and more constrained models over the over-fitted ones selected by information criteria that incorporate only the likelihood maximum. Since it is not particularly easy to estimate the evidence in practice, Bayesian model selection via the marginal likelihood has not yet found mainstream use. We illustrate here the properties of a new estimator of the Bayesian model evidence, which provides robust and unbiased estimates of the marginal likelihood; the method is coined Gaussian Mixture Importance Sampling (GMIS). GMIS uses multidimensional numerical integration of the posterior parameter distribution via bridge sampling (a generalization of importance sampling) of a mixture distribution fitted to samples of the posterior distribution derived from the DREAM algorithm (Vrugt et al., 2008; 2009). Some illustrative examples are presented to show the robustness and superiority of the GMIS estimator with respect to other commonly used approaches in the literature.
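
    A minimal sketch of evidence estimation by importance sampling with a single Gaussian fitted to posterior samples -- a simplified relative of GMIS, which uses a Gaussian mixture and bridge sampling. The conjugate toy model makes the exact evidence available for checking:

      # Importance-sampling estimate of the marginal likelihood.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(4)
      y = 1.2                                   # single observation

      def log_joint(theta):
          # prior N(0,1) times likelihood y ~ N(theta, 1)
          return stats.norm.logpdf(theta, 0, 1) + stats.norm.logpdf(y, theta, 1)

      # Stand-ins for DREAM/MCMC draws: the posterior here is N(y/2, 1/2).
      post_samples = rng.normal(y / 2, np.sqrt(0.5), 5000)

      # Fit the importance density to the posterior samples, then reweight.
      mu, sd = post_samples.mean(), post_samples.std()
      q = rng.normal(mu, sd, 20000)
      log_w = log_joint(q) - stats.norm.logpdf(q, mu, sd)
      evidence = np.exp(log_w).mean()

      exact = stats.norm.pdf(y, 0, np.sqrt(2))  # closed-form for this model
      print("IS estimate:", evidence, " exact:", exact)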

  8. Visual Agnosia and Posterior Cerebral Artery Infarcts: An Anatomical-Clinical Study

    PubMed Central

    Martinaud, Olivier; Pouliquen, Dorothée; Gérardin, Emmanuel; Loubeyre, Maud; Hirsbein, David; Hannequin, Didier; Cohen, Laurent

    2012-01-01

    Background To evaluate systematically the cognitive deficits following posterior cerebral artery (PCA) strokes, especially agnosic visual disorders, and to study anatomical-clinical correlations. Methods and Findings We investigated 31 patients at the chronic stage (mean duration of 29.1 months post infarct) with standardized cognitive tests. New experimental tests were used to assess visual impairments for words, faces, houses, and objects. Forty-one healthy subjects participated as controls. Brain lesions were normalized, combined, and related to occipitotemporal areas responsive to specific visual categories, including words (VWFA), faces (FFA and OFA), houses (PPA) and common objects (LOC). Lesions were located in the left hemisphere in 15 patients, in the right in 13, and bilaterally in 3. Visual field defects were found in 23 patients. Twenty patients had a visual disorder in at least one of the experimental tests (9 with faces, 10 with houses, 7 with phones, 3 with words). Six patients had a deficit just for a single category of stimulus. The regions of maximum overlap of brain lesions associated with a deficit for a given category of stimuli were contiguous to the peaks of the corresponding functional areas as identified in normal subjects. However, the strength of anatomical-clinical correlations was greater for words than for faces or houses, probably due to the stronger lateralization of the VWFA, as compared to the FFA or the PPA. Conclusions Agnosic visual disorders following PCA infarcts are more frequent than previously reported. Dedicated batteries of tests, such as those developed here, are required to identify such deficits, which may escape clinical notice. The spatial relationships of lesions and of regions activated in normal subjects predict the nature of the deficits, although individual variability and bilaterally represented systems may blur those correlations. PMID:22276198

  9. Visual agnosia and posterior cerebral artery infarcts: an anatomical-clinical study.

    PubMed

    Martinaud, Olivier; Pouliquen, Dorothée; Gérardin, Emmanuel; Loubeyre, Maud; Hirsbein, David; Hannequin, Didier; Cohen, Laurent

    2012-01-01

    To evaluate systematically the cognitive deficits following posterior cerebral artery (PCA) strokes, especially agnosic visual disorders, and to study anatomical-clinical correlations. We investigated 31 patients at the chronic stage (mean duration of 29.1 months post infarct) with standardized cognitive tests. New experimental tests were used to assess visual impairments for words, faces, houses, and objects. Forty-one healthy subjects participated as controls. Brain lesions were normalized, combined, and related to occipitotemporal areas responsive to specific visual categories, including words (VWFA), faces (FFA and OFA), houses (PPA) and common objects (LOC). Lesions were located in the left hemisphere in 15 patients, in the right in 13, and bilaterally in 3. Visual field defects were found in 23 patients. Twenty patients had a visual disorder in at least one of the experimental tests (9 with faces, 10 with houses, 7 with phones, 3 with words). Six patients had a deficit just for a single category of stimulus. The regions of maximum overlap of brain lesions associated with a deficit for a given category of stimuli were contiguous to the peaks of the corresponding functional areas as identified in normal subjects. However, the strength of anatomical-clinical correlations was greater for words than for faces or houses, probably due to the stronger lateralization of the VWFA, as compared to the FFA or the PPA. Agnosic visual disorders following PCA infarcts are more frequent than previously reported. Dedicated batteries of tests, such as those developed here, are required to identify such deficits, which may escape clinical notice. The spatial relationships of lesions and of regions activated in normal subjects predict the nature of the deficits, although individual variability and bilaterally represented systems may blur those correlations.

  10. On the applicability of surrogate-based MCMC-Bayesian inversion to the Community Land Model: Case studies at Flux tower sites

    DOE PAGES

    Huang, Maoyi; Ray, Jaideep; Hou, Zhangshuan; ...

    2016-06-01

    The Community Land Model (CLM) has been widely used in climate and Earth system modeling. Accurate estimation of model parameters is needed for reliable model simulations and predictions under current and future conditions, respectively. In our previous work, a subset of hydrological parameters has been identified to have significant impact on surface energy fluxes at selected flux tower sites based on parameter screening and sensitivity analysis, which indicate that the parameters could potentially be estimated from surface flux observations at the towers. To date, such estimates do not exist. In this paper, we assess the feasibility of applying a Bayesian model calibration technique to estimate CLM parameters at selected flux tower sites under various site conditions. The parameters are estimated as a joint probability density function (PDF) that provides estimates of uncertainty of the parameters being inverted, conditional on climatologically average latent heat fluxes derived from observations. We find that the simulated mean latent heat fluxes from CLM using the calibrated parameters are generally improved at all sites when compared to those obtained with CLM simulations using default parameter sets. Further, our calibration method also results in credibility bounds around the simulated mean fluxes which bracket the measured data. The modes (or maximum a posteriori values) and 95% credibility intervals of the site-specific posterior PDFs are tabulated as suggested parameter values for each site. As a result, analysis of relationships between the posterior PDFs and site conditions suggests that the parameter values are likely correlated with the plant functional type, which needs to be confirmed in future studies by extending the approach to more sites.

  11. On the applicability of surrogate-based MCMC-Bayesian inversion to the Community Land Model: Case studies at Flux tower sites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Maoyi; Ray, Jaideep; Hou, Zhangshuan

    The Community Land Model (CLM) has been widely used in climate and Earth system modeling. Accurate estimation of model parameters is needed for reliable model simulations and predictions under current and future conditions, respectively. In our previous work, a subset of hydrological parameters has been identified to have significant impact on surface energy fluxes at selected flux tower sites based on parameter screening and sensitivity analysis, which indicate that the parameters could potentially be estimated from surface flux observations at the towers. To date, such estimates do not exist. In this paper, we assess the feasibility of applying a Bayesian model calibration technique to estimate CLM parameters at selected flux tower sites under various site conditions. The parameters are estimated as a joint probability density function (PDF) that provides estimates of uncertainty of the parameters being inverted, conditional on climatologically average latent heat fluxes derived from observations. We find that the simulated mean latent heat fluxes from CLM using the calibrated parameters are generally improved at all sites when compared to those obtained with CLM simulations using default parameter sets. Further, our calibration method also results in credibility bounds around the simulated mean fluxes which bracket the measured data. The modes (or maximum a posteriori values) and 95% credibility intervals of the site-specific posterior PDFs are tabulated as suggested parameter values for each site. As a result, analysis of relationships between the posterior PDFs and site conditions suggests that the parameter values are likely correlated with the plant functional type, which needs to be confirmed in future studies by extending the approach to more sites.

  12. On the applicability of surrogate-based Markov chain Monte Carlo-Bayesian inversion to the Community Land Model: Case studies at flux tower sites

    NASA Astrophysics Data System (ADS)

    Huang, Maoyi; Ray, Jaideep; Hou, Zhangshuan; Ren, Huiying; Liu, Ying; Swiler, Laura

    2016-07-01

    The Community Land Model (CLM) has been widely used in climate and Earth system modeling. Accurate estimation of model parameters is needed for reliable model simulations and predictions under current and future conditions, respectively. In our previous work, a subset of hydrological parameters has been identified to have significant impact on surface energy fluxes at selected flux tower sites based on parameter screening and sensitivity analysis, which indicate that the parameters could potentially be estimated from surface flux observations at the towers. To date, such estimates do not exist. In this paper, we assess the feasibility of applying a Bayesian model calibration technique to estimate CLM parameters at selected flux tower sites under various site conditions. The parameters are estimated as a joint probability density function (PDF) that provides estimates of uncertainty of the parameters being inverted, conditional on climatologically average latent heat fluxes derived from observations. We find that the simulated mean latent heat fluxes from CLM using the calibrated parameters are generally improved at all sites when compared to those obtained with CLM simulations using default parameter sets. Further, our calibration method also results in credibility bounds around the simulated mean fluxes which bracket the measured data. The modes (or maximum a posteriori values) and 95% credibility intervals of the site-specific posterior PDFs are tabulated as suggested parameter values for each site. Analysis of relationships between the posterior PDFs and site conditions suggests that the parameter values are likely correlated with the plant functional type, which needs to be confirmed in future studies by extending the approach to more sites.

  13. A quantitative method for risk assessment of agriculture due to climate change

    NASA Astrophysics Data System (ADS)

    Dong, Zhiqiang; Pan, Zhihua; An, Pingli; Zhang, Jingting; Zhang, Jun; Pan, Yuying; Huang, Lei; Zhao, Hui; Han, Guolin; Wu, Dong; Wang, Jialin; Fan, Dongliang; Gao, Lin; Pan, Xuebiao

    2018-01-01

    Climate change has greatly affected agriculture. Agriculture faces increasing risks because of its sensitivity and vulnerability to climate change. Scientific assessment of climate change-induced agricultural risks could help to actively deal with climate change and ensure food security. However, quantitative assessment of risk is a difficult issue. Here, based on the IPCC assessment reports, a quantitative method for risk assessment of agriculture due to climate change is proposed. Risk is described as the product of the degree of loss and its probability of occurrence. The degree of loss can be expressed by the yield change amplitude. The probability of occurrence can be calculated by the new concept of climate change effect-accumulated frequency (CCEAF). Specific steps of this assessment method are suggested. The method proved feasible and practical in a test application to spring wheat in Wuchuan County of Inner Mongolia. The results show that the fluctuation of spring wheat yield increased with the warming and drying climatic trend in Wuchuan County. For the maximum temperature increase of 88.3%, the maximum yield decrease and its probability were 3.5 and 64.6%, respectively, and the risk was 2.2%. For the maximum precipitation decrease of 35.2%, the maximum yield decrease and its probability were 14.1 and 56.1%, respectively, and the risk was 7.9%. For the comprehensive impacts of temperature and precipitation, the maximum yield decrease and its probability were 17.6 and 53.4%, respectively, and the risk increased to 9.4%. Without appropriate adaptation strategies, the degree of loss from the negative impacts of multiple climatic factors and its probability of occurrence will both increase, and the risk will grow accordingly.
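
    A minimal sketch of the risk computation described above -- risk as the degree of loss times its probability of occurrence -- with a hypothetical yield-anomaly series and a simple exceedance frequency standing in for the CCEAF concept:

      # Risk = degree of loss x probability of occurrence.
      import numpy as np

      rng = np.random.default_rng(5)
      yield_change = rng.normal(-2.0, 8.0, 40)       # % yield anomaly per year

      loss_threshold = -10.0                         # loss level of interest (%)
      losses = yield_change[yield_change <= loss_threshold]

      degree_of_loss = abs(losses.mean()) / 100      # mean loss amplitude
      probability = len(losses) / len(yield_change)  # accumulated frequency
      risk = degree_of_loss * probability
      print(f"loss {degree_of_loss:.1%} with probability {probability:.1%} "
            f"-> risk {risk:.2%}")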

  14. Negotiating Multicollinearity with Spike-and-Slab Priors.

    PubMed

    Ročková, Veronika; George, Edward I

    2014-08-01

    In multiple regression under the normal linear model, the presence of multicollinearity is well known to lead to unreliable and unstable maximum likelihood estimates. This can be particularly troublesome for the problem of variable selection where it becomes more difficult to distinguish between subset models. Here we show how adding a spike-and-slab prior mitigates this difficulty by filtering the likelihood surface into a posterior distribution that allocates the relevant likelihood information to each of the subset model modes. For identification of promising high posterior models in this setting, we consider three EM algorithms, the fast closed form EMVS version of Rockova and George (2014) and two new versions designed for variants of the spike-and-slab formulation. For a multimodal posterior under multicollinearity, we compare the regions of convergence of these three algorithms. Deterministic annealing versions of the EMVS algorithm are seen to substantially mitigate this multimodality. A single simple running example is used for illustration throughout.
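
    A minimal sketch of the quantity at the heart of this filtering: the conditional posterior probability that a coefficient came from the slab rather than the spike (the weight EMVS uses to reallocate likelihood information). The variances and prior inclusion weight are illustrative:

      # Conditional slab (inclusion) probability for one coefficient.
      import numpy as np
      from scipy import stats

      v0, v1 = 0.01, 10.0      # spike and slab variances (assumed)
      theta = 0.5              # prior probability of inclusion

      def slab_probability(beta):
          slab = theta * stats.norm.pdf(beta, 0, np.sqrt(v1))
          spike = (1 - theta) * stats.norm.pdf(beta, 0, np.sqrt(v0))
          return slab / (slab + spike)

      for beta in (0.01, 0.1, 0.5, 2.0):
          print(f"beta = {beta:4}: P(slab | beta) = {slab_probability(beta):.3f}")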

  15. Development of a Random Field Model for Gas Plume Detection in Multiple LWIR Images.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heasler, Patrick G.

    This report develops a random field model that describes gas plumes in LWIR remote sensing images. The random field model serves as a prior distribution that can be combined with LWIR data to produce a posterior that determines the probability that a gas plume exists in the scene and also maps the most probable location of any plume. The random field model is intended to work with a single pixel regression estimator--a regression model that estimates gas concentration on an individual pixel basis.
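
    A minimal sketch of coupling a spatial random-field prior to per-pixel evidence, in the spirit of the report's plume model: an Ising-style prior, hypothetical log-likelihood ratios standing in for the single-pixel regression estimator's output, and ICM (iterated conditional modes) giving a MAP-style plume map rather than the full posterior:

      # MRF prior + per-pixel evidence, optimized by ICM sweeps.
      import numpy as np

      rng = np.random.default_rng(6)
      H, W, beta = 32, 32, 0.8            # image size, prior coupling strength

      # Assumed evidence: log p(data | plume) - log p(data | no plume).
      llr = rng.normal(-1.0, 1.0, (H, W))
      llr[10:20, 12:22] += 2.5            # synthetic embedded plume

      mask = (llr > 0).astype(int)        # initialize from the likelihood alone
      for _ in range(10):                 # ICM sweeps
          for i in range(H):
              for j in range(W):
                  nb = mask[max(i-1, 0):i+2, max(j-1, 0):j+2].sum() - mask[i, j]
                  # Local log-odds: evidence plus prior agreement term
                  # (edge pixels are treated crudely in this sketch).
                  log_odds = llr[i, j] + beta * (2 * nb - 8)
                  mask[i, j] = int(log_odds > 0)

      print("plume pixels detected:", mask.sum())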

  16. Ockham's razor and Bayesian analysis. [statistical theory for systems evaluation

    NASA Technical Reports Server (NTRS)

    Jefferys, William H.; Berger, James O.

    1992-01-01

    'Ockham's razor', the ad hoc principle enjoining the greatest possible simplicity in theoretical explanations, is presently shown to be justifiable as a consequence of Bayesian inference; Bayesian analysis can, moreover, clarify the nature of the 'simplest' hypothesis consistent with the given data. By choosing the prior probabilities of hypotheses, it becomes possible to quantify the scientific judgment that simpler hypotheses are more likely to be correct. Bayesian analysis also shows that a hypothesis with fewer adjustable parameters intrinsically possesses an enhanced posterior probability, due to the clarity of its predictions.
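
    A minimal sketch of this automatic Occam effect on a textbook example: a model with no free parameter (fair coin) versus one with an adjustable bias p under a uniform prior. The evidence integral penalizes the extra parameter unless the data demand it:

      # Bayes factors for fair-coin vs adjustable-bias models.
      from math import comb

      def evidence_fair(n, k):
          return comb(n, k) * 0.5 ** n

      def evidence_biased(n, k):
          # Integral of C(n,k) p^k (1-p)^(n-k) dp over [0,1] = 1 / (n + 1).
          return 1.0 / (n + 1)

      for n, k in [(20, 10), (20, 16)]:
          bf = evidence_fair(n, k) / evidence_biased(n, k)
          print(f"n={n}, heads={k}: Bayes factor fair/biased = {bf:.2f}")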

  17. The changes of lumbar muscle flexion-relaxation phenomenon due to antero-posteriorly slanted ground surfaces.

    PubMed

    Hu, Boyi; Ning, Xiaopeng; Dai, Fei; Almuhaidib, Ibrahim

    2016-09-01

    Uneven ground surface is a common occupational injury risk factor in industries such as agriculture, fishing, transportation and construction. Studies have shown that antero-posteriorly slanted ground surfaces could reduce spinal stability and increase the risk of falling. In this study, the influence of antero-posteriorly slanted ground surfaces on lumbar flexion-relaxation responses was investigated. Fourteen healthy participants performed sagittally symmetric and asymmetric trunk bending motions on one flat and two antero-posteriorly slanted surfaces (-15° (uphill facing) and 15° (downhill facing)), while lumbar muscle electromyography and trunk kinematics were recorded. Results showed that standing on a downhill facing slanted surface delays the onset of lumbar muscle flexion-relaxation phenomenon (FRP), while standing on an uphill facing ground causes lumbar muscle FRP to occur earlier. In addition, compared to symmetric bending, when performing asymmetric bending, FRP occurred earlier on the contralateral side of lumbar muscles and significantly smaller maximum lumbar flexion and trunk inclination angles were observed. Practitioner Summary: Uneven ground surface is a common risk factor among a number of industries. In this study, we investigated the influence of antero-posteriorly slanted ground surface on trunk biomechanics during trunk bending. Results showed the slanted surface alters the lumbar tissue load-sharing mechanism in both sagittally symmetric and asymmetric bending.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    The purpose of the computer program is to generate system matrices that model the data acquisition process in dynamic single photon emission computed tomography (SPECT). The application is the reconstruction of dynamic data from projection measurements that provide the time evolution of activity uptake and washout in an organ of interest. Measurements of the time activity in the blood and organ tissue provide time-activity curves (TACs) that are used to estimate kinetic parameters. The program provides a correct model of the in vivo spatial and temporal distribution of radioactivity in organs. The model accounts for the attenuation of the internally emitted radioactivity, accounts for the varying point response of the collimators, and correctly models the time variation of the activity in the organs. One important application of the software is measuring the arterial input function (AIF) in a dynamic SPECT study where the data are acquired from a slow camera rotation. Measurement of the AIF is essential to deriving quantitative estimates of regional myocardial blood flow using kinetic models. A study was performed to evaluate whether a slowly rotating SPECT system could provide accurate AIFs for myocardial perfusion imaging (MPI). Methods: Dynamic cardiac SPECT was first performed in human subjects at rest using a Philips Precedence SPECT/CT scanner. Dynamic measurements of Tc-99m-tetrofosmin in the myocardium were obtained using an infusion time of 2 minutes. Blood input, myocardium tissue and liver TACs were estimated using spatiotemporal splines. These were fit to a one-compartment perfusion model to obtain wash-in rate parameters K1. Results: The spatiotemporal 4D ML-EM reconstructions gave more accurate reconstructions than did standard frame-by-frame 3D ML-EM reconstructions. From additional computer simulations and phantom studies, it was determined that a 1 minute infusion with a SPECT system rotation speed providing 180 degrees of projection data every 54 s can produce measurements of blood pool and myocardial TACs. This has important application in the calculation of coronary flow reserve using rest/stress dynamic cardiac SPECT. The system matrices are used in maximum likelihood and maximum a posteriori formulations in estimation theory, where iterative algorithms (conjugate gradient, expectation maximization, or maximum a posteriori probability algorithms) determine the solution that maximizes the likelihood or a posteriori probability function.

  19. Stress and displacement pattern evaluation using two different palatal expanders in unilateral cleft lip and palate: a three-dimensional finite element analysis.

    PubMed

    Mathew, Anoop; Nagachandran, K S; Vijayalakshmi, Devaki

    2016-12-01

    In this finite element (FE) study, the stress distribution and displacement pattern exerted by a bone-borne palatal expander (BBPE) in the mid-palatal area and around the circum-maxillary sutures were evaluated in unilateral cleft lip and palate, in comparison with a conventional HYRAX rapid palatal expander. Computed tomography scan images of a patient with unilateral cleft palate were used to create an FE model of the maxillary bone along with the circum-maxillary sutures. A three-dimensional model of the conventional HYRAX (Hygienic Rapid Expander) expander and the custom-made BBPE was created by laser scanning and programmed into the FE model. With the BBPE, the maximum stress was observed at the implant insertion site, whereas with the conventional HYRAX expander, it was at the dentition level. Among the circum-maxillary sutures, the zygomaticomaxillary suture experienced maximum stress, followed by the zygomaticotemporal and nasomaxillary sutures. With the BBPE, displacement in the X-axis (transverse) was highest on the cleft side and, in the Y-axis (antero-posterior), highest in the posterior region. The total displacement was greatest in the mid-palatal cleft area with the BBPE, which produced true skeletal expansion at the alveolar level without any dental tipping when compared with the conventional HYRAX expander.

  20. Left crossed fused renal ectopia L-shaped kidney type, with double nutcracker syndrome (anterior and posterior).

    PubMed

    Pupca, Gheorghe; Miclăuş, Graţian Dragoslav; Bucuraş, Viorel; Iacob, Nicoleta; Sas, Ioan; Matusz, Petru; Tubbs, R Shane; Loukas, Marios

    2014-01-01

    Crossed fused renal ectopia (CFRE) is the second most common fusion anomaly (FA) of the kidneys after horseshoe kidney. It results from one kidney crossing over to the opposite side and subsequent fusion of the parenchyma of the two kidneys. We report, by multidetector-row computed tomography (MDCT) angiography, an extremely rare case of a left CFRE (L-shaped kidney type), consisting of multiple renal arteries (one main renal artery for the upper renal parenchyma, and three renal arteries (one main and two additional) for the lower renal parenchyma) and two left renal veins, which produced a double nutcracker syndrome (both anterior and posterior). The L-shaped left kidney has a maximum length of 18.5 cm, a maximum width of 10.2 cm, and a maximum thickness of 5.3 cm. The upper pole of the kidney is located at the level of the lower third of the T12 vertebral body (4.6 cm left of the midsagittal plane); the lower pole is located along the lower half of the L5 vertebral body (1.5 cm left of the midsagittal plane). This case report focuses on the relevant anatomy, embryology, and clinical significance of this entity.

  1. A biomechanical analysis of the self-retaining pedicle hook device in posterior spinal fixation

    PubMed Central

    van Laar, Wilbert; Meester, Rinse J.; Smit, Theo H.

    2007-01-01

    Regular hooks lack initial fixation to the spine during spinal deformity surgery. This runs the risk of posterior hook dislodgement during manipulation and correction of the spinal deformity, which may lead to loss of correction, hook migration, and post-operative junctional kyphosis. To prevent hook dislodgement during surgery, a self-retaining pedicle hook device (SPHD) is available that is made up of two counter-positioned hooks forming a monoblock posterior claw device. The initial segmental posterior fixation strength of a SPHD, however, is unknown. A biomechanical pull-out study of posterior segmental spinal fixation in a cadaver vertebral model was designed to investigate the axial pull-out strength of a SPHD, compared to the pull-out strength of a pedicle screw. Ten porcine lumbar vertebral bodies were instrumented in pairs with two different instrumentation constructs after measuring the bone mineral density of each individual vertebra. The instrumentation constructs were extracted employing a material testing system using axial forces. The maximum pull-out forces were recorded at the time of construct failure. Failure of the SPHD appeared in rotation and lateral displacement, without fracturing of the posterior structures. The average pull-out strength of the SPHD was 236 N versus 1,047 N for the pedicle screws (P < 0.001). The pull-out strength of the pedicle screws showed greater correlation with the BMC compared to the SPHD (P < 0.005). The SPHD provided significantly inferior segmental fixation of the posterior spine in comparison to pedicle screw fixation. Although the monoblock claw construct of the SPHD decreases the risk of posterior hook dislodgement during surgery compared with regular hooks, it does not improve the pull-out strength enough to provide a biomechanically solid alternative to pedicle screw fixation in the posterior spine. PMID:17203270

  2. Cervix regression and motion during the course of external beam chemoradiation for cervical cancer.

    PubMed

    Beadle, Beth M; Jhingran, Anuja; Salehpour, Mohammad; Sam, Marianne; Iyer, Revathy B; Eifel, Patricia J

    2009-01-01

    To evaluate the magnitude of cervix regression and motion during external beam chemoradiation for cervical cancer. Sixteen patients with cervical cancer underwent computed tomography scanning before, weekly during, and after conventional chemoradiation. Cervix volumes were calculated to determine the extent of cervix regression. Changes in the center of mass and perimeter of the cervix between scans were used to determine the magnitude of cervix motion. Maximum cervix position changes were calculated for each patient, and mean maximum changes were calculated for the group. Mean cervical volumes before and after 45 Gy of external beam irradiation were 97.0 and 31.9 cc, respectively; mean volume reduction was 62.3%. Mean maximum changes in the center of mass of the cervix were 2.1, 1.6, and 0.82 cm in the superior-inferior, anterior-posterior, and right-left lateral dimensions, respectively. Mean maximum changes in the perimeter of the cervix were 2.3 and 1.3 cm in the superior and inferior, 1.7 and 1.8 cm in the anterior and posterior, and 0.76 and 0.94 cm in the right and left lateral directions, respectively. Cervix regression and internal organ motion contribute to marked interfraction variations in the intrapelvic position of the cervical target in patients receiving chemoradiation for cervical cancer. Failure to take these variations into account during the application of highly conformal external beam radiation techniques poses a theoretical risk of underdosing the target or overdosing adjacent critical structures.

  3. Variability of medial and posterior offset in patients with fourth-generation stemmed shoulder arthroplasty.

    PubMed

    Irlenbusch, Ulrich; Berth, Alexander; Blatter, Georges; Zenz, Peter

    2012-03-01

    Most anthropometric data on the proximal humerus have been obtained from deceased healthy individuals with no deformities. Endoprostheses, however, are implanted for primary and secondary osteoarthritis, rheumatoid arthritis, humeral-head necrosis, fracture sequelae and other humeral-head deformities. This indicates that pathologicoanatomical variability may be greater than previously assumed. We therefore investigated a group of patients with typical shoulder replacement diagnoses, including posttraumatic and rheumatic deformities. In 122 patients, a double eccentrically adjustable shaft endoprosthesis served as a specific dimension gauge to determine in vivo the individual humeral-head rotation centres from the position of the adjustable prosthesis taper and the eccentric head. All prosthesis heads were positioned eccentrically. The entire adjustment range of the prosthesis of 12 mm medial/lateral and 6 mm dorsal/ventral was required. Mean values for effective offset were 5.84 mm mediolaterally [standard deviation (SD) 1.95, minimum +2, maximum +11] and 1.71 mm anteroposteriorly (SD 1.71, minimum −3, maximum +3 mm). The medial offset averaged 5.16 mm (SD 1.76, minimum +2, maximum +10) and the posterior offset averaged 1.85 mm (SD 1.85, minimum −1, maximum +6 mm). In summary, the variability of the combined medial and dorsal offset of the humeral-head rotational centre determined in patients with typical underlying diagnoses in shoulder replacement was not greater than that recorded in the literature for healthy deceased individuals. The range of deviation is substantial and shows the need for an adjustable prosthetic system.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beadle, Beth M.; Jhingran, Anuja; Salehpour, Mohammad

    Purpose: To evaluate the magnitude of cervix regression and motion during external beam chemoradiation for cervical cancer. Methods and Materials: Sixteen patients with cervical cancer underwent computed tomography scanning before, weekly during, and after conventional chemoradiation. Cervix volumes were calculated to determine the extent of cervix regression. Changes in the center of mass and perimeter of the cervix between scans were used to determine the magnitude of cervix motion. Maximum cervix position changes were calculated for each patient, and mean maximum changes were calculated for the group. Results: Mean cervical volumes before and after 45 Gy of external beam irradiation were 97.0 and 31.9 cc, respectively; mean volume reduction was 62.3%. Mean maximum changes in the center of mass of the cervix were 2.1, 1.6, and 0.82 cm in the superior-inferior, anterior-posterior, and right-left lateral dimensions, respectively. Mean maximum changes in the perimeter of the cervix were 2.3 and 1.3 cm in the superior and inferior, 1.7 and 1.8 cm in the anterior and posterior, and 0.76 and 0.94 cm in the right and left lateral directions, respectively. Conclusions: Cervix regression and internal organ motion contribute to marked interfraction variations in the intrapelvic position of the cervical target in patients receiving chemoradiation for cervical cancer. Failure to take these variations into account during the application of highly conformal external beam radiation techniques poses a theoretical risk of underdosing the target or overdosing adjacent critical structures.

  5. Probabilistic description of probable maximum precipitation

    NASA Astrophysics Data System (ADS)

    Ben Alaya, Mohamed Ali; Zwiers, Francis W.; Zhang, Xuebin

    2017-04-01

    Probable Maximum Precipitation (PMP) is the key parameter used to estimate the Probable Maximum Flood (PMF). PMP and PMF are important for dam safety and civil engineering purposes. Even though current knowledge of storm mechanisms remains insufficient to properly evaluate limiting values of extreme precipitation, PMP estimation methods are still based on deterministic considerations and give only single values. This study aims to provide a probabilistic description of the PMP based on the most commonly used method, so-called moisture maximization. To this end, a probabilistic bivariate extreme-value model is proposed to address the limitations of traditional PMP estimates via moisture maximization, namely: (i) the inability to evaluate uncertainty and provide a range of PMP values, (ii) the interpretation of the maximum of a data series as a physical upper limit, and (iii) the assumption that a PMP event has maximum moisture availability. Results from simulation outputs of the Canadian Regional Climate Model CanRCM4 over North America reveal the high uncertainties inherent in PMP estimates and the non-validity of the maximum moisture availability assumption. This latter assumption leads to overestimation of the PMP by an average of about 15% over North America, which may have serious implications for engineering design.
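
    A minimal sketch of the moisture-maximization recipe the paper builds its probabilistic treatment around, with illustrative storm values: each large observed storm is scaled by the ratio of maximum to storm-time precipitable water, and the largest scaled value is taken as the deterministic PMP. The paper's point is that treating this single maximum as a physical upper limit ignores its sampling uncertainty:

      # Deterministic PMP via moisture maximization (illustrative values).
      import numpy as np

      storm_precip = np.array([110.0, 95.0, 140.0, 120.0])  # storm totals, mm
      storm_pw = np.array([45.0, 40.0, 55.0, 50.0])         # precipitable water, mm
      max_pw = 60.0                                          # climatological maximum

      maximized = storm_precip * (max_pw / storm_pw)
      print("moisture-maximized storms (mm):", np.round(maximized, 1))
      print("deterministic PMP estimate (mm):", round(maximized.max(), 1))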

  6. International comparative evaluation of knee replacement with fixed or mobile-bearing posterior-stabilized prostheses.

    PubMed

    Graves, Stephen; Sedrakyan, Art; Baste, Valborg; Gioe, Terence J; Namba, Robert; Martínez Cruz, Olga; Stea, Susanna; Paxton, Elizabeth; Banerjee, Samprit; Isaacs, Abby J; Robertsson, Otto

    2014-12-17

    Posterior-stabilized total knee prostheses were introduced to address instability secondary to loss of posterior cruciate ligament function, and they have either fixed or mobile bearings. Mobile bearings were developed to improve the function and longevity of total knee prostheses. In this study, the International Consortium of Orthopaedic Registries used a distributed health data network to study a large cohort of posterior-stabilized prostheses to determine if the outcome of a posterior-stabilized total knee prosthesis differs depending on whether it has a fixed or mobile-bearing design. Aggregated registry data were collected with a distributed health data network that was developed by the International Consortium of Orthopaedic Registries to reduce barriers to participation (e.g., security, proprietary, legal, and privacy issues) that have the potential to occur with the alternate centralized data warehouse approach. A distributed health data network is a decentralized model that allows secure storage and analysis of data from different registries. Each registry provided data on mobile and fixed-bearing posterior-stabilized prostheses implanted between 2001 and 2010. Only prostheses associated with primary total knee arthroplasties performed for the treatment of osteoarthritis were included. Prostheses with all types of fixation were included except for those with the rarely used reverse hybrid (cementless tibial and cemented femoral components) fixation. The use of patellar resurfacing was reported. The outcome of interest was time to first revision (for any reason). Multivariate meta-analysis was performed with linear mixed models with survival probability as the unit of analysis. This study includes 137,616 posterior-stabilized knee prostheses; 62% were in female patients, and 17.6% had a mobile bearing. The results of the fixed-effects model indicate that in the first year the mobile-bearing posterior-stabilized prostheses had a significantly higher hazard ratio (1.86) than did the fixed-bearing posterior-stabilized prostheses (95% confidence interval, 1.28 to 2.7; p = 0.001). For all other time intervals, the mobile-bearing posterior-stabilized prostheses had higher hazard ratios; however, these differences were not significant. Mobile-bearing posterior-stabilized prostheses had an increased rate of revision compared with fixed-bearing posterior-stabilized prostheses. This difference was evident in the first year.

  7. Defecatory dysfunction and fecal incontinence in women with or without posterior vaginal wall prolapse as measured by pelvic organ prolapse quantification (POP-Q).

    PubMed

    Augusto, Kathiane Lustosa; Bezerra, Leonardo Robson Pinheiro Sobreira; Murad-Regadas, Sthela Maria; Vasconcelos Neto, José Ananias; Vasconcelos, Camila Teixeira Moreira; Karbage, Sara Arcanjo Lino; Bilhar, Andreisa Paiva Monteiro; Regadas, Francisco Sérgio Pinheiro

    2017-07-01

    Pelvic floor dysfunction is a complex condition that may be asymptomatic or may involve a lot of symptoms. This study evaluates defecatory dysfunction, fecal incontinence, and quality of life in relation to the presence of posterior vaginal wall prolapse. A total of 265 patients were divided into two groups according to posterior POP-Q stage: posterior POP-Q stage ≥2 and posterior POP-Q stage <2. The two groups were compared regarding demographic and clinical data; overall POP-Q stage, percentage of patients with defecatory dysfunction, percentage of patients with fecal incontinence, pelvic floor muscle strength, and quality of life scores. The correlation between severity of the prolapse and severity of constipation was calculated using Spearman's ρ (rho). Women with Bp stage ≥2 were significantly older and had significantly higher BMI, numbers of pregnancies and births, and overall POP-Q stage than women with stage <2. No significant differences between the groups were observed regarding proportion of patients with defecatory dysfunction or incontinence, pelvic floor muscle strength, quality of life (ICIQ-SF), or sexual impact (PISQ-12). POP-Q stage did not correlate with severity of constipation and incontinence. General quality of life perception on the SF-36 was significantly worse in patients with POP-Q stage ≥2 than in those with POP-Q stage <2. The lack of a clinically important association between the presence of posterior vaginal prolapse and symptoms of constipation or anal incontinence leads us to agree with the conclusion that posterior vaginal prolapse probably is not an independent cause of defecatory dysfunction or fecal incontinence.

  8. Bayesian Retrieval of Complete Posterior PDFs of Oceanic Rain Rate From Microwave Observations

    NASA Technical Reports Server (NTRS)

    Chiu, J. Christine; Petty, Grant W.

    2005-01-01

    This paper presents a new Bayesian algorithm for retrieving surface rain rate from the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI) over the ocean, along with validations against estimates from the TRMM Precipitation Radar (PR). The Bayesian approach offers a rigorous basis for optimally combining multichannel observations with prior knowledge. While other rain rate algorithms have been published that are based at least partly on Bayesian reasoning, this is believed to be the first self-contained algorithm that fully exploits Bayes' theorem to yield not just a single rain rate, but rather a continuous posterior probability distribution of rain rate. To advance our understanding of the theoretical benefits of the Bayesian approach, we have conducted sensitivity analyses based on two synthetic datasets for which the true conditional and prior distributions are known. Results demonstrate that even when the prior and conditional likelihoods are specified perfectly, biased retrievals may occur at high rain rates. This bias is not the result of a defect of the Bayesian formalism but rather represents the expected outcome when the physical constraint imposed by the radiometric observations is weak, due to saturation effects. It is also suggested that the choice of the estimators and the prior information are both crucial to the retrieval. In addition, the performance of our Bayesian algorithm is found to be comparable to that of other benchmark algorithms in real-world applications, while having the additional advantage of providing a complete continuous posterior probability distribution of surface rain rate.
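
    A minimal sketch of retrieving a full posterior PDF on a discretized rain-rate grid. The prior and the one-channel forward model are hypothetical stand-ins; the forward model's saturation at high rates mimics the weak constraint noted above:

      # Posterior PDF of rain rate from Bayes' theorem on a grid.
      import numpy as np

      rates = np.linspace(0.0, 25.0, 251)            # mm/h grid

      # Hypothetical climatological prior: mostly light rain.
      prior = np.exp(-rates / 4.0)
      prior /= prior.sum()

      def forward_tb(r):
          """Toy forward model: brightness temperature saturating with rate."""
          return 180.0 + 60.0 * (1.0 - np.exp(-r / 8.0))

      tb_obs, tb_sigma = 225.0, 3.0                  # observation, channel noise
      like = np.exp(-0.5 * ((tb_obs - forward_tb(rates)) / tb_sigma) ** 2)

      posterior = prior * like
      posterior /= posterior.sum()                   # full posterior on the grid

      mean_rate = np.sum(rates * posterior)
      map_rate = rates[np.argmax(posterior)]
      print(f"posterior mean {mean_rate:.1f} mm/h, mode {map_rate:.1f} mm/h")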

  9. Using Discrete Loss Functions and Weighted Kappa for Classification: An Illustration Based on Bayesian Network Analysis

    ERIC Educational Resources Information Center

    Zwick, Rebecca; Lenaburg, Lubella

    2009-01-01

    In certain data analyses (e.g., multiple discriminant analysis and multinomial log-linear modeling), classification decisions are made based on the estimated posterior probabilities that individuals belong to each of several distinct categories. In the Bayesian network literature, this type of classification is often accomplished by assigning…
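
    A minimal sketch of classifying by minimum posterior expected loss rather than maximum posterior probability, with an illustrative loss matrix that, in the spirit of weighted kappa, penalizes misclassifications between distant categories more heavily:

      # Decision by minimum posterior expected loss.
      import numpy as np

      # Posterior probabilities for one individual over three ordered categories.
      posterior = np.array([0.30, 0.33, 0.37])

      # loss[i, j] = cost of deciding category j when the truth is i.
      loss = np.array([[0.0, 1.0, 4.0],
                       [1.0, 0.0, 1.0],
                       [4.0, 1.0, 0.0]])

      expected_loss = posterior @ loss
      print("expected loss per decision:", expected_loss)
      print("max-posterior choice    :", int(np.argmax(posterior)))
      print("min-expected-loss choice:", int(np.argmin(expected_loss)))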

  10. New KF-PP-SVM classification method for EEG in brain-computer interfaces.

    PubMed

    Yang, Banghua; Han, Zhijun; Zan, Peng; Wang, Qian

    2014-01-01

    Classification methods are a crucial direction in the current study of brain-computer interfaces (BCIs). To improve the classification accuracy for electroencephalogram (EEG) signals, a novel KF-PP-SVM (kernel Fisher, posterior probability, and support vector machine) classification method is developed. Its detailed process entails the use of common spatial patterns to obtain features, based on which the within-class scatter is calculated. The scatter is then added into the kernel function of a radial basis function to construct a new kernel function. This new kernel is integrated into the SVM to obtain a new classification model. Finally, the output of the SVM is calculated based on posterior probability and the final recognition result is obtained. To evaluate the effectiveness of the proposed KF-PP-SVM method, EEG data collected in the laboratory are processed with four different classification schemes (KF-PP-SVM, KF-SVM, PP-SVM, and SVM). The results showed that the overall average improvements arising from the use of the KF-PP-SVM scheme as opposed to the KF-SVM, PP-SVM and SVM schemes are 2.49%, 5.83% and 6.49%, respectively.
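
    A minimal sketch of the posterior-probability part of such a scheme using scikit-learn's SVC, whose probability=True option calibrates decision values into class probabilities via Platt scaling. Synthetic features stand in for EEG; the kernel-Fisher modification is not reproduced:

      # SVM classification driven by posterior probabilities.
      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.model_selection import train_test_split
      from sklearn.svm import SVC

      X, y = make_classification(n_samples=300, n_features=4, random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

      clf = SVC(kernel="rbf", probability=True, random_state=0).fit(X_tr, y_tr)

      proba = clf.predict_proba(X_te)      # posterior probabilities per class
      pred = proba.argmax(axis=1)          # decide by maximum posterior
      print("accuracy:", (pred == y_te).mean())
      print("example posterior:", np.round(proba[0], 3))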

  11. Laser Raman detection for oral cancer based on an adaptive Gaussian process classification method with posterior probabilities

    NASA Astrophysics Data System (ADS)

    Du, Zhanwei; Yang, Yongjian; Bai, Yuan; Wang, Lijun; Su, Le; Chen, Yong; Li, Xianchang; Zhou, Xiaodong; Jia, Jun; Shen, Aiguo; Hu, Jiming

    2013-03-01

    The existing methods for early and differential diagnosis of oral cancer are limited by unapparent early symptoms and imperfect imaging examination methods. In this paper, classification models of oral adenocarcinoma, carcinoma tissues and a control group with just four features are established by utilizing the hybrid Gaussian process (HGP) classification algorithm, with the introduction of noise-reduction and posterior-probability mechanisms. HGP shows much better performance in the experimental results. During the experimental process, oral tissues were divided into three groups: adenocarcinoma (n = 87), carcinoma (n = 100) and the control group (n = 134). The spectral data for these groups were collected. The prospective application of the proposed HGP classification method improved the diagnostic sensitivity to 56.35% and the specificity to about 70.00%, and resulted in a Matthews correlation coefficient (MCC) of 0.36. The results indicate that the utilization of HGP in laser Raman spectroscopy (LRS) detection analysis for the diagnosis of oral cancer gives accurate results. The prospect of application is also satisfactory.
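
    A minimal sketch of Gaussian process classification with posterior probabilities using scikit-learn. Synthetic features stand in for the Raman spectra, and the hybrid and noise-reduction mechanisms of HGP are not reproduced:

      # GP classification returning posterior class probabilities.
      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.gaussian_process import GaussianProcessClassifier
      from sklearn.gaussian_process.kernels import RBF

      X, y = make_classification(n_samples=200, n_features=4, random_state=1)

      gpc = GaussianProcessClassifier(kernel=1.0 * RBF(1.0), random_state=1)
      gpc.fit(X[:150], y[:150])

      proba = gpc.predict_proba(X[150:])   # posterior class probabilities
      pred = proba.argmax(axis=1)
      print("test accuracy  :", (pred == y[150:]).mean())
      print("mean confidence:", proba.max(axis=1).mean())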

  12. Bayesian inference for the genetic control of water deficit tolerance in spring wheat by stochastic search variable selection.

    PubMed

    Safari, Parviz; Danyali, Syyedeh Fatemeh; Rahimi, Mehdi

    2018-06-02

    Drought is the main abiotic stress seriously influencing wheat production. Information about the inheritance of drought tolerance is necessary to determine the most appropriate strategy to develop tolerant cultivars and populations. In this study, generation means analysis to identify the genetic effects controlling grain yield inheritance in water deficit and normal conditions was considered as a model selection problem in a Bayesian framework. Stochastic search variable selection (SSVS) was applied to identify the most important genetic effects and the best fitted models using different generations obtained from two crosses under two water regimes in two growing seasons. SSVS is used to evaluate the effect of each variable on the dependent variable via posterior variable-inclusion probabilities. The model with the highest posterior probability is selected as the best model. In this study, grain yield was controlled by main effects (additive and non-additive) and epistatic interactions. The results demonstrate that breeding methods such as recurrent selection with a subsequent pedigree method, and hybrid production, can be useful to improve grain yield.
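
    True SSVS explores the model space by Gibbs sampling; with only a few candidate effects the space can instead be enumerated, and a BIC-based approximation to each model's evidence yields the same kind of outputs (posterior model probabilities and per-effect inclusion probabilities). A sketch under those substitutions, with simulated data rather than the wheat generations:

      # Posterior model and inclusion probabilities by enumeration + BIC.
      import itertools
      import numpy as np

      rng = np.random.default_rng(7)
      n, names = 80, ["additive", "dominance", "epistasis"]
      X = rng.normal(size=(n, 3))
      y = 2.0 * X[:, 0] + 1.0 * X[:, 2] + rng.normal(0, 1, n)  # no dominance

      def bic(subset):
          Z = np.column_stack([np.ones(n)] + [X[:, j] for j in subset])
          beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
          rss = np.sum((y - Z @ beta) ** 2)
          return n * np.log(rss / n) + Z.shape[1] * np.log(n)

      models = [s for r in range(4) for s in itertools.combinations(range(3), r)]
      b = np.array([bic(s) for s in models])
      w = np.exp(-0.5 * (b - b.min()))         # stabilized evidence weights
      w /= w.sum()                             # approximate model posterior

      best = models[int(np.argmax(w))]
      print("MAP model:", [names[j] for j in best], "prob:", round(w.max(), 3))
      for j, name in enumerate(names):
          incl = sum(wi for wi, s in zip(w, models) if j in s)
          print(f"P(include {name}) = {incl:.3f}")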

  13. Data analysis in emission tomography using emission-count posteriors

    NASA Astrophysics Data System (ADS)

    Sitek, Arkadiusz

    2012-11-01

    A novel approach to the analysis of emission tomography data using the posterior probability of the number of emissions per voxel (emission count) conditioned on acquired tomographic data is explored. The posterior is derived from the prior and the Poisson likelihood of the emission-count data by marginalizing voxel activities. Based on emission-count posteriors, examples of Bayesian analysis including estimation and classification tasks in emission tomography are provided. The application of the method to computer simulations of 2D tomography is demonstrated. In particular, the minimum-mean-square-error point estimator of the emission count is demonstrated. The process of finding this estimator can be considered as a tomographic image reconstruction technique since the estimates of the number of emissions per voxel divided by voxel sensitivities and acquisition time are the estimates of the voxel activities. As an example of a classification task, a hypothesis stating that some region of interest (ROI) emitted at least or at most r-times the number of events in some other ROI is tested. The ROIs are specified by the user. The analysis described in this work provides new quantitative statistical measures that can be used in decision making in diagnostic imaging using emission tomography.
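
    A minimal sketch of an emission-count distribution for a single voxel under a conjugate toy model (Gamma prior on the activity, Poisson counts): marginalizing the activity leaves a closed-form negative binomial. This is analogous in spirit to the marginalization described above, though not the paper's exact construction, and the prior, sensitivity, and acquisition time are illustrative:

      # Emission-count distribution with the voxel activity marginalized.
      import numpy as np
      from scipy import stats

      a0, b0 = 2.0, 1.0        # Gamma prior on activity: shape, rate
      sens, t_acq = 0.3, 1.0   # detector sensitivity, acquisition time
      y_obs = 12               # detected counts for this voxel

      # Conjugate update: lambda | y ~ Gamma(a0 + y, b0 + sens * t).
      a_post, b_post = a0 + y_obs, b0 + sens * t_acq

      # Marginalizing lambda, an emission count n ~ Poisson(t * lambda)
      # becomes negative binomial with r = a_post, p = b_post / (b_post + t).
      r, p = a_post, b_post / (b_post + t_acq)
      emission_dist = stats.nbinom(r, p)

      k = np.arange(0, 60)
      pmf = emission_dist.pmf(k)
      print("mean emission count:", emission_dist.mean())
      print("most probable count:", k[np.argmax(pmf)])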

  14. General Metropolis-Hastings jump diffusions for automatic target recognition in infrared scenes

    NASA Astrophysics Data System (ADS)

    Lanterman, Aaron D.; Miller, Michael I.; Snyder, Donald L.

    1997-04-01

    To locate and recognize ground-based targets in forward-looking IR (FLIR) images, 3D faceted models with associated pose parameters are formulated to accommodate the variability found in FLIR imagery. Taking a Bayesian approach, scenes are simulated from the emissive characteristics of the CAD models and compared with the collected data by a likelihood function based on sensor statistics. This likelihood is combined with a prior distribution defined over the set of possible scenes to form a posterior distribution. To accommodate scenes with variable numbers of targets, the posterior distribution is defined over parameter vectors of varying dimension. An inference algorithm based on Metropolis-Hastings jump-diffusion processes empirically samples from the posterior distribution, generating configurations of templates and transformations that match the collected sensor data with high probability. The jumps accommodate the addition and deletion of targets and the estimation of target identities; diffusions refine the hypotheses by drifting along the gradient of the posterior distribution with respect to the orientation and position parameters. Previous results on jump strategies analogous to the Metropolis acceptance/rejection algorithm, with proposals drawn from the prior and accepted based on the likelihood, are extended to encompass general Metropolis-Hastings proposal densities. In particular, the algorithm proposes moves by drawing from the posterior distribution over computationally tractable subsets of the parameter space. The algorithm is illustrated by an implementation on a Silicon Graphics Onyx/Reality Engine.
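
    The Hastings correction that distinguishes general Metropolis-Hastings proposals from the plain Metropolis scheme can be illustrated on a toy one-dimensional posterior. The sketch below uses an asymmetric log-normal random-walk proposal; it is not the paper's scene model.

    ```python
    # Generic Metropolis-Hastings sketch with an asymmetric (log-normal)
    # random-walk proposal, showing the Hastings correction q(x|x')/q(x'|x).
    # The 1-D toy target is not the paper's scene model.
    import numpy as np

    rng = np.random.default_rng(1)

    def log_target(x):
        # Unnormalized log-posterior of a positive parameter: Gamma(4, 2).
        return 3.0 * np.log(x) - 2.0 * x if x > 0 else -np.inf

    def log_q(x_to, x_from, s=0.3):
        # Log-normal proposal density q(x_to | x_from); asymmetric in its args.
        return (-np.log(x_to * s * np.sqrt(2.0 * np.pi))
                - (np.log(x_to) - np.log(x_from)) ** 2 / (2.0 * s ** 2))

    x, chain = 1.0, []
    for _ in range(20000):
        x_new = x * np.exp(0.3 * rng.normal())          # draw from q(. | x)
        log_a = (log_target(x_new) - log_target(x)
                 + log_q(x, x_new) - log_q(x_new, x))   # Hastings ratio
        if np.log(rng.random()) < log_a:
            x = x_new
        chain.append(x)

    print("posterior mean ~", np.mean(chain[5000:]))    # Gamma(4, 2) mean is 2.0
    ```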

  15. Topographical relations between the posterior cricothyroid ligament and the inferior laryngeal nerve.

    PubMed

    Reidenbach, M M

    1995-01-01

    The posterior cricothyroid ligament and its topographic relation to the inferior laryngeal nerve were studied in 54 human adult male and female larynges. Fourteen specimens were impregnated with curable polymers and cut into 600-800 micron sections along different planes. Forty formalin-fixed hemi-larynges were dissected and various measurements were made. The posterior cricothyroid ligament provides a dorsal strengthening for the joint capsule of the cricothyroid joint. Its fibers spread in a fan-like manner from a small area of origin at the cricoid cartilage to a more extended area of attachment at the inferior thyroid cornu. The ligament consists of one (7.5%) to four (12.5%), in most cases of three (45.0%) or two (35.0%), individual parts oriented from medio-cranial to latero-caudal. The inferior laryngeal nerve courses immediately dorsal to the ligament. In 60% of cases it is covered by fibers of the posterior cricoarytenoid muscle; in the remaining 40% it is not. In the latter topographic situation there is almost no soft tissue interposed between the nerve and the hypopharynx. Therefore, the nerve may be exposed to pressure forces exerted from dorsally; it may be pushed against the unyielding posterior cricothyroid ligament and suffer functional or structural impairment. This mechanism may explain some of the laryngeal nerve lesions described in the literature after insertion of gastric tubes.

  16. Biomechanical Analysis of the Closed Kinetic Chain Upper-Extremity Stability Test.

    PubMed

    Tucci, Helga T; Felicio, Lilian R; McQuade, Kevin J; Bevilaqua-Grossi, Debora; Camarini, Paula Maria Ferreira; Oliveira, Anamaria S

    2017-01-01

    Context: The closed kinetic chain upper-extremity stability (CKCUES) test is a functional test for the upper extremity performed in the push-up position, where individuals support their body weight on 1 hand placed on the ground and swing the opposite hand until touching the hand on the ground, then switch hands and repeat the process as fast as possible for 15 s. Objective: To study scapular kinematic and kinetic measures during the CKCUES test for 3 different distances between hands. Design: Experimental. Setting: Laboratory. Participants: 30 healthy individuals (15 male, 15 female). Methods: Participants performed 3 repetitions of the test at 3 distance conditions: original (36 in), interacromial, and 150% interacromial distance between hands. Participants completed a questionnaire on pain intensity and perceived exertion before and after the procedures. Scapular internal/external rotation, upward/downward rotation, and posterior/anterior tilting kinematics, together with kinetic data on maximum force and time to maximum force, were measured bilaterally in all participants, and the percentage of body weight on the upper extremities was calculated. Data analyses were based on the total number of hand touches performed for each distance condition, and scapular kinematics and kinetic values were averaged over the 3 trials. Scapular kinematics, maximum force, and time to maximum force were compared for the 3 distance conditions within each gender. Significance level was set at α = .05. Results: Scapular internal rotation, posterior tilting, and upward rotation were significantly greater on the dominant side for both genders. Scapular upward rotation was significantly greater at the original distance than at the interacromial distance in the swing phase. Time to maximum force in women was significantly greater on the dominant side. Conclusions: CKCUES test kinematic and kinetic measures did not differ among the 3 conditions based on distance between hands. However, the test might not be suitable for initial or mild-level rehabilitation due to its challenging requirements.

  17. Reliability and comparison of trunk and pelvis angles, arm distance and center of pressure in the seated functional reach test with and without foot support in children.

    PubMed

    Radtka, Sandra; Zayac, Jacqueline; Goldberg, Krystyna; Long, Michael; Ixanov, Rustem

    2017-03-01

    This study determined the test-retest reliability of trunk and pelvis joint angles, arm distance and center of pressure (COP) excursion for the seated functional reach test (FRT) and compared these variables during the seated FRT with and without foot support. Fifteen typically developing children (age 9.3±4.1 years) participated. Trunk and pelvis joint angles, arm distance, and COP excursion were collected on two days using three-dimensional motion analysis and a force plate while subjects reached maximally with and without foot support in the anterior, anterior/lateral, lateral, and posterior/lateral directions. Age, weight, height, and trunk and arm lengths were correlated (p<0.01) with maximum arm distance reached. Maximum arm distance, trunk and pelvis joint angles, and COP with and without foot support did not differ significantly between the two test periods. Excellent reliability (ICCs>0.75) was found for maximum arm distance reached in all four directions in the seated FRT with and without foot support. Most trunk and pelvis joint angles and COP excursions during maximum reach in all four directions showed excellent to fair reliability (ICCs 0.40-0.75). Reaching with foot support in all directions was significantly greater (p<0.05) than without foot support; however, most COP excursions and trunk and pelvic angles were not significantly different. The findings support the addition of anterior/lateral and posterior/lateral reaching directions in the seated FRT. Trunk and pelvis movement analysis is important in the seated FRT to determine the specific movement strategies needed for maximum reaching without loss of balance. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Physiologic Inter-eye Differences in Monkey Optic Nerve Head Architecture and Their Relation to Changes in Early Experimental Glaucoma

    PubMed Central

    Yang, Hongli; Downs, J. Crawford; Burgoyne, Claude F.

    2009-01-01

    Purpose To characterize physiologic inter-eye differences in optic nerve head (ONH) architecture within six normal rhesus monkeys and compare them to inter-eye differences within three previously-reported cynomolgus monkeys with early experimental glaucoma (EEG). Methods Trephinated ONH and peripapillary sclera from both eyes of six normal monkeys were serial sectioned, 3D reconstructed, 3D delineated and parameterized. For each normal animal, and each parameter, physiologic inter-eye difference (PID) was calculated (both overall and regionally) by converting all OS data to OD configuration and subtracting the OS from the OD value and Physiologic Inter-eye Percent Difference (PIPD) was calculated as the PID divided by the measurement mean of the two eyes. For each EEG monkey, inter-eye (EEG minus normal) differences and percent differences for each parameter overall and regionally were compared to the PID and PIPD Maximums. Results For all parameters the PID Maximums were relatively small overall. Compared to overall PID maximums, overall inter-eye differences in EEG monkeys were greatest for laminar deformation and thickening, posterior scleral canal enlargement, cupping and prelaminar neural tissue thickening. Compared to the regional PID Maximums, the lamina cribrosa was posteriorly deformed centrally, inferiorly, inferonasally and superiorly and was thickened centrally. The prelaminar neural tissues were thickened inferiorly, inferonasally and superiorly. Conclusion These data provide the first characterization of PID/PIPD maximums for ONH neural and connective tissue parameters in normal monkeys and serve to further clarify the location and character of early ONH change in experimental glaucoma. However, because of the species differences, the findings in EEG need to be confirmed within EEG rhesus monkey eyes. PMID:18775866

  19. Maximum-entropy probability distributions under Lp-norm constraints

    NASA Technical Reports Server (NTRS)

    Dolinar, S.

    1991-01-01

    Continuous probability density functions and discrete probability mass functions are tabulated which maximize the differential entropy or absolute entropy, respectively, among all probability distributions with a given Lp norm (i.e., a given pth absolute moment when p is a finite integer) and unconstrained or constrained value set. Expressions for the maximum entropy are evaluated as functions of the Lp norm. The most interesting results are obtained and plotted for unconstrained (real valued) continuous random variables and for integer valued discrete random variables. The maximum entropy expressions are obtained in closed form for unconstrained continuous random variables, and in this case there is a simple straight line relationship between the maximum differential entropy and the logarithm of the Lp norm. Corresponding expressions for arbitrary discrete and constrained continuous random variables are given parametrically; closed form expressions are available only for special cases. However, simpler alternative bounds on the maximum entropy of integer valued discrete random variables are obtained by applying the differential entropy results to continuous random variables which approximate the integer valued random variables in a natural manner. All the results are presented in an integrated framework that includes continuous and discrete random variables, constraints on the permissible value set, and all possible values of p. Understanding such as this is useful in evaluating the performance of data compression schemes.
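
    The p = 2 case admits a quick closed-form check: among real-valued densities with a fixed L2 norm (fixed second moment), the Gaussian attains the maximum differential entropy, and that maximum is linear in the logarithm of the norm. The comparison distribution below (a Laplace matched to the same second moment) is chosen purely for illustration.

    ```python
    # Closed-form check of the p = 2 case: for a fixed L2 norm (second moment),
    # the Gaussian maximizes differential entropy, and the maximum is linear in
    # log ||X||_2. A Laplace matched to the same second moment is shown for
    # comparison.
    import numpy as np

    for sigma in [0.5, 1.0, 2.0, 4.0]:
        h_gauss = 0.5 * np.log(2.0 * np.pi * np.e * sigma ** 2)
        b = sigma / np.sqrt(2.0)            # Laplace scale with E[X^2] = sigma^2
        h_laplace = 1.0 + np.log(2.0 * b)
        print(f"||X||_2 = {sigma}: h_gauss = {h_gauss:.4f} > h_laplace = {h_laplace:.4f}")
    # The Gaussian maximum is 0.5*log(2*pi*e) + log(sigma): slope 1 in log-norm.
    ```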

  20. Comparative Biomechanical Study on Contact Alterations After Lateral Meniscus Posterior Root Avulsion, Transosseous Reinsertion, and Total Meniscectomy.

    PubMed

    Perez-Blanca, Ana; Espejo-Baena, Alejandro; Amat Trujillo, Daniel; Prado Nóvoa, María; Espejo-Reina, Alejandro; Quintero López, Clara; Ezquerro Juanco, Francisco

    2016-04-01

    To compare the effects of lateral meniscus posterior root avulsion left in situ, its repair, and meniscectomy on contact pressure distribution in both tibiofemoral compartments at different flexion angles. Eight cadaveric knees were tested under a compressive 1000 N load for 4 lateral meniscus conditions (intact, posterior root avulsion, transosseous root repair, and total meniscectomy) at flexion angles of 0°, 30°, 60°, and 90°. Contact area and pressure distribution were registered using K-scan pressure sensors inserted between the menisci and the tibial plateau. In the lateral compartment, root detachment decreased contact area (P = .017, 0° and 30°; P = .012, 60° and 90°) and increased mean (P = .012, all angles) and maximum (P = .025, 0° and 30°; P = .017, 60°; P = .012, 90°) pressures relative to the intact condition. Repair restored all measured parameters close to intact at 0°, but its effectiveness decreased with flexion angle, yielding no significant effect at 90°. Meniscectomy produced greater decreases than root avulsion in contact area (P = .012, 0° and 90°; P = .05, 30° and 60°) and greater increases in mean (P = .017, 0° and 30°; P = .018, 90°) and maximum pressure (P = .012, 0°; P = .036, 30°). In the medial compartment, the lesion changed the contact area only at high flexion angles, while meniscectomy induced greater changes at all angles. Lateral meniscus posterior root avulsion generates significant alterations in contact area and pressures in the lateral knee compartment for flexion angles between full extension and 90°. Meniscectomy causes greater alterations than the avulsion left in situ. Transosseous repair with a single suture restores these alterations to conditions close to intact at 0° and 30° but not at 60° and 90°. Altered contact mechanics after lateral meniscus posterior root avulsion might have degenerative consequences. Transosseous repair with one suture should be revised to effectively restore contact mechanics at high flexion angles. Copyright © 2016 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.

  1. Characteristics of Posterior Corneal Astigmatism in Different Stages of Keratoconus.

    PubMed

    Aslani, Fereshteh; Khorrami-Nejad, Masoud; Aghazadeh Amiri, Mohammad; Hashemian, Hesam; Askarizadeh, Farshad; Khosravi, Bahram

    2018-01-01

    To evaluate the magnitudes and axis orientation of anterior corneal astigmatism (ACA) and posterior corneal astigmatism (PCA), the ratio of ACA to PCA, and the correlation between ACA and PCA in the different stages of keratoconus (KCN). This retrospective case series comprised 161 eyes of 161 patients with KCN (104 men, 57 women; mean age, 22.35 ± 6.10 years). The participants were divided into four subgroups according to the Amsler-Krumeich classification. A Scheimpflug imaging system was used to measure the magnitude and axis orientation of ACA and PCA. The posterior-anterior corneal astigmatism ratio was also calculated. The results were compared among the different subgroups. The average amounts of anterior, posterior, and total corneal astigmatism were 4.08 ± 2.21 diopters (D), 0.86 ± 0.46 D, and 3.50 ± 1.94 D, respectively. With-the-rule, against-the-rule, and oblique astigmatisms of the posterior surface of the cornea were found in 61 eyes (37.9%), 67 eyes (41.6%), and 33 eyes (20.5%), respectively; the corresponding figures for the anterior corneal surface were 55 eyes (32.4%), 56 eyes (34.8%), and 50 eyes (31.1%), respectively. A strong correlation (P ≤ 0.001, r = 0.839) was found between ACA and PCA in the different stages of KCN; the correlation was weaker in eyes with grade 3 (P ≤ 0.001, r = 0.711) and grade 4 (P ≤ 0.001, r = 0.717) KCN. The maximum posterior-anterior corneal astigmatism ratio (PCA/ACA, 0.246) was found in patients with stage 1 KCN. Anterior corneal astigmatism was more affected than posterior astigmatism by increasing KCN severity, although PCA was more affected than ACA in the early stages of KCN.

  2. Dynamic balance deficits in individuals with chronic ankle instability compared to ankle sprain copers 1 year after a first-time lateral ankle sprain injury.

    PubMed

    Doherty, Cailbhe; Bleakley, Chris; Hertel, Jay; Caulfield, Brian; Ryan, John; Delahunt, Eamonn

    2016-04-01

    To quantify the dynamic balance deficits that characterise a group with chronic ankle instability compared to lateral ankle sprain copers and non-injured controls using kinematic and kinetic outcomes. Forty-two participants with chronic ankle instability and twenty-eight lateral ankle sprain copers were initially recruited within 2 weeks of sustaining a first-time, acute lateral ankle sprain and required to attend our laboratory 1 year later to complete the current study protocol. An additional group of non-injured individuals were also recruited to act as a control group. All participants completed the anterior, posterior-lateral and posterior-medial reach directions of the star excursion balance test. Sagittal plane kinematics of the lower extremity and associated fractal dimension of the centre of pressure path were also acquired. Participants with chronic ankle instability displayed poorer performance in the anterior, posterior-medial and posterior-lateral reach directions compared with controls bilaterally, and in the posterior-lateral direction compared with lateral ankle sprain copers on their 'involved' limb only. These performance deficits in the posterior-lateral and posterior-medial directions were associated with reduced flexion and dorsiflexion displacements at the hip, knee and ankle at the point of maximum reach, and coincided with reduced complexity of the centre of pressure path. In comparison with lateral ankle sprain copers and controls, participants with chronic ankle instability were characterised by dynamic balance deficits as measured using the SEBT. This was attributed to reduced sagittal plane motions at the hip, knee and ankle joints, and reduced capacity of the stance limb to avail of its supporting base. Level of evidence: III.

  3. Investigation of Latent Traces Using Infrared Reflectance Hyperspectral Imaging

    NASA Astrophysics Data System (ADS)

    Schubert, Till; Wenzel, Susanne; Roscher, Ribana; Stachniss, Cyrill

    2016-06-01

    The detection of traces is a main task of forensics. Hyperspectral imaging is a promising method, from which we expect to capture more fluorescence effects than with common forensic light sources. This paper shows that hyperspectral imaging is suited to the analysis of latent traces and extends the classical concept to conservation of the crime scene for retrospective laboratory analysis. We examine specimens of blood, semen and saliva traces in several dilution steps, prepared on a cardboard substrate. As our key result we successfully make latent traces visible up to a dilution factor of 1:8000. We can attribute most of the detectability to the interaction of electromagnetic radiation with the water content of the traces in the shortwave infrared region of the spectrum. In a classification task we use several dimensionality reduction methods (PCA and LDA) in combination with a maximum likelihood classifier, assuming normally distributed data. Further, we use Random Forest as a competitive approach. The classifiers retrieve the exact positions of the labelled trace preparations up to the highest dilution and determine posterior probabilities. By modelling the classification task with a Markov random field we are able to integrate prior information about the spatial relation of neighbouring pixel labels.
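
    A hedged sketch of the classification pipeline described above: PCA for dimensionality reduction followed by a maximum-likelihood classifier with Gaussian class densities (here scikit-learn's QDA), which also returns the per-pixel posterior probabilities. The "spectra" below are synthetic stand-ins, not the paper's hyperspectral measurements.

    ```python
    # Sketch of the pipeline above: PCA for dimensionality reduction, then a
    # maximum-likelihood classifier with Gaussian class densities (QDA), which
    # also returns per-pixel posterior probabilities. The "spectra" are
    # synthetic stand-ins, not the paper's hyperspectral measurements.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

    rng = np.random.default_rng(0)
    n_bands = 50
    # Fake spectra: trace vs. substrate pixels with slightly different means.
    X = np.vstack([rng.normal(0.0, 1.0, size=(200, n_bands)),
                   rng.normal(0.4, 1.0, size=(200, n_bands))])
    y = np.array([0] * 200 + [1] * 200)

    Z = PCA(n_components=5).fit_transform(X)
    clf = QuadraticDiscriminantAnalysis().fit(Z, y)
    print("training accuracy:", clf.score(Z, y))
    print("first five posterior rows:\n", np.round(clf.predict_proba(Z)[:5], 3))
    ```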

  4. Bayesian explorations of fault slip evolution over the earthquake cycle

    NASA Astrophysics Data System (ADS)

    Duputel, Z.; Jolivet, R.; Benoit, A.; Gombert, B.

    2017-12-01

    The ever-increasing amount of geophysical data continuously opens new perspectives on fundamental aspects of the seismogenic behavior of active faults. In this context, the recent fleet of SAR satellites, including Sentinel-1 and COSMO-SkyMed, permits the use of InSAR for time-dependent slip modeling with unprecedented resolution in time and space. However, existing time-dependent slip models rely on spatial smoothing regularization schemes, which can produce unrealistically smooth slip distributions. In addition, these models usually do not include uncertainty estimates, which limits the utility of the inferred slip distributions. Here, we develop an entirely new approach to derive probabilistic time-dependent slip models. This Markov chain Monte Carlo method involves a series of transitional steps to predict and update posterior probability density functions (PDFs) of slip as a function of time. We assess the viability of our approach using various slow-slip event scenarios. Using a dense set of SAR images, we also use this method to quantify the spatial distribution and temporal evolution of slip along a creeping segment of the North Anatolian Fault. This allows us to track a shallow aseismic slip transient lasting about a month with a maximum slip of about 2 cm.

  5. Statistical inference of seabed sound-speed structure in the Gulf of Oman Basin.

    PubMed

    Sagers, Jason D; Knobles, David P

    2014-06-01

    Addressed is the statistical inference of the sound-speed depth profile of a thick soft seabed from broadband sound propagation data recorded in the Gulf of Oman Basin in 1977. The acoustic data are in the form of time series signals recorded on a sparse vertical line array and generated by explosive sources deployed along a 280 km track. The acoustic data offer a unique opportunity to study a deep-water bottom-limited thickly sedimented environment because of the large number of time series measurements, very low seabed attenuation, and auxiliary measurements. A maximum entropy method is employed to obtain a conditional posterior probability distribution (PPD) for the sound-speed ratio and the near-surface sound-speed gradient. The multiple data samples allow for a determination of the average error constraint value required to uniquely specify the PPD for each data sample. Two complicating features of the statistical inference study are addressed: (1) the need to develop an error function that can both utilize the measured multipath arrival structure and mitigate the effects of data errors and (2) the effect of small bathymetric slopes on the structure of the bottom interacting arrivals.

  6. Estimating Tree Height-Diameter Models with the Bayesian Method

    PubMed Central

    Duan, Aiguo; Zhang, Jianguo; Xiang, Congwei

    2014-01-01

    Six candidate height-diameter models were used to analyze the height-diameter relationships. The common methods for estimating the height-diameter models have taken the classical (frequentist) approach based on the frequency interpretation of probability, for example, the nonlinear least squares method (NLS) and the maximum likelihood method (ML). The Bayesian method has an exclusive advantage compared with classical method that the parameters to be estimated are regarded as random variables. In this study, the classical and Bayesian methods were used to estimate six height-diameter models, respectively. Both the classical method and Bayesian method showed that the Weibull model was the “best” model using data1. In addition, based on the Weibull model, data2 was used for comparing Bayesian method with informative priors with uninformative priors and classical method. The results showed that the improvement in prediction accuracy with Bayesian method led to narrower confidence bands of predicted value in comparison to that for the classical method, and the credible bands of parameters with informative priors were also narrower than uninformative priors and classical method. The estimated posterior distributions for parameters can be set as new priors in estimating the parameters using data2. PMID:24711733

  7. Estimating tree height-diameter models with the Bayesian method.

    PubMed

    Zhang, Xiongqing; Duan, Aiguo; Zhang, Jianguo; Xiang, Congwei

    2014-01-01

    Six candidate height-diameter models were used to analyze the height-diameter relationships. The common methods for estimating the height-diameter models have taken the classical (frequentist) approach based on the frequency interpretation of probability, for example, the nonlinear least squares method (NLS) and the maximum likelihood method (ML). The Bayesian method has an exclusive advantage compared with classical method that the parameters to be estimated are regarded as random variables. In this study, the classical and Bayesian methods were used to estimate six height-diameter models, respectively. Both the classical method and Bayesian method showed that the Weibull model was the "best" model using data1. In addition, based on the Weibull model, data2 was used for comparing Bayesian method with informative priors with uninformative priors and classical method. The results showed that the improvement in prediction accuracy with Bayesian method led to narrower confidence bands of predicted value in comparison to that for the classical method, and the credible bands of parameters with informative priors were also narrower than uninformative priors and classical method. The estimated posterior distributions for parameters can be set as new priors in estimating the parameters using data2.
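
    For context, the classical (NLS) baseline the study compares against can be fit in a few lines. The Weibull-type height-diameter form H = 1.3 + a(1 - exp(-b*D^c)) is a common parameterization but an assumption here, and the data are simulated, not the study's.

    ```python
    # Classical NLS baseline for a Weibull-type height-diameter model,
    # H = 1.3 + a * (1 - exp(-b * D**c)). This parameterization is an assumed
    # common form, and the data are simulated, not the study's.
    import numpy as np
    from scipy.optimize import curve_fit

    def weibull_hd(D, a, b, c):
        return 1.3 + a * (1.0 - np.exp(-b * D ** c))

    rng = np.random.default_rng(42)
    D = rng.uniform(5.0, 40.0, size=150)                      # diameters (cm)
    H = weibull_hd(D, 22.0, 0.02, 1.4) + rng.normal(0.0, 1.0, size=150)

    popt, pcov = curve_fit(weibull_hd, D, H, p0=[20.0, 0.05, 1.0])
    for name, est, se in zip("abc", popt, np.sqrt(np.diag(pcov))):
        print(f"{name} = {est:.3f} +/- {se:.3f}")
    ```

    A Bayesian fit would instead place priors on (a, b, c) and summarize posterior draws, which is how the study obtained the narrower credible bands reported above.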

  8. A discrimination method for the detection of pneumonia using chest radiograph.

    PubMed

    Noor, Norliza Mohd; Rijal, Omar Mohd; Yunus, Ashari; Abu-Bakar, S A R

    2010-03-01

    This paper presents a statistical method for the detection of lobar pneumonia when using digitized chest X-ray films. Each region of interest was represented by a vector of wavelet texture measures which is then multiplied by the orthogonal matrix Q(2). The first two elements of the transformed vectors were shown to have a bivariate normal distribution. Misclassification probabilities were estimated using probability ellipsoids and discriminant functions. The result of this study recommends the detection of pneumonia by constructing probability ellipsoids or discriminant function using maximum energy and maximum column sum energy texture measures where misclassification probabilities were less than 0.15. 2009 Elsevier Ltd. All rights reserved.
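
    A hedged sketch of the underlying idea: once the two classes of (transformed) texture features are bivariate normal, a quadratic discriminant assigns each region of interest to the class with the higher density, and the misclassification probability can be estimated by Monte Carlo. The class means and covariances below are invented, not the paper's estimates.

    ```python
    # Discrimination between two bivariate-normal feature classes with a
    # quadratic discriminant (equal priors assumed) and a Monte Carlo estimate
    # of the misclassification probability. Means and covariances are invented.
    import numpy as np
    from scipy.stats import multivariate_normal

    mu0, mu1 = np.array([2.0, 1.0]), np.array([3.2, 2.0])
    S0 = np.array([[1.0, 0.3], [0.3, 0.8]])
    S1 = np.array([[1.2, 0.2], [0.2, 1.0]])
    f0, f1 = multivariate_normal(mu0, S0), multivariate_normal(mu1, S1)

    def classify(x):
        # Assign to the class with the higher Gaussian density.
        return (f1.logpdf(x) > f0.logpdf(x)).astype(int)

    rng = np.random.default_rng(3)
    x0 = rng.multivariate_normal(mu0, S0, size=100000)
    x1 = rng.multivariate_normal(mu1, S1, size=100000)
    p_err = 0.5 * np.mean(classify(x0) == 1) + 0.5 * np.mean(classify(x1) == 0)
    print(f"estimated misclassification probability: {p_err:.3f}")
    ```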

  9. Cartilage can be thicker in advanced osteoarthritic knees: a tridimensional quantitative analysis of cartilage thickness at posterior aspect of femoral condyles.

    PubMed

    Omoumi, Patrick; Babel, Hugo; Jolles, Brigitte M; Favre, Julien

    2018-04-16

    To test, through tridimensional analysis, whether (1) cartilage thickness at the posterior aspect of femoral condyles differs in knees with medial femorotibial osteoarthritis (OA) compared to non-OA knees; (2) the location of the thickest cartilage at the posterior aspect of femoral condyles differs between OA and non-OA knees. CT arthrograms of knees without radiographic OA (n = 30) and with severe medial femorotibial OA (n = 30) were selected retrospectively from patients over 50 years of age. The groups did not differ in gender, age and femoral size. CT arthrograms were segmented to measure the mean cartilage thickness, the maximal cartilage thickness and its location in a region of interest at the posterior aspect of condyles. For the medial condyle, mean and maximum cartilage thicknesses were statistically significantly higher in OA knees compared to non-OA knees [1.66 vs 1.46 mm (p = 0.03) and 2.56 vs 2.14 mm (p = 0.003), respectively]. The thickest cartilage was located in the half most medial aspect of the posterior medial condyle for both groups, without significant difference between groups. For the lateral condyle, no statistically significant difference between non-OA and OA knees was found (p ≥ 0.17). Cartilage at the posterior aspect of the medial condyle, but not the lateral condyle, is statistically significantly thicker in advanced medial femorotibial OA knees compared to non-OA knees. The thickest cartilage was located in the half most medial aspect of the posterior medial condyle. These results will serve as the basis for future research to determine the histobiological processes involved in this thicker cartilage. Advances in knowledge: This study, through a quantitative tridimensional approach, shows that cartilage at the posterior aspect of the medial condyles is thicker in severe femorotibial osteoarthritic knees compared to non-OA knees. In the posterior aspect of the medial condyle, the thickest cartilage is located in the vicinity of the center of the half most medial aspect of the posterior medial condyle. These results will serve as the basis for future research to determine the histobiological processes involved in this thicker cartilage.

  10. The neural correlates of subjective utility of monetary outcome and probability weight in economic and in motor decision under risk

    PubMed Central

    Wu, Shih-Wei; Delgado, Mauricio R.; Maloney, Laurence T.

    2011-01-01

    In decision under risk, people choose between lotteries that contain a list of potential outcomes paired with their probabilities of occurrence. We previously developed a method for translating such lotteries to mathematically equivalent motor lotteries. The probability of each outcome in a motor lottery is determined by the subject’s noise in executing a movement. In this study, we used functional magnetic resonance imaging in humans to compare the neural correlates of monetary outcome and probability in classical lottery tasks where information about probability was explicitly communicated to the subjects and in mathematically equivalent motor lottery tasks where probability was implicit in the subjects’ own motor noise. We found that activity in the medial prefrontal cortex (mPFC) and the posterior cingulate cortex (PCC) quantitatively represent the subjective utility of monetary outcome in both tasks. For probability, we found that the mPFC significantly tracked the distortion of such information in both tasks. Specifically, activity in mPFC represents probability information but not the physical properties of the stimuli correlated with this information. Together, the results demonstrate that mPFC represents probability from two distinct forms of decision under risk. PMID:21677166

  11. The neural correlates of subjective utility of monetary outcome and probability weight in economic and in motor decision under risk.

    PubMed

    Wu, Shih-Wei; Delgado, Mauricio R; Maloney, Laurence T

    2011-06-15

    In decision under risk, people choose between lotteries that contain a list of potential outcomes paired with their probabilities of occurrence. We previously developed a method for translating such lotteries to mathematically equivalent "motor lotteries." The probability of each outcome in a motor lottery is determined by the subject's noise in executing a movement. In this study, we used functional magnetic resonance imaging in humans to compare the neural correlates of monetary outcome and probability in classical lottery tasks in which information about probability was explicitly communicated to the subjects and in mathematically equivalent motor lottery tasks in which probability was implicit in the subjects' own motor noise. We found that activity in the medial prefrontal cortex (mPFC) and the posterior cingulate cortex quantitatively represent the subjective utility of monetary outcome in both tasks. For probability, we found that the mPFC significantly tracked the distortion of such information in both tasks. Specifically, activity in mPFC represents probability information but not the physical properties of the stimuli correlated with this information. Together, the results demonstrate that mPFC represents probability from two distinct forms of decision under risk.

  12. Third nerve palsy caused by compression of the posterior communicating artery aneurysm does not depend on the size of the aneurysm, but on the distance between the ICA and the anterior-posterior clinoid process.

    PubMed

    Anan, Mitsuhiro; Nagai, Yasuyuki; Fudaba, Hirotaka; Kubo, Takeshi; Ishii, Keisuke; Murata, Kumi; Hisamitsu, Yoshinori; Kawano, Yoshihisa; Hori, Yuzo; Nagatomi, Hirofumi; Abe, Tatsuya; Fujiki, Minoru

    2014-08-01

    Third nerve palsy (TNP) caused by a posterior communicating artery (PCoA) aneurysm is a well-known presentation of the condition, but the characteristics of unruptured PCoA aneurysm-associated third nerve palsy have not been fully evaluated. The aim of this study was to analyze the anatomical features of PCoA aneurysms that caused TNP from the viewpoint of the relationship between the ICA and the skull base. Forty-eight unruptured PCoA aneurysms were treated surgically between January 2008 and September 2013. The characteristics of the aneurysms were evaluated. Thirteen of the 48 patients (27%) had a history of TNP. The distance between the ICA and the anterior-posterior clinoid process (ICA-APC distance) was significantly shorter in the TNP group (p<0.01), but the maximum size of the aneurysms was not (p=0.534). Relatively small unruptured PCoA aneurysms can cause third nerve palsy if the ICA runs close to the skull base. Copyright © 2014 Elsevier B.V. All rights reserved.

  13. Outcome of posterior decompression with instrumented fusion surgery for K-line (-) cervical ossification of the longitudinal ligament.

    PubMed

    Saito, Junya; Maki, Satoshi; Kamiya, Koshiro; Furuya, Takeo; Inada, Taigo; Ota, Mitsutoshi; Iijima, Yasushi; Takahashi, Kazuhisa; Yamazaki, Masashi; Aramomi, Masaaki; Mannoji, Chikato; Koda, Masao

    2016-10-01

    We investigated the outcome of posterior decompression and instrumented fusion (PDF) surgery for patients with K-line (-) ossification of the posterior longitudinal ligament (OPLL) of the cervical spine, who may have a poor surgical prognosis. We retrospectively analyzed the outcome of a series of 27 patients who underwent PDF without correction of cervical alignment for K-line (-) OPLL and were followed up for at least 1 year after surgery. We performed double-door laminoplasty followed by posterior instrumented fusion without excessive correction of cervical spine alignment. The preoperative Japanese Orthopedic Association (JOA) score for cervical myelopathy was 8.0 points and the postoperative JOA score was 11.9 points on average. The mean JOA score recovery rate was 43.6%. The average C2-C7 angle was 2.2° preoperatively and 3.1° postoperatively. The average maximum occupation ratio of OPLL was 56.7%. In conclusion, PDF without correcting cervical alignment for patients with K-line (-) OPLL showed moderate neurological recovery, which is acceptable considering that K-line (-) predicts poor surgical outcomes. Thus, PDF is a surgical option for such patients with OPLL. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Dural opening/removal for combined petrosal approach: technical note.

    PubMed

    Terasaka, Shunsuke; Asaoka, Katsuyuki; Kobayashi, Hiroyuki; Sugiyama, Taku; Yamaguchi, Shigeru

    2011-03-01

    Detailed descriptions of stepwise dural opening/removal for the combined petrosal approach are presented. Following maximum bone work, the first dural incision was made along the undersurface of the temporal lobe parallel to the superior petrosal sinus. Posterior extension of the dural incision was made in a curved fashion, keeping away from the transverse-sigmoid junction and taking care to preserve the vein of Labbé. A second incision was made perpendicular to the first incision. After sectioning the superior petrosal sinus around the porus trigeminus, the incision was extended toward the posterior fossa dura in the middle fossa region. The tentorium was incised toward the incisura at a point just posterior to the entrance of the trochlear nerve. A third incision was made longitudinally between the superior petrosal sinus and the jugular bulb. A final incision was initiated perpendicular to the third incision in the presigmoid region and extended parallel to the superior petrosal sinus, connecting with the second incision. The dural complex consisting of the temporal lobe dura, the posterior fossa dura, and the freed tentorium could then be removed. In addition to extensive bone resection, our strategic cranial base dural opening/removal can yield true advantages for the combined petrosal approach.

  15. Scanning-slit topography in patients with keratoconus.

    PubMed

    Módis, László; Németh, Gábor; Szalai, Eszter; Flaskó, Zsuzsa; Seitz, Berthold

    2017-01-01

    To evaluate the anterior and posterior corneal surfaces using scanning-slit topography and to determine the diagnostic ability of the measured corneal parameters in keratoconus. Orbscan II measurements were taken in 39 keratoconic corneas previously diagnosed by corneal topography and in 39 healthy eyes. The central minimum, maximum, and astigmatic simulated keratometry (K) and anterior axial power values were determined. Spherical and cylindrical mean power diopters were obtained at the central and at the steepest point of the cornea both on anterior and on posterior mean power maps. Pachymetry evaluations were taken at the center and paracentrally in the 3 mm zone from the center at a location of every 45 degrees. Receiver operating characteristic (ROC) analysis was used to determine the best cut-off values and to evaluate the utility of the measured parameters in identifying patients with keratoconus. The minimum, maximum and astigmatic simulated K readings were 44.80±3.06 D, 47.17±3.67 D and 2.42±1.84 D, respectively, in keratoconus patients, and these values differed significantly (P < 0.0001 for all comparisons) from healthy subjects. For all pachymetry measurements and for anterior and posterior mean power values significant differences were found between the two groups. Moreover, anterior central cylindrical power had the best discrimination ability (area under the ROC curve = 0.948). The results suggest that scanning-slit topography and pachymetry are accurate methods both for keratoconus screening and for confirmation of the diagnosis.

  16. NIMROD: a program for inference via a normal approximation of the posterior in models with random effects based on ordinary differential equations.

    PubMed

    Prague, Mélanie; Commenges, Daniel; Guedj, Jérémie; Drylewicz, Julia; Thiébaut, Rodolphe

    2013-08-01

    Models based on ordinary differential equations (ODE) are widespread tools for describing dynamical systems. In biomedical sciences, data from each subject can be sparse, making it difficult to precisely estimate individual parameters by standard non-linear regression, but information can often be gained from between-subject variability. This makes it natural to use mixed-effects models to estimate population parameters. Although the maximum likelihood approach is a valuable option, identifiability issues favour Bayesian approaches, which can incorporate prior knowledge in a flexible way. However, the combination of difficulties coming from the ODE system and from the presence of random effects raises a major numerical challenge. Computations can be simplified by making a normal approximation of the posterior to find the maximum of the posterior distribution (MAP). Here we present the NIMROD program (normal approximation inference in models with random effects based on ordinary differential equations), devoted to MAP estimation in ODE models. We describe the specific implemented features, such as convergence criteria and an approximation of the leave-one-out cross-validation to assess the model's quality of fit. First, using pharmacokinetics models, we evaluate the properties of this algorithm and compare it with FOCE and MCMC algorithms in simulations. Then, we illustrate NIMROD's use on Amprenavir pharmacokinetics data from the PUZZLE clinical trial in HIV infected patients. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  17. Clinical performance of a lithia disilicate-based core ceramic for three-unit posterior FPDs.

    PubMed

    Esquivel-Upshaw, Josephine F; Anusavice, Kenneth J; Young, Henry; Jones, Jack; Gibbs, Charles

    2004-01-01

    The purpose of this research project was to determine the clinical success rate of a lithia disilicate-based core ceramic for use in posterior fixed partial dentures (FPD) as a function of bite force, cement type, connector height, and connector width. Thirty ceramic FPD core frameworks were prepared using a heat-pressing technique and a lithia disilicate-based core ceramic. The maximum clenching force was measured for each patient prior to tooth preparation. Connector height and width were measured for each FPD. Patients were recalled yearly after cementation for 2 years and evaluated using 11 clinical criteria. All FPDs were examined by two independent clinicians, and rankings from 1 to 4 were made for each criterion (4 = excellent; 1 = unacceptable). Two of the 30 ceramic FPDs fractured within the 2-year evaluation period, representing a 93% success rate. One fracture was associated with a low occlusal force and short connector height (2.9 mm). The other fracture was associated with the greatest occlusal force (1,031 N) and adequate connector height. All criteria were ranked good to excellent during the 2-year recall for all remaining FPDs. The performance of the experimental core ceramic in posterior FPDs was promising, with only a 7% fracture rate after 2 years. Because of the limited sample size, it is not possible to identify the maximum clenching force that is allowable to prevent fracture caused by interocclusal forces.

  18. Assessment of phylogenetic sensitivity for reconstructing HIV-1 epidemiological relationships.

    PubMed

    Beloukas, Apostolos; Magiorkinis, Emmanouil; Magiorkinis, Gkikas; Zavitsanou, Asimina; Karamitros, Timokratis; Hatzakis, Angelos; Paraskevis, Dimitrios

    2012-06-01

    Phylogenetic analysis has been extensively used as a tool for the reconstruction of epidemiological relations for research or for forensic purposes. Our objective was to assess the sensitivity of different phylogenetic methods and programs for reconstructing epidemiological links among HIV-1 infected patients, that is, the probability of revealing a true transmission relationship. Multiple datasets (90) were prepared consisting of HIV-1 sequences in protease (PR) and partial reverse transcriptase (RT) sampled from patients with a documented epidemiological relationship (target population) and from unrelated individuals (control population) belonging to the same HIV-1 subtype as the target population. Each dataset varied regarding the number, the geographic origin and the transmission risk groups of the sequences in the control population. Phylogenetic trees were inferred by neighbor-joining (NJ), maximum likelihood heuristics (hML) and Bayesian methods. All clusters of sequences belonging to the target population were correctly reconstructed by NJ and Bayesian methods, receiving high bootstrap and posterior probability (PP) support, respectively. On the other hand, TreePuzzle failed to reconstruct or provide significant support for several clusters; high puzzling step support was associated with the inclusion of control sequences from the same geographic area as the target population. In contrast, all clusters were correctly reconstructed by hML as implemented in PhyML 3.0, receiving high bootstrap support. We report that, under the conditions of our study, hML using PhyML, NJ and Bayesian methods were the most sensitive for the reconstruction of epidemiological links mostly from sexually infected individuals. Copyright © 2012 Elsevier B.V. All rights reserved.

  19. An Empirical Bayes Approach to Spatial Analysis

    NASA Technical Reports Server (NTRS)

    Morris, C. N.; Kostal, H.

    1983-01-01

    Multi-channel LANDSAT data are collected in several passes over agricultural areas during the growing season. How empirical Bayes modeling can be used to develop crop identification and discrimination techniques that account for spatial correlation in such data is considered. The approach models the unobservable parameters and the data separately, hoping to take advantage of the fact that the bulk of spatial correlation lies in the parameter process. The problem is then framed in terms of estimating posterior probabilities of crop types for each spatial area. Some empirical Bayes spatial estimation methods are used to estimate the logits of these probabilities.

  20. Speech processing using conditional observable maximum likelihood continuity mapping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogden, John; Nix, David

    A computer implemented method enables the recognition of speech and speech characteristics. Parameters are initialized of first probability density functions that map between the symbols in the vocabulary of one or more sequences of speech codes that represent speech sounds and a continuity map. Parameters are also initialized of second probability density functions that map between the elements in the vocabulary of one or more desired sequences of speech transcription symbols and the continuity map. The parameters of the probability density functions are then trained to maximize the probabilities of the desired sequences of speech-transcription symbols. A new sequence of speech codes is then input to the continuity map having the trained first and second probability function parameters. A smooth path is identified on the continuity map that has the maximum probability for the new sequence of speech codes. The probability of each speech transcription symbol for each input speech code can then be output.

  1. XID+: Next generation XID development

    NASA Astrophysics Data System (ADS)

    Hurley, Peter

    2017-04-01

    XID+ is a prior-based source extraction tool which carries out photometry in the Herschel SPIRE (Spectral and Photometric Imaging Receiver) maps at the positions of known sources. It uses a probabilistic Bayesian framework that provides a natural way to include prior information, and uses the Bayesian inference tool Stan to obtain the full posterior probability distribution on flux estimates.
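
    A minimal linear-Gaussian analogue of prior-based photometry, for intuition only: the map is modeled as known point-response footprints at the known source positions times unknown fluxes plus noise, and a Gaussian prior gives a closed-form flux posterior. XID+ itself builds the model in Stan with more realistic priors; the footprints and numbers here are invented.

    ```python
    # Linear-Gaussian analogue of prior-based photometry: pixels
    # d = P @ f + noise, with P the known point-response footprints of the
    # known source positions and f the unknown fluxes. A Gaussian prior gives
    # a closed-form flux posterior. XID+ itself builds a Stan model with more
    # realistic priors; footprints and numbers here are invented.
    import numpy as np

    rng = np.random.default_rng(7)
    n_pix, n_src = 200, 3
    P = np.abs(rng.normal(size=(n_pix, n_src))) * 0.1   # fake footprints
    f_true = np.array([5.0, 1.0, 3.0])                  # true fluxes
    sigma = 0.2
    d = P @ f_true + rng.normal(scale=sigma, size=n_pix)

    prior_var = 100.0                                   # weak zero-mean prior
    A = P.T @ P / sigma ** 2 + np.eye(n_src) / prior_var   # posterior precision
    cov = np.linalg.inv(A)
    mean = cov @ (P.T @ d / sigma ** 2)
    for k in range(n_src):
        print(f"source {k}: flux = {mean[k]:.2f} +/- {np.sqrt(cov[k, k]):.2f}")
    ```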

  2. Development and application of an empirical probability distribution for the prediction error of re-entry body maximum dynamic pressure

    NASA Technical Reports Server (NTRS)

    Lanzi, R. James; Vincent, Brett T.

    1993-01-01

    The relationship between actual and predicted re-entry maximum dynamic pressure is characterized using a probability density function and a cumulative distribution function derived from sounding rocket flight data. This paper explores the properties of this distribution and demonstrates applications of these data with observed sounding rocket re-entry body damage characteristics to assess probabilities of sustaining various levels of heating damage. The results from this paper effectively bridge the gap existing in sounding rocket re-entry analysis between the known damage level/flight environment relationships and the predicted flight environment.
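
    A hedged sketch of how such an empirical distribution can be used: given sample ratios of actual to predicted maximum dynamic pressure, the empirical CDF converts a damage threshold into an exceedance probability. The ratios, predicted value and thresholds below are invented stand-ins for the sounding rocket data.

    ```python
    # Using an empirical distribution of prediction error (ratio of actual to
    # predicted maximum dynamic pressure) to assess exceedance probabilities.
    # The ratios, predicted value and thresholds are invented stand-ins.
    import numpy as np

    ratios = np.array([0.91, 1.04, 0.98, 1.12, 0.95, 1.21, 1.00,
                       0.88, 1.07, 1.15, 0.97, 1.03, 0.93, 1.09])

    def exceedance_prob(threshold, q_pred):
        # P(actual q_max > threshold) from the empirical CDF of the ratio.
        return np.mean(ratios > threshold / q_pred)

    q_pred = 40.0                        # predicted max dynamic pressure (kPa)
    for q_dmg in [42.0, 46.0, 50.0]:     # heating-damage thresholds (kPa)
        print(f"P(q_max > {q_dmg}) ~ {exceedance_prob(q_dmg, q_pred):.2f}")
    ```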

  3. To P or Not to P: Backing Bayesian Statistics.

    PubMed

    Buchinsky, Farrel J; Chadha, Neil K

    2017-12-01

    In biomedical research, it is imperative to differentiate chance variation from truth before we generalize what we see in a sample of subjects to the wider population. For decades, we have relied on null hypothesis significance testing, where we calculate P values for our data to decide whether to reject a null hypothesis. This methodology is subject to substantial misinterpretation and errant conclusions. Instead of working backward by calculating the probability of our data if the null hypothesis were true, Bayesian statistics allow us instead to work forward, calculating the probability of our hypothesis given the available data. This methodology gives us a mathematical means of incorporating our "prior probabilities" from previous study data (if any) to produce new "posterior probabilities." Bayesian statistics tell us how confidently we should believe what we believe. It is time to embrace and encourage their use in our otolaryngology research.

  4. Probabilistic Inference: Task Dependency and Individual Differences of Probability Weighting Revealed by Hierarchical Bayesian Modeling

    PubMed Central

    Boos, Moritz; Seer, Caroline; Lange, Florian; Kopp, Bruno

    2016-01-01

    Cognitive determinants of probabilistic inference were examined using hierarchical Bayesian modeling techniques. A classic urn-ball paradigm served as experimental strategy, involving a factorial two (prior probabilities) by two (likelihoods) design. Five computational models of cognitive processes were compared with the observed behavior. Parameter-free Bayesian posterior probabilities and parameter-free base rate neglect provided inadequate models of probabilistic inference. The introduction of distorted subjective probabilities yielded more robust and generalizable results. A general class of (inverted) S-shaped probability weighting functions has been proposed; however, the possibility of large differences in probability distortions not only across experimental conditions, but also across individuals, seems critical for the model's success. It also seems advantageous to consider individual differences in parameters of probability weighting as being sampled from weakly informative prior distributions of individual parameter values. Thus, the results from hierarchical Bayesian modeling converge with previous results in revealing that probability weighting parameters show considerable task dependency and individual differences. Methodologically, this work exemplifies the usefulness of hierarchical Bayesian modeling techniques for cognitive psychology. Theoretically, human probabilistic inference might be best described as the application of individualized strategic policies for Bayesian belief revision. PMID:27303323
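
    One common inverted-S family is Prelec's weighting function w(p) = exp(-beta * (-ln p)^alpha), which overweights small probabilities and underweights large ones for alpha < 1; varying (alpha, beta) across subjects mimics the individual differences discussed above. This family and the parameter values are illustrative, not necessarily those fitted in the study.

    ```python
    # Prelec (1998) inverted-S weighting, w(p) = exp(-beta * (-ln p)**alpha):
    # alpha < 1 overweights small and underweights large probabilities, and
    # varying (alpha, beta) across subjects mimics individual differences.
    # The family and values are illustrative, not necessarily the study's.
    import numpy as np

    def prelec(p, alpha, beta=1.0):
        return np.exp(-beta * (-np.log(p)) ** alpha)

    p = np.array([0.01, 0.1, 0.3, 0.5, 0.7, 0.9, 0.99])
    for alpha in [0.5, 0.7, 1.0]:        # alpha = 1, beta = 1 is no distortion
        print(f"alpha = {alpha}:", np.round(prelec(p, alpha), 3))
    ```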

  5. Intervals for posttest probabilities: a comparison of 5 methods.

    PubMed

    Mossman, D; Berger, J O

    2001-01-01

    Several medical articles discuss methods of constructing confidence intervals for single proportions and the likelihood ratio, but scant attention has been given to the systematic study of intervals for the posterior odds, or the positive predictive value, of a test. The authors describe 5 methods of constructing confidence intervals for posttest probabilities when estimates of sensitivity, specificity, and the pretest probability of a disorder are derived from empirical data. They then evaluate each method to determine how well the intervals' coverage properties correspond to their nominal value. When the estimates of pretest probabilities, sensitivity, and specificity are derived from more than 80 subjects and are not close to 0 or 1, all methods generate intervals with appropriate coverage properties. When these conditions are not met, however, the best-performing method is an objective Bayesian approach implemented by a simple simulation using a spreadsheet. Physicians and investigators can generate accurate confidence intervals for posttest probabilities in small-sample situations using the objective Bayesian approach.
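
    A hedged sketch of the simulation approach the authors favor (they implemented it in a spreadsheet): draw sensitivity, specificity, and pretest probability from Jeffreys Beta posteriors given their empirical counts, push each draw through Bayes' theorem, and read off percentile intervals for the posttest probability. The counts below are invented.

    ```python
    # Simulation interval for a posttest probability: draw sensitivity,
    # specificity and pretest probability from Jeffreys Beta posteriors given
    # their (invented) counts, push each draw through Bayes' theorem, and take
    # percentiles of the resulting positive predictive values.
    import numpy as np

    rng = np.random.default_rng(0)
    N = 100000
    sens = rng.beta(45 + 0.5, 5 + 0.5, N)    # 45/50 true positives observed
    spec = rng.beta(90 + 0.5, 10 + 0.5, N)   # 90/100 true negatives observed
    prev = rng.beta(15 + 0.5, 85 + 0.5, N)   # 15/100 with the disorder

    ppv = sens * prev / (sens * prev + (1.0 - spec) * (1.0 - prev))
    lo, med, hi = np.percentile(ppv, [2.5, 50.0, 97.5])
    print(f"posttest probability after a positive test: {med:.2f} [{lo:.2f}, {hi:.2f}]")
    ```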

  6. Effect of Varying Posterior Cruciate Ligament (PCL) Recessions on Kinematics and Ligament Strains with Cruciate Retaining Total Knee Prostheses.

    PubMed

    Schwarzkopf, Ran; Laster, Scott K; Cross, Michael B; Lenz, Nathaniel M

    2016-04-01

    Proper ligament tension in flexion with posterior cruciate retaining (CR) total knee arthroplasty (TKA) has long been associated with clinical success. The purpose of this study was to determine the effect of varying levels of posterior cruciate ligament (PCL) release on tibiofemoral kinematics and PCL strain. A computational analysis was performed in which varying levels of PCL release were simulated, and tibiofemoral kinematics were evaluated. The maximum PCL strain was determined for each bundle to evaluate the risk of rupture based on the failure strain. The femoral AP position shifted anteriorly as the PCL stiffness was reduced. PCL strain in both bundles increased as stiffness was reduced. The model predicts that the AL bundle should not rupture for a 75% release. The risk of PM bundle rupture is greater than that of the AL bundle. Our findings suggest that a partial PCL release impacts tibiofemoral kinematics and ligament tension and strain. The relationship is dynamic, and care should be taken when seeking optimal balance intra-operatively.

  7. Biomechanical comparison of single-row arthroscopic rotator cuff repair technique versus transosseous repair technique.

    PubMed

    Tocci, Stephen L; Tashjian, Robert Z; Leventhal, Evan; Spenciner, David B; Green, Andrew; Fleming, Braden C

    2008-01-01

    This study determined the effect of tear size on gap formation of single-row simple-suture arthroscopic rotator cuff repair (ARCR) vs transosseous Mason-Allen suture open RCR (ORCR) in 13 pairs of human cadaveric shoulders. A massive tear was created in 6 pairs and a large tear in 7. Repairs were cyclically tested in low-load and high-load conditions, with no significant difference in gap formation. Under low-load, gapping was greater in massive tears. Under high-load, there was a trend toward increased gap with ARCR for large tears. All repairs of massive tears failed in high-load. Gapping was greater posteriorly in massive tears for both techniques. Gap formation of a modeled RCR depends upon the tear size. ARCR of larger tears may have higher failure rates than ORCR, and the posterior aspect appears to be the site of maximum gapping. Specific attention should be directed toward maximizing initial fixation of larger rotator cuff tears, especially at the posterior aspect.

  8. Early versus delayed internal urethrotomy for recurrent urethral stricture after urethroplasty in children.

    PubMed

    Hosseini, Seyyed Yousef; Safarinejad, Mohammad Reza

    2005-01-01

    Our aim was to evaluate the results of early versus delayed internal urethrotomy for management of recurrent urethral strictures after posterior urethroplasty in children. Twenty boys with proven posterior urethral strictures were treated by perineal posterior urethroplasty. Of these, 12 required internal urethrotomy. Each radiograph demonstrated a patent but irregular urethra with a decrease in diameter at the point of repair (fair results). Patients were then divided into 2 groups: 6 underwent early (within 6 weeks from urethroplasty), and 6 underwent delayed (after 12 weeks from urethroplasty), internal urethrotomy with the cold knife as a complementary treatment. The groups were comparable in terms of patient age, etiology of the primary urethral stricture, number of recurrences, length and site of the actual stricture, and preoperative maximum flow rate. Mean follow-up was 5 years. Kaplan-Meier analyses showed that the stricture-free rate was 66.6% after early, and 33.3% after delayed, internal urethrotomy (P = .03). Early internal urethrotomy should be considered in boys with recurrent urethral stricture after urethroplasty.

  9. Negotiating Multicollinearity with Spike-and-Slab Priors

    PubMed Central

    Ročková, Veronika

    2014-01-01

    In multiple regression under the normal linear model, the presence of multicollinearity is well known to lead to unreliable and unstable maximum likelihood estimates. This can be particularly troublesome for the problem of variable selection where it becomes more difficult to distinguish between subset models. Here we show how adding a spike-and-slab prior mitigates this difficulty by filtering the likelihood surface into a posterior distribution that allocates the relevant likelihood information to each of the subset model modes. For identification of promising high posterior models in this setting, we consider three EM algorithms, the fast closed form EMVS version of Rockova and George (2014) and two new versions designed for variants of the spike-and-slab formulation. For a multimodal posterior under multicollinearity, we compare the regions of convergence of these three algorithms. Deterministic annealing versions of the EMVS algorithm are seen to substantially mitigate this multimodality. A single simple running example is used for illustration throughout. PMID:25419004

  10. [Dry transconjunctival sutureless 25-gauge vitrectomy in the treatment of pediatric cataract].

    PubMed

    You, Cai-yun; Xie, Li-xin

    2009-08-01

    Posterior capsule opacification is the most frequent complication of pediatric cataract surgery. To prevent posterior capsule opacification, primary phacoemulsification, posterior capsulotomy and anterior vitrectomy with intraocular lens implantation is the preferred method in the treatment of pediatric cataract. The 18-gauge anterior vitrectomy cutter, with a maximum cutting rate of 600 cuts/min and simultaneous cutting, irrigation and aspiration functions, is associated with more complications and poorer outcomes. In 20-gauge surgery, pars plana vitrectomy is performed through a two-port sclerotomy; the irrigation increases movement of the vitreous, and a 20-gauge sclerotomy needs sutures for closure. In 25-gauge surgery, the vitreous cutter can be introduced into the vitreous cavity directly through the conjunctiva and sclera. The stab incision is roughly half the size of that of the 20-gauge cutter; therefore, the sclerotomy incision can be left unsutured. Surgery with dry transconjunctival sutureless 25-gauge vitrectomy may decrease the requirement for secondary membrane surgery and the risk of retinal detachment. The application of dry transconjunctival sutureless 25-gauge vitrectomy in the treatment of pediatric cataract is reviewed.

  11. Modelling maximum river flow by using Bayesian Markov Chain Monte Carlo

    NASA Astrophysics Data System (ADS)

    Cheong, R. Y.; Gabda, D.

    2017-09-01

    Analysis of flood trends is vital because flooding threatens human life, finances, the environment, and security. Data on annual maximum river flows in Sabah were fitted to the generalized extreme value (GEV) distribution. The maximum likelihood estimator (MLE) arises naturally when working with the GEV distribution. However, previous research has shown that MLE provides unstable results, especially for small sample sizes. In this study, we used Bayesian Markov Chain Monte Carlo (MCMC) based on the Metropolis-Hastings algorithm to estimate the GEV parameters. Bayesian MCMC is a statistical inference method that estimates parameters from the posterior distribution obtained via Bayes’ theorem. The Metropolis-Hastings algorithm is used to cope with the high-dimensional state space encountered in the Monte Carlo method. This approach also accounts for more of the uncertainty in parameter estimation and thereby yields better predictions of maximum river flow in Sabah.
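    As a minimal sketch of this approach (flat priors, a naive random-walk proposal, and illustrative starting values and step size; not the study's settings), the sampler below targets the GEV posterior using scipy's genextreme, whose shape parameter c equals -xi in the usual GEV convention:

```python
import numpy as np
from scipy.stats import genextreme

def log_post(theta, x):
    """Log-posterior for GEV(mu, sigma, xi) with flat priors; sigma is
    sampled on the log scale to keep it positive."""
    mu, log_sigma, xi = theta
    lp = genextreme.logpdf(x, c=-xi, loc=mu, scale=np.exp(log_sigma)).sum()
    return lp if np.isfinite(lp) else -np.inf

def metropolis_hastings(x, n_iter=20000, step=0.05, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.array([x.mean(), np.log(x.std()), 0.1])  # crude initialization
    lp, samples = log_post(theta, x), []
    for _ in range(n_iter):
        prop = theta + step * rng.standard_normal(3)    # random-walk proposal
        lp_prop = log_post(prop, x)
        if np.log(rng.random()) < lp_prop - lp:         # Metropolis acceptance
            theta, lp = prop, lp_prop
        samples.append(theta.copy())
    return np.array(samples)
```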

  12. [Isolated severe neurologic disorders in post-partum: posterior reversible encephalopathy syndrome].

    PubMed

    Wernet, A; Benayoun, L; Yver, C; Bruno, O; Mantz, J

    2007-01-01

    Just after Caesarean section for twin pregnancy and fetopelvic disproportion, a woman presented with severe headaches and arterial hypertension, then blurred vision, then generalised seizures. There was no oedematous syndrome, proteinuria was negative, ASAT was 1.5 N, and the platelet count was 120,000/mm(3). Cerebral CT scan was normal. Posterior reversible encephalopathy syndrome (PRES) was diagnosed on MRI. A second MRI performed at day 9 showed complete regression of the cerebral lesions while the patient was taking antihypertensive and antiepileptic drugs. PRES must be considered in the presence of post-partum central neurological symptoms, even in the absence of classical signs of pre-eclampsia such as proteinuria. PRES and eclampsia probably share common pathophysiological pathways. Their management and prognosis appear identical.

  13. Kinetics of badminton lunges in four directions.

    PubMed

    Hong, Youlian; Wang, Shao Jun; Lam, Wing Kai; Cheung, Jason Tak Man

    2014-02-01

    The lunge is the most fundamental skill in badminton competitions. Fifteen university-level male badminton players performed lunge maneuvers in four directions, namely, right-forward, left-forward, right-backward, and left-backward, while wearing two different brands of badminton shoes. The test compared the kinetics of badminton shoes in performing typical lunge maneuvers. A force plate and an insole measurement system measured the ground reaction forces and plantar pressures. These measurements were compared across all lunge maneuvers. The left-forward lunge generated significantly higher first vertical impact force (2.34 ± 0.52 BW) than the right-backward (2.06 ± 0.60 BW) and left-backward lunges (1.78 ± 0.44 BW); higher second vertical impact force (2.44 ± 0.51 BW) than the left-backward lunge (2.07 ± 0.38 BW); and higher maximum anterior-posterior shear force (1.48 ± 0.36 BW) than the left-backward lunge (1.18 ± 0.38 BW). Compared with the other lunge directions, the left-forward lunge showed higher mean maximum vertical impact and anterior-posterior shear forces, higher respective maximum loading rates, and higher plantar pressure at the total foot and heel regions. Therefore, the left-forward lunge is a critical maneuver for badminton biomechanics and related footwear research because of the high loading magnitude generated during heel impact.

  14. Diversity Dynamics in Nymphalidae Butterflies: Effect of Phylogenetic Uncertainty on Diversification Rate Shift Estimates

    PubMed Central

    Peña, Carlos; Espeland, Marianne

    2015-01-01

    The species rich butterfly family Nymphalidae has been used to study evolutionary interactions between plants and insects. Theories of insect-hostplant dynamics predict accelerated diversification due to key innovations. In evolutionary biology, analysis of maximum credibility trees in the software MEDUSA (modelling evolutionary diversity using stepwise AIC) is a popular method for estimation of shifts in diversification rates. We investigated whether phylogenetic uncertainty can produce different results by extending the method across a random sample of trees from the posterior distribution of a Bayesian run. Using the MultiMEDUSA approach, we found that phylogenetic uncertainty greatly affects diversification rate estimates. Different trees produced diversification rates ranging from high values to almost zero for the same clade, and both significant rate increase and decrease in some clades. Only four out of 18 significant shifts found on the maximum clade credibility tree were consistent across most of the sampled trees. Among these, we found accelerated diversification for Ithomiini butterflies. We used the binary speciation and extinction model (BiSSE) and found that a hostplant shift to Solanaceae is correlated with increased net diversification rates in Ithomiini, congruent with the diffuse cospeciation hypothesis. Our results show that taking phylogenetic uncertainty into account when estimating net diversification rate shifts is of great importance, as very different results can be obtained when using the maximum clade credibility tree and other trees from the posterior distribution. PMID:25830910

  15. Diversity dynamics in Nymphalidae butterflies: effect of phylogenetic uncertainty on diversification rate shift estimates.

    PubMed

    Peña, Carlos; Espeland, Marianne

    2015-01-01

    The species rich butterfly family Nymphalidae has been used to study evolutionary interactions between plants and insects. Theories of insect-hostplant dynamics predict accelerated diversification due to key innovations. In evolutionary biology, analysis of maximum credibility trees in the software MEDUSA (modelling evolutionary diversity using stepwise AIC) is a popular method for estimation of shifts in diversification rates. We investigated whether phylogenetic uncertainty can produce different results by extending the method across a random sample of trees from the posterior distribution of a Bayesian run. Using the MultiMEDUSA approach, we found that phylogenetic uncertainty greatly affects diversification rate estimates. Different trees produced diversification rates ranging from high values to almost zero for the same clade, and both significant rate increase and decrease in some clades. Only four out of 18 significant shifts found on the maximum clade credibility tree were consistent across most of the sampled trees. Among these, we found accelerated diversification for Ithomiini butterflies. We used the binary speciation and extinction model (BiSSE) and found that a hostplant shift to Solanaceae is correlated with increased net diversification rates in Ithomiini, congruent with the diffuse cospeciation hypothesis. Our results show that taking phylogenetic uncertainty into account when estimating net diversification rate shifts is of great importance, as very different results can be obtained when using the maximum clade credibility tree and other trees from the posterior distribution.

  16. Evidence that multiple genetic variants of MC4R play a functional role in the regulation of energy expenditure and appetite in Hispanic children

    PubMed Central

    Cole, Shelley A; Voruganti, V Saroja; Cai, Guowen; Haack, Karin; Kent, Jack W; Blangero, John; Comuzzie, Anthony G; McPherson, John D; Gibbs, Richard A

    2010-01-01

    Background: Melanocortin-4-receptor (MC4R) haploinsufficiency is the most common form of monogenic obesity; however, the frequency of MC4R variants and their functional effects in general populations remain uncertain. Objective: The aim was to identify and characterize the effects of MC4R variants in Hispanic children. Design: MC4R was resequenced in 376 parents, and the identified single nucleotide polymorphisms (SNPs) were genotyped in 613 parents and 1016 children from the Viva la Familia cohort. Measured genotype analysis (MGA) tested associations between SNPs and phenotypes. Bayesian quantitative trait nucleotide (BQTN) analysis was used to infer the most likely functional polymorphisms influencing obesity-related traits. Results: Seven rare SNPs in coding and 18 SNPs in flanking regions of MC4R were identified. MGA showed suggestive associations between MC4R variants and body size, adiposity, glucose, insulin, leptin, ghrelin, energy expenditure, physical activity, and food intake. BQTN analysis identified SNP 1704 in a predicted micro-RNA target sequence in the downstream flanking region of MC4R as a strong, probable functional variant influencing total, sedentary, and moderate activities with posterior probabilities of 1.0. SNP 2132 was identified as a variant with a high probability (1.0) of exerting a functional effect on total energy expenditure and sleeping metabolic rate. SNP rs34114122 was selected as having likely functional effects on the appetite hormone ghrelin, with a posterior probability of 0.81. Conclusion: This comprehensive investigation provides strong evidence that MC4R genetic variants are likely to play a functional role in the regulation of weight, not only through energy intake but through energy expenditure. PMID:19889825

  17. Retinoic acid and Wnt/beta-catenin have complementary roles in anterior/posterior patterning embryos of the basal chordate amphioxus.

    PubMed

    Onai, Takayuki; Lin, Hsiu-Chin; Schubert, Michael; Koop, Demian; Osborne, Peter W; Alvarez, Susana; Alvarez, Rosana; Holland, Nicholas D; Holland, Linda Z

    2009-08-15

    A role for Wnt/beta-catenin signaling in axial patterning has been demonstrated in animals as basal as cnidarians, while roles in axial patterning for retinoic acid (RA) probably evolved in the deuterostomes and may be chordate-specific. In vertebrates, these two pathways interact both directly and indirectly. To investigate the evolutionary origins of interactions between these two pathways, we manipulated Wnt/beta-catenin and RA signaling in the basal chordate amphioxus during the gastrula stage, which is the RA-sensitive period for anterior/posterior (A/P) patterning. The results show that Wnt/beta-catenin and RA signaling have distinctly different roles in patterning the A/P axis of the amphioxus gastrula. Wnt/beta-catenin specifies the identity of the ends of the embryo (high Wnt = posterior; low Wnt = anterior) but not intervening positions. Thus, upregulation of Wnt/beta-catenin signaling induces ectopic expression of posterior markers at the anterior tip of the embryo. In contrast, RA specifies position along the A/P axis, but not the identity of the ends of the embryo: increased RA signaling strongly affects the domains of Hox expression along the A/P axis but has little or no effect on the expression of either anterior or posterior markers. Although the two pathways may both influence such things as specification of neuronal identity, interactions between them in A/P patterning appear to be minimal.

  18. Two-speed phacoemulsification for soft cataracts using optimized parameters and procedure step toolbar with the CENTURION Vision System and Balanced Tip

    PubMed Central

    Davison, James A

    2015-01-01

    Purpose To present a cause of posterior capsule aspiration and a technique using optimized parameters to prevent it from happening when operating soft cataracts. Patients and methods A prospective list of posterior capsule aspiration cases was kept over 4,062 consecutive cases operated with the Alcon CENTURION machine and Balanced Tip. Video analysis of one case of posterior capsule aspiration was accomplished. A surgical technique was developed using empirically derived machine parameters and customized setting-selection procedure step toolbar to reduce the pace of aspiration of soft nuclear quadrants in order to prevent capsule aspiration. Results Two cases out of 3,238 experienced posterior capsule aspiration before use of the soft quadrant technique. Video analysis showed an attractive vortex effect with capsule aspiration occurring in 1/5 of a second. A soft quadrant removal setting was empirically derived which had a slower pace and seemed more controlled with no capsule aspiration occurring in the subsequent 824 cases. The setting featured simultaneous linear control from zero to preset maximums for: aspiration flow, 20 mL/min; and vacuum, 400 mmHg, with the addition of torsional tip amplitude up to 20% after the fluidic maximums were achieved. A new setting selection procedure step toolbar was created to increase intraoperative flexibility by providing instantaneous shifting between the soft and normal settings. Conclusion A technique incorporating a reduced pace for soft quadrant acquisition and aspiration can be accomplished through the use of a dedicated setting of integrated machine parameters. Toolbar placement of the procedure button next to the normal setting procedure button provides the opportunity to instantaneously alternate between the two settings. Simultaneous surgeon control over vacuum, aspiration flow, and torsional tip motion may make removal of soft nuclear quadrants more efficient and safer. PMID:26355695

  19. Spatial distribution of vaginal closure pressures of continent and stress urinary incontinent women.

    PubMed

    Peng, Qiyu; Jones, Ruth; Shishido, Keiichi; Omata, Sadao; Constantinou, Christos E

    2007-11-01

    Clinically the strength of the contraction of the female pelvic floor is qualitatively evaluated by vaginal tactile palpation. We therefore developed a probe to enable the quantitative evaluation of the closure pressures along the vagina. Four force sensors mounted on the four orthogonal directions of an intra-vaginal probe were used to measure the vaginal pressure profile (VPP) along the vaginal wall. Clinical experiments on 23 controls and 10 patients with stress urinary incontinence (SUI) were performed using the probe to test the hypothesis that the strength of pelvic floor muscle (PFM) contractions, imposed by voluntary contraction, is related to urinary continence. The results show that VPPs, characterized in terms of pressure distribution on the anterior and posterior vaginal walls, are significantly greater than those in the left and right vaginal walls. When the PFM contracted, the positions of the maximum posterior pressures in continent females and SUI patients were 0.63 ± 0.15 cm and 1.19 ± 0.2 cm proximal from their peak points of anterior pressure, which are 1.52 ± 0.09 cm and 1.69 ± 0.13 cm proximal from the introitus of the vagina, respectively. The statistical analysis shows that the maximum posterior vaginal pressures of the controls were significantly greater than those of the SUI patients both at rest (continent: 3.4 ± 0.3 N·cm⁻², SUI: 2.01 ± 0.36 N·cm⁻², p<0.05) and during PFM contraction (continent: 4.18 ± 0.26 N·cm⁻², SUI: 2.25 ± 0.41 N·cm⁻², p<0.01). In addition, the difference between the posterior and anterior vaginal walls is significantly increased when the controls contract the PFM. By contrast, there are no significant differences in the SUI group. The results show that the VPP measured by the prototype probe can be used to quantitatively evaluate the strength of the PFM, which is a clinical index for the diagnosis or assessment of female SUI.

  20. A 3D finite element model to investigate prosthetic interface stresses of different posterior tibial slope.

    PubMed

    Shen, Yi; Li, Xiaomiao; Fu, Xiaodong; Wang, Weili

    2015-11-01

    Posterior tibial slope, which is created during proximal tibial resection in total knee arthroplasty, has emerged as an important factor in the mechanics of the knee joint and the surgical outcome. However, the ideal degree of posterior tibial slope for recovery of knee joint function and prevention of complications remains controversial and may vary between racial groups. The objective of this paper is to investigate the effects of posterior tibial slope on contact stresses in the tibial polyethylene component of total knee prostheses. Three-dimensional finite element analysis was used to calculate contact stresses in the tibial polyethylene component of total knee prostheses subjected to a compressive load. The 3D finite element model of the total knee prosthesis was constructed from images produced by 3D scanning technology. Stresses in the tibial polyethylene component were calculated for four different posterior tibial slopes (0°, 3°, 6° and 9°). The 3D finite element model of the total knee prosthesis was well validated. We found that the stress distribution in the polyethylene, as evaluated by the distributions of the von Mises stress, the maximum principal stress, the minimum principal stress and the Cpress, was more uniform with 3° and 6° posterior tibial slopes than with 0° and 9° posterior tibial slopes. Moreover, the peaks of the above stresses and their trends with increasing knee flexion were more favourable with 3° and 6° posterior slopes. The results suggested that a tibial component inclination of 7°-10° might be favourable so far as the stress distribution is concerned; an appropriate inclination can also decrease polyethylene wear. The posterior tibial slope in Chinese patients is larger than in Western patients, yet the prostheses in current domestic use are imported from the West, so the posterior bone cuts they require can shorten the service life of the prosthesis. This result is of clinical significance in guiding the orthopaedic surgeon toward the optimal bone-cutting angle.

  1. Is it necessary to use the entire root as a donor when transferring contralateral C7 nerve to repair median nerve?

    PubMed

    Gao, Kai-Ming; Lao, Jie; Guan, Wen-Jie; Hu, Jing-Jing

    2018-01-01

    If a partial contralateral C7 nerve is transferred to a recipient injured nerve, results are not satisfactory. However, if an entire contralateral C7 nerve is used to repair two nerves, both recipient nerves show good recovery. These findings seem contradictory, as the above two methods use the same donor nerve; only the cutting method of the contralateral C7 nerve is different. To verify whether this can actually result in different repair effects, we divided rats with right total brachial plexus injury into three groups. In the entire root group, the entire contralateral C7 root was transected and transferred to the median nerve of the affected limb. In the posterior division group, only the posterior division of the contralateral C7 root was transected and transferred to the median nerve. In the entire root + posterior division group, the entire contralateral C7 root was transected but only the posterior division was transferred to the median nerve. After neurectomy, the median nerve was repaired on the affected side in the three groups. At 8, 12, and 16 weeks postoperatively, electrophysiological examination showed that maximum amplitude, latency, muscle tetanic contraction force, and muscle fiber cross-sectional area of the flexor digitorum superficialis muscle were significantly better in the entire root and entire root + posterior division groups than in the posterior division group. No significant difference was found between the entire root and entire root + posterior division groups. Counts of myelinated axons in the median nerve were greater in the entire root group than in the entire root + posterior division group, which were greater than in the posterior division group. We conclude that for the same recipient nerve, harvesting of the entire contralateral C7 root achieved significantly better recovery than partial harvesting, even if only part of the entire root was used for transfer. This result indicates that the entire root should be used as a donor when transferring the contralateral C7 nerve.

  2. Is it necessary to use the entire root as a donor when transferring contralateral C7 nerve to repair median nerve?

    PubMed Central

    Gao, Kai-ming; Lao, Jie; Guan, Wen-jie; Hu, Jing-jing

    2018-01-01

    If a partial contralateral C7 nerve is transferred to a recipient injured nerve, results are not satisfactory. However, if an entire contralateral C7 nerve is used to repair two nerves, both recipient nerves show good recovery. These findings seem contradictory, as the above two methods use the same donor nerve; only the cutting method of the contralateral C7 nerve is different. To verify whether this can actually result in different repair effects, we divided rats with right total brachial plexus injury into three groups. In the entire root group, the entire contralateral C7 root was transected and transferred to the median nerve of the affected limb. In the posterior division group, only the posterior division of the contralateral C7 root was transected and transferred to the median nerve. In the entire root + posterior division group, the entire contralateral C7 root was transected but only the posterior division was transferred to the median nerve. After neurectomy, the median nerve was repaired on the affected side in the three groups. At 8, 12, and 16 weeks postoperatively, electrophysiological examination showed that maximum amplitude, latency, muscle tetanic contraction force, and muscle fiber cross-sectional area of the flexor digitorum superficialis muscle were significantly better in the entire root and entire root + posterior division groups than in the posterior division group. No significant difference was found between the entire root and entire root + posterior division groups. Counts of myelinated axons in the median nerve were greater in the entire root group than in the entire root + posterior division group, which were greater than in the posterior division group. We conclude that for the same recipient nerve, harvesting of the entire contralateral C7 root achieved significantly better recovery than partial harvesting, even if only part of the entire root was used for transfer. This result indicates that the entire root should be used as a donor when transferring the contralateral C7 nerve. PMID:29451212

  3. Neural dynamics of reward probability coding: a Magnetoencephalographic study in humans

    PubMed Central

    Thomas, Julie; Vanni-Mercier, Giovanna; Dreher, Jean-Claude

    2013-01-01

    Prediction of future rewards and the discrepancy between actual and expected outcomes (prediction error) are crucial signals for adaptive behavior. In humans, a number of fMRI studies have demonstrated that reward probability modulates these two signals in a large brain network. Yet the spatio-temporal dynamics underlying the neural coding of reward probability remain unknown. Here, using magnetoencephalography, we investigated the neural dynamics of prediction and reward prediction error computations while subjects learned to associate cues of slot machines with monetary rewards at different probabilities. We showed that event-related magnetic fields (ERFs) arising from the visual cortex coded the expected reward value 155 ms after the cue, demonstrating that reward value signals emerge early in the visual stream. Moreover, a prediction error was reflected in an ERF peaking 300 ms after the rewarded outcome, whose amplitude decreased with higher reward probability. This prediction error signal was generated in a network including the anterior and posterior cingulate cortex. These findings pinpoint the spatio-temporal characteristics underlying reward probability coding. Together, our results provide insights into the neural dynamics underlying the ability to learn probabilistic stimulus-reward contingencies. PMID:24302894

  4. Forecasts of non-Gaussian parameter spaces using Box-Cox transformations

    NASA Astrophysics Data System (ADS)

    Joachimi, B.; Taylor, A. N.

    2011-09-01

    Forecasts of statistical constraints on model parameters using the Fisher matrix abound in many fields of astrophysics. The Fisher matrix formalism involves the assumption of Gaussianity in parameter space and hence fails to predict complex features of posterior probability distributions. Combining the standard Fisher matrix with Box-Cox transformations, we propose a novel method that accurately predicts arbitrary posterior shapes. The Box-Cox transformations are applied to parameter space to render it approximately multivariate Gaussian, and the Fisher matrix calculation is performed on the transformed parameters. We demonstrate that, after the Box-Cox parameters have been determined from an initial likelihood evaluation, the method correctly predicts changes in the posterior when varying various parameters of the experimental setup and the data analysis, at marginally higher computational cost than a standard Fisher matrix calculation. We apply the Box-Cox-Fisher formalism to forecast cosmological parameter constraints from future weak gravitational lensing surveys. The characteristic non-linear degeneracy between the matter density parameter and the normalization of matter density fluctuations is reproduced for several cases, and the capability of weak-lensing three-point statistics to break this degeneracy is investigated. Possible applications of Box-Cox transformations of posterior distributions are discussed, including the prospects for performing statistical data analysis steps in the transformed Gaussianized parameter space.
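    A toy sketch of the Gaussianization step, assuming hypothetical posterior draws of a positive parameter: scipy's boxcox chooses the transformation exponent by maximizing a normality likelihood, after which a Gaussian (Fisher-matrix-style) summary applies in the transformed space and intervals can be mapped back.

```python
import numpy as np
from scipy.stats import boxcox
from scipy.special import inv_boxcox

rng = np.random.default_rng(1)
draws = rng.lognormal(mean=-1.0, sigma=0.3, size=5000)  # stand-in for skewed posterior draws

z, lam = boxcox(draws)        # lambda maximizing the log-likelihood of normality
mu, sd = z.mean(), z.std()    # Gaussian description in the transformed space
lo, hi = inv_boxcox(np.array([mu - 2 * sd, mu + 2 * sd]), lam)  # back-transformed interval
print(lam, (lo, hi))
```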

  5. Estimating population size for Capercaillie (Tetrao urogallus L.) with spatial capture-recapture models based on genotypes from one field sample

    USGS Publications Warehouse

    Mollet, Pierre; Kery, Marc; Gardner, Beth; Pasinelli, Gilberto; Royle, Andy

    2015-01-01

    We conducted a survey of an endangered and cryptic forest grouse, the capercaillie Tetrao urogallus, based on droppings collected on two sampling occasions in eight forest fragments in central Switzerland in early spring 2009. We used genetic analyses to sex and individually identify birds. We estimated sex-dependent detection probabilities and population size using a modern spatial capture-recapture (SCR) model for the data from pooled surveys. A total of 127 capercaillie genotypes were identified (77 males, 46 females, and 4 of unknown sex). The SCR model yielded a total population size estimate (posterior mean) of 137.3 capercaillies (posterior sd 4.2, 95% CRI 130–147). The observed sex ratio was skewed towards males (0.63). The posterior mean of the sex ratio under the SCR model was 0.58 (posterior sd 0.02, 95% CRI 0.54–0.61), suggesting a male-biased sex ratio in our study area. A subsampling simulation study indicated that a reduced sampling effort representing 75% of the actual detections would still yield practically acceptable estimates of total size and sex ratio in our population. Hence, field work and financial effort could be reduced without compromising accuracy when the SCR model is used to estimate key population parameters of cryptic species.
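    At the heart of most SCR observation models is a distance-decay encounter probability; a minimal sketch with a half-normal form and illustrative baseline detectability g0 and spatial scale sigma (not the values fitted in this study):

```python
import numpy as np

def detection_prob(dist_m, g0=0.8, sigma=300.0):
    """Half-normal SCR encounter model: probability of detecting an
    individual whose activity centre lies dist_m metres from a trap.
    g0 and sigma are illustrative, not estimates from the survey."""
    return g0 * np.exp(-dist_m**2 / (2.0 * sigma**2))

print(detection_prob(np.array([0.0, 300.0, 900.0])))
```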

  6. Dimension-independent likelihood-informed MCMC

    DOE PAGES

    Cui, Tiangang; Law, Kody J. H.; Marzouk, Youssef M.

    2015-10-08

    Many Bayesian inference problems require exploring the posterior distribution of high-dimensional parameters that represent the discretization of an underlying function. Our work introduces a family of Markov chain Monte Carlo (MCMC) samplers that can adapt to the particular structure of a posterior distribution over functions. There are two distinct lines of research that intersect in the methods we develop here. First, we introduce a general class of operator-weighted proposal distributions that are well defined on function space, such that the performance of the resulting MCMC samplers is independent of the discretization of the function. Second, by exploiting local Hessian information and any associated low-dimensional structure in the change from prior to posterior distributions, we develop an inhomogeneous discretization scheme for the Langevin stochastic differential equation that yields operator-weighted proposals adapted to the non-Gaussian structure of the posterior. The resulting dimension-independent and likelihood-informed (DILI) MCMC samplers may be useful for a large class of high-dimensional problems where the target probability measure has a density with respect to a Gaussian reference measure. Finally, we use two nonlinear inverse problems to demonstrate the efficiency of these DILI samplers: an elliptic PDE coefficient inverse problem and path reconstruction in a conditioned diffusion.
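    For intuition, here is a sketch of the preconditioned Crank-Nicolson (pCN) proposal, a simpler, well-known dimension-independent relative of these samplers; because the proposal preserves the Gaussian reference measure, the acceptance ratio involves only the likelihood, so the acceptance rate does not collapse as the discretization is refined. This sketch works in white-noise coordinates of the prior and omits DILI's likelihood-informed operator weighting.

```python
import numpy as np

def pcn_step(u, log_like, beta=0.2, rng=None):
    """One pCN step for a target with a standard Gaussian prior on u
    (white-noise coordinates); beta in (0, 1] controls the step size."""
    rng = rng or np.random.default_rng()
    v = np.sqrt(1.0 - beta**2) * u + beta * rng.standard_normal(u.size)
    if np.log(rng.random()) < log_like(v) - log_like(u):  # likelihood-only ratio
        return v, True
    return u, False
```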

  7. Bayesian Travel Time Inversion adopting Gaussian Process Regression

    NASA Astrophysics Data System (ADS)

    Mauerberger, S.; Holschneider, M.

    2017-12-01

    A major application in seismology is the determination of seismic velocity models. Travel time measurements put an integral constraint on the velocity between source and receiver. We provide insight into travel time inversion from a correlation-based Bayesian point of view, adopting the concept of Gaussian process regression to estimate a velocity model. The non-linear travel time integral is approximated by a first-order Taylor expansion. A heuristic covariance describes correlations among observations and the a priori model. This approach enables us to assess a proxy of the Bayesian posterior distribution at ordinary computational cost: no multidimensional numerical integration and no excessive sampling are necessary. Instead of stacking the data, we suggest progressively building up the posterior distribution; incorporating only a single observation at a time compensates for the deficit of linearization. As a result, the most probable model is given by the posterior mean, whereas uncertainties are described by the posterior covariance. As a proof of concept, a purely 1-D synthetic model is addressed: a single source accompanied by multiple receivers is considered on top of a model containing a discontinuity. We consider travel times of both phases, the direct and the reflected wave, corrupted by noise. The regions left and right of the interface are assumed independent, with the squared exponential kernel serving as covariance.
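    A minimal numerical sketch of this construction: one linearized travel-time observation is a linear functional (a discretized path integral) of the slowness field, so conditioning the Gaussian process reduces to the standard Gaussian formulas. The ray geometry, kernel hyperparameters, noise level, and zero-mean prior below are all illustrative.

```python
import numpy as np

def sq_exp(xa, xb, tau=1.0, ell=0.5):
    """Squared exponential covariance between two sets of positions."""
    return tau**2 * np.exp(-0.5 * (xa[:, None] - xb[None, :])**2 / ell**2)

x = np.linspace(0.0, 10.0, 101)           # grid for the (slowness) field
A = np.full((1, x.size), 10.0 / x.size)   # one ray: travel time ~ integral of slowness
K = sq_exp(x, x)
y = np.array([4.2])                       # hypothetical observed travel time
R = 0.01**2 * np.eye(1)                   # observation noise covariance

G = K @ A.T
S = A @ K @ A.T + R
post_mean = G @ np.linalg.solve(S, y)         # most probable model (zero prior mean)
post_cov = K - G @ np.linalg.solve(S, G.T)    # uncertainty description
```

Incorporating observations one at a time, as the abstract suggests, amounts to repeating this update with the posterior of one step serving as the prior of the next.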

  8. A Bayesian model averaging approach for estimating the relative risk of mortality associated with heat waves in 105 U.S. cities.

    PubMed

    Bobb, Jennifer F; Dominici, Francesca; Peng, Roger D

    2011-12-01

    Estimating the risks heat waves pose to human health is a critical part of assessing the future impact of climate change. In this article, we propose a flexible class of time series models to estimate the relative risk of mortality associated with heat waves and conduct Bayesian model averaging (BMA) to account for the multiplicity of potential models. Applying these methods to data from 105 U.S. cities for the period 1987-2005, we identify those cities having a high posterior probability of increased mortality risk during heat waves, examine the heterogeneity of the posterior distributions of mortality risk across cities, assess sensitivity of the results to the selection of prior distributions, and compare our BMA results to a model selection approach. Our results show that no single model best predicts risk across the majority of cities, and that for some cities heat-wave risk estimation is sensitive to model choice. Although model averaging leads to posterior distributions with increased variance as compared to statistical inference conditional on a model obtained through model selection, we find that the posterior mean of heat wave mortality risk is robust to accounting for model uncertainty over a broad class of models. © 2011, The International Biometric Society.
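    Schematically, the averaging step combines per-model risk estimates with posterior model probabilities. The sketch below approximates those probabilities with BIC weights on made-up numbers; the paper's actual implementation is fully Bayesian, so this is only a cartoon of the mechanics.

```python
import numpy as np

# hypothetical per-model log-likelihoods, parameter counts, and risk estimates
logL = np.array([-1200.0, -1198.5, -1199.2])
k = np.array([3, 5, 4])
n = 6935                                   # hypothetical sample size
risk = np.array([1.04, 1.07, 1.05])        # hypothetical relative risks per model

bic = -2.0 * logL + k * np.log(n)
w = np.exp(-0.5 * (bic - bic.min()))
w /= w.sum()                               # approximate posterior model probabilities
bma_risk = w @ risk                        # model-averaged relative risk
```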

  9. Characteristics of Chinese-English bilingual dyslexia in right occipito-temporal lesion.

    PubMed

    Ting, Simon Kang Seng; Chia, Pei Shi; Chan, Yiong Huak; Kwek, Kevin Jun Hong; Tan, Wilnard; Hameed, Shahul; Tan, Eng-King

    2017-11-01

    Current literature suggests that right hemisphere lesions produce predominant spatial-related dyslexic error in English speakers. However, little is known regarding such lesions in Chinese speakers. In this paper, we describe the dyslexic characteristics of a Chinese-English bilingual patient with a right posterior cortical lesion. He was found to have profound spatial-related errors during his English word reading, in both real and non-words. During Chinese word reading, there was significantly less error compared to English, probably due to the ideographic nature of the Chinese language. He was also found to commit phonological-like visual errors in English, characterized by error responses that were visually similar to the actual word. There was no significant difference in visual errors during English word reading compared with Chinese. In general, our patient's performance in both languages appears to be consistent with the current literature on right posterior hemisphere lesions. Additionally, his performance also likely suggests that the right posterior cortical region participates in the visual analysis of orthographical word representation, both in ideographical and alphabetic languages, at least from a bilingual perspective. Future studies should further examine the role of the right posterior region in initial visual analysis of both languages. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Silicone intraocular lens surface calcification in a patient with asteroid hyalosis.

    PubMed

    Matsumura, Kazuhiro; Takano, Masahiko; Shimizu, Kimiya; Nemoto, Noriko

    2012-07-01

    To confirm a substance presence on the posterior intraocular lens (IOL) surface in a patient with asteroid hyalosis. An 80-year-old man had IOLs for approximately 12 years. Opacities and neodymium-doped yttrium aluminum garnet pits were observed on the posterior surface of the right IOL. Asteroid hyalosis and an epiretinal membrane were observed OD. An IOL exchange was performed on 24 March 2008, and the explanted IOL was analyzed using a light microscope and a transmission electron microscope with a scanning electron micrograph and an energy-dispersive X-ray spectrometer for elemental analysis. To confirm asteroid hyalosis, asteroid bodies were examined with the ionic liquid (EtMeIm+ BF4-) method using a field emission scanning electron microscope (FE-SEM) with digital beam control RGB mapping. X-ray spectrometry of the deposits revealed high calcium and phosphorus peaks. Spectrometry revealed that the posterior IOL surface opacity was due to a calcium-phosphorus compound. Examination of the asteroid bodies using FE-SEM with digital beam control RGB mapping confirmed calcium and phosphorus as the main components. Calcium hydrogen phosphate dihydrate deposits were probably responsible for the posterior IOL surface opacity. Furthermore, analysis of the asteroid bodies demonstrated that calcium and phosphorus were its main components.

  11. A bayesian analysis for identifying DNA copy number variations using a compound poisson process.

    PubMed

    Chen, Jie; Yiğiter, Ayten; Wang, Yu-Ping; Deng, Hong-Wen

    2010-01-01

    To study chromosomal aberrations that may lead to cancer formation or genetic diseases, the array-based Comparative Genomic Hybridization (aCGH) technique is often used for detecting DNA copy number variants (CNVs). Various methods have been developed for extracting CNV information from aCGH data. However, most of these methods make use of the log-intensity ratios in aCGH data without taking advantage of other information, such as the DNA probe (e.g., biomarker) positions/distances contained in the data. Motivated by the specific features of aCGH data, we developed a novel method that estimates the change point, or locus, of a CNV in aCGH data together with its associated biomarker position on the chromosome, using a compound Poisson process. We used a Bayesian approach to derive the posterior probability for the estimation of the CNV locus. To detect the loci of multiple CNVs in the data, a sliding-window process combined with our derived Bayesian posterior probability is proposed. To evaluate the performance of the method in estimating the CNV locus, we first performed simulation studies. Finally, we applied our approach to real data from aCGH experiments, demonstrating its applicability.
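    A stripped-down analogue of the change-point posterior, for a single change in a Gaussian sequence of log-intensity ratios with known noise scale, a flat prior on the location, and segment means plugged in at their estimates; the compound-Poisson treatment of biomarker positions in the actual method is not modeled here.

```python
import numpy as np

def changepoint_posterior(y, sigma=1.0):
    """Posterior over the location of a single change point in a Gaussian
    sequence with known sigma and a flat prior on the location."""
    n = len(y)
    loglik = np.empty(n - 1)
    for t in range(1, n):
        m1, m2 = y[:t].mean(), y[t:].mean()
        rss = ((y[:t] - m1)**2).sum() + ((y[t:] - m2)**2).sum()
        loglik[t - 1] = -rss / (2.0 * sigma**2)
    p = np.exp(loglik - loglik.max())      # stabilize before normalizing
    return p / p.sum()
```

A sliding window, as in the paper, would apply such a posterior computation repeatedly along the chromosome to locate multiple CNVs.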

  12. Bayesian methods for outliers detection in GNSS time series

    NASA Astrophysics Data System (ADS)

    Qianqian, Zhang; Qingming, Gui

    2013-07-01

    This article is concerned with the problem of detecting outliers in GNSS time series based on Bayesian statistical theory. Firstly, a new model is proposed to simultaneously detect different types of outliers by introducing a classification variable for each outlier type; the problem of outlier detection is converted into the computation of the corresponding posterior probabilities, and an algorithm for computing these posterior probabilities based on a standard Gibbs sampler is designed. Secondly, we analyze in detail the causes of masking and swamping when detecting patches of additive outliers, and an unmasking Bayesian method for detecting additive outlier patches is proposed based on an adaptive Gibbs sampler. Thirdly, the correctness of the theories and methods proposed above is illustrated on simulated data and then on real GNSS observations, such as cycle slip detection in carrier phase data. The examples illustrate that the Bayesian methods for outlier detection in GNSS time series proposed in this paper are capable of detecting not only isolated outliers but also additive outlier patches. Furthermore, they can be successfully used to process cycle slips in phase data, which solves the problem of small cycle slips.
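    For a single epoch, the classification-variable idea reduces to a two-component Bayes computation; a sketch with an inflated-variance outlier component and an illustrative prior outlier rate (the Gibbs sampler in the paper iterates such conditional probabilities across the whole series):

```python
from scipy.stats import norm

def outlier_posterior(residual, sigma=1.0, kappa=10.0, prior=0.05):
    """Posterior probability that one epoch carries an additive outlier,
    modelled as a Gaussian with standard deviation inflated by kappa;
    kappa and the prior outlier rate are illustrative choices."""
    like_out = norm.pdf(residual, scale=kappa * sigma)
    like_in = norm.pdf(residual, scale=sigma)
    num = prior * like_out
    return num / (num + (1.0 - prior) * like_in)

print(outlier_posterior(0.5), outlier_posterior(6.0))  # small vs. large residual
```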

  13. Probabilistic graphs as a conceptual and computational tool in hydrology and water management

    NASA Astrophysics Data System (ADS)

    Schoups, Gerrit

    2014-05-01

    Originally developed in the fields of machine learning and artificial intelligence, probabilistic graphs constitute a general framework for modeling complex systems in the presence of uncertainty. The framework consists of three components: 1. Representation of the model as a graph (or network), with nodes depicting random variables in the model (e.g. parameters, states, etc), which are joined together by factors. Factors are local probabilistic or deterministic relations between subsets of variables, which, when multiplied together, yield the joint distribution over all variables. 2. Consistent use of probability theory for quantifying uncertainty, relying on basic rules of probability for assimilating data into the model and expressing unknown variables as a function of observations (via the posterior distribution). 3. Efficient, distributed approximation of the posterior distribution using general-purpose algorithms that exploit model structure encoded in the graph. These attributes make probabilistic graphs potentially useful as a conceptual and computational tool in hydrology and water management (and beyond). Conceptually, they can provide a common framework for existing and new probabilistic modeling approaches (e.g. by drawing inspiration from other fields of application), while computationally they can make probabilistic inference feasible in larger hydrological models. The presentation explores, via examples, some of these benefits.
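    A toy illustration of the three components on the smallest possible graph: two binary variables joined by local factors whose product is the joint distribution, with the posterior obtained by the basic rules of probability (exact enumeration here stands in for the general-purpose approximate algorithms; the variables and numbers are invented).

```python
# local factors of a two-node graph: prior on "rain", likelihood of "wet grass"
p_rain = {True: 0.2, False: 0.8}
p_wet_given_rain = {True: 0.9, False: 0.1}

# joint = product of local factors, evaluated with the evidence wet = True
joint = {r: p_rain[r] * p_wet_given_rain[r] for r in (True, False)}
z = sum(joint.values())                      # normalization constant
posterior_rain = joint[True] / z             # P(rain | wet) ~= 0.69
print(posterior_rain)
```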

  14. Nonlinear detection for a high rate extended binary phase shift keying system.

    PubMed

    Chen, Xian-Qing; Wu, Le-Nan

    2013-03-28

    The algorithm and the results of a nonlinear detector using a machine learning technique called the support vector machine (SVM) on an efficient modulation system with high data rate and low energy consumption are presented in this paper. Simulation results show that the performance achieved by the SVM detector is comparable to that of a conventional threshold decision (TD) detector. The two detectors detect the received signals together with the special impacting filter (SIF), which can improve the energy utilization efficiency. However, unlike the TD detector, the SVM detector concentrates not only on reducing the BER of the detector but also on providing accurate posterior probability estimates (PPEs), which can be used as soft inputs to the LDPC decoder. The complexity of this detector is addressed by using four features and simplifying the decision function. In addition, a bandwidth-efficient transmission is analyzed with both the SVM and TD detectors. The SVM detector is more robust to the sampling rate than the TD detector. We find that the SVM is suitable for extended binary phase shift keying (EBPSK) signal detection and can provide accurate posterior probabilities for LDPC decoding.

  15. Nonlinear Detection for a High Rate Extended Binary Phase Shift Keying System

    PubMed Central

    Chen, Xian-Qing; Wu, Le-Nan

    2013-01-01

    The algorithm and the results of a nonlinear detector using a machine learning technique called the support vector machine (SVM) on an efficient modulation system with high data rate and low energy consumption are presented in this paper. Simulation results show that the performance achieved by the SVM detector is comparable to that of a conventional threshold decision (TD) detector. The two detectors detect the received signals together with the special impacting filter (SIF), which can improve the energy utilization efficiency. However, unlike the TD detector, the SVM detector concentrates not only on reducing the BER of the detector but also on providing accurate posterior probability estimates (PPEs), which can be used as soft inputs to the LDPC decoder. The complexity of this detector is addressed by using four features and simplifying the decision function. In addition, a bandwidth-efficient transmission is analyzed with both the SVM and TD detectors. The SVM detector is more robust to the sampling rate than the TD detector. We find that the SVM is suitable for extended binary phase shift keying (EBPSK) signal detection and can provide accurate posterior probabilities for LDPC decoding. PMID:23539034
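    A sketch of obtaining posterior probability estimates from an SVM with scikit-learn, where probability=True fits Platt scaling on top of the decision margin; the four synthetic features below merely stand in for the detector's actual features.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.standard_normal((400, 4))                       # four stand-in features
y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.standard_normal(400) > 0).astype(int)

clf = SVC(kernel="rbf", probability=True).fit(X, y)     # Platt-scaled posteriors
soft = clf.predict_proba(X[:5])[:, 1]                   # P(bit = 1), usable as soft input
print(soft)
```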

  16. Relevance Vector Machine Learning for Neonate Pain Intensity Assessment Using Digital Imaging

    PubMed Central

    Gholami, Behnood; Tannenbaum, Allen R.

    2011-01-01

    Pain assessment in patients who are unable to verbally communicate is a challenging problem. The fundamental limitations in pain assessment in neonates stem from subjective assessment criteria, rather than quantifiable and measurable data. This often results in poor quality and inconsistent treatment of patient pain management. Recent advancements in pattern recognition techniques using relevance vector machine (RVM) learning techniques can assist medical staff in assessing pain by constantly monitoring the patient and providing the clinician with quantifiable data for pain management. The RVM classification technique is a Bayesian extension of the support vector machine (SVM) algorithm, which achieves comparable performance to SVM while providing posterior probabilities for class memberships and a sparser model. If classes represent “pure” facial expressions (i.e., extreme expressions that an observer can identify with a high degree of confidence), then the posterior probability of the membership of some intermediate facial expression to a class can provide an estimate of the intensity of such an expression. In this paper, we use the RVM classification technique to distinguish pain from nonpain in neonates as well as assess their pain intensity levels. We also correlate our results with the pain intensity assessed by expert and nonexpert human examiners. PMID:20172803
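    The intensity idea can be demonstrated with any classifier that outputs posterior probabilities: train on the "pure" expression classes, then read the posterior for the pain class on an intermediate sample as an intensity estimate. In the sketch below a logistic regression stands in for the RVM (which scikit-learn does not ship), and the two features and class separations are invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
neutral = rng.normal(0.0, 0.3, size=(50, 2))   # features of "pure" no-pain faces
pain = rng.normal(2.0, 0.3, size=(50, 2))      # features of "pure" pain faces
X = np.vstack([neutral, pain])
y = np.array([0] * 50 + [1] * 50)

clf = LogisticRegression().fit(X, y)
intermediate = np.array([[1.0, 1.1]])          # an in-between facial expression
intensity = clf.predict_proba(intermediate)[0, 1]  # posterior read as pain intensity
print(intensity)
```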

  17. On the Empirical Importance of the Conditional Skewness Assumption in Modelling the Relationship between Risk and Return

    NASA Astrophysics Data System (ADS)

    Pipień, M.

    2008-09-01

    We present the results of an application of Bayesian inference to testing the relation between risk and return on financial instruments. On the basis of the Intertemporal Capital Asset Pricing Model proposed by Merton, we built a general sampling distribution suitable for analysing this relationship. The most important feature of our assumptions is that the skewness of the conditional distribution of returns is used as an alternative source of the relation between risk and return. This general specification relates to the Skewed Generalized Autoregressive Conditionally Heteroscedastic-in-Mean model. In order to make the conditional distribution of financial returns skewed, we considered a unified approach based on the inverse probability integral transformation. In particular, we applied the hidden truncation mechanism, inverse scale factors, the order statistics concept, Beta and Bernstein distribution transformations, and also a constructive method. Based on the daily excess returns of the Warsaw Stock Exchange Index, we checked the empirical importance of the conditional skewness assumption for the relation between risk and return on the Warsaw Stock Market. We present posterior probabilities of all competing specifications as well as the posterior analysis of the positive sign of the tested relationship.

  18. The hippocampal longitudinal axis-relevance for underlying tau and TDP-43 pathology.

    PubMed

    Lladó, Albert; Tort-Merino, Adrià; Sánchez-Valle, Raquel; Falgàs, Neus; Balasa, Mircea; Bosch, Beatriz; Castellví, Magda; Olives, Jaume; Antonell, Anna; Hornberger, Michael

    2018-06-01

    Recent studies suggest that the hippocampus has different cortical connectivity and functionality along its longitudinal axis. We sought to elucidate possible differences in the pattern of atrophy along the longitudinal axis of the hippocampus between Amyloid/Tau pathology and TDP-43-pathies. Seventy-three presenile subjects were included: an Amyloid/Tau group (33 Alzheimer's disease with confirmed cerebrospinal fluid [CSF] biomarkers), a probable TDP-43 group (7 semantic variant primary progressive aphasia, 5 GRN and 2 C9orf72 mutation carriers) and 26 healthy controls. We conducted a region-of-interest voxel-based morphometry analysis on the hippocampal longitudinal axis, contrasting the groups, covarying with CSF biomarkers (Aβ42, total tau, p-tau) and covarying with episodic memory scores. Amyloid/Tau pathology affected mainly the posterior hippocampus, while the anterior left hippocampus was more atrophied in probable TDP-43-pathies. We also observed a significant correlation of posterior hippocampal atrophy with Alzheimer's disease CSF biomarkers and visual memory scores. Taken together, these data suggest a potential differentiation along the hippocampal longitudinal axis based on the underlying pathology, which could be used as a potential biomarker to identify the underlying pathology in different neurodegenerative diseases. Copyright © 2018 Elsevier Inc. All rights reserved.

  19. Lingual Propulsive Pressures across Consistencies Generated by the Anteromedian and Posteromedian Tongue by Healthy Young Adults

    ERIC Educational Resources Information Center

    Gingrich, Laura L.; Stierwalt, Julie A. G.; Hageman, Carlin F.; LaPointe, Leonard L.

    2012-01-01

    Purpose: In the present study, the authors investigated lingual propulsive pressures generated in the normal swallow by the anterior and posterior lingual segments for various consistencies and maximum isometric tasks. Method: Lingual pressures for saliva, thin, and honey-thick liquid boluses were measured via the Iowa Oral Performance Instrument…

  20. Exoplanet Biosignatures: A Framework for Their Assessment.

    PubMed

    Catling, David C; Krissansen-Totton, Joshua; Kiang, Nancy Y; Crisp, David; Robinson, Tyler D; DasSarma, Shiladitya; Rushby, Andrew J; Del Genio, Anthony; Bains, William; Domagal-Goldman, Shawn

    2018-04-20

    Finding life on exoplanets from telescopic observations is an ultimate goal of exoplanet science. Life produces gases and other substances, such as pigments, which can have distinct spectral or photometric signatures. Whether or not life is found with future data must be expressed with probabilities, requiring a framework of biosignature assessment. We present a framework in which we advocate using biogeochemical "Exo-Earth System" models to simulate potential biosignatures in spectra or photometry. Given actual observations, simulations are used to find the Bayesian likelihoods of those data occurring for scenarios with and without life. The latter includes "false positives" wherein abiotic sources mimic biosignatures. Prior knowledge of factors influencing planetary inhabitation, including previous observations, is combined with the likelihoods to give the Bayesian posterior probability of life existing on a given exoplanet. Four components of observation and analysis are necessary. (1) Characterization of stellar (e.g., age and spectrum) and exoplanetary system properties, including "external" exoplanet parameters (e.g., mass and radius), to determine an exoplanet's suitability for life. (2) Characterization of "internal" exoplanet parameters (e.g., climate) to evaluate habitability. (3) Assessment of potential biosignatures within the environmental context (components 1-2), including corroborating evidence. (4) Exclusion of false positives. We propose that resulting posterior Bayesian probabilities of life's existence map to five confidence levels, ranging from "very likely" (90-100%) to "very unlikely" (<10%) inhabited. Key Words: Bayesian statistics-Biosignatures-Drake equation-Exoplanets-Habitability-Planetary science. Astrobiology 18, xxx-xxx.
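    The final step of the framework is a direct application of Bayes' rule; a minimal sketch with invented prior and likelihood values (in practice these would come from the Exo-Earth System simulations and the habitability assessment):

```python
def posterior_life(prior_life, like_given_life, like_given_abiotic):
    """Bayes' rule for the probability that an exoplanet is inhabited,
    given model-based likelihoods of the observed data with and without
    life. All numbers passed in below are illustrative."""
    num = like_given_life * prior_life
    return num / (num + like_given_abiotic * (1.0 - prior_life))

p = posterior_life(prior_life=0.1, like_given_life=0.6, like_given_abiotic=0.05)
print(p)  # would then be mapped onto the five confidence levels
```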

  1. Clinical variables associated with recovery in patients with chronic tension-type headache after treatment with manual therapy.

    PubMed

    Castien, René F; van der Windt, Daniëlle A W M; Blankenstein, Annette H; Heymans, Martijn W; Dekker, Joost

    2012-04-01

    The aims of this study were to describe the course of chronic tension-type headache (CTTH) in participants receiving manual therapy (MT), and to develop a prognostic model for predicting recovery in participants receiving MT. Outcomes in 145 adults with CTTH who received MT as participants in a previously published randomised clinical trial (n=41) or in a prospective cohort study (n=104) were evaluated. Assessments were made at baseline and at 8 and 26 weeks of follow-up. Recovery was defined as a 50% reduction in headache days in combination with a score of 'much improved' or 'very much improved' for global perceived improvement. Potential prognostic factors were analyzed by univariable and multivariable regression analysis. After 8 weeks, 78% of the participants reported recovery after MT, and after 26 weeks the proportion of recovered participants was 73%. Prognostic factors related to recovery were co-existing migraine, absence of multiple-site pain, greater cervical range of motion, and higher headache intensity. In participants classified as likely to recover, the posterior probability of recovery at 8 weeks was 92%, whereas in those classified as having a low probability of recovery it was 61%. It is concluded that the course of CTTH is favourable in primary care patients receiving MT. The prognostic models provide additional information to improve prediction of outcome. Copyright © 2012 International Association for the Study of Pain. Published by Elsevier B.V. All rights reserved.
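    Posterior probabilities of this kind follow from the odds form of Bayes' theorem; a minimal sketch, with the likelihood ratio as a stand-in for the output of the prognostic model (the numbers are illustrative, not the study's coefficients):

```python
def posttest_probability(pretest_p, likelihood_ratio):
    """Convert a pretest probability of recovery into a posttest
    probability via the odds form of Bayes' theorem."""
    odds = pretest_p / (1.0 - pretest_p) * likelihood_ratio
    return odds / (1.0 + odds)

print(posttest_probability(0.75, 4.0))   # favourable profile raises the probability
print(posttest_probability(0.75, 0.5))   # unfavourable profile lowers it
```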

  2. Maximum likelihood estimation of label imperfections and its use in the identification of mislabeled patterns

    NASA Technical Reports Server (NTRS)

    Chittineni, C. B.

    1979-01-01

    The problem of estimating label imperfections and the use of these estimates in identifying mislabeled patterns is presented. Expressions for the maximum likelihood estimates of classification errors and a priori probabilities are derived from the classification of a set of labeled patterns. Expressions are also given for the asymptotic variances of the probability of correct classification and of the proportions. Simple models are developed for imperfections in the labels and for classification errors and are used in the formulation of a maximum likelihood estimation scheme. Schemes are presented for the identification of mislabeled patterns in terms of thresholds on the discriminant functions for both two-class and multiclass cases. Expressions are derived for the probability that the imperfect label identification scheme will result in a wrong decision and are used in computing the thresholds. The results of practical applications of these techniques to the processing of remotely sensed multispectral data are presented.

  3. Scaling behavior for random walks with memory of the largest distance from the origin

    NASA Astrophysics Data System (ADS)

    Serva, Maurizio

    2013-11-01

    We study a one-dimensional random walk with memory. The behavior of the walker is modified with respect to the simple symmetric random walk only when he or she is at the maximum distance ever reached from his or her starting point (home). In this case, having the choice to move farther or to move closer, the walker decides with different probabilities. If the probability of a forward step is higher than the probability of a backward step, the walker is bold; otherwise he or she is timorous. We investigate the asymptotic properties of this bold-timorous random walk, showing that the scaling behavior varies continuously from subdiffusive (timorous) to superdiffusive (bold). The scaling exponents are fully determined with a new mathematical approach based on a decomposition of the dynamics into active journeys (the walker is at the maximum distance) and lazy journeys (the walker is not at the maximum distance).
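    The model is easy to simulate; a minimal sketch in which p_forward > 0.5 gives the bold (superdiffusive) regime and p_forward < 0.5 the timorous (subdiffusive) one:

```python
import numpy as np

def bold_timorous_walk(n_steps, p_forward, seed=0):
    """Random walk that is simple and symmetric except when the walker
    sits at the maximum distance ever reached from home, where it steps
    farther away with probability p_forward."""
    rng = np.random.default_rng(seed)
    pos, max_dist, path = 0, 0, []
    for _ in range(n_steps):
        if max_dist > 0 and abs(pos) == max_dist:
            step = 1 if rng.random() < p_forward else -1
            pos += step * int(np.sign(pos))    # move along the current direction
        else:
            pos += rng.choice((-1, 1))         # ordinary symmetric step
        max_dist = max(max_dist, abs(pos))
        path.append(pos)
    return np.array(path)

path = bold_timorous_walk(100000, p_forward=0.7)  # a bold walker
```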

  4. Flood Frequency Curves - Use of information on the likelihood of extreme floods

    NASA Astrophysics Data System (ADS)

    Faber, B.

    2011-12-01

    Investment in the infrastructure that reduces flood risk for flood-prone communities must incorporate information on the magnitude and frequency of flooding in that area. Traditionally, that information has been a probability distribution of annual maximum streamflows developed from the historical gaged record at a stream site. Practice in the United States fits a Log-Pearson Type III distribution to the annual maximum flows of an unimpaired streamflow record, using the method of moments to estimate distribution parameters. The procedure makes the assumptions that annual peak streamflow events (1) are independent, (2) are identically distributed, and (3) form a representative sample of the overall probability distribution. Each of these assumptions can be challenged. We rarely have enough data to form a representative sample, and therefore must compute and display the uncertainty in the estimated flood distribution. But is there a wet/dry cycle that makes precipitation less than independent between successive years? Are the peak flows caused by different types of events from different statistical populations? How do changes in the watershed or climate over time (non-stationarity) affect the probability distribution of floods? Potential approaches to avoiding these assumptions range from estimating trend and shift and removing them from early data (thereby forming a homogeneous data set), to methods that estimate statistical parameters that vary with time. A further issue in estimating a probability distribution of flood magnitude (the flood frequency curve) is whether a purely statistical approach can accurately capture the range and frequency of floods that are of interest. A meteorologically based analysis produces a "probable maximum precipitation" (PMP) and subsequently a "probable maximum flood" (PMF) that attempt to describe an upper bound on flood magnitude in a particular watershed. This analysis can help constrain the upper tail of the probability distribution, well beyond the range of gaged data or even historical or paleo-flood data, which can be very important in risk analyses performed for flood risk management and dam and levee safety studies.
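    As a minimal sketch of the traditional procedure described above (method of moments on the log flows, station skew only, no regional skew weighting or other Bulletin 17 adjustments), a flood quantile can be computed with scipy's Pearson Type III distribution:

```python
import numpy as np
from scipy.stats import pearson3, skew

def lp3_quantile(annual_peaks, aep):
    """Log-Pearson Type III quantile for a given annual exceedance
    probability (aep), fitted by the method of moments on log10 flows."""
    logq = np.log10(np.asarray(annual_peaks, dtype=float))
    g = skew(logq, bias=False)                 # station skew of the log flows
    q = pearson3.ppf(1.0 - aep, skew=g, loc=logq.mean(), scale=logq.std(ddof=1))
    return 10.0 ** q

# hypothetical record of annual peak flows; 1% AEP ("100-year") flood
peaks = [820, 1100, 640, 2300, 990, 1500, 760, 1900, 1250, 880]
print(lp3_quantile(peaks, aep=0.01))
```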

  5. [Hygienic estimation of combined influence of noise and infrasound on the organism of military men].

    PubMed

    Akhmetzianov, I M; Zinkin, V N; Petreev, I V; Dragan, S P

    2011-11-01

    This study presents a hygienic assessment of the combined influence of noise and infrasound on the organism of military personnel. Combined exposure to noise and infrasound is accompanied by a substantially increased risk of developing sensorineural hearing loss and hypertension. When the combined noise and infrasound spectrum peaks in the audible range, the probability of developing sensorineural hearing loss predominates, and the probability of ear pathology exceeds the values established by ISO 1999:1990. When the spectrum peaks in the infrasonic range, the probability of developing hypertension predominates.

  6. Vitreous cinematography in the study of vitreoretinal diseases.

    PubMed

    Trempe, C L; Takahashi, M; Freeman, H M

    1981-07-01

    A new technique of vitreous cinematography involves scanning of the vitreous cavity using optical sections to provide objective, reproducible information on the dynamics of the posterior vitreous and vitreoretinal relationships. Using a newly developed preset lens (El Bayadi-Kajiura lens), this technique makes it possible to document an entire optical section of the posterior vitreous. This is done by mechanically displacing the vitreous so that maximum reflectivity can be obtained from the vitreous gel. This article describes the technique and presents clinical examples documenting complete and incomplete vitreous detachment in normal eyes, Cloquet's canal associated with an optic disc pit, vitreous traction associated with a lamellar hole in an area of preretinal macular fibrosis, and vitreous traction at the anterior flap of a retinal break.

  7. Influence of maneuverability on helicopter combat effectiveness

    NASA Technical Reports Server (NTRS)

    Falco, M.; Smith, R.

    1982-01-01

    A computational procedure employing a stochastic learning method in conjunction with dynamic simulation of helicopter flight and weapon system operation was used to derive helicopter maneuvering strategies. The derived strategies maximize either survival or kill probability and are in the form of a feedback control based upon threat visual or warning system cues. Maneuverability parameters implicit in the strategy development include maximum longitudinal acceleration and deceleration, maximum sustained and transient load factor turn rate at forward speed, and maximum pedal turn rate and lateral acceleration at hover. Results are presented in terms of probability of kill for all combat initial conditions for two threat categories.

  8. State-space modeling to support management of brucellosis in the Yellowstone bison population

    USGS Publications Warehouse

    Hobbs, N. Thompson; Geremia, Chris; Treanor, John; Wallen, Rick; White, P.J.; Hooten, Mevin B.; Rhyan, Jack C.

    2015-01-01

    The bison (Bison bison) of the Yellowstone ecosystem, USA, exemplify the difficulty of conserving large mammals that migrate across the boundaries of conservation areas. Bison are infected with brucellosis (Brucella abortus) and their seasonal movements can expose livestock to infection. Yellowstone National Park has embarked on a program of adaptive management of bison, which requires a model that assimilates data to support management decisions. We constructed a Bayesian state-space model to reveal the influence of brucellosis on the Yellowstone bison population. A frequency-dependent model of brucellosis transmission was superior to a density-dependent model in predicting out-of-sample observations of horizontal transmission probability. A mixture model including both transmission mechanisms converged on frequency dependence. Conditional on the frequency-dependent model, brucellosis median transmission rate was 1.87 yr−1. The median of the posterior distribution of the basic reproductive ratio (R0) was 1.75. Seroprevalence of adult females varied around 60% over two decades, but only 9.6 of 100 adult females were infectious. Brucellosis depressed recruitment; estimated population growth rate λ averaged 1.07 for an infected population and 1.11 for a healthy population. We used five-year forecasting to evaluate the ability of different actions to meet management goals relative to no action. Annually removing 200 seropositive female bison increased by 30-fold the probability of reducing seroprevalence below 40% and increased by a factor of 120 the probability of achieving a 50% reduction in transmission probability relative to no action. Annually vaccinating 200 seronegative animals increased the likelihood of a 50% reduction in transmission probability by fivefold over no action. However, including uncertainty in the ability to implement management by representing stochastic variation in the number of accessible bison dramatically reduced the probability of achieving goals using interventions relative to no action. Because the width of the posterior predictive distributions of future population states expands rapidly with increases in the forecast horizon, managers must accept high levels of uncertainty. These findings emphasize the necessity of iterative, adaptive management with relatively short-term commitment to action and frequent reevaluation in response to new data and model forecasts. We believe our approach has broad applications.

  9. Theoretical axial wall angulation for rotational resistance form in an experimental-fixed partial denture

    PubMed Central

    2017-01-01

    PURPOSE The aim of this study was to determine the influence of the long base lengths of a fixed partial denture (FPD) on rotational resistance with variation of vertical wall angulation. MATERIALS AND METHODS Trigonometric calculations were done to determine the maximum wall angle needed to resist rotational displacement of an experimental-FPD model in a 2-dimensional plane. The maximum wall angle calculation determines the greatest taper that resists rotation. Two different axes of rotation were used to test this model with five vertical abutment heights of 3-, 3.5-, 4-, 4.5-, and 5-mm. The two rotational axes were located on the mesial side of the anterior abutment and the distal side of the posterior abutment. In the sagittal plane, rotation of the FPD around the anterior axis was counter-clockwise, Posterior-Anterior (P-A), and rotation around the distal axis was clockwise, Anterior-Posterior (A-P). RESULTS Low levels of vertical wall taper, ≤ 10 degrees, were needed to resist rotational displacement in all wall height categories; 2 to 6 degrees is generally considered ideal, with 7 to 10 degrees as favorable to the long axis of the abutment. Rotation around both axes demonstrated that two axial walls of the FPD resisted rotational displacement in each direction. In addition, uneven abutment height combinations required the lowest wall angulations to achieve resistance in this study. CONCLUSION The vertical height and angulation of FPD abutments, the two rotational axes, and the long base lengths all play a role in FPD resistance form. PMID:28874995

  10. Myocardium Segmentation From DE MRI Using Multicomponent Gaussian Mixture Model and Coupled Level Set.

    PubMed

    Liu, Jie; Zhuang, Xiahai; Wu, Lianming; An, Dongaolei; Xu, Jianrong; Peters, Terry; Gu, Lixu

    2017-11-01

    Objective: In this paper, we propose a fully automatic framework for myocardium segmentation of delayed-enhancement (DE) MRI images without relying on prior patient-specific information. Methods: We employ a multicomponent Gaussian mixture model to deal with the intensity heterogeneity of myocardium caused by the infarcts. To differentiate the myocardium from other tissues with similar intensities, while at the same time maintain spatial continuity, we introduce a coupled level set (CLS) to regularize the posterior probability. The CLS, as a spatial regularization, can be adapted to the image characteristics dynamically. We also introduce an image intensity gradient based term into the CLS, adding an extra force to the posterior probability based framework, to improve the accuracy of myocardium boundary delineation. The prebuilt atlases are propagated to the target image to initialize the framework. Results: The proposed method was tested on datasets of 22 clinical cases, and achieved Dice similarity coefficients of 87.43 ± 5.62% (endocardium), 90.53 ± 3.20% (epicardium) and 73.58 ± 5.58% (myocardium), which have outperformed three variants of the classic segmentation methods. Conclusion: The results can provide a benchmark for the myocardial segmentation in the literature. Significance: DE MRI provides an important tool to assess the viability of myocardium. The accurate segmentation of myocardium, which is a prerequisite for further quantitative analysis of myocardial infarction (MI) region, can provide important support for the diagnosis and treatment management for MI patients.

  11. Maximum parsimony, substitution model, and probability phylogenetic trees.

    PubMed

    Weng, J F; Thomas, D A; Mareels, I

    2011-01-01

    The problem of inferring phylogenies (phylogenetic trees) is one of the main problems in computational biology. There are three main methods for inferring phylogenies: Maximum Parsimony (MP), Distance Matrix (DM) and Maximum Likelihood (ML), of which the MP method is the most well-studied and popular. In the MP method the optimization criterion is the number of substitutions of the nucleotides, computed from the differences in the investigated nucleotide sequences. However, the MP method is often criticized because it counts only the substitutions observable at the current time, omitting the unobservable substitutions that actually occurred in the evolutionary history. In order to take the unobservable substitutions into account, substitution models have been established and are now widely used in the DM and ML methods, but these substitution models cannot be used within the classical MP method. Recently the authors proposed a probability representation model for phylogenetic trees, and the trees reconstructed in this model are called probability phylogenetic trees. One of the advantages of the probability representation model is that it can include a substitution model to infer phylogenetic trees based on the MP principle. In this paper we explain how to use a substitution model in the reconstruction of probability phylogenetic trees and show the advantage of this approach with examples.
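
    For readers unfamiliar with the MP criterion discussed here, the sketch below counts observable substitutions for one site on a fixed tree using Fitch's classic small-parsimony algorithm; it illustrates plain MP counting only, not the authors' probability representation model. The tree, leaf names, and states are invented.

```python
def fitch_count(tree, leaf_states):
    """Fitch's small-parsimony count for one site on a rooted binary tree:
    the minimum number of substitutions implied by the observed leaf
    states. This is the observable-substitution counting performed by the
    classical MP criterion."""
    def walk(node):
        if isinstance(node, str):          # leaf: look up its state
            return {leaf_states[node]}, 0
        left, right = node
        sl, cl = walk(left)
        sr, cr = walk(right)
        inter = sl & sr
        if inter:
            return inter, cl + cr          # children agree: no new substitution
        return sl | sr, cl + cr + 1        # children disagree: one substitution

    return walk(tree)[1]

# toy tree ((A,B),(C,D)) with one nucleotide column
tree = (("A", "B"), ("C", "D"))
print(fitch_count(tree, {"A": "G", "B": "G", "C": "T", "D": "G"}))  # -> 1
```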

  12. A Fast and Scalable Method for A-Optimal Design of Experiments for Infinite-dimensional Bayesian Nonlinear Inverse Problems with Application to Porous Medium Flow

    NASA Astrophysics Data System (ADS)

    Petra, N.; Alexanderian, A.; Stadler, G.; Ghattas, O.

    2015-12-01

    We address the problem of optimal experimental design (OED) for Bayesian nonlinear inverse problems governed by partial differential equations (PDEs). The inverse problem seeks to infer a parameter field (e.g., the log permeability field in a porous medium flow model problem) from synthetic observations at a set of sensor locations and from the governing PDEs. The goal of the OED problem is to find an optimal placement of sensors so as to minimize the uncertainty in the inferred parameter field. We formulate the OED objective function by generalizing the classical A-optimal experimental design criterion using the expected value of the trace of the posterior covariance. This expected value is computed through sample averaging over the set of likely experimental data. Due to the infinite-dimensional character of the parameter field, we seek an optimization method that solves the OED problem at a cost (measured in the number of forward PDE solves) that is independent of both the parameter and the sensor dimension. To facilitate this goal, we construct a Gaussian approximation to the posterior at the maximum a posteriori probability (MAP) point, and use the resulting covariance operator to define the OED objective function. We use randomized trace estimation to compute the trace of this covariance operator. The resulting OED problem includes as constraints the system of PDEs characterizing the MAP point, and the PDEs describing the action of the covariance (of the Gaussian approximation to the posterior) to vectors. We control the sparsity of the sensor configurations using sparsifying penalty functions, and solve the resulting penalized bilevel optimization problem via an interior-point quasi-Newton method, where gradient information is computed via adjoints. We elaborate our OED method for the problem of determining the optimal sensor configuration to best infer the log permeability field in a porous medium flow problem. Numerical results show that the number of PDE solves required for the evaluation of the OED objective function and its gradient is essentially independent of both the parameter dimension and the sensor dimension (i.e., the number of candidate sensor locations). The number of quasi-Newton iterations for computing an OED also exhibits the same dimension invariance properties.
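
    The randomized trace estimation step can be illustrated generically with a Hutchinson-type estimator, which needs only matrix-vector products with the operator, the same property exploited above for the posterior covariance. This is a sketch under that assumption; apply_A and all sizes are illustrative, and the explicit test matrix merely stands in for the covariance operator.

```python
import numpy as np

def hutchinson_trace(apply_A, dim, n_samples=100, rng=None):
    """Randomized (Hutchinson) trace estimator: tr(A) ≈ (1/n) Σ zᵀ A z
    over random ±1 probe vectors z. Only matrix-vector products with A
    are required, so A never has to be formed explicitly."""
    rng = rng or np.random.default_rng()
    total = 0.0
    for _ in range(n_samples):
        z = rng.choice((-1.0, 1.0), size=dim)   # Rademacher probe vector
        total += z @ apply_A(z)
    return total / n_samples

# sanity check on an explicit SPD matrix standing in for a covariance operator
rng = np.random.default_rng(0)
M = rng.standard_normal((200, 200))
A = (M @ M.T) / 200.0
print(hutchinson_trace(lambda v: A @ v, 200, rng=rng), "vs", np.trace(A))
```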

  13. Vertical augmentation of the posterior atrophic mandible by interpositional grafts in a split-mouth design: a human tomography evaluation pilot study.

    PubMed

    Domingues, Eduardo Pinheiro; Ribeiro, Rafael Fernandes; Horta, Martinho Campolina Rebello; Manzi, Flávio Ricardo; Côsso, Maurício Greco; Zenóbio, Elton Gonçalves

    2017-10-01

    Using computed tomography, to compare vertical and volumetric bone augmentation after interposition grafting with bovine bone mineral matrix (GEISTLICH BIO-OSS ® ) or hydroxyapatite/tricalcium phosphate (STRAUMANN ® BONECERAMIC) for atrophic posterior mandible reconstruction through segmental osteotomy. Seven patients received interposition grafts in the posterior mandible for implant rehabilitation. The computed tomography cone beam images were analysed with OsiriX Imaging Software 6.5 (Pixmeo Geneva, Switzerland) in the pre-surgical period (T0), at 15 days post-surgery (T1) and at 180 days post-surgery (T2). The tomographic analysis was performed by a single trained and calibrated radiologist. Descriptive statistics and nonparametric methods were used to analyse the data. There was a significant difference in vertical and volume augmentation with both biomaterials using the technique (P < 0.05). There were no significant differences (P > 0.05) in volume change of the graft, bone volume augmentation, or augmentation of the maximum linear vertical distance between the two analysed biomaterials. The GEISTLICH BIO-OSS ® and STRAUMANN ® BONECERAMIC interposition grafts exhibited similar and sufficient dimensional stability and volume gain for short implants in the atrophic posterior mandible. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  14. Nonmuscle myosin IIA and IIB differentially contribute to intrinsic and directed migration of human embryonic lung fibroblasts.

    PubMed

    Kuragano, Masahiro; Murakami, Yota; Takahashi, Masayuki

    2018-03-25

    Nonmuscle myosin II (NMII) plays an essential role in directional cell migration. In this study, we investigated the roles of NMII isoforms (NMIIA and NMIIB) in the migration of human embryonic lung fibroblasts, which exhibit directionally persistent migration in an intrinsic manner. NMIIA-knockdown (KD) cells migrated unsteadily, but their direction of migration was approximately maintained. By contrast, NMIIB-KD cells occasionally reversed their direction of migration. Lamellipodium-like protrusions formed in the posterior region of NMIIB-KD cells prior to reversal of the migration direction. Moreover, NMIIB KD led to elongation of the posterior region in migrating cells, probably due to the lack of load-bearing stress fibers in this area. These results suggest that NMIIA plays a role in steering migration by maintaining stable protrusions in the anterior region, whereas NMIIB plays a role in maintenance of front-rear polarity by preventing aberrant protrusion formation in the posterior region. These distinct functions of NMIIA and NMIIB might promote intrinsic and directed migration of normal human fibroblasts. Copyright © 2018 Elsevier Inc. All rights reserved.

  15. Inference with minimal Gibbs free energy in information field theory.

    PubMed

    Ensslin, Torsten A; Weig, Cornelius

    2010-11-01

    Non-linear and non-gaussian signal inference problems are difficult to tackle. Renormalization techniques permit us to construct good estimators for the posterior signal mean within information field theory (IFT), but the approximations and assumptions made are not very obvious. Here we introduce the simple concept of minimal Gibbs free energy to IFT, and show that previous renormalization results emerge naturally. They can be understood as being the gaussian approximation to the full posterior probability, which has maximal cross information with it. We derive optimized estimators for three applications, to illustrate the usage of the framework: (i) reconstruction of a log-normal signal from poissonian data with background counts and point spread function, as it is needed for gamma ray astronomy and for cosmography using photometric galaxy redshifts, (ii) inference of a gaussian signal with unknown spectrum, and (iii) inference of a poissonian log-normal signal with unknown spectrum, the combination of (i) and (ii). Finally we explain how gaussian knowledge states constructed by the minimal Gibbs free energy principle at different temperatures can be combined into a more accurate surrogate of the non-gaussian posterior.

  16. 14 CFR 27.801 - Ditching.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... external doors and windows are accounted for in the investigation of the probable behavior of the... and windows must be designed to withstand the probable maximum local pressures. [Amdt. 27-11, 41 FR...

  17. 14 CFR 29.801 - Ditching.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... external doors and windows are accounted for in the investigation of the probable behavior of the... and windows must be designed to withstand the probable maximum local pressures. [Amdt. 29-12, 41 FR...

  18. Measurement of the Errors of Service Altimeter Installations During Landing-Approach and Take-Off Operations

    NASA Technical Reports Server (NTRS)

    Gracey, William; Jewel, Joseph W., Jr.; Carpenter, Gene T.

    1960-01-01

    The overall errors of the service altimeter installations of a variety of civil transport, military, and general-aviation airplanes have been experimentally determined during normal landing-approach and take-off operations. The average height above the runway at which the data were obtained was about 280 feet for the landings and about 440 feet for the take-offs. An analysis of the data obtained from 196 airplanes during 415 landing approaches and from 70 airplanes during 152 take-offs showed that: 1. The overall error of the altimeter installations in the landing- approach condition had a probable value (50 percent probability) of +/- 36 feet and a maximum probable value (99.7 percent probability) of +/- 159 feet with a bias of +10 feet. 2. The overall error in the take-off condition had a probable value of +/- 47 feet and a maximum probable value of +/- 207 feet with a bias of -33 feet. 3. The overall errors of the military airplanes were generally larger than those of the civil transports in both the landing-approach and take-off conditions. In the landing-approach condition the probable error and the maximum probable error of the military airplanes were +/- 43 and +/- 189 feet, respectively, with a bias of +15 feet, whereas those for the civil transports were +/- 22 and +/- 96 feet, respectively, with a bias of +1 foot. 4. The bias values of the error distributions (+10 feet for the landings and -33 feet for the take-offs) appear to represent a measure of the hysteresis characteristics (after effect and recovery) and friction of the instrument and the pressure lag of the tubing-instrument system.
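
    The report's "probable" (50 percent) and "maximum probable" (99.7 percent) values can be recovered from sample statistics if the installation errors are assumed normally distributed about their bias; the sketch below makes that normality assumption explicit, and the sample is hypothetical, generated only to echo the landing-approach figures.

```python
import numpy as np
from scipy import stats

def error_bounds(errors):
    """Bias, probable error (central 50%), and maximum probable error
    (central 99.7%) of a sample of installation errors, assuming the
    errors are normally distributed about their bias; z(50%) ≈ 0.674 and
    z(99.7%) ≈ 2.97, the familiar 'three sigma'."""
    errors = np.asarray(errors, dtype=float)
    bias, sigma = errors.mean(), errors.std(ddof=1)
    z50 = stats.norm.ppf(0.75)      # central 50% -> 75th percentile
    z997 = stats.norm.ppf(0.9985)   # central 99.7% -> 99.85th percentile
    return bias, z50 * sigma, z997 * sigma

# hypothetical landing-approach sample (feet), echoing the reported figures
rng = np.random.default_rng(2)
sample = rng.normal(10.0, 53.0, size=415)
bias, probable, max_probable = error_bounds(sample)
print(f"bias {bias:+.0f} ft, probable +/- {probable:.0f} ft, "
      f"maximum probable +/- {max_probable:.0f} ft")
```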

  19. Pippi — Painless parsing, post-processing and plotting of posterior and likelihood samples

    NASA Astrophysics Data System (ADS)

    Scott, Pat

    2012-11-01

    Interpreting samples from likelihood or posterior probability density functions is rarely as straightforward as it seems it should be. Producing publication-quality graphics of these distributions is often similarly painful. In this short note I describe pippi, a simple, publicly available package for parsing and post-processing such samples, as well as generating high-quality PDF graphics of the results. Pippi is easily and extensively configurable and customisable, both in its options for parsing and post-processing samples, and in the visual aspects of the figures it produces. I illustrate some of these using an existing supersymmetric global fit, performed in the context of a gamma-ray search for dark matter. Pippi can be downloaded and followed at http://github.com/patscott/pippi.

  20. Determining X-ray source intensity and confidence bounds in crowded fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Primini, F. A.; Kashyap, V. L., E-mail: fap@head.cfa.harvard.edu

    We present a rigorous description of the general problem of aperture photometry in high-energy astrophysics photon-count images, in which the statistical noise model is Poisson, not Gaussian. We compute the full posterior probability density function for the expected source intensity for various cases of interest, including the important cases in which both source and background apertures contain contributions from the source, and when multiple source apertures partially overlap. A Bayesian approach offers the advantages of allowing one to (1) include explicit prior information on source intensities, (2) propagate posterior distributions as priors for future observations, and (3) use Poisson likelihoods, making the treatment valid in the low-counts regime. Elements of this approach have been implemented in the Chandra Source Catalog.
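
    A much-simplified instance of this calculation can be written down directly: Poisson likelihoods in a source and a background aperture, flat priors, and no aperture overlap. The grid-based sketch below covers that toy case only, not the paper's full treatment of overlapping apertures and informative priors; all counts, the area ratio, and grid ranges are invented.

```python
import numpy as np
from scipy import stats
from scipy.integrate import trapezoid

def source_intensity_posterior(n_src, n_bkg, area_ratio, s_grid, b_grid):
    """Posterior density for the expected source intensity s, marginalized
    over the background rate b on a grid, with Poisson likelihoods in both
    apertures and flat priors: the source aperture sees s + b counts on
    average and the background aperture sees area_ratio * b."""
    S, B = np.meshgrid(s_grid, b_grid, indexing="ij")
    log_post = (stats.poisson.logpmf(n_src, S + B)
                + stats.poisson.logpmf(n_bkg, area_ratio * B))
    post = np.exp(log_post - log_post.max())
    marginal_s = post.sum(axis=1)                  # marginalize over b
    return marginal_s / trapezoid(marginal_s, s_grid)

s = np.linspace(0.01, 30.0, 400)
b = np.linspace(0.01, 15.0, 300)
pdf = source_intensity_posterior(n_src=12, n_bkg=40, area_ratio=10.0,
                                 s_grid=s, b_grid=b)
print("posterior mean source intensity ≈", trapezoid(s * pdf, s))
```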

  1. Feature selection for elderly faller classification based on wearable sensors.

    PubMed

    Howcroft, Jennifer; Kofman, Jonathan; Lemaire, Edward D

    2017-05-30

    Wearable sensors can be used to derive numerous gait pattern features for elderly fall risk and faller classification; however, an appropriate feature set is required to avoid high computational costs and the inclusion of irrelevant features. The objectives of this study were to identify and evaluate smaller feature sets for faller classification from large feature sets derived from wearable accelerometer and pressure-sensing insole gait data. A convenience sample of 100 older adults (75.5 ± 6.7 years; 76 non-fallers, 24 fallers based on 6 month retrospective fall occurrence) walked 7.62 m while wearing pressure-sensing insoles and tri-axial accelerometers at the head, pelvis, left and right shanks. Feature selection was performed using correlation-based feature selection (CFS), fast correlation based filter (FCBF), and Relief-F algorithms. Faller classification was performed using multi-layer perceptron neural network, naïve Bayesian, and support vector machine classifiers, with 75:25 single stratified holdout and repeated random sampling. The best performing model was a support vector machine with 78% accuracy, 26% sensitivity, 95% specificity, 0.36 F1 score, and 0.31 MCC and one posterior pelvis accelerometer input feature (left acceleration standard deviation). The second best model achieved better sensitivity (44%) and used a support vector machine with 74% accuracy, 83% specificity, 0.44 F1 score, and 0.29 MCC. This model had ten input features: maximum, mean and standard deviation posterior acceleration; maximum, mean and standard deviation anterior acceleration; mean superior acceleration; and three impulse features. The best multi-sensor model sensitivity (56%) was achieved using posterior pelvis and both shank accelerometers and a naïve Bayesian classifier. The best single-sensor model sensitivity (41%) was achieved using the posterior pelvis accelerometer and a naïve Bayesian classifier. Feature selection provided models with smaller feature sets and improved faller classification compared to faller classification without feature selection. CFS and FCBF provided the best feature subset (one posterior pelvis accelerometer feature) for faller classification. However, better sensitivity was achieved by the second best model based on a Relief-F feature subset with three pressure-sensing insole features and seven head accelerometer features. Feature selection should be considered as an important step in faller classification using wearable sensors.

  2. Posterior cricoarytenoid muscle electrophysiologic changes are predictive of vocal cord paralysis with recurrent laryngeal nerve compressive injury in a canine model.

    PubMed

    Puram, Sidharth V; Chow, Harold; Wu, Che-Wei; Heaton, James T; Kamani, Dipti; Gorti, Gautham; Chiang, Feng Yu; Dionigi, Gianlorenzo; Barczynski, Marcin; Schneider, Rick; Dralle, Henning; Lorenz, Kerstin; Randolph, Gregory W

    2016-12-01

    Injury to the recurrent laryngeal nerve (RLN) is a dreaded complication of endocrine surgery. Intraoperative neural monitoring (IONM) has been increasingly utilized to assess the functional status of the RLN. Although the posterior cricoarytenoid muscle (PCA) is innervated by the RLN as the abductor of the larynx, PCA electromyography (EMG) is infrequently recorded during IONM and PCA activity after RLN compressive injury remains poorly characterized. Single-subject prospective animal study. We employed a canine model to identify postcricoid EMG correlates of postoperative vocal cord paralysis (VCP). Postcricoid electrode recordings were obtained before and after compressive RLN injury associated with VCP. Normative postcricoid recordings revealed mean amplitude of 1288 microvolts (μV) and latency of 8.2 milliseconds (ms) with maximum (1 milliamp [mA]) vagal stimulation, and mean amplitude of 1807 μV and latency of 3.5 ms with maximum (1 mA) RLN stimulation. Following injury that was associated with VCP, there was a 62.1% decrement in postcricoid EMG amplitude with maximum vagal stimulation and an 80% decrement with maximum RLN stimulation. Threshold stimulation of the vagus increased by 23%, and there was a corresponding 42% decrease in amplitude. For RLN stimulation, latency increased by 17.3% following injury, whereas threshold stimulation increased by 61% with a 35.5% decrement in EMG amplitude. Thus, if RLN amplitude decreases by ≥ 80%, with absolute amplitude ≤ 300 μV and latency increase of ≥ 10%, RLN injury is likely associated with VCP. Our results predict postoperative VCP based on postcricoid electromyographic IONM and may guide surgical decision making. Level of evidence: NA. Laryngoscope, 126:2744-2751, 2016. © 2016 The American Laryngological, Rhinological and Otological Society, Inc.

  3. Estimation of Posterior Probabilities Using Multivariate Smoothing Splines and Generalized Cross-Validation.

    DTIC Science & Technology

    1983-09-01

    [Scanned-document text garbled; recoverable acknowledgments: supported by the Consejo Nacional de Ciencia y Tecnología (Mexico), ONR Contract No. N00014-77-C-0675, and ARO Contract No. DAAG29-80-K-0042.]

  4. Maximum a posteriori classification of multifrequency, multilook, synthetic aperture radar intensity data

    NASA Technical Reports Server (NTRS)

    Rignot, E.; Chellappa, R.

    1993-01-01

    We present a maximum a posteriori (MAP) classifier for classifying multifrequency, multilook, single polarization SAR intensity data into regions or ensembles of pixels of homogeneous and similar radar backscatter characteristics. A model for the prior joint distribution of the multifrequency SAR intensity data is combined with a Markov random field for representing the interactions between region labels to obtain an expression for the posterior distribution of the region labels given the multifrequency SAR observations. The maximization of the posterior distribution yields Bayes's optimum region labeling or classification of the SAR data or its MAP estimate. The performance of the MAP classifier is evaluated by using computer-simulated multilook SAR intensity data as a function of the parameters in the classification process. Multilook SAR intensity data are shown to yield higher classification accuracies than one-look SAR complex amplitude data. The MAP classifier is extended to the case in which the radar backscatter from the remotely sensed surface varies within the SAR image because of incidence angle effects. The results obtained illustrate the practicality of the method for combining SAR intensity observations acquired at two different frequencies and for improving classification accuracy of SAR data.
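
    To make the likelihood side of such a classifier concrete, the sketch below labels simulated multilook intensity pixels by maximizing a gamma log-likelihood (shape equal to the number of looks) plus a per-class log prior. It is pixelwise MAP only, omitting the paper's Markov random field interaction term and multifrequency combination; class means, look number, and image size are invented.

```python
import numpy as np
from scipy import stats

def map_classify(intensity, class_means, n_looks, log_prior):
    """Pixelwise MAP labeling of multilook SAR intensity: an L-look pixel
    of class k follows a gamma distribution with shape L and scale
    mu_k / L, where mu_k is the class mean backscatter. A per-class prior
    replaces the paper's Markov random field coupling for brevity."""
    intensity = np.asarray(intensity, dtype=float)[..., None]
    scales = np.asarray(class_means) / n_looks
    log_post = stats.gamma.logpdf(intensity, a=n_looks, scale=scales) + log_prior
    return np.argmax(log_post, axis=-1)

# hypothetical 4-look image with two classes of mean backscatter 1.0 and 4.0
rng = np.random.default_rng(3)
truth = (rng.random((64, 64)) < 0.4).astype(int)
means = np.array([1.0, 4.0])
img = rng.gamma(shape=4, scale=means[truth] / 4)
labels = map_classify(img, means, n_looks=4, log_prior=np.log([0.6, 0.4]))
print("pixel accuracy:", (labels == truth).mean())
```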

  5. Efficacy of texture, shape, and intensity features for robust posterior-fossa tumor segmentation in MRI

    NASA Astrophysics Data System (ADS)

    Ahmed, S.; Iftekharuddin, K. M.; Ogg, R. J.; Laningham, F. H.

    2009-02-01

    Our previous works suggest that fractal-based texture features are very useful for detection, segmentation and classification of posterior-fossa (PF) pediatric brain tumor in multimodality MRI. In this work, we investigate and compare the efficacy of our texture features, such as fractal and multifractional Brownian motion (mBm), and intensity, along with another useful level-set based shape feature, in PF tumor segmentation. We study feature selection and ranking using Kullback-Leibler Divergence (KLD) and subsequent tumor segmentation, all in an integrated Expectation Maximization (EM) framework. We study the efficacy of all four features in both multimodality as well as disparate MRI modalities such as T1, T2 and FLAIR. Both KLD feature plots and an information theoretic entropy measure suggest that the mBm feature offers the maximum separation between tumor and non-tumor tissues in T1 and FLAIR MRI modalities. The same metrics show that the intensity feature offers the maximum separation between tumor and non-tumor tissue in the T2 MRI modality. The efficacies of these features are further validated in segmenting PF tumor using both single modality and multimodality MRI for six pediatric patients with over 520 real MR images.

  6. The effects of dorso-lumbar motion restriction on the ground reaction force components during running.

    PubMed

    Morley, Joseph J; Traum, Edward

    2016-04-01

    The effects of restricting dorso-lumbar spine mobility on ground reaction forces in runners was measured and assessed. A semi-rigid cast was used to restrict spinal motion during running. Subjects ran across a force platform at 3.6 m/s, planting the right foot on the platform. Data was collected from ten running trials with the cast and ten without the cast and analysed. Casted running showed that the initial vertical heel strike maximum was increased (p < .02) and that the anterior-posterior deceleration impulse was increased (p < .01). The maximum vertical ground reaction force was decreased in casted running (p < .01), as was the anterior-posterior acceleration impulse (p < .02). There was a trend for increased medial-lateral impulse in the uncasted state, but this was not statistically significant. Spinal mobility and fascia contribute to load transfer between joints and body segments. Experimentally restricting spinal motion during running results in measurable and repeatable alterations in ground reaction force components. Alterations in load transfer due to decreased spinal motion may be a factor contributing to selected injuries in runners. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. A comparison of ballet dancers with different level of experience in performing single-leg stance on retiré position.

    PubMed

    Lin, Chia-Wei; Lin, Cheng-Feng; Hsue, Bih-Jen; Su, Fong-Chin

    2014-04-01

    The purpose of the current study was to evaluate the postural stability of single-leg standing in the retiré position in ballet dancers having three different levels of skill. Nine superior experienced female ballet dancers, 9 experienced dancers, and 12 novice dancers performed single-leg standing in the retiré position. The parameters of the center of pressure (COP) in the anterior-posterior and medial-lateral directions and the maximum distance between the COP and the center of mass (COM) were measured. The inclination angles of body segments (head, torso, and supporting leg) in the frontal plane were also calculated. The findings showed that the novice dancers had a trend of greater torso inclination angles than the experienced dancers, but that the superior experienced dancers had a greater maximum COM-COP distance in the anterior-posterior direction. Furthermore, both experienced and novice dancers had better balance when standing on the nondominant leg, whereas the superior experienced dancers had similar postural stability between legs. Based on the findings, ballet training should put equal focus on both legs, and frontal plane control (medial-lateral direction) should be integrated into ballet training programs.

  8. Finite element model updating using the shadow hybrid Monte Carlo technique

    NASA Astrophysics Data System (ADS)

    Boulkaibet, I.; Mthembu, L.; Marwala, T.; Friswell, M. I.; Adhikari, S.

    2015-02-01

    Recent research in the field of finite element model updating (FEM) advocates the adoption of Bayesian analysis techniques for dealing with the uncertainties associated with these models. However, Bayesian formulations require the evaluation of the posterior distribution function, which may not be available in analytical form. This is the case in FEM updating. In such cases sampling methods can provide good approximations of the posterior distribution when implemented in the Bayesian context. Markov Chain Monte Carlo (MCMC) algorithms are the most popular sampling tools used to sample probability distributions. However, the efficiency of these algorithms is affected by the complexity of the systems (the size of the parameter space). The Hybrid Monte Carlo (HMC) algorithm offers a very important MCMC approach to dealing with higher-dimensional complex problems. The HMC uses molecular dynamics (MD) steps as the global Monte Carlo (MC) moves to reach areas of high probability, where the gradient of the log-density of the posterior acts as a guide during the search process. However, the acceptance rate of HMC is sensitive to the system size as well as the time step used to evaluate the MD trajectory. To overcome this limitation we propose the use of the Shadow Hybrid Monte Carlo (SHMC) algorithm. The SHMC algorithm is a modified version of the Hybrid Monte Carlo (HMC) and is designed to improve sampling for large system sizes and time steps. This is done by sampling from a modified Hamiltonian function instead of the normal Hamiltonian function. In this paper, the efficiency and accuracy of the SHMC method are tested on the updating of two real structures: an unsymmetrical H-shaped beam structure and a GARTEUR SM-AG19 structure, and compared to the application of the HMC algorithm on the same structures.
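
    For orientation, the sketch below implements one standard HMC step (leapfrog integration of Hamiltonian dynamics followed by a Metropolis accept/reject); SHMC differs in targeting a modified (shadow) Hamiltonian, which is not reproduced here. The toy Gaussian posterior, step size, and trajectory length are illustrative assumptions.

```python
import numpy as np

def hmc_step(q, log_post, grad_log_post, step_size, n_leapfrog, rng):
    """One Hybrid Monte Carlo step: refresh the momentum, integrate
    Hamiltonian dynamics with the leapfrog scheme, then accept/reject with
    a Metropolis test. SHMC modifies only the target: it samples a shadow
    Hamiltonian that the leapfrog integrator conserves more accurately."""
    p = rng.standard_normal(q.shape)                   # momentum refresh
    h0 = -log_post(q) + 0.5 * p @ p
    q_new = q.copy()
    p += 0.5 * step_size * grad_log_post(q_new)        # initial half kick
    for _ in range(n_leapfrog):
        q_new += step_size * p                         # drift
        p += step_size * grad_log_post(q_new)          # full kick
    p -= 0.5 * step_size * grad_log_post(q_new)        # trim last kick to a half
    h1 = -log_post(q_new) + 0.5 * p @ p
    accept = np.log(rng.random()) < h0 - h1            # Metropolis test
    return (q_new, True) if accept else (q, False)

# toy usage: sample a 2-D standard Gaussian posterior
rng = np.random.default_rng(4)
log_post = lambda q: -0.5 * q @ q
grad_log_post = lambda q: -q
q, chain = np.zeros(2), []
for _ in range(2000):
    q, _ = hmc_step(q, log_post, grad_log_post, 0.2, 10, rng)
    chain.append(q)
print("sample variance ≈", np.var(chain, axis=0))      # should be near 1
```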

  9. Scanning electron microscopy of the trabecular meshwork: understanding the pathogenesis of primary angle closure glaucoma.

    PubMed

    Sihota, Ramanjit; Goyal, Amita; Kaur, Jasbir; Gupta, Viney; Nag, Tapas C

    2012-01-01

    To study ultrastructural changes of the trabecular meshwork in acute and chronic primary angle closure glaucoma (PACG) and primary open angle glaucoma (POAG) eyes by scanning electron microscopy. Twenty-one trabecular meshwork surgical specimens from consecutive glaucomatous eyes after a trabeculectomy and five postmortem corneoscleral specimens were fixed immediately in Karnovsky solution. The tissues were washed in 0.1 M phosphate buffer saline, post-fixed in 1% osmium tetroxide, dehydrated in an acetone series (30-100%), dried and mounted. Normal trabecular tissue showed well-defined, thin, cylindrical uveal trabecular beams with many large spaces, overlying flatter corneoscleral beams and numerous smaller spaces. In acute PACG eyes, the trabecular meshwork showed grossly swollen, irregular trabecular endothelial cells with intercellular and occasional basal separation, and few spaces. Numerous activated macrophages, leucocytes and amorphous debris were present. In chronic PACG eyes, only a few thickened posterior uveal trabecular beams were visible. A homogenous deposit covered the anterior uveal trabeculae and spaces. A converging, fan-shaped trabecular beam configuration corresponded to gonioscopic areas of peripheral anterior synechiae. In POAG eyes, anterior uveal trabecular beams were thin and strap-like, while those posteriorly were wide, with a homogenous deposit covering and bridging intertrabecular spaces, especially posteriorly. Underlying corneoscleral trabecular layers and spaces were visualized in some areas. In acute PACG, marked edema of the endothelium probably contributes to the acute and marked intraocular pressure (IOP) elevation. Chronically raised IOP in chronic PACG and POAG probably results, at least in part, from decreased aqueous outflow secondary to widening and fusion of adjacent trabecular beams, together with the homogenous deposit enmeshing trabecular beams and spaces.

  10. Structural Information from Single-molecule FRET Experiments Using the Fast Nano-positioning System

    PubMed Central

    Röcker, Carlheinz; Nagy, Julia; Michaelis, Jens

    2017-01-01

    Single-molecule Förster Resonance Energy Transfer (smFRET) can be used to obtain structural information on biomolecular complexes in real-time. Thereby, multiple smFRET measurements are used to localize an unknown dye position inside a protein complex by means of trilateration. In order to obtain quantitative information, the Nano-Positioning System (NPS) uses probabilistic data analysis to combine structural information from X-ray crystallography with single-molecule fluorescence data to calculate not only the most probable position but the complete three-dimensional probability distribution, termed posterior, which indicates the experimental uncertainty. The concept was generalized for the analysis of smFRET networks containing numerous dye molecules. The latest version of NPS, Fast-NPS, features a new algorithm using Bayesian parameter estimation based on Markov Chain Monte Carlo sampling and parallel tempering that allows for the analysis of large smFRET networks in a comparably short time. Moreover, Fast-NPS allows the calculation of the posterior by choosing one of five different models for each dye, that account for the different spatial and orientational behavior exhibited by the dye molecules due to their local environment. Here we present a detailed protocol for obtaining smFRET data and applying the Fast-NPS. We provide detailed instructions for the acquisition of the three input parameters of Fast-NPS: the smFRET values, as well as the quantum yield and anisotropy of the dye molecules. Recently, the NPS has been used to elucidate the architecture of an archaeal open promotor complex. This data is used to demonstrate the influence of the five different dye models on the posterior distribution. PMID:28287526

  11. Structural Information from Single-molecule FRET Experiments Using the Fast Nano-positioning System.

    PubMed

    Dörfler, Thilo; Eilert, Tobias; Röcker, Carlheinz; Nagy, Julia; Michaelis, Jens

    2017-02-09

    Single-molecule Förster Resonance Energy Transfer (smFRET) can be used to obtain structural information on biomolecular complexes in real-time. Thereby, multiple smFRET measurements are used to localize an unknown dye position inside a protein complex by means of trilateration. In order to obtain quantitative information, the Nano-Positioning System (NPS) uses probabilistic data analysis to combine structural information from X-ray crystallography with single-molecule fluorescence data to calculate not only the most probable position but the complete three-dimensional probability distribution, termed posterior, which indicates the experimental uncertainty. The concept was generalized for the analysis of smFRET networks containing numerous dye molecules. The latest version of NPS, Fast-NPS, features a new algorithm using Bayesian parameter estimation based on Markov Chain Monte Carlo sampling and parallel tempering that allows for the analysis of large smFRET networks in a comparably short time. Moreover, Fast-NPS allows the calculation of the posterior by choosing one of five different models for each dye, that account for the different spatial and orientational behavior exhibited by the dye molecules due to their local environment. Here we present a detailed protocol for obtaining smFRET data and applying the Fast-NPS. We provide detailed instructions for the acquisition of the three input parameters of Fast-NPS: the smFRET values, as well as the quantum yield and anisotropy of the dye molecules. Recently, the NPS has been used to elucidate the architecture of an archaeal open promotor complex. This data is used to demonstrate the influence of the five different dye models on the posterior distribution.

  12. Approximate Bayesian estimation of extinction rate in the Finnish Daphnia magna metapopulation.

    PubMed

    Robinson, John D; Hall, David W; Wares, John P

    2013-05-01

    Approximate Bayesian computation (ABC) is useful for parameterizing complex models in population genetics. In this study, ABC was applied to simultaneously estimate parameter values for a model of metapopulation coalescence and test two alternatives to a strict metapopulation model in the well-studied network of Daphnia magna populations in Finland. The models shared four free parameters: the subpopulation genetic diversity (θS), the rate of gene flow among patches (4Nm), the founding population size (N0) and the metapopulation extinction rate (e) but differed in the distribution of extinction rates across habitat patches in the system. The three models had either a constant extinction rate in all populations (strict metapopulation), one population that was protected from local extinction (i.e. a persistent source), or habitat-specific extinction rates drawn from a distribution with specified mean and variance. Our model selection analysis favoured the model including a persistent source population over the two alternative models. Of the closest 750,000 data sets in Euclidean space, 78% were simulated under the persistent source model (estimated posterior probability = 0.769). This fraction increased to more than 85% when only the closest 150,000 data sets were considered (estimated posterior probability = 0.774). Approximate Bayesian computation was then used to estimate parameter values that might produce the observed set of summary statistics. Our analysis provided posterior distributions for e that included the point estimate obtained from previous data from the Finnish D. magna metapopulation. Our results support the use of ABC and population genetic data for testing the strict metapopulation model and parameterizing complex models of demography. © 2013 Blackwell Publishing Ltd.
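
    As a generic reminder of how ABC rejection works (the study itself used coalescent simulations and model-specific summary statistics), here is a minimal sketch on a toy problem; the prior, summary statistic, and acceptance fraction are all invented.

```python
import numpy as np

def abc_rejection(observed, simulate, prior_sampler, n_draws, keep_frac):
    """Minimal ABC rejection sampler: draw parameters from the prior,
    simulate summary statistics, and keep the draws whose statistics lie
    closest (Euclidean distance) to the observed ones. Model choice works
    the same way, counting how often each model survives the cut."""
    draws = [prior_sampler() for _ in range(n_draws)]
    dists = [np.linalg.norm(simulate(theta) - observed) for theta in draws]
    cutoff = np.quantile(dists, keep_frac)
    return np.array([t for t, d in zip(draws, dists) if d <= cutoff])

# toy example: infer the rate of an exponential from its sample mean
rng = np.random.default_rng(5)
observed = np.array([rng.exponential(1 / 2.0, 100).mean()])   # true rate = 2
simulate = lambda rate: np.array([rng.exponential(1 / rate, 100).mean()])
posterior = abc_rejection(observed, simulate, lambda: rng.gamma(1.0, 2.0),
                          n_draws=20_000, keep_frac=0.01)
print("ABC posterior mean rate ≈", posterior.mean())
```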

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herberger, Sarah M.; Boring, Ronald L.

    Abstract Objectives: This paper discusses the differences between classical human reliability analysis (HRA) dependence and the full spectrum of probabilistic dependence. Positive influence suggests an error increases the likelihood of subsequent errors or success increases the likelihood of subsequent success. Currently the typical method for dependence in HRA implements the Technique for Human Error Rate Prediction (THERP) positive dependence equations. This assumes that the dependence between two human failure events varies at discrete levels between zero and complete dependence (as defined by THERP). Dependence in THERP does not consistently span dependence values between 0 and 1. In contrast, probabilistic dependence employs Bayes' Law and addresses a continuous range of dependence. Methods: Using the laws of probability, complete dependence and maximum positive dependence do not always agree. Maximum dependence is when two events overlap to their fullest amount. Maximum negative dependence is the smallest amount that two events can overlap. When the minimum probability of two events overlapping is less than independence, negative dependence occurs. For example, negative dependence is when an operator fails to actuate Pump A, thereby increasing his or her chance of actuating Pump B. The initial error actually increases the chance of subsequent success. Results: Comparing THERP and probability theory yields different results in certain scenarios, with the latter addressing negative dependence. Given that most human failure events are rare, the minimum overlap is typically 0. And when the second event is smaller than the first event, the maximum dependence is less than 1, as defined by Bayes' Law. As such, alternative dependence equations are provided, along with a look-up table defining the maximum and maximum negative dependence given the probabilities of two events. Conclusions: THERP dependence has been used ubiquitously for decades and has provided approximations of the dependencies between two events. Since its inception, computational abilities have increased exponentially, and alternative approaches that follow the laws of probability dependence need to be implemented. These new approaches need to consider negative dependence and identify when THERP output is not appropriate.
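
    The contrast drawn in this abstract is easy to state in code: THERP's discrete positive-dependence equations for P(B|A) versus the full range that the laws of probability (the Fréchet bounds on the joint probability) actually allow, including negative dependence. The sketch below uses the standard published THERP equations; the example event probabilities are invented.

```python
def therp_conditional(p_b, level):
    """THERP's positive-dependence equations for P(B|A) at the five
    discrete levels: zero, low, moderate, high, and complete dependence."""
    return {"ZD": p_b,
            "LD": (1 + 19 * p_b) / 20,
            "MD": (1 + 6 * p_b) / 7,
            "HD": (1 + p_b) / 2,
            "CD": 1.0}[level]

def conditional_bounds(p_a, p_b):
    """Range of P(B|A) permitted by the laws of probability (from the
    Fréchet bounds on P(A and B)). The lower end below independence is
    negative dependence, which THERP cannot represent; the upper end,
    min(P(A), P(B)) / P(A), falls below 1 whenever P(B) < P(A), so
    'complete dependence' is not always attainable."""
    lower = max(0.0, p_a + p_b - 1.0) / p_a
    upper = min(p_a, p_b) / p_a
    return lower, upper

p_a, p_b = 0.05, 0.01   # two rare human failure events, P(B) < P(A)
print("THERP complete dependence:", therp_conditional(p_b, "CD"))
print("probability-theory bounds on P(B|A):", conditional_bounds(p_a, p_b))
# maximum dependence here is 0.01 / 0.05 = 0.2, far below THERP's 1.0
```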

  14. Estimating the risk of bladder and kidney cancer from exposure to low-levels of arsenic in drinking water, Nova Scotia, Canada.

    PubMed

    Saint-Jacques, Nathalie; Brown, Patrick; Nauta, Laura; Boxall, James; Parker, Louise; Dummer, Trevor J B

    2018-01-01

    Arsenic in drinking water impacts health. The highest levels of arsenic have historically been observed in Taiwan and Bangladesh, but the contaminant has been affecting the health of people globally. Strong associations have been confirmed between exposure to high levels of arsenic in drinking water and a wide range of diseases, including cancer. However, at lower levels of exposure, especially near the current World Health Organization regulatory limit (10 μg/L), this association is inconsistent, as the effects are mostly extrapolated from high-exposure studies. This ecological study used Bayesian inference to model the relative risk of bladder and kidney cancer at these lower concentrations (0-2 μg/L, 2-5 μg/L, and ≥5 μg/L of arsenic) in 864 bladder and 525 kidney cancers diagnosed in the study area, Nova Scotia, Canada, between 1998 and 2010. The model included proxy measures of lifestyle (e.g. smoking) and accounted for spatial dependencies. Overall, bladder cancer risk was 16% (2-5 μg/L) and 18% (≥5 μg/L) greater than that of the referent group (<2 μg/L), with posterior probabilities of 88% and 93% for these risks being above 1. Effect sizes for kidney cancer were 5% (2-5 μg/L) and 14% (≥5 μg/L) above that of the referent group (<2 μg/L), with probabilities of 61% and 84%. High-risk areas were common in southwestern areas, where higher arsenic levels are associated with the local geology. The study suggests an increased bladder cancer, and potentially kidney cancer, risk from exposure to drinking-water arsenic levels within the current World Health Organization maximum acceptable concentration. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. On the error probability of general tree and trellis codes with applications to sequential decoding

    NASA Technical Reports Server (NTRS)

    Johannesson, R.

    1973-01-01

    An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random binary tree codes is derived and shown to be independent of the length of the tree. An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random L-branch binary trellis codes of rate R = 1/n is derived which separates the effects of the tail length T and the memory length M of the code. It is shown that the bound is independent of the length L of the information sequence. This implication is investigated by computer simulations of sequential decoding utilizing the stack algorithm. These simulations confirm the implication and further suggest an empirical formula for the true undetected decoding error probability with sequential decoding.

  16. Low-Cost Methodology for Skin Strain Measurement of a Flexed Biological Limb.

    PubMed

    Lin, Bevin; Moerman, Kevin M; McMahan, Connor G; Pasch, Kenneth A; Herr, Hugh M

    2017-12-01

    The purpose of this manuscript is to compute skin strain data from a flexed biological limb, using portable, inexpensive, and easily available resources. We apply and evaluate this approach on a person with bilateral transtibial amputations, imaging left and right residual limbs in extended and flexed knee postures. We map 3-D deformations to a flexed biological limb using freeware and a simple point-and-shoot camera. Mean principal strain, maximum shear strain, as well as lines of maximum, minimum, and nonextension are computed from 3-D digital models to inform directional mappings of the strain field for an unloaded residual limb. Peak tensile strains are ∼0.3 on the anterior surface of the knee in the proximal region of the patella, whereas peak compressive strains are ∼ -0.5 on the posterior surface of the knee. Peak maximum shear strains are ∼0.3 on the posterior surface of the knee. The accuracy and precision of this methodology are assessed for a ground-truth model. The mean point location distance is found to be 0.08 cm, and the overall standard deviation for point location difference vectors is 0.05 cm. This low-cost and mobile methodology may prove critical for applications such as the prosthetic socket interface where whole-limb skin strain data are required from patients in the field outside of traditional, large-scale clinical centers. Such data may inform the design of wearable technologies that directly interface with human skin.

  17. 3D in vivo femoro-tibial kinematics of tri-condylar total knee arthroplasty during kneeling activities.

    PubMed

    Nakamura, Shinichiro; Sharma, Adrija; Kobayashi, Masahiko; Ito, Hiromu; Nakamura, Kenji; Zingde, Sumesh M; Nakamura, Takashi; Komistek, Richard D

    2014-01-01

    Kneeling position can serve as an important posture, providing stability and balance from a standing position to sitting on the floor or vice-versa. The purpose of the current study was to determine the kinematics during kneeling activities after subjects were implanted with a tri-condylar total knee arthroplasty. Kinematics was evaluated in 54 knees using fluoroscopy and a three-dimensional model fitting approach. The average knee flexion at before contact status, at complete contact and at maximum flexion was 98.1±9.0°, 107.2±6.7°, and 139.6±12.3°, respectively. On average, there was no gross anterior displacement from before contact status to complete contact. Only slight posterior rollback motion of both condyles from complete contact to maximum flexion was observed. Three of 39 (7.7%) knees experienced anterior movement of both condyles more than 2mm from before contact status to complete contact. Reverse rotation pattern from before contact status to complete contact and then normal rotation pattern from complete contact to maximum flexion were observed. Condylar lift-off greater than 1.0 mm was observed in 45 knees (83.3%). The presence of the ball-and-socket joint articulation provides sufficient antero-posterior stability in these designs to enable the patients to kneel safely without the incidence of any dislocation. This study suggests a safe implant design for kneeling. © 2013.

  18. Novel technique for repairing posterior medial meniscus root tears using porcine knees and biomechanical study.

    PubMed

    Wu, Jia-Lin; Lee, Chian-Her; Yang, Chan-Tsung; Chang, Chia-Ming; Li, Guoan; Cheng, Cheng-Kung; Chen, Chih-Hwa; Huang, Hsu-Shan; Lai, Yu-Shu

    2018-01-01

    Transtibial pullout suture (TPS) repair of posterior medial meniscus root (PMMR) tears was shown to achieve good clinical outcomes. The purpose of this study was to compare, biomechanically, a novel technique designed to repair PMMR tears using a tendon graft (TG) with conventional TPS repair. Twelve porcine tibiae were used (n = 6 per group). In the TG group, a flexor digitorum profundus tendon was passed through an incision in the root area, created 5 mm postero-medially along the edge of the attachment area. In the TPS group, a modified Mason-Allen suture was created using No. 2 FiberWire. The tendon grafts and sutures were threaded through the bone tunnel and then fixed to the anterolateral cortex of the tibia. The two groups underwent cyclic loading followed by a load-to-failure test. Displacements of the constructs after 100, 500, and 1000 loading cycles, and the maximum load, stiffness, and elongation at failure were recorded. The TG technique had significantly lower elongation and higher stiffness compared with the TPS. The maximum load of the TG group was significantly lower than that of the TPS group. Failure modes for all specimens were caused by the suture or graft cutting through the meniscus. The lesser elongation and higher stiffness of the constructs in the TG technique over those in the standard TPS technique might be beneficial for postoperative biological healing between the meniscus and tibial plateau. However, a slower rehabilitation program might be necessary due to its relatively lower maximum failure load.

  19. Unambiguous discrimination between linearly dependent equidistant states with multiple copies

    NASA Astrophysics Data System (ADS)

    Zhang, Wen-Hai; Ren, Gang

    2018-07-01

    Linearly independent quantum states can be unambiguously discriminated, but linearly dependent ones cannot. For linearly dependent quantum states, however, if C copies of the single states are available, then they may form linearly independent states, and can be unambiguously discriminated. We consider unambiguous discrimination among N = D + 1 linearly dependent states given that C copies are available and that the single copies span a D-dimensional space with equal inner products. The maximum unambiguous discrimination probability is derived for all C with equal a priori probabilities. For this classification of the linearly dependent equidistant states, our result shows that if C is even then adding a further copy fails to increase the maximum discrimination probability.

  20. Nonparametric probability density estimation by optimization theoretic techniques

    NASA Technical Reports Server (NTRS)

    Scott, D. W.

    1976-01-01

    Two nonparametric probability density estimators are considered. The first is the kernel estimator. The problem of choosing the kernel scaling factor based solely on a random sample is addressed. An interactive mode is discussed and an algorithm proposed to choose the scaling factor automatically. The second nonparametric probability density estimator uses penalty function techniques with the maximum likelihood criterion. A discrete maximum penalized likelihood estimator is proposed and is shown to be consistent in the mean square error. A numerical implementation technique for the discrete solution is discussed and examples displayed. An extensive simulation study compares the integrated mean square error of the discrete and kernel estimators. The robustness of the discrete estimator is demonstrated graphically.
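
    The first problem mentioned, choosing the kernel scaling factor from the sample alone, can be illustrated with leave-one-out likelihood cross-validation; this is one standard automatic criterion, not necessarily the algorithm the paper proposes. The Gaussian kernel, grid of bandwidths, and sample are illustrative.

```python
import numpy as np

def loo_log_likelihood(data, h):
    """Leave-one-out log likelihood of a Gaussian kernel density estimate
    with scaling factor (bandwidth) h; maximizing this over h is one
    automatic, data-driven choice of the kernel scaling factor."""
    x = np.asarray(data, dtype=float)
    n = len(x)
    d = (x[:, None] - x[None, :]) / h
    k = np.exp(-0.5 * d**2) / np.sqrt(2 * np.pi)
    np.fill_diagonal(k, 0.0)      # leave each point out of its own estimate
    f_loo = k.sum(axis=1) / ((n - 1) * h)
    return np.log(f_loo).sum()

rng = np.random.default_rng(6)
sample = rng.normal(size=200)
bandwidths = np.linspace(0.05, 1.5, 60)
best_h = max(bandwidths, key=lambda h: loo_log_likelihood(sample, h))
print("cross-validated kernel scaling factor ≈", round(best_h, 3))
```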

  1. An empirical investigation into the role of subjective prior probability in searching for potentially missing items

    PubMed Central

    Fanshawe, T. R.

    2015-01-01

    There are many examples from the scientific literature of visual search tasks in which the length, scope and success rate of the search have been shown to vary according to the searcher's expectations of whether the search target is likely to be present. This phenomenon has major practical implications, for instance in cancer screening, when the prevalence of the condition is low and the consequences of a missed disease diagnosis are severe. We consider this problem from an empirical Bayesian perspective to explain how the effect of a low prior probability, subjectively assessed by the searcher, might impact on the extent of the search. We show how the searcher's posterior probability that the target is present depends on the prior probability and the proportion of possible target locations already searched, and also consider the implications of imperfect search, when the probability of false-positive and false-negative decisions is non-zero. The theoretical results are applied to two studies of radiologists' visual assessment of pulmonary lesions on chest radiographs. Further application areas in diagnostic medicine and airport security are also discussed. PMID:26587267
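
    The central posterior update is compact enough to state directly: if the searcher's subjective prior probability that the target is present is p, the target is equally likely to be in any location, search is perfect, and a fraction f of the possible locations has been examined without success, then Bayes' rule gives a posterior of p(1 - f) / (p(1 - f) + 1 - p). A minimal sketch of the perfect-search case (the paper's imperfect-search extension with false-positive and false-negative rates is omitted):

        def posterior_present(prior, frac_searched):
            """Posterior probability that the target is present, given that a
            fraction of the possible locations has been searched without
            success (perfect search, target equally likely in any location)."""
            miss = prior * (1.0 - frac_searched)    # present but not yet found
            return miss / (miss + (1.0 - prior))

        # low-prevalence screening example: prior of 1%
        for f in (0.0, 0.5, 0.9):
            print(f, round(posterior_present(0.01, f), 5))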

  2. The Strength of Transosseous Medial Meniscal Root Repair Using a Simple Suture Technique Is Dependent on Suture Material and Position.

    PubMed

    Robinson, James R; Frank, Evelyn G; Hunter, Alan J; Jermin, Paul J; Gill, Harinderjit S

    2018-03-01

    A simple suture technique in transosseous meniscal root repair can provide equivalent resistance to cyclic load and is less technically demanding to perform than more complex suture configurations, yet maximum yield loads are lower. Various suture materials have been investigated for repair, but it is currently not clear which material is optimal in terms of repair strength. Meniscal root anatomy is also complex, consisting of the ligamentous mid-substance (root ligament) and the transition zone between the meniscal body and root ligament; the relationship between suture location and maximum failure load has not been investigated in a simulated surgical repair. It was hypothesized (A) that using a knottable, 2-mm-wide, ultra-high-molecular-weight polyethylene (UHMWPE) braided tape for transosseous meniscal root repair with a simple suture technique would give rise to a higher maximum failure load than a repair made using No. 2 UHMWPE standard suture material, and (B) that suture position is an important factor in determining the maximum failure load. Controlled laboratory study. In part A, the posterior root attachment of the medial meniscus was divided in 19 porcine knees. The tibias were potted, and repair of the medial meniscus posterior root was performed. A suture-passing device was used to place 2 simple sutures into the posterior root of the medial meniscus during a repair procedure that closely replicated the single-tunnel, transosseous surgical repair commonly used in clinical practice. Ten tibias were randomized to repair with No. 2 suture (Suture group) and 9 tibias to repair with 2-mm-wide knottable braided tape (Tape group). Repair strength was assessed by the maximum failure load measured with a materials testing machine. Micro-computed tomography (CT) scans were obtained to assess suture positions within the meniscus. The wide range of maximum failure loads appeared related to suture position. In part B, 10 additional porcine knees were prepared. Five knees were randomized to the Suture group and 5 to the Tape group. All repairs were standardized for location, and the repair was placed in the body of the meniscus. A custom image registration routine was created to coregister all 29 menisci, which allowed the distribution of maximum failure load versus repair location to be visualized with a heat map. In part A, a higher maximum failure load was found for the Tape group (mean, 86.7 N; 95% CI, 63.9-109.6 N) than for the Suture group (mean, 57.2 N; 95% CI, 30.5-83.9 N). The 3D micro-CT analysis of suture position showed that the mean maximum failure load for repairs placed in the meniscus body (mean, 104 N; 95% CI, 81.2-128.0 N) was higher than for those placed in the root ligament (mean, 35.1 N; 95% CI, 15.7-54.5 N). In part B, the mean maximum failure load was significantly greater for the Tape group, 298.5 N (P = .016, Mann-Whitney U; 95% CI, 183.9-413.1 N), than for the Suture group, 146.8 N (95% CI, 82.4-211.6 N). Visualization with the heat map revealed that small variations in repair location on the meniscus were associated with large differences in maximum failure load; moving the repair entry point by 3 mm could reduce the failure load by 50%. The use of 2-mm braided tape provided a higher maximum failure load than the use of a No. 2 suture. The position of the repair in the meniscus was also a highly significant factor in the properties of the constructs. The results provide insight into material and location for optimal repair strength.

  3. Pneumatic Control Device for the Pershing 2 Adaption Kit

    DTIC Science & Technology

    1979-03-14

    ...stem forward force to maintain a pressure seal (this, versus a 16-to-25-pound maximum reverse force component due to pressure). In all probability, initial... Contract 2960635, GAS GENERATOR COMPATIBILITY TEST REPORT. Requirements: The requirements for the Pershing II, Phase I...

  4. Pinworm diversity in free-ranging howler monkeys (Alouatta spp.) in Mexico: Morphological and molecular evidence for two new Trypanoxyuris species (Nematoda: Oxyuridae).

    PubMed

    Solórzano-García, Brenda; Nadler, Steven A; Pérez-Ponce de León, Gerardo

    2016-10-01

    Two new species of Trypanoxyuris are described from the intestine of free-ranging howler monkeys in Mexico, Trypanoxyuris multilabiatus n. sp. from the mantled howler Alouatta palliata, and Trypanoxyuris pigrae n. sp. from the black howler Alouatta pigra. An integrative taxonomic approach is followed, where conspicuous morphological traits and phylogenetic trees based on DNA sequences are used to test the validity of the two new species. The mitochondrial cytochrome oxidase subunit 1 gene and the nuclear ribosomal 18S and 28S rRNA genes were used for evolutionary analyses, with the concatenated dataset of all three genes used for maximum likelihood and Bayesian phylogenetic analyses. The two new species of pinworms from howler monkeys were morphologically distinct and formed reciprocally monophyletic lineages in molecular phylogenetic trees. The three species from howler monkeys, T. multilabiatus n. sp., T. pigrae n. sp., and Trypanoxyuris minutus, formed a monophyletic group with high bootstrap and posterior probability support values. Phylogenetic patterns inferred from sequence data support the hypothesis of a close evolutionary association between these primate hosts and their pinworm parasites. The results suggest that the diversity of pinworm parasites from Neotropical primates might be underestimated. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  5. Blending Multiple Nitrogen Dioxide Data Sources for Neighborhood Estimates of Long-Term Exposure for Health Research.

    PubMed

    Hanigan, Ivan C; Williamson, Grant J; Knibbs, Luke D; Horsley, Joshua; Rolfe, Margaret I; Cope, Martin; Barnett, Adrian G; Cowie, Christine T; Heyworth, Jane S; Serre, Marc L; Jalaludin, Bin; Morgan, Geoffrey G

    2017-11-07

    Exposure to traffic-related nitrogen dioxide (NO2) air pollution is associated with adverse health outcomes. Average pollutant concentrations from fixed monitoring sites are often used to estimate exposures for health studies; however, these can be imprecise because of the difficulty and cost of spatial modeling at the resolution of neighborhoods (a scale of tens of meters) rather than at a coarse scale (several kilometers). The objective of this study was to derive improved estimates of neighborhood NO2 concentrations by blending measurements with modeled predictions in Sydney, Australia (a low-pollution environment). We implemented the Bayesian maximum entropy approach to blend data with uncertainty defined using informative priors. We compiled NO2 data from fixed-site monitors, chemical transport models, and satellite-based land use regression models to estimate neighborhood annual average NO2. The spatial model produced a posterior probability density function of estimated annual average concentrations that spanned an order of magnitude, from 3 to 35 ppb. Validation using independent data showed improvement, with a 6% reduction in root mean squared error compared with the land use regression model and a 16% reduction compared with the chemical transport model. These estimates will be used in studies of health effects and should minimize misclassification bias.
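
    The full Bayesian maximum entropy machinery is beyond a short example, but its core intuition, combining harder monitor data with softer model predictions in proportion to their certainty, can be illustrated with a simple inverse-variance blend. Everything below, including the numbers, is an illustrative simplification and not the study's method:

        import numpy as np

        def blend(estimates, variances):
            """Inverse-variance weighted blend of NO2 estimates from several
            sources; more certain sources receive more weight."""
            prec = 1.0 / np.asarray(variances, dtype=float)
            w = prec / prec.sum()
            return float(np.dot(w, estimates)), float(1.0 / prec.sum())

        # hypothetical annual-average NO2 (ppb) for one neighborhood, from a
        # fixed-site monitor, a chemical transport model, and a satellite LUR
        mean, var = blend([12.0, 18.0, 15.0], [1.0, 16.0, 9.0])
        print(round(mean, 2), "ppb; blended variance", round(var, 2))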

  6. Estimators of The Magnitude-Squared Spectrum and Methods for Incorporating SNR Uncertainty

    PubMed Central

    Lu, Yang; Loizou, Philipos C.

    2011-01-01

    Statistical estimators of the magnitude-squared spectrum are derived based on the assumption that the magnitude-squared spectrum of the noisy speech signal can be computed as the sum of the (clean) signal and noise magnitude-squared spectra. Maximum a posteriori (MAP) and minimum mean square error (MMSE) estimators are derived based on a Gaussian statistical model. The gain function of the MAP estimator was found to be identical to the gain function used in the ideal binary mask (IdBM) that is widely used in computational auditory scene analysis (CASA). As such, it is binary and assumes the value of 1 if the local SNR exceeds 0 dB, and the value of 0 otherwise. By modeling the local instantaneous SNR as an F-distributed random variable, soft masking methods incorporating SNR uncertainty were derived. The soft masking method that weights the noisy magnitude-squared spectrum by the a priori probability that the local SNR exceeds 0 dB was shown to be identical to the Wiener gain function. Results indicated that the proposed estimators yielded significantly better speech quality than the conventional MMSE spectral power estimators, in terms of lower residual noise and lower speech distortion. PMID:21886543
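
    A minimal sketch of the two gain functions the abstract contrasts, each applied per time-frequency bin to the noisy magnitude-squared spectrum: the binary, ideal-binary-mask-like MAP gain with its 0-dB threshold, and the Wiener gain to which the SNR-uncertainty soft mask reduces. The SNR values are hypothetical inputs:

        import numpy as np

        def binary_mask_gain(snr_linear):
            """Binary (IdBM-like) MAP gain: keep the bin if the local SNR
            exceeds 0 dB (i.e., 1 in linear units), otherwise zero it."""
            return (snr_linear > 1.0).astype(float)

        def wiener_gain(snr_linear):
            """Soft-mask (Wiener) gain: snr / (1 + snr)."""
            return snr_linear / (1.0 + snr_linear)

        snr_db = np.array([-10.0, -3.0, 3.0, 10.0])
        snr = 10.0 ** (snr_db / 10.0)
        print(binary_mask_gain(snr))          # hard 0/1 decisions
        print(np.round(wiener_gain(snr), 3))  # graded attenuation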

  7. Classic maximum entropy recovery of the average joint distribution of apparent FRET efficiency and fluorescence photons for single-molecule burst measurements.

    PubMed

    DeVore, Matthew S; Gull, Stephen F; Johnson, Carey K

    2012-04-05

    We describe a method for analysis of single-molecule Förster resonance energy transfer (FRET) burst measurements using classic maximum entropy. Classic maximum entropy determines the Bayesian inference for the joint probability describing the total fluorescence photons and the apparent FRET efficiency. The method was tested with simulated data and then with DNA labeled with fluorescent dyes. The most probable joint distribution can be marginalized to obtain both the overall distribution of fluorescence photons and the apparent FRET efficiency distribution. This method proves to be ideal for determining the distance distribution of FRET-labeled biomolecules, and it successfully predicts the shape of the recovered distributions.

  8. Climate Projections from the NARCliM Project: Bayesian Model Averaging of Maximum Temperature Projections

    NASA Astrophysics Data System (ADS)

    Olson, R.; Evans, J. P.; Fan, Y.

    2015-12-01

    NARCliM (NSW/ACT Regional Climate Modelling Project) is a regional climate project for Australia and the surrounding region. It dynamically downscales 4 General Circulation Models (GCMs) using three Regional Climate Models (RCMs) to provide climate projections for the CORDEX-AustralAsia region at 50 km resolution, and for south-east Australia at 10 km resolution. The project differs from previous work in the level of sophistication of model selection. Specifically, the selection process for GCMs included (i) conducting a literature review to evaluate model performance, (ii) analysing model independence, and (iii) selecting models that span the future temperature and precipitation change space. RCMs for downscaling the GCMs were chosen based on their performance for several precipitation events over South-East Australia, and on model independence. Bayesian Model Averaging (BMA) provides a statistically consistent framework for weighting the models based on their likelihood given the available observations. These weights are used to provide probability density functions (pdfs) for model projections. We develop a BMA framework for constructing probabilistic climate projections for spatially averaged variables from the NARCliM project. The first step in the procedure is smoothing model output in order to exclude the influence of internal climate variability. Our statistical model for the model-observation residuals is a homoskedastic iid process. Comparison of the RCMs with Australian Water Availability Project (AWAP) observations is used to determine model weights through Monte Carlo integration. Posterior pdfs of the statistical parameters of the model-data residuals are obtained using Markov Chain Monte Carlo. The uncertainty in the properties of the model-data residuals is fully accounted for when constructing the projections. We present the preliminary results of the BMA analysis for yearly maximum temperature for New South Wales state planning regions for the period 2060-2079.
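
    A stripped-down sketch of the BMA weighting step under the stated residual model (homoskedastic iid Gaussian residuals): each model's weight is proportional to its likelihood given the observations, and the predictive pdf is the corresponding weighted mixture. The observation values and the fixed residual scale below are hypothetical:

        import numpy as np
        from scipy.stats import norm

        def bma_weights(obs, model_means, sigma):
            """Model weights proportional to each model's likelihood of the
            observations, assuming homoskedastic iid Gaussian residuals."""
            loglik = np.array([norm.logpdf(obs, loc=m, scale=sigma).sum()
                               for m in model_means])
            w = np.exp(loglik - loglik.max())    # stabilize before normalizing
            return w / w.sum()

        def bma_pdf(x, forecasts, weights, sigma):
            """BMA predictive pdf: weighted average of pdfs centered on the
            individual model forecasts."""
            return sum(w * norm.pdf(x, loc=f, scale=sigma)
                       for w, f in zip(weights, forecasts))

        # hypothetical: 3 model chains vs 10 years of observed Tmax (deg C)
        obs = np.array([31.2, 32.0, 30.8, 31.5, 32.3,
                        31.1, 30.9, 31.8, 32.1, 31.4])
        model_means = [np.full(10, 31.5), np.full(10, 30.0), np.full(10, 33.0)]
        w = bma_weights(obs, model_means, sigma=0.8)
        print(np.round(w, 3))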

  9. Efficient Bayesian experimental design for contaminant source identification

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Zeng, L.

    2013-12-01

    In this study, an efficient full Bayesian approach is developed for the optimal design of sampling well locations and the identification of groundwater contaminant source parameters. An information measure, the relative entropy, is employed to quantify the information gain from indirect concentration measurements in identifying unknown source parameters such as the release time, strength and location. In this approach, the sampling location that gives the maximum relative entropy is selected as the optimal one. Once the sampling location is determined, a Bayesian approach based on Markov Chain Monte Carlo (MCMC) is used to estimate the unknown source parameters. In both the design and the estimation, the contaminant transport equation must be solved many times to evaluate the likelihood. To reduce the computational burden, an interpolation method based on an adaptive sparse grid is used to construct a surrogate for the contaminant transport model. The approximate likelihood can be evaluated directly from the surrogate, which greatly accelerates the design and estimation process. The accuracy and efficiency of our approach are demonstrated through numerical case studies. Compared with traditional optimal design, which is based on a Gaussian linear assumption, the method developed in this study can cope with arbitrary nonlinearity. It can be used to assist in groundwater monitoring network design and in the identification of unknown contaminant sources. [Figure captions: contours of the expected information gain, whose maximum marks the optimal observation location; posterior marginal probability densities of the unknown parameters at the designed location (thick solid black lines) compared with 7 randomly chosen locations, with true values marked by vertical lines, showing that the unknown parameters are estimated better with the designed location.]
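
    On a discrete parameter grid the design criterion is straightforward to compute: for each candidate sampling location, the expected information gain is the average, over possible measurements, of the relative entropy from prior to posterior. A minimal sketch with hypothetical likelihood tables standing in for the surrogate transport model:

        import numpy as np

        def expected_information_gain(prior, likelihoods):
            """Expected relative entropy from prior to posterior for one
            candidate location; likelihoods[y, t] = p(y | theta_t)."""
            evidence = likelihoods @ prior            # p(y)
            gain = 0.0
            for y, p_y in enumerate(evidence):
                if p_y <= 0.0:
                    continue
                post = likelihoods[y] * prior / p_y   # p(theta | y)
                nz = post > 0.0
                gain += p_y * np.sum(post[nz] * np.log(post[nz] / prior[nz]))
            return gain

        prior = np.array([0.5, 0.5])                  # two source scenarios
        L_A = np.array([[0.9, 0.2], [0.1, 0.8]])      # informative location
        L_B = np.array([[0.55, 0.45], [0.45, 0.55]])  # nearly uninformative
        print(expected_information_gain(prior, L_A),
              expected_information_gain(prior, L_B))  # pick the larger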

  10. Combining high-throughput sequencing and targeted loci data to infer the phylogeny of the "Adenocalymma-Neojobertia" clade (Bignonieae, Bignoniaceae).

    PubMed

    Fonseca, Luiz Henrique M; Lohmann, Lúcia G

    2018-06-01

    Combining high-throughput sequencing data with amplicon sequences allows the reconstruction of robust phylogenies based on comprehensive sampling of characters and taxa. Here, we combine Next Generation Sequencing (NGS) and Sanger sequencing data to infer the phylogeny of the "Adenocalymma-Neojobertia" clade (Bignonieae, Bignoniaceae), a diverse lineage of Neotropical plants, using Maximum Likelihood and Bayesian approaches. We used NGS to obtain complete or nearly complete plastomes of members of this clade, leading to a final dataset with 54 individuals, representing 44 ingroup members and 10 outgroups. In addition, we obtained Sanger sequences of two plastid markers (ndhF and rpl32-trnL) for 44 individuals (43 ingroup and 1 outgroup) and the nuclear PepC for 64 individuals (63 ingroup and 1 outgroup). Our final dataset includes 87 individuals of the "Adenocalymma-Neojobertia" clade, representing 66 species (ca. 90% of the diversity), plus 11 outgroups. Plastid and nuclear datasets recovered congruent topologies and were combined. The combined analysis recovered a monophyletic "Adenocalymma-Neojobertia" clade and a paraphyletic Adenocalymma that also contained a monophyletic Neojobertia plus Pleonotoma albiflora. Relationships are strongly supported in all analyses, with most lineages within the "Adenocalymma-Neojobertia" clade receiving maximum posterior probabilities. Ancestral character state reconstructions using Bayesian approaches identified six morphological synapomorphies of clades, namely prophyll type, petiole and petiolule articulation, tendril ramification, inflorescence ramification, calyx shape, and fruit wings. Other characters, such as habit, calyx cupular trichomes, corolla color, and corolla shape, evolved multiple times. These characters are putatively related to the clade's diversification and can be further explored in diversification studies. Copyright © 2018 Elsevier Inc. All rights reserved.

  11. Multiple Neural Mechanisms of Decision Making and Their Competition under Changing Risk Pressure

    PubMed Central

    Kolling, Nils; Wittmann, Marco; Rushworth, Matthew F.S.

    2014-01-01

    Summary: Sometimes when a choice is made, the outcome is not guaranteed and there is only a probability of its occurrence. Each individual’s attitude to probability, sometimes called risk proneness or aversion, has been assumed to be static. Behavioral ecological studies, however, suggest such attitudes are dynamically modulated by the context an organism finds itself in; in some cases, it may be optimal to pursue actions with a low probability of success but which are associated with potentially large gains. We show that human subjects rapidly adapt their use of probability as a function of current resources, goals, and opportunities for further foraging. We demonstrate that dorsal anterior cingulate cortex (dACC) carries signals indexing the pressure to pursue unlikely choices and signals related to the taking of such choices. We show that dACC exerts this control over behavior when it, rather than ventromedial prefrontal cortex, interacts with posterior cingulate cortex. PMID:24607236

  12. 50 CFR 648.100 - Catch quotas and other restrictions.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... at least a 50-percent probability of success, a fishing mortality rate (F) that produces the maximum... probability of success, that the F specified in paragraph (a) of this section will not be exceeded: (1... necessary to ensure, with at least a 50-percent probability of success, that the applicable specified F will...

  13. Anatomic Tumor Location Influences the Success of Contemporary Limb-Sparing Surgery and Radiation Among Adults With Soft Tissue Sarcomas of the Extremities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Korah, Mariam P., E-mail: mariam.philip@gmail.com; Deyrup, Andrea T.; Monson, David K.

    2012-02-01

    Purpose: To examine the influence of anatomic location in the upper extremity (UE) vs. lower extremity (LE) on the presentation and outcomes of adult soft tissue sarcomas (STS). Methods and Materials: From 2001 to 2008, 118 patients underwent limb-sparing surgery (LSS) and external beam radiotherapy (RT) with curative intent for nonrecurrent extremity STS. RT was delivered preoperatively in 96 and postoperatively in 22 patients. Lesions arose in the UE in 28 and in the LE in 90 patients. Patients with UE lesions had smaller tumors (4.5 vs. 9.0 cm, p < 0.01) and were more likely to have undergone a prior excision (43 vs. 22%, p = 0.03), to have close or positive margins after resection (71 vs. 49%, p = 0.04), and to undergo postoperative RT (32 vs. 14%, p = 0.04). Results: Five-year actuarial local recurrence-free and distant metastasis-free survival rates for the entire group were 85 and 74%, with no difference observed between the UE and LE cohorts. Five-year actuarial wound reoperation rates were 4 vs. 29% (p < 0.01) in the UE and LE, respectively. Thigh lesions accounted for 84% of the required wound reoperations. The distribution of tumors within the anterior, medial, and posterior thigh compartments was 51%, 26%, and 23%. Subset analysis by compartment showed no difference in the probability of wound reoperation between the anterior and medial/posterior compartments (29 vs. 30%, p = 0.68). Neurolysis was performed during resection in 15%, 5%, and 67% of tumors in the anterior, medial, and posterior compartments, respectively (p < 0.01). Conclusions: Tumors in the UE and LE differ significantly with respect to size and management details. The anatomy of the UE poses technical impediments to an R0 resection. Thigh tumors are associated with higher wound reoperation rates. Tumor resection in the posterior thigh compartment is more likely to result in nerve injury. A better understanding of the inherent differences between tumors in various extremity sites will assist in individualizing treatment.

  14. Articaine for supplemental intraosseous anesthesia in patients with irreversible pulpitis.

    PubMed

    Bigby, Jason; Reader, Al; Nusstein, John; Beck, Mike; Weaver, Joel

    2006-11-01

    The purpose of this study was to determine the anesthetic efficacy and heart rate effects of 4% articaine with 1:100,000 epinephrine for supplemental intraosseous injection in mandibular posterior teeth diagnosed with irreversible pulpitis. Thirty-seven emergency patients, diagnosed with irreversible pulpitis of a mandibular posterior tooth, received an inferior alveolar nerve block and had moderate-to-severe pain upon endodontic access. The Stabident system was used to administer 1.8 ml of 4% articaine with 1:100,000 epinephrine. Success of the intraosseous injection was defined as no or mild pain upon endodontic access or initial instrumentation. The results demonstrated that anesthetic success was obtained in 86% (32 of 37) of the patients. Maximum mean heart rate increased by 32 beats/minute during the intraosseous injection. We conclude that when the inferior alveolar nerve block fails to provide profound pulpal anesthesia, the intraosseous injection of 4% articaine with 1:100,000 epinephrine would be successful 86% of the time in achieving pulpal anesthesia in mandibular posterior teeth of patients presenting with irreversible pulpitis.

  15. A person is not a number: discourse involvement in subject-verb agreement computation.

    PubMed

    Mancini, Simona; Molinaro, Nicola; Rizzi, Luigi; Carreiras, Manuel

    2011-09-02

    Agreement is a very important mechanism for language processing. Mainstream psycholinguistic research on subject-verb agreement processing has emphasized the purely formal and encapsulated nature of this phenomenon, positing an equivalent access to person and number features. However, person and number are intrinsically different, because person conveys extra-syntactic information concerning the participants in the speech act. To test the person-number dissociation hypothesis we investigated the neural correlates of subject-verb agreement in Spanish, using person and number violations. While number agreement violations produced a left-anterior negativity followed by a P600 with a posterior distribution, the negativity elicited by person anomalies had a centro-posterior maximum and was followed by a P600 effect that was frontally distributed in the early phase and posteriorly distributed in the late phase. These data reveal that the parser is differentially sensitive to the two features and that it deals with the two anomalies by adopting different strategies, due to the different levels of analysis affected by the person and number violations. Copyright © 2011 Elsevier B.V. All rights reserved.

  16. Classification of change detection and change blindness from near-infrared spectroscopy signals

    NASA Astrophysics Data System (ADS)

    Tanaka, Hirokazu; Katura, Takusige

    2011-08-01

    Using a machine-learning classification algorithm applied to near-infrared spectroscopy (NIRS) signals, we classify success (change detection) versus failure (change blindness) in detecting visual changes in a change-detection task. Five subjects perform a change-detection task while their brain activity is continuously monitored. A support-vector-machine algorithm is applied to classify the change-detection and change-blindness trials, and a correct classification probability of 70-90% is obtained for four subjects. Two types of temporal profile in classification probability are found: one exhibiting a maximum value after the task is completed (postdictive type), and another exhibiting a maximum value during the task (predictive type). For the postdictive type, the classification probability begins to increase immediately after task completion and reaches its maximum on roughly the time scale of the neuronal hemodynamic response, reflecting a subjective report of change detection. For the predictive type, the classification probability increases at task initiation and is maximal while subjects are performing the task, predicting task performance in detecting a change. We conclude that decoding change detection and change blindness from NIRS signals is possible and discuss possible future applications toward brain-machine interfaces.

  17. Entropy Methods For Univariate Distributions in Decision Analysis

    NASA Astrophysics Data System (ADS)

    Abbas, Ali E.

    2003-03-01

    One of the most important steps in decision analysis practice is the elicitation of the decision-maker's belief about an uncertainty of interest in the form of a representative probability distribution. However, the probability elicitation process is a task that involves many cognitive and motivational biases. Alternatively, the decision-maker may provide other information about the distribution of interest, such as its moments, and the maximum entropy method can be used to obtain a full distribution subject to the given moment constraints. In practice, however, decision makers cannot readily provide moments for the distribution, and are much more comfortable providing information about the fractiles of the distribution of interest or bounds on its cumulative probabilities. In this paper we present a graphical method to determine the maximum entropy distribution between upper and lower probability bounds and provide an interpretation of the shape of the maximum entropy distribution subject to fractile constraints (FMED). We also discuss the limitations of the FMED, namely that it is discontinuous and flat over each fractile interval. We present a heuristic approximation to a distribution when, in addition to its fractiles, we also know it is continuous, and we work through full examples to illustrate the approach.
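
    The flat-over-each-interval shape of the FMED noted above makes it simple to construct: between consecutive fractiles, the maximum entropy density is uniform, with height equal to the interval's probability mass divided by its width. A minimal sketch (the elicited fractiles are hypothetical, and mass outside the outermost fractiles is ignored):

        import numpy as np

        def fmed_heights(fractiles, cum_probs):
            """Density heights of the fractile-constrained maximum entropy
            distribution: uniform over each fractile interval, so the height
            is the interval's probability mass divided by its width."""
            q = np.asarray(fractiles, dtype=float)
            p = np.asarray(cum_probs, dtype=float)
            return np.diff(p) / np.diff(q)

        # hypothetical elicited 5th/25th/50th/75th/95th percentiles
        q = [2.0, 5.0, 8.0, 12.0, 20.0]
        p = [0.05, 0.25, 0.50, 0.75, 0.95]
        print(np.round(fmed_heights(q, p), 4))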

  18. Myopic Macular Retinoschisis in Teenagers: Clinical Characteristics and Spectral Domain Optical Coherence Tomography Findings

    PubMed Central

    Sun, Chuan-bin; You, Yong-sheng; Liu, Zhe; Zheng, Lin-yan; Chen, Pei-qing; Yao, Ke; Xue, An-quan

    2016-01-01

    To investigate the morphological characteristics of myopic macular retinoschisis (MRS) in teenagers with high myopia, six male (9 eyes) and three female (4 eyes) teenagers with typical MRS identified from chart review were evaluated. All cases underwent complete ophthalmic examinations including best corrected visual acuity (BCVA), indirect ophthalmoscopy, colour fundus photography, B-type ultrasonography, axial length measurement, and spectral-domain optical coherence tomography (SD-OCT). The average age was 17.8 ± 1.5 years, average refractive error was −17.04 ± 3.04 D, average BCVA was 0.43 ± 0.61, and average axial length was 30.42 ± 1.71 mm. Grading of myopic macular degenerative changes (MDC) on colour fundus photographs revealed Ohno-Matsui Category 1 in 4 eyes and Category 2 in 9 eyes. Posterior staphyloma was found in 9 eyes. SD-OCT showed outer MRS in all 13 eyes, internal limiting membrane detachment in 7 eyes, vascular microfolds in 2 eyes, and inner MRS in 1 eye. No premacular structures such as macular epiretinal membrane or partially detached posterior hyaloid were found. Our results showed that MRS rarely occurred in highly myopic teenagers, and was not accompanied by premacular structures, severe MDC, or even obvious posterior staphyloma. This finding indicates that posterior scleral expansion is probably the main cause of MRS. PMID:27294332

  19. Random Partition Distribution Indexed by Pairwise Information

    PubMed Central

    Dahl, David B.; Day, Ryan; Tsai, Jerry W.

    2017-01-01

    We propose a random partition distribution indexed by pairwise similarity information such that partitions compatible with the similarities are given more probability. The use of pairwise similarities, in the form of distances, is common in some clustering algorithms (e.g., hierarchical clustering), but we show how to use this type of information to define a prior partition distribution for flexible Bayesian modeling. A defining feature of the distribution is that it allocates probability among partitions within a given number of subsets, but it does not shift probability among sets of partitions with different numbers of subsets. Our distribution places more probability on partitions that group similar items yet keeps the total probability of partitions with a given number of subsets constant. The distribution of the number of subsets (and its moments) is available in closed-form and is not a function of the similarities. Our formulation has an explicit probability mass function (with a tractable normalizing constant) so the full suite of MCMC methods may be used for posterior inference. We compare our distribution with several existing partition distributions, showing that our formulation has attractive properties. We provide three demonstrations to highlight the features and relative performance of our distribution. PMID:29276318

  20. Probabilistic assessment of precipitation-triggered landslides using historical records of landslide occurrence, Seattle, Washington

    USGS Publications Warehouse

    Coe, J.A.; Michael, J.A.; Crovelli, R.A.; Savage, W.Z.; Laprade, W.T.; Nashem, W.D.

    2004-01-01

    Ninety years of historical landslide records were used as input to the Poisson and binomial probability models. Results from these models show that, for precipitation-triggered landslides, approximately 9 percent of the area of Seattle has annual exceedance probabilities of 1 percent or greater. Application of the Poisson model for estimating the future occurrence of individual landslides results in a worst-case scenario map, with a maximum annual exceedance probability of 25 percent on a hillslope near Duwamish Head in West Seattle. Application of the binomial model for estimating the future occurrence of a year with one or more landslides results in a map with a maximum annual exceedance probability of 17 percent (also near Duwamish Head). Slope and geology both play a role in localizing the occurrence of landslides in Seattle. A positive correlation exists between slope and mean exceedance probability, with probability tending to increase as slope increases. Sixty-four percent of all historical landslide locations are within 150 m (500 ft, horizontal distance) of the Esperance Sand/Lawton Clay contact, but within this zone, no positive or negative correlation exists between exceedance probability and distance to the contact.
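
    The Poisson calculation behind annual exceedance probabilities of this kind is compact: with the rate estimated from the historical record, the probability of one or more landslides in a given year is 1 - exp(-rate). A minimal sketch with hypothetical counts:

        import math

        def poisson_annual_exceedance(n_events, record_years):
            """Annual probability of one or more landslides at a site, with
            the Poisson rate estimated from the historical record."""
            rate = n_events / record_years      # mean events per year
            return 1.0 - math.exp(-rate)

        # hypothetical: 30 landslides recorded at a hillslope in 90 years
        print(round(poisson_annual_exceedance(30, 90), 3))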

  1. Added value of non-calibrated and BMA calibrated AEMET-SREPS probabilistic forecasts: the 24 January 2009 extreme wind event over Catalonia

    NASA Astrophysics Data System (ADS)

    Escriba, P. A.; Callado, A.; Santos, D.; Santos, C.; Simarro, J.; García-Moya, J. A.

    2009-09-01

    At 00 UTC on 24 January 2009 an explosive cyclogenesis that originated over the Atlantic Ocean reached its maximum intensity, with observed surface pressures below 970 hPa at its center, then located over the Bay of Biscay. During its path across southern France this low caused strong westerly and north-westerly winds over the Iberian Peninsula, exceeding 150 km/h in places. These extreme winds caused 10 casualties in Spain, 8 of them in Catalonia. The aim of this work is to show whether there is added value in the short-range prediction of the 24 January 2009 strong winds when using the Short Range Ensemble Prediction System (SREPS) of the Spanish Meteorological Agency (AEMET), with respect to the operational forecasting tools. This study emphasizes two aspects of probabilistic forecasting: the ability of a 3-day forecast to warn of an extreme wind event, and the ability to quantify the predictability of the event so as to give value to the deterministic forecast. Two types of probabilistic wind forecasts are carried out, a non-calibrated one and one calibrated using Bayesian Model Averaging (BMA). AEMET runs SREPS experimentally twice a day (00 and 12 UTC). This system consists of 20 members constructed by integrating 5 limited-area models, COSMO (COSMO), HIRLAM (HIRLAM Consortium), HRM (DWD), MM5 (NOAA) and UM (UKMO), at 25 km horizontal resolution. Each model uses 4 different initial and boundary conditions, from the global models GFS (NCEP), GME (DWD), IFS (ECMWF) and UM. In this way a probabilistic forecast is obtained that takes into account initial-condition, boundary-condition and model errors. BMA is a statistical tool for combining predictive probability functions from different sources. The BMA predictive probability density function (PDF) is a weighted average of PDFs centered on the individual bias-corrected forecasts. The weights are equal to the posterior probabilities of the models generating the forecasts and reflect the skill of the ensemble members. Here BMA is applied to provide probabilistic forecasts of wind speed. In this work several forecasts for different time ranges (H+72, H+48 and H+24) of 10-meter wind speed over Catalonia are verified subjectively at one of the instants of maximum intensity, 12 UTC 24 January 2009. On the one hand, three probabilistic forecasts are compared: ECMWF EPS, non-calibrated SREPS and calibrated SREPS. On the other hand, the relationship between predictability and the skill of the deterministic forecast is studied by looking at HIRLAM 0.16 deterministic forecasts of the event. Verification focuses on the location and intensity of 10-meter wind speed, and 10-minute measurements from AEMET automatic ground stations are used as observations. The results indicate that SREPS is able to forecast, three days ahead, mean winds higher than 36 km/h and correctly localizes them with a significant probability of occurrence in the affected area. The probability is higher after BMA calibration of the ensemble. The high forecast probability of strong winds indicates that the predictability of the event was also high and, as a consequence, that deterministic forecasts are more reliable. This is confirmed when verifying the HIRLAM deterministic forecasts against observed values.

  2. Application of the Markov Chain Monte Carlo method for snow water equivalent retrieval based on passive microwave measurements

    NASA Astrophysics Data System (ADS)

    Pan, J.; Durand, M. T.; Vanderjagt, B. J.

    2015-12-01

    The Markov Chain Monte Carlo (MCMC) method is a retrieval algorithm based on Bayes' rule: it starts from an initial state of snow/soil parameters and updates it through a series of new states by comparing the posterior probability of the simulated snow microwave signals before and after each random-walk step. It thereby approximates the probability of the snow/soil parameters conditioned on the measured microwave TB signals at different bands. Although this method can retrieve all snow parameters, including depth, density, snow grain size and temperature, at the same time, it still needs prior information on these parameters to calculate the posterior probability. How the priors influence the SWE retrieval is a major concern. Therefore, in this paper a sensitivity test will first be carried out to study how accurate the snow emission models, and how explicit the snow priors, need to be to keep the SWE error within a given bound. Synthetic TB simulated from the measured snow properties, plus a 2-K observation error, will be used for this purpose. The test aims to provide guidance on applying MCMC under different circumstances. Later, the method will be applied to snowpits at different sites, including Sodankyla (Finland), Churchill (Canada) and Colorado (USA), using measured TB from ground-based radiometers at different bands. Building on the previous work, the errors in these practical cases will be studied, and the error sources will be separated and quantified.
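
    The random-walk update described above is the standard Metropolis rule: propose a perturbed state, compare the posterior probabilities before and after, and accept or reject accordingly. The sketch below uses a toy one-parameter SWE retrieval with a made-up linear TB model (only the 2-K observation error echoes the abstract); it is not the authors' emission model:

        import numpy as np

        def metropolis(log_posterior, x0, step, n_steps, rng):
            """Random-walk Metropolis: propose a perturbed state, compare
            posterior probabilities before and after, accept or reject."""
            x, lp = x0, log_posterior(x0)
            chain = np.empty(n_steps)
            for i in range(n_steps):
                x_new = x + rng.normal(scale=step)
                lp_new = log_posterior(x_new)
                if np.log(rng.uniform()) < lp_new - lp:  # accept w.p. min(1, ratio)
                    x, lp = x_new, lp_new
                chain[i] = x
            return chain

        def log_post(swe_mm):
            """Toy 1-parameter retrieval: Gaussian SWE prior plus a fake
            linear TB model with a 2-K observation error."""
            prior = -0.5 * ((swe_mm - 150.0) / 50.0) ** 2
            tb_sim = 250.0 - 0.1 * swe_mm
            like = -0.5 * ((tb_sim - 232.0) / 2.0) ** 2
            return prior + like

        rng = np.random.default_rng(1)
        chain = metropolis(log_post, 150.0, 10.0, 5000, rng)
        print(round(float(chain[1000:].mean()), 1), "mm posterior-mean SWE")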

  3. Similar early migration when comparing CR and PS in Triathlon™ TKA: A prospective randomised RSA trial.

    PubMed

    Molt, Mats; Toksvig-Larsen, Sören

    2014-10-01

    The objective of this study was to compare the early migration of the cruciate retaining and posterior stabilising versions of the recently introduced Triathlon™ total knee system, with a view to predicting long-term fixation performance. Sixty patients were prospectively randomised to receive either the Triathlon™ posterior stabilised cemented knee prosthesis or the Triathlon™ cruciate retaining cemented knee prosthesis. Tibial component migration was measured by radiostereometric analysis (RSA) postoperatively and at three months, one year and two years. Clinical outcome was measured by the American Knee Society Score and the Knee Osteoarthritis and Injury Outcome Score. There were no differences in rotation around the three coordinate axes or in the maximum total point motion (MTPM) during the two-year follow-up. The posterior stabilised prosthesis had more posterior-anterior translation at three months and one year, and more caudal-cranial translation at one year and two years. There were no differences in functional outcome between the groups. The tibial tray of the Triathlon™ cemented knee prosthesis showed similar early stability in both versions. Level I. Article focus: a prospective randomised trial comparing the single-radius posterior stabilised (PS) Triathlon™ total knee arthroplasty (TKA) with the cruciate retaining Triathlon™ TKA system with regard to fixation. A strength of this study is that it is a randomised prospective trial using an objective measuring tool; a sample size of 25-30 patients was reportedly sufficient for the screening of implants using RSA [1]. ClinicalTrials.gov Identifier: NCT00436982. Copyright © 2014 Elsevier B.V. All rights reserved.

  4. Functional compartmentalization of the human superficial masseter muscle.

    PubMed

    Guzmán-Venegas, Rodrigo A; Biotti Picand, Jorge L; de la Rosa, Francisco J Berral

    2015-01-01

    Some muscles have demonstrated a differential recruitment of their motor units in relation to their location and the nature of the motor task performed; this involves functional compartmentalization. There is little evidence that demonstrates the presence of a compartmentalization of the superficial masseter muscle during biting. The aim of this study was to describe the topographic distribution of the activity of the superficial masseter (SM) muscle's motor units using high-density surface electromyography (EMGs) at different bite force levels. Twenty healthy natural dentate participants (men: 4; women: 16; age 20±2 years; mass: 60±12 kg, height: 163±7 cm) were selected from 316 volunteers and included in this study. Using a gnathodynamometer, bites from 20 to 100% maximum voluntary bite force (MVBF) were randomly requested. Using a two-dimensional grid (four columns, six electrodes) located on the dominant SM, EMGs in the anterior, middle-anterior, middle-posterior and posterior portions were simultaneously recorded. In bite ranges from 20 to 60% MVBF, the EMG activity was higher in the anterior than in the posterior portion (p-value = 0.001).The center of mass of the EMG activity was displaced towards the posterior part when bite force increased (p-value = 0.001). The topographic distribution of EMGs was more homogeneous at high levels of MVBF (p-value = 0.001). The results of this study show that the superficial masseter is organized into three functional compartments: an anterior, a middle and a posterior compartment. However, this compartmentalization is only seen at low levels of bite force (20-60% MVBF).

  5. Functional Compartmentalization of the Human Superficial Masseter Muscle

    PubMed Central

    Guzmán-Venegas, Rodrigo A.; Biotti Picand, Jorge L.; de la Rosa, Francisco J. Berral

    2015-01-01

    Some muscles have demonstrated a differential recruitment of their motor units in relation to their location and the nature of the motor task performed; this involves functional compartmentalization. There is little evidence that demonstrates the presence of a compartmentalization of the superficial masseter muscle during biting. The aim of this study was to describe the topographic distribution of the activity of the superficial masseter (SM) muscle’s motor units using high-density surface electromyography (EMGs) at different bite force levels. Twenty healthy natural dentate participants (men: 4; women: 16; age 20±2 years; mass: 60±12 kg, height: 163±7 cm) were selected from 316 volunteers and included in this study. Using a gnathodynamometer, bites from 20 to 100% maximum voluntary bite force (MVBF) were randomly requested. Using a two-dimensional grid (four columns, six electrodes) located on the dominant SM, EMGs in the anterior, middle-anterior, middle-posterior and posterior portions were simultaneously recorded. In bite ranges from 20 to 60% MVBF, the EMG activity was higher in the anterior than in the posterior portion (p-value = 0.001).The center of mass of the EMG activity was displaced towards the posterior part when bite force increased (p-value = 0.001). The topographic distribution of EMGs was more homogeneous at high levels of MVBF (p-value = 0.001). The results of this study show that the superficial masseter is organized into three functional compartments: an anterior, a middle and a posterior compartment. However, this compartmentalization is only seen at low levels of bite force (20–60% MVBF). PMID:25692977

  6. Development of a methodology for probable maximum precipitation estimation over the American River watershed using the WRF model

    NASA Astrophysics Data System (ADS)

    Tan, Elcin

    A new physically-based methodology for probable maximum precipitation (PMP) estimation is developed over the American River Watershed (ARW) using the Weather Research and Forecasting (WRF-ARW) model. A persistent moisture flux convergence pattern, called the Pineapple Express, is analyzed for 42 historical extreme precipitation events, and it is found that the Pineapple Express causes extreme precipitation over the basin of interest. The average correlation between moisture flux convergence and maximum precipitation is estimated as 0.71 over the 42 events. The performance of the WRF model is verified for precipitation by means of calibration and independent validation of the model. The calibration procedure is performed only for the first-ranked flood event (the 1997 case), whereas the WRF model is validated against all 42 historical cases. Three nested model domains are set up with horizontal resolutions of 27 km, 9 km, and 3 km over the basin of interest. Based on Chi-square goodness-of-fit tests, the hypothesis that "the WRF model can be used in the determination of PMP over the ARW for both areal average and point estimates" is accepted at the 5% level of significance. The sensitivity of precipitation to model physics options is determined using 28 combinations of microphysics, atmospheric boundary layer, and cumulus parameterization schemes. It is concluded that the best triplet option is Thompson microphysics, Grell 3D ensemble cumulus, and YSU boundary layer (TGY), based on the 42 historical cases, and this TGY triplet is used for all analyses in this research. Four techniques are proposed to evaluate physically possible maximum precipitation using the WRF: 1. Perturbation of atmospheric conditions; 2. Shift in atmospheric conditions; 3. Replacement of atmospheric conditions among historical events; and 4. Creation of a thermodynamically possible worst-case scenario. Moreover, the effect of climate change on precipitation is discussed, with emphasis on temperature increase, in order to determine the physically possible upper limits of precipitation under climate change. The simulation results indicate that the meridional shift in atmospheric conditions is the optimum method to determine maximum precipitation in consideration of cost and efficiency. Finally, exceedance probability analyses of the model results for the 42 historical extreme precipitation events demonstrate that the 72-hr basin-averaged probable maximum precipitation is 21.72 inches at an exceedance probability of 0.5 percent. By comparison, the current operational PMP estimate for the American River Watershed is 28.57 inches, as published in Hydrometeorological Report No. 59, and a previous PMP value was 31.48 inches, as published in Hydrometeorological Report No. 36. According to the exceedance probability analyses of the proposed method, these two estimates correspond to exceedance probabilities of 0.036 percent and 0.011 percent, respectively.

  7. Percutaneous left atrial appendage closure vs warfarin for atrial fibrillation: a randomized clinical trial.

    PubMed

    Reddy, Vivek Y; Sievert, Horst; Halperin, Jonathan; Doshi, Shephal K; Buchbinder, Maurice; Neuzil, Petr; Huber, Kenneth; Whisenant, Brian; Kar, Saibal; Swarup, Vijay; Gordon, Nicole; Holmes, David

    2014-11-19

    While effective in preventing stroke in patients with atrial fibrillation (AF), warfarin is limited by a narrow therapeutic profile, a need for lifelong coagulation monitoring, and multiple drug and diet interactions. To determine whether a local strategy of mechanical left atrial appendage (LAA) closure was noninferior to warfarin. PROTECT AF was a multicenter, randomized (2:1), unblinded, Bayesian-designed study conducted at 59 hospitals of 707 patients with nonvalvular AF and at least 1 additional stroke risk factor (CHADS2 score ≥1). Enrollment occurred between February 2005 and June 2008 and included 4-year follow-up through October 2012. Noninferiority required a posterior probability greater than 97.5% and superiority a probability of 95% or greater; the noninferiority margin was a rate ratio of 2.0 comparing event rates between treatment groups. Left atrial appendage closure with the device (n = 463) or warfarin (n = 244; target international normalized ratio, 2-3). A composite efficacy end point including stroke, systemic embolism, and cardiovascular/unexplained death, analyzed by intention-to-treat. At a mean (SD) follow-up of 3.8 (1.7) years (2621 patient-years), there were 39 events among 463 patients (8.4%) in the device group for a primary event rate of 2.3 events per 100 patient-years, compared with 34 events among 244 patients (13.9%) for a primary event rate of 3.8 events per 100 patient-years with warfarin (rate ratio, 0.60; 95% credible interval, 0.41-1.05), meeting prespecified criteria for both noninferiority (posterior probability, >99.9%) and superiority (posterior probability, 96.0%). Patients in the device group demonstrated lower rates of both cardiovascular mortality (1.0 events per 100 patient-years for the device group [17/463 patients, 3.7%] vs 2.4 events per 100 patient-years with warfarin [22/244 patients, 9.0%]; hazard ratio [HR], 0.40; 95% CI, 0.21-0.75; P = .005) and all-cause mortality (3.2 events per 100 patient-years for the device group [57/466 patients, 12.3%] vs 4.8 events per 100 patient-years with warfarin [44/244 patients, 18.0%]; HR, 0.66; 95% CI, 0.45-0.98; P = .04). After 3.8 years of follow-up among patients with nonvalvular AF at elevated risk for stroke, percutaneous LAA closure met criteria for both noninferiority and superiority, compared with warfarin, for preventing the combined outcome of stroke, systemic embolism, and cardiovascular death, as well as superiority for cardiovascular and all-cause mortality. clinicaltrials.gov Identifier: NCT00129545.

  8. MRI of the knee.

    PubMed

    Skinner, Sarah

    2012-11-01

    Magnetic resonance imaging (MRI) is the gold standard in noninvasive investigation of knee pain. It has a very high negative predictive value and may assist in avoiding unnecessary knee arthroscopy; its accuracy in the diagnosis of meniscal and anterior cruciate ligament (ACL) tears is greater than 89%; it has a greater than 90% sensitivity for the detection of medial meniscal tears; and it is probably better at assessing the posterior horn than arthroscopy.

  9. Probabilistic Damage Characterization Using the Computationally-Efficient Bayesian Approach

    NASA Technical Reports Server (NTRS)

    Warner, James E.; Hochhalter, Jacob D.

    2016-01-01

    This work presents a computationally-efficient approach for damage determination that quantifies uncertainty in the provided diagnosis. Given strain sensor data that are polluted with measurement errors, Bayesian inference is used to estimate the location, size, and orientation of damage. This approach uses Bayes' Theorem to combine any prior knowledge an analyst may have about the nature of the damage with information provided implicitly by the strain sensor data to form a posterior probability distribution over possible damage states. The unknown damage parameters are then estimated based on samples drawn numerically from this distribution using a Markov Chain Monte Carlo (MCMC) sampling algorithm. Several modifications are made to the traditional Bayesian inference approach to provide significant computational speedup. First, an efficient surrogate model is constructed using sparse grid interpolation to replace a costly finite element model that must otherwise be evaluated for each sample drawn with MCMC. Next, the standard Bayesian posterior distribution is modified using a weighted likelihood formulation, which is shown to improve the convergence of the sampling process. Finally, a robust MCMC algorithm, Delayed Rejection Adaptive Metropolis (DRAM), is adopted to sample the probability distribution more efficiently. Numerical examples demonstrate that the proposed framework effectively provides damage estimates with uncertainty quantification and can yield orders of magnitude speedup over standard Bayesian approaches.
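
    The surrogate idea can be sketched in a few lines: evaluate the costly forward model once on a parameter grid, fit an interpolant, and evaluate the likelihood through the interpolant for each MCMC sample. For brevity the sketch uses a regular grid and SciPy's RegularGridInterpolator in place of the paper's sparse grid interpolation, with a made-up closed-form stand-in for the finite element model:

        import numpy as np
        from scipy.interpolate import RegularGridInterpolator

        def expensive_model(a, b):
            """Stand-in for the costly forward model (e.g., a finite
            element strain solve); closed-form here for illustration."""
            return np.sin(a) * np.exp(-b)

        # build the surrogate once, on a coarse grid of the two parameters
        a_grid = np.linspace(0.0, 2.0, 21)
        b_grid = np.linspace(0.0, 1.0, 11)
        A, B = np.meshgrid(a_grid, b_grid, indexing="ij")
        surrogate = RegularGridInterpolator((a_grid, b_grid),
                                            expensive_model(A, B))

        def log_likelihood(theta, y_obs, sigma):
            """Gaussian log-likelihood evaluated through the cheap
            surrogate instead of the costly model, once per MCMC sample."""
            y_pred = float(surrogate(np.atleast_2d(theta))[0])
            return -0.5 * ((y_obs - y_pred) / sigma) ** 2

        print(round(log_likelihood([1.0, 0.5], y_obs=0.50, sigma=0.05), 3))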

  10. A novel Bayesian framework for discriminative feature extraction in Brain-Computer Interfaces.

    PubMed

    Suk, Heung-Il; Lee, Seong-Whan

    2013-02-01

    As the learning load has shifted from the human subject to the computer, machine learning has been considered a useful tool for Brain-Computer Interfaces (BCIs). In this paper, we propose a novel Bayesian framework for discriminative feature extraction for motor imagery classification in an EEG-based BCI, in which the class-discriminative frequency bands and the corresponding spatial filters are optimized by means of probabilistic and information-theoretic approaches. In our framework, the problem of simultaneous spatiospectral filter optimization is formulated as the estimation of an unknown posterior probability density function (pdf) that represents the probability that a single-trial EEG of predefined mental tasks can be discriminated in a state. In order to estimate the posterior pdf, we propose a particle-based approximation method that extends a factored-sampling technique with a diffusion process. An information-theoretic observation model is also devised to measure the discriminative power of features between classes. From the viewpoint of classifier design, the proposed method naturally allows us to construct a spectrally weighted label decision rule by linearly combining the outputs from multiple classifiers. We demonstrate the feasibility and effectiveness of the proposed method by analyzing its results and success on three public databases.

  11. The Nature and Control of Postural Adaptations of Boys with and without Developmental Coordination Disorder

    ERIC Educational Resources Information Center

    Przysucha, Eryk P.; Taylor, M. Jane; Weber, Douglas

    2008-01-01

    This study compared the nature of postural adaptations and control tendencies, between 7 (n = 9) and 11-year-old boys (n = 10) with Developmental Coordination Disorder (DCD) and age-matched, younger (n = 10) and older (n = 9) peers in a leaning task. Examination of anterior-posterior, medio-lateral, maximum and mean area of sway, and path length…

  12. SU-F-T-188: A Robust Treatment Planning Technique for Proton Pencil Beam Scanning Cranial Spinal Irradiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, M; Mehta, M; Badiyan, S

    2016-06-15

    Purpose: To propose a proton pencil beam scanning (PBS) cranial spinal irradiation (CSI) treatment planning technique robust against patient roll, isocenter offset and proton range uncertainty. Method: Proton PBS plans were created (Eclipse V11) for three previously treated CSI patients to 36 Gy (1.8 Gy/fraction). The target volume was separated into three regions: brain, upper spine and lower spine. One posterior-anterior (PA) beam was used for each spine region, and two posterior-oblique beams (15° apart from the PA direction, denoted as 2PO-15) for the brain region. For comparison, another plan using one PA beam for the brain target (denoted as 1PA) was created. Using the same optimization objectives, 98% of the CTV was optimized to receive the prescription dose. To evaluate plan robustness against patient roll, the gantry angle was increased by 3° and the dose was recalculated without changing the proton spot weights. On the recalculated plan, doses were then calculated for 12 scenarios that are combinations of isocenter shift (±3 mm in X, Y, and Z directions) and proton range variation (±3.5%). The worst-case-scenario (WCS) brain CTV dosimetric metrics were compared to the nominal plan. Results: For both beam arrangements, the brain field(s) and upper-spine field overlap in the T2–T5 region, depending on patient anatomy. The maximum monitor units per spot were 48.7%, 47.2%, and 40.0% higher for the 1PA plans than for the 2PO-15 plans for the three patients. The 2PO-15 plans have better dose conformity. At the same level of CTV coverage, the 2PO-15 plans have lower maximum dose and higher minimum dose to the CTV. The 2PO-15 plans also showed a lower WCS maximum dose to the CTV, while the WCS minimum doses to the CTV were comparable between the two techniques. Conclusion: Our method of using two posterior-oblique beams for the brain target provides improved dose conformity and homogeneity, and plan robustness including against patient roll.

  13. Dose-related difference in progression rates of cytomegalovirus retinopathy during foscarnet maintenance therapy. AIDS Clinical Trials Group Protocol 915 Team.

    PubMed

    Holland, G N; Levinson, R D; Jacobson, M A

    1995-05-01

    A previous dose-ranging study of foscarnet maintenance therapy for cytomegalovirus retinopathy showed a positive relationship between dose and survival but could not confirm a relationship between dose and time to first progression. This retrospective analysis of data from that study was undertaken to determine whether there was a relationship between dose and progression rates, which reflect the amount of retina destroyed when progression occurs. Patients were randomly given one of two foscarnet maintenance therapy doses (90 mg/kg of body weight/day [FOS-90 group] or 120 mg/kg of body weight/day [FOS-120 group]) after induction therapy. Using baseline and follow-up photographs and pre-established definitions and methodology in a masked analysis, posterior progression rates and foveal proximity rates for individual lesions, selected by prospectively defined criteria, were calculated in each patient. Rates were compared between groups. The following median rates were greater for the FOS-90 group (N = 8) than for the FOS-120 group (N = 10): greatest maximum rate at which lesions enlarged in a posterior direction (43.5 vs 12.5 microns/day; P = .002); posterior progression rate for lesions closest to the fovea (42.8 vs 5.5 microns/day; P = .010); and maximum foveal proximity rate for either eye (32.3 vs 3.4 microns/day; P = .031). Patients receiving higher doses of foscarnet have slower rates of progression and therefore less retinal tissue damage during maintenance therapy. A foscarnet maintenance therapy dose of 120 mg/kg of body weight/day instead of 90 mg/kg of body weight/day may help to preserve vision in patients with cytomegalovirus retinopathy.

  14. Excessive glenohumeral horizontal abduction as occurs during the late cocking phase of the throwing motion can be critical for internal impingement.

    PubMed

    Mihata, Teruhisa; McGarry, Michelle H; Kinoshita, Mitsuo; Lee, Thay Q

    2010-02-01

    The objective of this study was to determine the effects of increased horizontal abduction with maximum external rotation, as occurs during the late cocking phase of throwing motion, on shoulder internal impingement. An increase in glenohumeral horizontal abduction will cause overlap of the rotator cuff insertion with respect to the glenoid and increase pressure between the supraspinatus and infraspinatus tendon insertions on the greater tuberosity and the glenoid. Controlled laboratory study. Eight cadaveric shoulders were tested with a custom shoulder testing system with the specimens in 60 degrees of glenohumeral abduction and maximum external rotation. The amount of internal impingement was evaluated by assessing the location of the supraspinatus and infraspinatus articular insertions on the greater tuberosity relative to the glenoid using a MicroScribe 3DLX. Pressure in the posterior-superior quadrant of the glenoid was measured using Fuji prescale film. Data were obtained with the humerus in the scapular plane and 15 degrees, 30 degrees, and 45 degrees of horizontal abduction from the scapular plane. At 30 degrees and 45 degrees of horizontal abduction, the articular margin of the supraspinatus and infraspinatus tendons was anterior to the posterior edge of the glenoid and less than 2 mm from the glenoid rim in the lateral direction; the contact pressure was also greater than that found in the scapular plane and 15 degrees of horizontal abduction. Conclusion: Horizontal abduction beyond the coronal plane increased the amount of overlap and contact pressure between the supraspinatus and infraspinatus tendons and glenoid. Excessive glenohumeral horizontal abduction beyond the coronal plane may cause internal impingement, which may lead to rotator cuff tears and superior labral anterior to posterior (SLAP) lesions.

  15. Egg structure and ultrastructure of Paterdecolyus yanbarensis (Insecta, Orthoptera, Anostostomatidae, Anabropsinae).

    PubMed

    Mashimo, Yuta; Fukui, Makiko; Machida, Ryuichiro

    2016-11-01

    The egg structure of Paterdecolyus yanbarensis was examined using light, scanning electron and transmission electron microscopy. The egg surface shows a distinct honeycomb pattern formed by exochorionic ridges. Several micropyles are clustered on the ventral side of the egg. The egg membrane is composed of an exochorion penetrated with numerous aeropyles, an endochorion, and an extremely thin vitelline membrane. The endochorion is thickened at the posterior egg pole, probably associated with water absorption. A comparison of egg structure among Orthoptera revealed that the micropylar distribution pattern is conserved in Ensifera and Caelifera and might be regarded as a groundplan feature for each group; in Ensifera, multiple micropyles are clustered on the ventral side of the egg, whereas in Caelifera, micropyles are arranged circularly around the posterior pole of the egg. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. Structured filtering

    NASA Astrophysics Data System (ADS)

    Granade, Christopher; Wiebe, Nathan

    2017-08-01

    A major challenge facing existing sequential Monte Carlo methods for parameter estimation in physics stems from the inability of existing approaches to robustly deal with experiments that have different mechanisms that yield the results with equivalent probability. We address this problem here by proposing a form of particle filtering that clusters the particles that comprise the sequential Monte Carlo approximation to the posterior before applying a resampler. Through a new graphical approach to thinking about such models, we are able to devise an artificial-intelligence based strategy that automatically learns the shape and number of the clusters in the support of the posterior. We demonstrate the power of our approach by applying it to randomized gap estimation and a form of low circuit-depth phase estimation where existing methods from the physics literature either exhibit much worse performance or even fail completely.
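
    A minimal sketch of the clustering-before-resampling idea, with DBSCAN standing in for the paper's learned clustering (cluster parameters, shapes, and the handling of outlier particles are assumptions):

```python
import numpy as np
from sklearn.cluster import DBSCAN

def clustered_resample(particles, weights, eps=0.1, rng=None):
    """Resample an SMC particle cloud cluster by cluster so that
    well-separated posterior modes are not collapsed by a global resampler.
    particles: (n, d) array; weights: (n,) normalized weights."""
    rng = rng or np.random.default_rng()
    labels = DBSCAN(eps=eps, min_samples=5).fit_predict(particles)
    if (labels >= 0).sum() == 0:          # no clusters found: fall back
        return particles, weights
    new_particles = []
    for lab in np.unique(labels[labels >= 0]):
        idx = np.where(labels == lab)[0]
        w = weights[idx] / weights[idx].sum()
        # each cluster keeps its share of the total posterior mass
        n_draw = max(1, int(round(weights[idx].sum() * len(particles))))
        new_particles.append(particles[rng.choice(idx, size=n_draw, p=w)])
    out = np.vstack(new_particles)
    return out, np.full(len(out), 1.0 / len(out))
```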

  17. The right parietal cortex and time perception: back to Critchley and the Zeitraffer phenomenon.

    PubMed

    Alexander, Iona; Cowey, Alan; Walsh, Vincent

    2005-05-01

    We investigated the involvement of the posterior parietal cortex in time perception by temporarily disrupting normal functioning in this region, in subjects making prospective judgements of time or pitch. Disruption of the right posterior parietal cortex significantly slowed reaction times when making time, but not pitch, judgements. Similar interference with the left parietal cortex and control stimulation over the vertex did not significantly change performance on either pitch or time tasks. The results show that the information processing necessary for temporal judgements involves the parietal cortex, probably to optimise spatiotemporal accuracy in voluntary action. The results are in agreement with a recent neuroimaging study and are discussed with regard to a psychological model of temporal processing and a recent proposal that time is part of a parietal cortex system for encoding magnitude information relevant for action.

  18. The Influence of Oropalatal Dimensions on the Measurement of Tongue Strength.

    PubMed

    Pitts, Laura L; Stierwalt, Julie A G; Hageman, Carlin F; LaPointe, Leonard L

    2017-12-01

    Tongue strength is routinely evaluated in clinical swallowing evaluations since lingual weakness is an established contributor to dysphagia. Tongue strength may be clinically quantified by the maximum isometric tongue pressure (MIP) generated by the tongue against the palate; however, wide ranges in normal performance remain to be fully explained. Although orthodontic theory has long suggested a relation between lingual function and oral cavity dimensions, little attention has been given to the potential influence of oral and palatal structure(s) on healthy variance in MIP generation. Therefore, anterior and posterior tongue strength measures and oropalatal dimensions were obtained across 147 healthy adults (aged 18-88 years). Age was confirmed as a significant, independent predictor explaining approximately 10.2% of the variance in anterior tongue strength, but not a significant predictor of posterior tongue strength. However, oropalatal dimensions predicted anterior tongue strength with over three times the predictive power of age alone (p < .001). Significant models for anterior tongue strength (R² = .457) and posterior tongue strength (R² = .283) included a combination of demographic predictors (i.e., age and/or gender) and oropalatal dimensions. Palatal width, estimated tongue volume, and gender were significant predictors of posterior tongue strength (p < .001). Therefore, oropalatal dimensions may warrant consideration when accurately differentiating between pathological lingual weakness and healthy individual difference.

  19. Feasibility of tomotherapy to reduce normal lung and cardiac toxicity for distal esophageal cancer compared to three-dimensional radiotherapy.

    PubMed

    Nguyen, Nam P; Krafft, Shane P; Vinh-Hung, Vincent; Vos, Paul; Almeida, Fabio; Jang, Siyoung; Ceizyk, Misty; Desai, Anand; Davis, Rick; Hamilton, Russ; Modarresifar, Homayoun; Abraham, Dave; Smith-Raymond, Lexie

    2011-12-01

    To compare the effectiveness of tomotherapy and three-dimensional (3D) conformal radiotherapy in sparing normal critical structures (spinal cord, lungs, and ventricles) from excessive radiation in patients with distal esophageal cancers. A retrospective dosimetric study of nine patients who had advanced gastro-esophageal (GE) junction cancer (7) or thoracic esophageal cancer (2) extending into the distal esophagus. Two plans were created for each patient. A three-dimensional plan was constructed with either three (anteroposterior, right posterior oblique, and left posterior oblique) or four (right anterior oblique, left anterior oblique, right posterior oblique, and left posterior oblique) fields. The second plan was for tomotherapy. Doses were 45 Gy to the PTV with an integrated boost of 5 Gy for tomotherapy. Mean lung dose was 7.4 and 11.8 Gy for the tomotherapy and 3D plans, respectively (p=0.004). Corresponding values were 12.4 and 18.3 Gy (p=0.006) for the cardiac ventricles. Maximum spinal cord dose was 31.3 and 37.4 Gy, respectively (p < 0.007). The homogeneity index was 2 for both groups. Compared to 3D conformal radiotherapy, tomotherapy significantly decreased the amount of normal tissue irradiated and may reduce treatment toxicity, allowing possible dose escalation in future prospective studies. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  20. False Positive Probabilities for all Kepler Objects of Interest: 1284 Newly Validated Planets and 428 Likely False Positives

    NASA Astrophysics Data System (ADS)

    Morton, Timothy D.; Bryson, Stephen T.; Coughlin, Jeffrey L.; Rowe, Jason F.; Ravichandran, Ganesh; Petigura, Erik A.; Haas, Michael R.; Batalha, Natalie M.

    2016-05-01

    We present astrophysical false positive probability calculations for every Kepler Object of Interest (KOI)—the first large-scale demonstration of a fully automated transiting planet validation procedure. Out of 7056 KOIs, we determine that 1935 have probabilities <1% of being astrophysical false positives, and thus may be considered validated planets. Of these, 1284 have not yet been validated or confirmed by other methods. In addition, we identify 428 KOIs that are likely to be false positives, but have not yet been identified as such, though some of these may be a result of unidentified transit timing variations. A side product of these calculations is full stellar property posterior samplings for every host star, modeled as single, binary, and triple systems. These calculations use vespa, a publicly available Python package that can be easily applied to any transiting exoplanet candidate.
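
    The core of such a validation is a Bayesian odds ratio over astrophysical scenarios: each scenario gets a prior and a likelihood of the observed signal, and the false positive probability (FPP) is the posterior mass of the non-planet scenarios. A toy sketch with made-up priors and likelihoods (vespa computes these terms from stellar models and the light-curve shape):

```python
# Scenarios: transiting planet, eclipsing binary (EB), background EB (BEB),
# hierarchical-triple EB (HEB). All numbers below are illustrative only.
priors      = {"planet": 0.30, "EB": 0.05, "BEB": 0.10, "HEB": 0.02}
likelihoods = {"planet": 4e-3, "EB": 1e-5, "BEB": 8e-5, "HEB": 2e-5}

posterior_unnorm = {s: priors[s] * likelihoods[s] for s in priors}
total = sum(posterior_unnorm.values())
fpp = 1.0 - posterior_unnorm["planet"] / total
print(f"FPP = {fpp:.4f}")  # < 0.01 counts as 'validated' in the paper
```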

  1. A morphometric anatomical and comparative study of the foramen magnum region in a Greek population.

    PubMed

    Natsis, K; Piagkou, M; Skotsimara, G; Piagkos, G; Skandalakis, P

    2013-12-01

    The foramen magnum (FM), a complex area in craniocervical surgery, poses a challenge for neurosurgeons. The knowledge of the detailed anatomy of the FM, occipital condyles (OC) and variations of the region is crucial for the safety of vital structures. This study focuses on the FM and OC morphometry, highlights anatomical variability and investigates correlations between the parameters studied. One hundred and forty-three Greek adult dry skulls were examined using a digital sliding calliper (accuracy, 0.01 mm). Mean FM width and length were found to be 30.31 ± 2.79 and 35.53 ± 3.06 mm, respectively. The commonest FM shape was two semicircles (25.9%), whereas the most unusual was irregular (0.7%). The OC minimum width, maximum width and length were 5.71 ± 1.61, 13.09 ± 1.99 and 25.60 ± 2.91 mm on the right, and 6.25 ± 1.76, 13.01 ± 1.98 and 25.60 ± 2.70 mm on the left side. The commonest OC shape was S-like and the most unusual was ring, bilaterally. The mean anterior and posterior intercondylar distances were 19.30 ± 3.25 and 51.61 ± 5.01 mm, respectively. The OC protruded into the FM in 86.7% of the skulls. Variations such as a third OC existed in 5.6% and basilar processes in 2.8%. Posterior condylar foramina were present in 75.5%. Gender was correlated with FM width and length, OC length bilaterally, anterior intercondylar distance (AID) and posterior intercondylar distance (PID). OC protrusion and the existence of posterior condylar foramina were correlated. Bilateral asymmetry of OC shape was statistically significant. Our results provide useful information that will enable effective and reliable surgical intervention in the FM region with maximum safety and the widest possible exposure.

  2. Classic Maximum Entropy Recovery of the Average Joint Distribution of Apparent FRET Efficiency and Fluorescence Photons for Single-molecule Burst Measurements

    PubMed Central

    DeVore, Matthew S.; Gull, Stephen F.; Johnson, Carey K.

    2012-01-01

    We describe a method for analysis of single-molecule Förster resonance energy transfer (FRET) burst measurements using classic maximum entropy. Classic maximum entropy determines the Bayesian inference for the joint probability describing the total fluorescence photons and the apparent FRET efficiency. The method was tested with simulated data and then with DNA labeled with fluorescent dyes. The most probable joint distribution can be marginalized to obtain both the overall distribution of fluorescence photons and the apparent FRET efficiency distribution. This method proves to be ideal for determining the distance distribution of FRET-labeled biomolecules, and it successfully predicts the shape of the recovered distributions. PMID:22338694

  3. Asymmetry and anisotropy of surface effects of mining induced seismic events

    NASA Astrophysics Data System (ADS)

    Lasocki, Stanislaw; Orlecka-Sikora, Beata

    2013-04-01

    Long-lasting exploitation in underground mines and the complex system of goaf, unmined areas and excavations may cause seismic events whose influence in the excavation and on the free surface is atypical. We present an analysis of the surface effects of a series of ten seismic events that occurred in one panel of a copper-ore mine. The analysis is based on a comparison of the observed ground motion due to the studied events with estimates from Ground Motion Prediction Equations (GMPEs) for peak horizontal (PHA) and vertical (PVA) acceleration in the frequency band up to 10 Hz, local to that mining area. The GMPEs also take into account relative site amplification factors. The posterior probabilities that the observed PHAs are not attained according to the GMPEs are calculated and mapped. Although all ten events had comparable magnitudes and were located close to one another, their ground effects were very diverse. The analysis of anomalies of surface effects shows strong asymmetry of ground motion propagation and anisotropy of the surface effects of the studied tremors. Based on similarities of the surface-effect anomalies, expressed in terms of the posterior probabilities, the events are split into distinct groups. In the case of four events, the actual PHAs at most of the stations are greater than the respective estimated medians, especially in the sector N-SE. The PHA values of the second group are, at short epicentral distances, mostly at the same level as the GMPE estimates; the observed effects, however, become abnormally strong with increasing epicentral distance in the sector NE-SE. The effects of events from the remaining groups increase abnormally in either the NE or NE-SE direction, and the maximum anomalies appear about 3 km from the epicenter. The extreme discrepancies can be attributed neither to local site effects nor to preferential propagation conditions along some wavepaths. It is therefore concluded that the observed anomalies of ground motion result from source properties. Integrated analysis of the source mechanisms of these events indicates that their atypical and diverse surface effects result from the complexity of their sources, expressed by tensile source mechanisms, finite sources, directivity of ruptures and nearly horizontal rupture planes. These features seem to be implied by a superposition of coseismic alterations of the stress field and stress changes due to mining. This work has been done in the framework of research project No. NN525393539, financed by the National Science Centre of Poland for the period 2010-2013.

  4. WE-E-BRE-05: Ensemble of Graphical Models for Predicting Radiation Pneumonitis Risk

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, S; Ybarra, N; Jeyaseelan, K

    Purpose: We propose a prior knowledge-based approach to construct an interaction graph of biological and dosimetric radiation pneumonitis (RP) covariates for the purpose of developing an RP risk classifier. Methods: We recruited 59 NSCLC patients who received curative radiotherapy with a minimum 6-month follow-up. Sixteen RP events were observed (CTCAE grade ≥2). Blood serum was collected from every patient before (pre-RT) and during RT (mid-RT). From each sample the concentrations of the following five candidate biomarkers were taken as covariates: alpha-2-macroglobulin (α2M), angiotensin converting enzyme (ACE), transforming growth factor β (TGF-β), interleukin-6 (IL-6), and osteopontin (OPN). Dose-volumetric parameters were also included as covariates. The number of biological and dosimetric covariates was reduced by a variable selection scheme implemented by L1-regularized logistic regression (LASSO). The posterior probability distribution of interaction graphs between the selected variables was estimated from the data under literature-based prior knowledge to weight more heavily the graphs that contain the expected associations. A graph ensemble was formed by averaging the most probable graphs weighted by their posterior, creating a Bayesian Network (BN)-based RP risk classifier. Results: The LASSO selected the following 7 RP covariates: (1) pre-RT concentration level of α2M, (2) α2M level mid-RT/pre-RT, (3) pre-RT IL6 level, (4) IL6 level mid-RT/pre-RT, (5) ACE mid-RT/pre-RT, (6) PTV volume, and (7) mean lung dose (MLD). The ensemble BN model achieved a maximum sensitivity/specificity of 81%/84% and outperformed univariate dosimetric predictors as shown by larger AUC values (0.78∼0.81) compared with MLD (0.61), V20 (0.65) and V30 (0.70). The ensembles obtained by incorporating the prior knowledge improved classification performance for ensemble sizes 5∼50. Conclusion: We demonstrated a probabilistic ensemble method to detect robust associations between RP covariates and its potential to improve RP prediction accuracy. Our Bayesian approach to incorporating prior knowledge can enhance efficiency in searching for such associations from data. The authors acknowledge partial support by: 1) the CREATE Medical Physics Research Training Network grant of the Natural Sciences and Engineering Research Council (Grant number: 432290) and 2) The Terry Fox Foundation Strategic Training Initiative for Excellence in Radiation Research for the 21st Century (EIRR21)

  5. Objectified quantification of uncertainties in Bayesian atmospheric inversions

    NASA Astrophysics Data System (ADS)

    Berchet, A.; Pison, I.; Chevallier, F.; Bousquet, P.; Bonne, J.-L.; Paris, J.-D.

    2015-05-01

    Classical Bayesian atmospheric inversions process atmospheric observations and prior emissions, the two being connected by an observation operator picturing mainly the atmospheric transport. These inversions rely on prescribed errors in the observations, the prior emissions and the observation operator. When data are sparse, inversion results are very sensitive to the prescribed error distributions, which are not accurately known. The classical Bayesian framework experiences difficulties in quantifying the impact of mis-specified error distributions on the optimized fluxes. In order to cope with this issue, we rely on recent research results to enhance the classical Bayesian inversion framework through a marginalization over a large set of plausible errors that can be prescribed in the system. The marginalization consists in computing inversions for all possible error distributions, weighted by the probability of occurrence of each error distribution. The posterior distribution of the fluxes calculated by the marginalization is not explicitly describable. As a consequence, we carry out a Monte Carlo sampling based on an approximation of the probability of occurrence of the error distributions, deduced from the well-tested method of maximum likelihood estimation. Thus, the marginalized inversion relies on an automatic, objectified diagnosis of the error statistics, without any prior knowledge about the matrices. It robustly accounts for the uncertainties in the error distributions, contrary to what is classically done with frozen expert-knowledge error statistics. Some expert knowledge is still used in the method for the choice of an emission aggregation pattern and of a sampling protocol in order to reduce the computation cost. The relevance and robustness of the method are tested on a case study: the inversion of methane surface fluxes at the mesoscale with virtual observations on a realistic network in Eurasia. Observing system simulation experiments are carried out with different transport patterns, flux distributions and total prior amounts of emitted methane. The method proves to consistently reproduce the known "truth" in most cases, with satisfactory tolerance intervals. Additionally, the method explicitly provides influence scores and posterior correlation matrices. An in-depth interpretation of the inversion results is then possible. The more objective quantification of the influence of the observations on the fluxes proposed here allows us to evaluate the impact of the observation network on the characterization of the surface fluxes. The explicit correlations between emission aggregates reveal the mis-separated regions, hence the typical temporal and spatial scales the inversion can analyse. These scales are consistent with the chosen aggregation patterns.
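
    The marginalization step can be illustrated on a toy linear Gaussian inversion: candidate observation-error variances are weighted by their marginal likelihood (the maximum-likelihood-style diagnosis mentioned above) and the per-variance posteriors are averaged. A sketch under these simplifying assumptions; the paper handles full error covariances and transport operators:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear inversion y = H x + noise, marginalized over plausible
# observation-error variances instead of freezing one expert value.
H = rng.normal(size=(50, 5))
x_true = np.array([1.0, -0.5, 2.0, 0.3, 0.0])
y = H @ x_true + rng.normal(0.0, 0.5, size=50)
x_prior, b2 = np.zeros(5), 1.0          # prior mean and prior variance

def posterior_mean(r2):
    """Standard Gaussian Bayesian update for one prescribed error variance r2."""
    K = b2 * H.T @ np.linalg.inv(b2 * H @ H.T + r2 * np.eye(len(y)))
    return x_prior + K @ (y - H @ x_prior)

def log_marginal_likelihood(r2):
    """log p(y | r2), used to weight each candidate error variance."""
    S = b2 * H @ H.T + r2 * np.eye(len(y))
    _, logdet = np.linalg.slogdet(S)
    return -0.5 * (logdet + y @ np.linalg.solve(S, y))

r2_grid = np.linspace(0.05, 2.0, 40)
logw = np.array([log_marginal_likelihood(r) for r in r2_grid])
w = np.exp(logw - logw.max()); w /= w.sum()
x_marginal = sum(wi * posterior_mean(ri) for wi, ri in zip(w, r2_grid))
print(np.round(x_marginal, 2))
```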

  6. Dynamic analysis of pedestrian crossing behaviors on traffic flow at unsignalized mid-block crosswalks

    NASA Astrophysics Data System (ADS)

    Liu, Gang; He, Jing; Luo, Zhiyong; Yang, Wunian; Zhang, Xiping

    2015-05-01

    It is important to study the effects of pedestrian crossing behaviors on traffic flow for solving the urban traffic jam problem. Based on the Nagel-Schreckenberg (NaSch) traffic cellular automata (TCA) model, a new one-dimensional TCA model is proposed considering the uncertainty conflict behaviors between pedestrians and vehicles at unsignalized mid-block crosswalks and defining the parallel updating rules of motion states of pedestrians and vehicles. The traffic flow is simulated for different vehicle densities and behavior trigger probabilities. The fundamental diagrams show that no matter what the values of vehicle braking probability, pedestrian acceleration crossing probability, pedestrian backing probability and pedestrian generation probability, the system flow shows the "increasing-saturating-decreasing" trend with the increase of vehicle density; when the vehicle braking probability is lower, it is easy to cause an emergency brake of vehicle and result in great fluctuation of saturated flow; the saturated flow decreases slightly with the increase of the pedestrian acceleration crossing probability; when the pedestrian backing probability lies between 0.4 and 0.6, the saturated flow is unstable, which shows the hesitant behavior of pedestrians when making the decision of backing; the maximum flow is sensitive to the pedestrian generation probability and rapidly decreases with increasing the pedestrian generation probability, the maximum flow is approximately equal to zero when the probability is more than 0.5. The simulations prove that the influence of frequent crossing behavior upon vehicle flow is immense; the vehicle flow decreases and gets into serious congestion state rapidly with the increase of the pedestrian generation probability.

  7. [Occiput posterior presentation at delivery: Materno-foetal outcomes and predictive factors of rotation].

    PubMed

    Othenin-Girard, V; Boulvain, M; Guittier, M-J

    2018-02-01

    To describe the maternal and foetal outcomes of an occiput posterior foetal position at delivery and to evaluate predictive factors of anterior rotation during labour. Descriptive retrospective analysis of a cohort of 439 women with foetuses in occiput posterior position during labour; logistic regression analysis to quantify the effect of factors that may favour anterior rotation. Most foetuses (64%) rotate anteriorly during labour, and 13% do so during the expulsive phase. The consequences of a persistent foetal occiput posterior position at delivery are a significantly longer mean second stage of labour compared with other positions (65.19 minutes vs. 43.29, P=0.001); a higher percentage of caesarean sections (72.0% versus 4.7%, P<0.001) and instrumental deliveries (among vaginal deliveries, 60.7% versus 25.2%, P<0.001); more frequent third-degree perineal tears (14.3% vs. 0.6%, P<0.001); and more abundant blood loss (560 mL versus 344 mL, P<0.001). In a multivariable model including nulliparity, station of the presenting part and degree of flexion of the foetal head at complete dilatation, the only independent predictor of rotation at delivery is good flexion of the foetal head at complete dilatation, which multiplies the probability of anterior rotation by six. Good flexion of the foetal head is significantly associated with anterior rotation. Other studies exploring ways to increase anterior rotation during labour are needed to reduce the very high risk of caesarean section and instrumental delivery associated with the foetal occiput posterior position. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  8. Magnetic resonance spectroscopy and brain volumetry in mild cognitive impairment. A prospective study.

    PubMed

    Fayed, Nicolás; Modrego, Pedro J; García-Martí, Gracián; Sanz-Requena, Roberto; Marti-Bonmatí, Luis

    2017-05-01

    To assess the accuracy of magnetic resonance spectroscopy (1H-MRS) and brain volumetry in mild cognitive impairment (MCI) for predicting conversion to probable Alzheimer's disease (AD). Forty-eight patients fulfilling the criteria of amnestic MCI underwent conventional magnetic resonance imaging (MRI) followed by MRS and a T1-3D sequence on a 1.5 Tesla MR unit. At baseline the patients underwent neuropsychological examination. 1H-MRS of the brain was carried out by exploring the left medial occipital lobe and ventral posterior cingulate cortex (vPCC) using the LCModel software. The high-resolution T1-3D sequence was acquired to carry out the volumetric measurement. A cortical and subcortical parcellation strategy was used to obtain the volumes of each area within the brain. The patients were followed up to detect conversion to probable AD. After a 3-year follow-up, 15 (31.2%) patients converted to AD. The myo-inositol in the occipital cortex and glutamate+glutamine (Glx) in the posterior cingulate cortex predicted conversion to probable AD with 46.1% sensitivity and 90.6% specificity. The positive predictive value was 66.7%, and the negative predictive value was 80.6%, with an overall cross-validated classification accuracy of 77.8%. The volume of the third ventricle, the total white matter and the entorhinal cortex predicted conversion to probable AD with 46.7% sensitivity and 90.9% specificity. The positive predictive value was 70%, and the negative predictive value was 78.9%, with an overall cross-validated classification accuracy of 77.1%. Combining the volumetric and MRS measures, the prediction of probable AD has 38.5% sensitivity and 87.5% specificity, with a positive predictive value of 55.6%, a negative predictive value of 77.8% and an overall accuracy of 73.3%. MRS and brain volumetric measures are each markers of cognitive decline and may serve as noninvasive tools to monitor cognitive changes and progression to dementia in patients with amnestic MCI, but the results do not support their routine use in clinical settings. Copyright © 2016 Elsevier Inc. All rights reserved.

  9. Variational Gaussian approximation for Poisson data

    NASA Astrophysics Data System (ADS)

    Arridge, Simon R.; Ito, Kazufumi; Jin, Bangti; Zhang, Chen

    2018-02-01

    The Poisson model is frequently employed to describe count data, but in a Bayesian context it leads to an analytically intractable posterior probability distribution. In this work, we analyze a variational Gaussian approximation to the posterior distribution arising from the Poisson model with a Gaussian prior. This is achieved by seeking an optimal Gaussian distribution minimizing the Kullback-Leibler divergence from the posterior distribution to the approximation, or equivalently maximizing the lower bound for the model evidence. We derive an explicit expression for the lower bound, and show the existence and uniqueness of the optimal Gaussian approximation. The lower bound functional can be viewed as a variant of classical Tikhonov regularization that penalizes also the covariance. Then we develop an efficient alternating direction maximization algorithm for solving the optimization problem, and analyze its convergence. We discuss strategies for reducing the computational complexity via low rank structure of the forward operator and the sparsity of the covariance. Further, as an application of the lower bound, we discuss hierarchical Bayesian modeling for selecting the hyperparameter in the prior distribution, and propose a monotonically convergent algorithm for determining the hyperparameter. We present extensive numerical experiments to illustrate the Gaussian approximation and the algorithms.
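
    A minimal sketch of the lower-bound maximization for a toy Poisson model with a log link and a diagonal Gaussian approximation (the paper treats full covariances and an alternating-direction algorithm; all names and values here are illustrative):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

# Toy problem: y_i ~ Poisson(exp(a_i . x)), prior x ~ N(0, (1/delta) I).
# Fit q = N(m, diag(s)) by maximizing the evidence lower bound.
A = rng.normal(0, 0.3, size=(40, 3))
y = rng.poisson(np.exp(A @ np.array([0.5, -1.0, 0.8])))
delta = 1.0

def neg_elbo(theta):
    m, log_s = theta[:3], theta[3:]
    s = np.exp(log_s)
    mean_rate = np.exp(A @ m + 0.5 * (A**2) @ s)   # E_q[exp(a . x)]
    ll = np.sum(y * (A @ m) - mean_rate)           # expected log-likelihood
    kl = 0.5 * (delta * (m @ m + s.sum()) - np.sum(log_s))  # KL to prior, up to const
    return -(ll - kl)

res = minimize(neg_elbo, np.zeros(6), method="L-BFGS-B")
print("variational mean:", np.round(res.x[:3], 3))
```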

  10. Head sensory organs of Dactylopodola baltica (Macrodasyida, Gastrotricha): a combination of transmission electron microscopical and immunocytochemical techniques.

    PubMed

    Liesenjohann, Thilo; Neuhaus, Birger; Schmidt-Rhaesa, Andreas

    2006-08-01

    The anterior and posterior head sensory organs of Dactylopodola baltica (Macrodasyida, Gastrotricha) were investigated by transmission electron microscopy (TEM). In addition, whole individuals were labeled with phalloidin to mark F-actin and with anti-alpha-tubulin antibodies to mark microtubuli and studied with confocal laser scanning microscopy. Immunocytochemistry reveals that the large number of ciliary processes in the anterior head sensory organ contain F-actin; no signal could be detected for alpha-tubulin. Labeling with anti-alpha-tubulin antibodies revealed that the anterior and posterior head sensory organs are innervated by a common stem of nerves from the lateral nerve cords just anterior of the dorsal brain commissure. TEM studies showed that the anterior head sensory organ is composed of one sheath cell and one sensory cell with a single branching cilium that possesses a basal inflated part and regularly arranged ciliary processes. Each ciliary process contains one central microtubule. The posterior head sensory organ consists of at least one pigmented sheath cell and several probably monociliary sensory cells. Each cilium branches into irregularly arranged ciliary processes. These characters are assumed to belong to the ground pattern of the Gastrotricha. Copyright 2006 Wiley-Liss, Inc.

  11. A Deterministic Approach to Active Debris Removal Target Selection

    NASA Astrophysics Data System (ADS)

    Lidtke, A.; Lewis, H.; Armellin, R.

    2014-09-01

    Many decisions, with widespread economic, political and legal consequences, are being considered based on space debris simulations showing that Active Debris Removal (ADR) may be necessary as concerns about the sustainability of spaceflight increase. The debris environment predictions are based on low-accuracy ephemerides and propagators. This raises doubts not only about the accuracy of those prognoses themselves but also about the potential ADR target-lists that are produced. Target selection is considered highly important as removal of many objects will increase the overall mission cost. Selecting the most likely candidates as soon as possible would be desirable, as it would enable accurate mission design and allow thorough evaluation of in-orbit validations, which are likely to occur in the near future, before any large investments are made and implementations realized. One of the primary factors that should be used in ADR target selection is the accumulated collision probability of every object. A conjunction detection algorithm, based on the smart sieve method, has been developed. Another algorithm is then applied to the found conjunctions to compute the maximum and true probabilities of collisions taking place. The entire framework has been verified against the Conjunction Analysis Tools in AGI's Systems Toolkit, and a relative error smaller than 1.5% has been achieved in the final maximum collision probability. Two target-lists are produced based on the ranking of the objects according to the probability that they will take part in any collision over the simulated time window. These probabilities are computed using the maximum probability approach, which is time-invariant, and estimates of the true collision probability computed with covariance information. The top-priority targets are compared, and the impacts of data accuracy and its decay are highlighted. General conclusions regarding the importance of Space Surveillance and Tracking for the purpose of ADR are also drawn, and a deterministic method for ADR target selection, which could reduce the number of ADR missions to be performed, is proposed.
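
    The time-invariant maximum-probability idea can be sketched with the standard small-hard-body 2D approximation: the collision probability is maximized over the unknown covariance size, giving a covariance-free upper bound. The numeric optimum matches the well-known analytic worst case at sigma = d/sqrt(2); the values below are made up for illustration:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Circular position covariance of scale sigma, hard-body radius R,
# miss distance d; small-R approximation of the 2-D collision probability:
#   Pc(sigma) = (R^2 / (2 sigma^2)) * exp(-d^2 / (2 sigma^2))
R, d = 0.01, 0.5   # km, illustrative values

def pc(sigma):
    return (R**2 / (2 * sigma**2)) * np.exp(-d**2 / (2 * sigma**2))

res = minimize_scalar(lambda s: -pc(s), bounds=(1e-3, 10.0), method="bounded")
sigma_star = res.x
print(f"worst-case sigma = {sigma_star:.4f} (analytic d/sqrt(2) = {d/np.sqrt(2):.4f})")
print(f"maximum Pc = {pc(sigma_star):.3e} (analytic R^2/(e d^2) = {R**2/(np.e*d**2):.3e})")
```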

  12. Combined Estimation of Hydrogeologic Conceptual Model and Parameter Uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meyer, Philip D.; Ye, Ming; Neuman, Shlomo P.

    2004-03-01

    The objective of the research described in this report is the development and application of a methodology for comprehensively assessing the hydrogeologic uncertainties involved in dose assessment, including uncertainties associated with conceptual models, parameters, and scenarios. This report describes and applies a statistical method to quantitatively estimate the combined uncertainty in model predictions arising from conceptual model and parameter uncertainties. The method relies on model averaging to combine the predictions of a set of alternative models. Implementation is driven by the available data. When there is minimal site-specific data the method can be carried out with prior parameter estimates based on generic data and subjective prior model probabilities. For sites with observations of system behavior (and optionally data characterizing model parameters), the method uses model calibration to update the prior parameter estimates and model probabilities based on the correspondence between model predictions and site observations. The set of model alternatives can contain both simplified and complex models, with the requirement that all models be based on the same set of data. The method was applied to the geostatistical modeling of air permeability at a fractured rock site. Seven alternative variogram models of log air permeability were considered to represent data from single-hole pneumatic injection tests in six boreholes at the site. Unbiased maximum likelihood estimates of variogram and drift parameters were obtained for each model. Standard information criteria provided an ambiguous ranking of the models, which would not justify selecting one of them and discarding all others as is commonly done in practice. Instead, some of the models were eliminated based on their negligibly small updated probabilities and the rest were used to project the measured log permeabilities by kriging onto a rock volume containing the six boreholes. These four projections, and associated kriging variances, were averaged using the posterior model probabilities as weights. Finally, cross-validation was conducted by eliminating from consideration all data from one borehole at a time, repeating the above process, and comparing the predictive capability of the model-averaged result with that of each individual model. Using two quantitative measures of comparison, the model-averaged result was superior to any individual geostatistical model of log permeability considered.
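
    A compact sketch of the posterior-weighted averaging step, using information-criterion weights with equal prior model probabilities (all numbers are toy values, not the report's variogram results):

```python
import numpy as np

# Each alternative model k supplies a prediction mu_k, a predictive variance
# var_k, and an information-criterion value IC_k from calibration; weights
# follow the usual exp(-0.5 * delta_IC) rule.
mu  = np.array([2.1, 2.4, 1.9, 2.2])      # per-model kriging predictions (toy)
var = np.array([0.30, 0.25, 0.40, 0.35])  # per-model kriging variances (toy)
ic  = np.array([10.2, 9.8, 14.5, 11.0])   # e.g. KIC/BIC values (toy)

w = np.exp(-0.5 * (ic - ic.min()))
w /= w.sum()                              # posterior model probabilities

mu_avg = w @ mu
# total variance = within-model variance + between-model spread
var_avg = w @ var + w @ (mu - mu_avg) ** 2
print(f"weights={np.round(w, 3)}, mean={mu_avg:.3f}, variance={var_avg:.3f}")
```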

  13. Bayesian image reconstruction for improving detection performance of muon tomography.

    PubMed

    Wang, Guobao; Schultz, Larry J; Qi, Jinyi

    2009-05-01

    Muon tomography is a novel technology that is being developed for detecting high-Z materials in vehicles or cargo containers. Maximum likelihood methods have been developed for reconstructing the scattering density image from muon measurements. However, the instability of maximum likelihood estimation often results in noisy images and low detectability of high-Z targets. In this paper, we propose using regularization to improve the image quality of muon tomography. We formulate the muon reconstruction problem in a Bayesian framework by introducing a prior distribution on scattering density images. An iterative shrinkage algorithm is derived to maximize the log posterior distribution. At each iteration, the algorithm obtains the maximum a posteriori update by shrinking an unregularized maximum likelihood update. Inverse quadratic shrinkage functions are derived for generalized Laplacian priors and inverse cubic shrinkage functions are derived for generalized Gaussian priors. Receiver operating characteristic studies using simulated data demonstrate that the Bayesian reconstruction can greatly improve the detection performance of muon tomography.
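
    The shrinkage step can be sketched generically: each unregularized ML pixel update is pulled toward zero by solving a one-dimensional penalized problem under a generalized-Gaussian prior. Here it is solved numerically rather than with the paper's closed-form inverse quadratic/cubic shrinkage functions, and all parameter values are assumptions:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def shrink(x_ml, sigma2=1.0, beta=0.5, p=1.5):
    """MAP update for one pixel: shrink the ML update x_ml under an
    |x|^p (generalized-Gaussian) prior with strength beta."""
    obj = lambda x: (x - x_ml) ** 2 / (2 * sigma2) + beta * abs(x) ** p
    res = minimize_scalar(obj, bounds=(-abs(x_ml) - 1, abs(x_ml) + 1),
                          method="bounded")
    return res.x

for x in [0.2, 1.0, 3.0]:
    print(f"ML update {x:+.2f} -> MAP update {shrink(x):+.3f}")
```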

  14. Probability of stress-corrosion fracture under random loading

    NASA Technical Reports Server (NTRS)

    Yang, J. N.

    1974-01-01

    The mathematical formulation is based on a cumulative-damage hypothesis and experimentally determined stress-corrosion characteristics. Under both stationary and nonstationary random loadings, the mean value and variance of the cumulative damage are obtained. The probability of stress-corrosion fracture is then evaluated using the principle of maximum entropy.

  15. STOCHASTIC DUELS WITH HOMING,

    DTIC Science & Technology

    Duels where both marksmen 'home' or 'zero in' on one another are considered here, and the effect of this on the win probability is determined. It is shown that homing leads to win probabilities that can be straightforwardly evaluated. Maximum-likelihood estimation of the hit probability and homing from field data is outlined. The solutions of the duels are displayed as contour maps. (Author)

  16. The Safety and Feasibility of Three-Dimensional Visualization Technology Assisted Right Posterior Lobe Allied with Part of V and VIII Sectionectomy for Right Hepatic Malignancy Therapy.

    PubMed

    Hu, Min; Hu, Haoyu; Cai, Wei; Mo, Zhikang; Xiang, Nan; Yang, Jian; Fang, Chihua

    2018-05-01

    Hepatectomy is the optimal method for liver cancer; virtual liver resection based on three-dimensional visualization technology (3-DVT) can provide a better preoperative strategy for the surgeon. We aim to introduce right posterior lobe allied with part of V and VIII sectionectomy assisted by 3-DVT as a promising treatment for massive or multiple right hepatic malignancies, retaining maximum residual liver volume on the basis of R0 resection. Among 126 consecutive patients who underwent hepatectomy, 9 (7%) underwent right posterior lobe allied with part of V and VIII sectionectomy and 21 (17%) underwent right hemihepatectomy (RH). The virtual RH was performed with 3-DVT, which provided better observation of the spatial relationship between tumor and vessels and a more accurate estimation of the remnant liver volume. If the remnant liver volume was <40%, right posterior lobe allied with part of V and VIII sectionectomy was indicated, and the precut line was planned to protect the portal branches of subsegments 5 and 8. The postoperative outcomes of patients were compared before and after propensity score matching. Nine patients meeting the eligibility criteria received right posterior lobe allied with part of V and VIII sectionectomy. The variables, including overall mean operation time, blood transfusion, operation length, liver function, and postoperative complications, were similar between the two groups before and after propensity matching. The mean values of aspartate aminotransferase (AST), alanine aminotransferase (ALT), albumin (ALB), and total bilirubin on postoperative days 1, 3, 5, and 7 did not differ significantly from preoperative values. One patient in each group had recurrence six months after surgery. Right posterior lobe allied with part of V and VIII sectionectomy based on 3-DVT is a safe and feasible surgical approach and a promising method for treating massive or multiple right hepatic malignancies.

  17. Correlation of Acute and Late Brainstem Toxicities With Dose-Volume Data for Pediatric Patients With Posterior Fossa Malignancies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nanda, Ronica H., E-mail: rhazari@emory.edu; Ganju, Rohit G.; Schreibmann, Edward

    Purpose: Radiation-induced brainstem toxicity after treatment of pediatric posterior fossa malignancies is incompletely understood, especially in the era of intensity modulated radiation therapy (IMRT). The rates of, and predictive factors for, brainstem toxicity after photon RT for posterior fossa tumors were examined. Methods and Materials: After institutional review board approval, 60 pediatric patients treated at our institution for nonmetastatic infratentorial ependymoma and medulloblastoma with IMRT were included in the present analysis. Dosimetric variables, including the mean and maximum dose to the brainstem, the dose to 10% to 90% of the brainstem (in 10% increments), and the volume of the brainstem receiving 40, 45, 50, and 55 Gy were recorded for each patient. Acute (onset within 3 months) and late (>3 months of RT completion) RT-induced brainstem toxicities with clinical and radiographic correlates were scored using Common Terminology Criteria for Adverse Events, version 4.0. Results: Patients aged 1.4 to 21.8 years underwent IMRT or volumetric arc therapy postoperatively to the posterior fossa or tumor bed. At a median clinical follow-up period of 2.8 years, 14 patients had developed symptomatic brainstem toxicity (crude incidence 23.3%). No correlation was found between the dosimetric variables examined and brainstem toxicity. Vascular injury or ischemia showed a strong trend toward predicting brainstem toxicity (P=.054). Patients with grade 3 to 5 brainstem toxicity had undergone treatment to significant volumes of the posterior fossa. Conclusion: The results of the present series demonstrate a low, but not negligible, risk of brainstem radiation necrosis for pediatric patients with posterior fossa malignancies treated with IMRT. No specific dose-volume correlations were identified; however, modern treatment volumes might help limit the incidence of severe toxicity. Additional work investigating inherent biologic sensitivity might also provide further insight into this clinical problem.

  18. Ladar range image denoising by a nonlocal probability statistics algorithm

    NASA Astrophysics Data System (ADS)

    Xia, Zhi-Wei; Li, Qi; Xiong, Zhi-Peng; Wang, Qi

    2013-01-01

    According to the characteristic of range images of coherent ladar and the basis of nonlocal means (NLM), a nonlocal probability statistics (NLPS) algorithm is proposed in this paper. The difference is that NLM performs denoising using the mean of the conditional probability distribution function (PDF) while NLPS using the maximum of the marginal PDF. In the algorithm, similar blocks are found out by the operation of block matching and form a group. Pixels in the group are analyzed by probability statistics and the gray value with maximum probability is used as the estimated value of the current pixel. The simulated range images of coherent ladar with different carrier-to-noise ratio and real range image of coherent ladar with 8 gray-scales are denoised by this algorithm, and the results are compared with those of median filter, multitemplate order mean filter, NLM, median nonlocal mean filter and its incorporation of anatomical side information, and unsupervised information-theoretic adaptive filter. The range abnormality noise and Gaussian noise in range image of coherent ladar are effectively suppressed by NLPS.
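
    A compact (and deliberately slow) numpy sketch of the NLPS idea for a small integer-valued gray-level image: block matching selects a group of similar patches, and the output is the gray level with the highest count in the group, i.e. the marginal-PDF maximum rather than the NLM weighted mean. Patch and window sizes are assumptions:

```python
import numpy as np

def nlps_denoise(img, patch=3, search=7, n_similar=16):
    """Nonlocal probability statistics sketch. img: 2-D int-valued array."""
    h, w = img.shape
    r, s = patch // 2, search // 2
    pad = np.pad(img, r + s, mode="reflect")
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            ci, cj = i + r + s, j + r + s
            ref = pad[ci - r:ci + r + 1, cj - r:cj + r + 1]
            cands, dists = [], []
            for di in range(-s, s + 1):
                for dj in range(-s, s + 1):
                    blk = pad[ci + di - r:ci + di + r + 1,
                              cj + dj - r:cj + dj + r + 1]
                    dists.append(np.sum((blk - ref) ** 2))
                    cands.append(pad[ci + di, cj + dj])
            best = np.argsort(dists)[:n_similar]      # the similar-block group
            vals = np.asarray(cands)[best].astype(int)
            out[i, j] = np.bincount(vals).argmax()    # max-probability gray value
    return out

rng = np.random.default_rng(5)
noisy = rng.integers(0, 8, size=(32, 32))             # toy 8-gray-level image
print(nlps_denoise(noisy)[:3, :3])
```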

  19. Optimal Post-Operative Immobilisation for Supracondylar Humeral Fractures.

    PubMed

    Azzolin, Lucas; Angelliaume, Audrey; Harper, Luke; Lalioui, Abdelfettah; Delgove, Anaïs; Lefèvre, Yan

    2018-05-25

    Supracondylar humeral fractures (SCHFs) are very common in paediatric patients. In France, percutaneous fixation with two lateral-entry pins is widely used after successful closed reduction. Post-operative immobilisation is typically with a long arm cast combined with a tubular-bandage sling that immobilises the shoulder and holds the arm in adduction and internal rotation to prevent external rotation of the shoulder, which might cause secondary displacement. The objective of this study was to compare this standard immobilisation technique to a posterior plaster splint with a simple sling, under the hypothesis that secondary displacement is not more common with a posterior plaster splint and sling than with a long arm cast. 100 patients with extension Gartland type III SCHFs managed by closed reduction and percutaneous fixation with two lateral-entry pins between December 2011 and December 2015 were assessed retrospectively. Post-operative immobilisation was with a posterior plaster splint and a simple sling worn for 4 weeks. Radiographs were obtained on days 1, 45, and 90. Secondary displacement occurred in 8% of patients. No patient required revision surgery. The secondary displacement rate was comparable to earlier reports. Of the 8 secondary displacements, 5 were ascribable to technical errors. The remaining 3 were not caused by rotation of the arm and would probably not have been prevented by using the tubular-bandage sling. A posterior plaster splint combined with a simple sling is a simple and effective immobilisation method for SCHFs provided internal fixation is technically optimal. Level of evidence: IV, retrospective observational study. Copyright © 2018. Published by Elsevier Masson SAS.

  20. Periodontal ligament injection versus routine local infiltration for nonsurgical single posterior maxillary permanent tooth extraction: comparative double-blinded randomized clinical study.

    PubMed

    Al-Shayyab, Mohammad H

    2017-01-01

    The aim of this study was to evaluate the efficacy of, and patients' subjective responses to, periodontal ligament (PDL) anesthetic injection compared to traditional local-anesthetic infiltration injection for the nonsurgical extraction of one posterior maxillary permanent tooth. All patients scheduled for nonsurgical symmetrical maxillary posterior permanent tooth extraction in the Department of Oral and Maxillofacial Surgery at the University of Jordan Hospital, Amman, Jordan over a 7-month period were invited to participate in this prospective randomized double-blinded split-mouth study. Every patient received the recommended volume of 2% lidocaine with 1:100,000 epinephrine for PDL injection on the experimental side and for local infiltration on the control side. A visual analog scale (VAS) and verbal rating scale (VRS) were used to describe pain felt during injection and extraction, respectively. Statistical significance was based on probability values <0.05 and measured using χ² and Student t-tests and nonparametric Mann-Whitney and Kruskal-Wallis tests. Of the 73 patients eligible for this study, 55 met the inclusion criteria: 32 males and 23 females, with a mean age of 34.87±14.93 years. Differences in VAS scores and VRS data between the two techniques were statistically significant (P<0.001) and in favor of the infiltration injection. The PDL injection may not be the alternative anesthetic technique of choice to routine local infiltration for the nonsurgical extraction of one posterior maxillary permanent tooth.

  1. Delay Analysis and Optimization of Bandwidth Request under Unicast Polling in IEEE 802.16e over Gilbert-Elliot Error Channel

    NASA Astrophysics Data System (ADS)

    Hwang, Eunju; Kim, Kyung Jae; Roijers, Frank; Choi, Bong Dae

    In the centralized polling mode in IEEE 802.16e, a base station (BS) polls mobile stations (MSs) for bandwidth reservation in one of three polling modes: unicast, multicast, or broadcast. In unicast polling, the BS polls each individual MS to allow it to transmit a bandwidth request packet. This paper presents an analytical model for the unicast polling of bandwidth requests in IEEE 802.16e networks over a Gilbert-Elliot error channel. We derive the probability distribution for the delay of bandwidth requests due to wireless transmission errors and find the loss probability of request packets due to finite retransmission attempts. By using the delay distribution and the loss probability, we optimize the number of polling slots within a frame and the maximum retransmission number while satisfying QoS on the total loss probability, which combines two losses: packet loss due to exceeding the maximum number of retransmissions and delay outage loss due to the maximum tolerable delay bound. In addition, we obtain the utilization of polling slots, defined as the ratio of the number of polling slots used for the MS's successful transmissions to the total number of polling slots used by the MS over a long run time. Analysis results are shown to match well with simulation results. Numerical results give examples of the optimal number of polling slots within a frame and the optimal maximum retransmission number depending on delay bounds, the number of MSs, and the channel conditions.
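
    The loss probability due to finite retransmissions can be sketched by propagating the joint probability of "all attempts failed so far" and the Gilbert-Elliot channel state between polling opportunities. All parameter values below are illustrative, not taken from the standard or the paper:

```python
import numpy as np

# Gilbert-Elliot channel: state 0 = Good, 1 = Bad; attempts are one frame
# apart, so the channel evolves by the transition matrix P between attempts.
P = np.array([[0.95, 0.05],
              [0.30, 0.70]])          # state transitions per frame
err = np.array([0.01, 0.60])          # packet error probability in Good/Bad
pi = np.array([0.30, 0.05]) / 0.35    # stationary distribution of P

def loss_probability(M):
    """P(all M+1 attempts fail), tracking 'failed so far & channel state'."""
    v = pi * err                      # after the first failed attempt
    for _ in range(M):
        v = (v @ P) * err             # move one frame, fail again
    return v.sum()

for M in range(5):
    print(f"max retransmissions M={M}: loss probability = {loss_probability(M):.2e}")
```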

  2. Evaluation of probable maximum snow accumulation: Development of a methodology for climate change studies

    NASA Astrophysics Data System (ADS)

    Klein, Iris M.; Rousseau, Alain N.; Frigon, Anne; Freudiger, Daphné; Gagnon, Patrick

    2016-06-01

    Probable maximum snow accumulation (PMSA) is one of the key variables used to estimate the spring probable maximum flood (PMF). A robust methodology for evaluating the PMSA is imperative so the ensuing spring PMF is a reasonable estimation. This is of particular importance in times of climate change (CC) since it is known that solid precipitation in Nordic landscapes will in all likelihood change over the next century. In this paper, a PMSA methodology based on simulated data from regional climate models is developed. Moisture maximization represents the core concept of the proposed methodology; precipitable water being the key variable. Results of stationarity tests indicate that CC will affect the monthly maximum precipitable water and, thus, the ensuing ratio to maximize important snowfall events. Therefore, a non-stationary approach is used to describe the monthly maximum precipitable water. Outputs from three simulations produced by the Canadian Regional Climate Model were used to give first estimates of potential PMSA changes for southern Quebec, Canada. A sensitivity analysis of the computed PMSA was performed with respect to the number of time-steps used (so-called snowstorm duration) and the threshold for a snowstorm to be maximized or not. The developed methodology is robust and a powerful tool to estimate the relative change of the PMSA. Absolute results are in the same order of magnitude as those obtained with the traditional method and observed data; but are also found to depend strongly on the climate projection used and show spatial variability.
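
    The moisture-maximization core of the methodology can be sketched in a few lines: each candidate snowstorm's accumulation is scaled by the ratio of the monthly maximum precipitable water to the event's precipitable water, and the largest maximized accumulation is retained. Synthetic daily series stand in for the regional-climate-model output, and all parameter values are assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

snowfall = rng.gamma(0.4, 4.0, size=365)        # snow water equivalent, mm/day
pw_event = rng.uniform(2.0, 18.0, size=365)     # precipitable water, mm
pw_month_max = 20.0   # monthly maximum PW (non-stationary in the paper)
duration = 3          # assumed snowstorm duration, in time steps

# Accumulation and mean precipitable water over every 3-day window,
# then moisture maximization by the ratio pw_month_max / pw_event.
acc = np.convolve(snowfall, np.ones(duration), mode="valid")
pw_mean = np.convolve(pw_event, np.ones(duration) / duration, mode="valid")
pmsa = np.max(acc * (pw_month_max / pw_mean))
print(f"PMSA estimate: {pmsa:.1f} mm SWE over {duration} days")
```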

  3. Distribution of the anterior, posterior, and total corneal astigmatism in healthy eyes.

    PubMed

    Feizi, Sepehr; Naderan, Mohammad; Ownagh, Vahid; Sadeghpour, Fatemeh

    2018-04-01

    To evaluate the magnitude and axis orientation of the anterior, posterior, and total corneal astigmatism in normal healthy eyes of an Iranian population. In a prospective cross-sectional study, ophthalmic and anterior segment parameters of 153 healthy eyes of 153 subjects were evaluated by the Galilei dual Scheimpflug analyzer. The magnitude and axis orientation [with-the-rule (WTR), against-the-rule (ATR), and oblique] of the anterior, posterior, and total corneal astigmatism measurements (ACA, PCA, and TCA) were compared according to age, sex, and other ophthalmic parameters. The mean ± SD age of the study population was 30 ± 5.9 years. The mean magnitude was 1.09 ± 0.76 diopters (D) for ACA, 0.30 ± 0.13 D for PCA, and 1.08 ± 0.77 D for TCA. Males had a significantly higher magnitude of PCA than females (p = 0.041). Most eyes had a WTR anterior astigmatism and an ATR posterior astigmatism. The WTR astigmatism had a higher mean magnitude compared to the ATR and oblique astigmatism in all the astigmatism groups, with a significant difference in the ACA and TCA groups (p < 0.05). PCA magnitude exceeded 0.50 D in only 7.8% of the subjects. ACA, PCA, and TCA were significantly correlated with each other and also had a significant correlation with the anterior and posterior maximum corneal elevation measurements (p < 0.001). Although limited by the small number of participants and confined to our demographics, the results of this study provide information on a population not described before and may be helpful in obtaining optimum results in astigmatism correction in refractive surgery or in designing new intraocular lenses.

  4. The Effect of Graft Strength on Knee Laxity and Graft In-Situ Forces after Posterior Cruciate Ligament Reconstruction

    PubMed Central

    Lai, Yu-Shu; Chen, Wen-Chuan; Huang, Chang-Hung; Cheng, Cheng-Kung; Chan, Kam-Kong; Chang, Ting-Kuo

    2015-01-01

    Surgical reconstruction is generally recommended for posterior cruciate ligament (PCL) injuries; however, the use of grafts is still a controversial problem. In this study, a three-dimensional finite element model of the human tibiofemoral joint with articular cartilage layers, menisci, and four main ligaments was constructed to investigate the effects of graft strengths on knee kinematics and in-situ forces of PCL grafts. Nine different graft strengths with stiffness ranging from 0% (PCL rupture) to 200%, in increments of 25%, of an intact PCL’s strength were used to simulate the PCL reconstruction. A 100 N posterior tibial drawer load was applied to the knee joint at full extension. Results revealed that the maximum posterior translation of the PCL rupture model (0% stiffness) was 6.77 mm in the medial compartment, which resulted in tibial internal rotation of about 3.01°. After PCL reconstruction with any graft strength, the laxity of the medial tibial compartment was noticeably improved. Tibial translation and rotation were similar to the intact knee after PCL reconstruction with graft strengths ranging from 75% to 125% of an intact PCL. When the graft’s strength surpassed 150%, the medial tibia moved forward and external tibial rotation greatly increased. The in-situ forces generated in the PCL grafts ranged from 13.15 N to 75.82 N, depending on the stiffness. In conclusion, the strength of PCL grafts has a noticeable effect on anterior-posterior translation of the medial tibial compartment and its in-situ force. Similar kinematic response may happen in the models when the PCL graft’s strength lies between 75% and 125% of an intact PCL. PMID:26001045

  6. Application of the Maximum Amplitude-Early Rise Correlation to Cycle 23

    NASA Technical Reports Server (NTRS)

    Willson, Robert M.; Hathaway, David H.

    2004-01-01

    On the basis of the maximum amplitude-early rise correlation, cycle 23 could have been predicted to be about the size of the mean cycle as early as 12 mo following cycle minimum. Indeed, estimates for the size of cycle 23 throughout its rise consistently suggested a maximum amplitude that would not differ appreciably from the mean cycle, contrary to predictions based on precursor information. Because cycle 23's average slope during the rising portion of the solar cycle measured 2.4, computed as the difference between the conventional maximum (120.8) and minimum (8) amplitudes divided by the ascent duration in months (47), statistically speaking, it should be a cycle of shorter period. Hence, conventional sunspot minimum for cycle 24 should occur before December 2006, probably near July 2006 (+/-4 mo). However, if cycle 23 proves to be a statistical outlier, then conventional sunspot minimum for cycle 24 would be delayed until after July 2007, probably near December 2007 (+/-4 mo). In anticipation of cycle 24, a chart and table are provided for easy monitoring of the nearness and size of its maximum amplitude once onset has occurred (with respect to the mean cycle and using the updated maximum amplitude-early rise relationship).
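    To make the arithmetic quoted above explicit, the following minimal Python check reproduces the stated slope from the amplitudes and ascent duration given in the abstract (all numbers are taken directly from the text; nothing else is assumed):

      # Worked check of the slope arithmetic quoted in the abstract.
      max_amplitude = 120.8   # conventional maximum amplitude of cycle 23
      min_amplitude = 8.0     # conventional minimum amplitude
      ascent_months = 47      # ascent duration in months

      slope = (max_amplitude - min_amplitude) / ascent_months
      print(f"average rising slope = {slope:.1f}")  # prints 2.4, as stated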

  7. Bayesian anomaly detection in monitoring data applying relevance vector machine

    NASA Astrophysics Data System (ADS)

    Saito, Tomoo

    2011-04-01

    A method for automatically classifying monitoring data into two categories, normal and anomalous, is developed in order to remove anomalous data from the enormous amount of monitoring data. The relevance vector machine (RVM) is applied to a probabilistic discriminative model with basis functions and weight parameters whose posterior PDF (probability density function), conditional on the learning data set, is given by Bayes' theorem. The proposed framework is applied to actual monitoring data sets containing some anomalous data collected at two buildings in Tokyo, Japan. The trained models discriminate anomalous data from normal data very clearly, giving high probabilities of being normal to normal data and low probabilities of being normal to anomalous data.

  8. The returns and risks of investment portfolio in stock market crashes

    NASA Astrophysics Data System (ADS)

    Li, Jiang-Cheng; Long, Chao; Chen, Xiao-Dan

    2015-06-01

    The returns and risks of an investment portfolio in stock market crashes are investigated using a theoretical model based on a modified Heston model with a cubic nonlinearity, proposed by Spagnolo and Valenti. By numerically simulating the probability density function of returns and the mean escape time of the model, the results indicate that: (i) the maximum stability of returns is associated with the maximum dispersion of the investment portfolio and an optimal stop-loss position; and (ii) the maximum risks are related to the worst dispersion of the investment portfolio, and the risks are enhanced by increasing the stop-loss position. In addition, good agreement between the theoretical results and real market data is found in the behaviors of the probability density function and the mean escape time.
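    The mean escape time in such a model is typically estimated by direct simulation. The sketch below is a minimal Euler-Maruyama illustration of the idea; the cubic potential U(x) and all parameter values are invented for illustration and are not the calibrated model of Spagnolo and Valenti:

      # Minimal Euler-Maruyama sketch of the mean escape time in a
      # Heston-type model with a cubic nonlinearity. The potential U(x)
      # and every parameter value here are illustrative assumptions.
      import numpy as np

      rng = np.random.default_rng(0)

      def escape_time(x0=0.0, v0=0.04, barrier=-1.0, a=2.0, b=0.04,
                      c=0.3, dt=1e-2, t_max=20.0):
          """Time until the log-return x falls below a stop-loss barrier."""
          x, v = x0, v0
          for i in range(int(t_max / dt)):
              du = 3.0 * x**2 - 2.0 * x   # U'(x) for the assumed U(x) = x^3 - x^2
              dw1, dw2 = rng.normal(0.0, np.sqrt(dt), 2)
              x += -(du + v / 2.0) * dt + np.sqrt(max(v, 0.0)) * dw1
              v += a * (b - v) * dt + c * np.sqrt(max(v, 0.0)) * dw2
              if x < barrier:             # stop-loss position reached
                  return (i + 1) * dt
          return t_max                    # censored if no escape by t_max

      mean_tau = np.mean([escape_time() for _ in range(200)])
      print(f"estimated mean escape time: {mean_tau:.2f}")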

  9. A METHOD FOR DETERMINING THE RADIALLY-AVERAGED EFFECTIVE IMPACT AREA FOR AN AIRCRAFT CRASH INTO A STRUCTURE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walker, William C.

    This report presents a methodology for deriving the equations which can be used for calculating the radially-averaged effective impact area for a theoretical aircraft crash into a structure. Conventionally, a maximum effective impact area has been used in calculating the probability of an aircraft crash into a structure. Whereas the maximum effective impact area is specific to a single direction of flight, the radially-averaged effective impact area takes into consideration the real-life random nature of the direction of flight with respect to a structure. Since the radially-averaged effective impact area is less than the maximum effective impact area, the resulting calculated probability of an aircraft crash into a structure is reduced.
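    The report's derived equations are not reproduced in the abstract, but the central idea, averaging a direction-dependent effective area over all approach directions, can be illustrated numerically. The area function below (a rectangular footprint whose projected width depends on approach direction) and the dimensions are hypothetical stand-ins, not the report's expressions:

      # Illustrative radial averaging of a direction-dependent effective
      # impact area; all geometry values are assumed for the example.
      import numpy as np

      L, W = 60.0, 30.0      # assumed structure footprint, metres
      skid = 100.0           # assumed skid/shadow length of the aircraft, metres
      wingspan = 35.0        # assumed aircraft wingspan, metres

      def effective_area(theta):
          # Projected facade width seen from approach direction theta,
          # padded by the wingspan, times the skid length, plus roof area.
          width = L * np.abs(np.sin(theta)) + W * np.abs(np.cos(theta)) + wingspan
          return width * skid + L * W

      theta = np.linspace(0.0, 2.0 * np.pi, 3600, endpoint=False)
      areas = effective_area(theta)
      # Mean over uniformly spaced directions equals the radial average.
      print(f"max A = {areas.max():.0f} m^2, "
            f"radially averaged A = {areas.mean():.0f} m^2")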

  10. Maximum-likelihood fitting of data dominated by Poisson statistical uncertainties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stoneking, M.R.; Den Hartog, D.J.

    1996-06-01

    The fitting of data by χ²-minimization is valid only when the uncertainties in the data are normally distributed. When analyzing spectroscopic or particle counting data at very low signal level (e.g., a Thomson scattering diagnostic), the uncertainties are distributed with a Poisson distribution. The authors have developed a maximum-likelihood method for fitting data that correctly treats the Poisson statistical character of the uncertainties. This method maximizes the total probability that the observed data are drawn from the assumed fit function using the Poisson probability function to determine the probability for each data point. The algorithm also returns uncertainty estimates for the fit parameters. They compare this method with a χ²-minimization routine applied to both simulated and real data. Differences in the returned fits are greater at low signal level (less than ≈20 counts per measurement). The maximum-likelihood method is found to be more accurate and robust, returning a narrower distribution of values for the fit parameters with fewer outliers.
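    A minimal sketch of the described approach in Python, assuming an illustrative Gaussian-plus-background fit function (the diagnostic model in the paper may differ): the total Poisson log-probability of the counts is maximized instead of minimizing χ².

      # Poisson maximum-likelihood fit: minimize the negative total
      # Poisson log-probability of the observed counts.
      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import poisson

      rng = np.random.default_rng(1)
      x = np.linspace(-5, 5, 40)

      def model(x, amp, center, width):
          # Assumed Gaussian line shape plus a flat background of 1 count.
          return amp * np.exp(-0.5 * ((x - center) / width) ** 2) + 1.0

      counts = rng.poisson(model(x, amp=15.0, center=0.5, width=1.2))

      def neg_log_like(params):
          mu = model(x, *params)
          return -poisson.logpmf(counts, mu).sum()

      fit = minimize(neg_log_like, x0=[10.0, 0.0, 1.0], method="Nelder-Mead")
      print("ML estimates (amp, center, width):", fit.x)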

  11. Bayesian parameter estimation for chiral effective field theory

    NASA Astrophysics Data System (ADS)

    Wesolowski, Sarah; Furnstahl, Richard; Phillips, Daniel; Klco, Natalie

    2016-09-01

    The low-energy constants (LECs) of a chiral effective field theory (EFT) interaction in the two-body sector are fit to observable data using a Bayesian parameter estimation framework. By using Bayesian prior probability distributions (pdfs), we quantify relevant physical expectations such as LEC naturalness and include them in the parameter estimation procedure. The final result is a posterior pdf for the LECs, which can be used to propagate uncertainty resulting from the fit to data through to the final observable predictions. The posterior pdf also allows an empirical test of operator redundancy and other features of the potential. We compare results of our framework with other fitting procedures, interpreting the underlying assumptions in Bayesian probabilistic language. We also compare results from fitting all partial waves of the interaction simultaneously to cross-section data with results from fitting to extracted phase shifts, appropriately accounting for correlations in the data. Supported in part by the NSF and DOE.
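    For a linearized toy problem, the effect of a naturalness prior can be sketched in closed form: a Gaussian prior of scale cbar on the LECs combines conjugately with a Gaussian likelihood. The design matrix, noise level and LEC values below are illustrative assumptions; real chiral-EFT fits are nonlinear in the observables.

      # Conjugate linear-Gaussian sketch of LEC estimation with a
      # Gaussian "naturalness" prior (LECs expected to be O(1)).
      import numpy as np

      rng = np.random.default_rng(2)
      n_obs, n_lec = 50, 3
      A = rng.normal(size=(n_obs, n_lec))      # assumed linearized observable/LEC map
      true_lecs = np.array([0.8, -1.1, 0.3])   # natural-sized LECs
      sigma = 0.1                              # assumed data noise
      y = A @ true_lecs + rng.normal(0.0, sigma, n_obs)

      cbar = 1.0                               # naturalness scale: prior N(0, cbar^2)
      prior_prec = np.eye(n_lec) / cbar**2
      post_cov = np.linalg.inv(A.T @ A / sigma**2 + prior_prec)
      post_mean = post_cov @ (A.T @ y / sigma**2)

      print("posterior mean LECs:", np.round(post_mean, 3))
      print("posterior std devs :", np.round(np.sqrt(np.diag(post_cov)), 3))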

  12. Multiclass feature selection for improved pediatric brain tumor segmentation

    NASA Astrophysics Data System (ADS)

    Ahmed, Shaheen; Iftekharuddin, Khan M.

    2012-03-01

    In our previous work, we showed that fractal-based texture features are effective in the detection, segmentation and classification of posterior-fossa (PF) pediatric brain tumor in multimodality MRI. We exploited an information-theoretic approach, the Kullback-Leibler Divergence (KLD), for selecting and ranking different texture features. We further combined the feature selection technique with a segmentation method, Expectation Maximization (EM), to segment tumor (T) and non-tumor (NT) tissues. In this work, we extend the two-class KLD technique to a multiclass setting for effectively selecting the best features for brain tumor (T), cyst (C) and non-tumor (NT). We further assess segmentation robustness for each tissue type by computing Bayes' posterior probabilities and the corresponding number of pixels for each tissue segment in MRI patient images. We evaluate the improved tumor segmentation robustness using different similarity metrics for 5 patients in T1, T2 and FLAIR modalities.
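    One minimal way to extend two-class KLD feature ranking to the three tissue classes is to sum symmetrised KL divergences between the class-conditional histograms over all class pairs. The sketch below assumes histogram-based density estimates and synthetic feature values; the paper's exact estimator may differ.

      # Multiclass KLD score for one feature: sum of symmetrised KL
      # divergences between class-conditional histograms (T, C, NT).
      import numpy as np
      from itertools import combinations
      from scipy.stats import entropy

      rng = np.random.default_rng(3)
      classes = {"T": rng.normal(2.0, 1.0, 500),
                 "C": rng.normal(0.5, 1.0, 500),
                 "NT": rng.normal(0.0, 1.0, 500)}   # synthetic feature samples

      def multiclass_kld(samples_by_class, bins=30):
          lo = min(s.min() for s in samples_by_class.values())
          hi = max(s.max() for s in samples_by_class.values())
          hists = {k: np.histogram(s, bins=bins, range=(lo, hi))[0] + 1e-9
                   for k, s in samples_by_class.items()}
          hists = {k: h / h.sum() for k, h in hists.items()}
          # entropy(p, q) is KL(p||q); symmetrise and sum over class pairs
          return sum(entropy(hists[a], hists[b]) + entropy(hists[b], hists[a])
                     for a, b in combinations(hists, 2))

      print(f"multiclass KLD score for this feature: {multiclass_kld(classes):.2f}")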

  13. Brain Metabolic Dysfunction in Capgras Delusion During Alzheimer's Disease: A Positron Emission Tomography Study.

    PubMed

    Jedidi, H; Daury, N; Capa, R; Bahri, M A; Collette, F; Feyers, D; Bastin, C; Maquet, P; Salmon, E

    2015-11-01

    Capgras delusion is characterized by the misidentification of people and by the delusional belief that the misidentified persons have been replaced by impostors, generally perceived as persecutors. Since little is known regarding the neural correlates of Capgras syndrome, the cerebral metabolic pattern of a patient with probable Alzheimer's disease (AD) and Capgras syndrome was compared with those of 24 healthy elderly participants and 26 patients with AD without delusional syndrome. Compared with both the healthy group and the AD group, the patient had significant hypometabolism in frontal and posterior midline structures. In the light of current neural models of face perception, our patient's Capgras syndrome may be related to impaired recognition of a familiar face, subserved by the posterior cingulate/precuneus cortex, and impaired reflection about personally relevant knowledge related to a face, subserved by the dorsomedial prefrontal cortex. © The Author(s) 2013.

  14. Modeling Dynamic Contrast-Enhanced MRI Data with a Constrained Local AIF.

    PubMed

    Duan, Chong; Kallehauge, Jesper F; Pérez-Torres, Carlos J; Bretthorst, G Larry; Beeman, Scott C; Tanderup, Kari; Ackerman, Joseph J H; Garbow, Joel R

    2018-02-01

    This study aims to develop a constrained local arterial input function (cL-AIF) to improve quantitative analysis of dynamic contrast-enhanced (DCE)-magnetic resonance imaging (MRI) data by accounting for the contrast-agent bolus amplitude error in the voxel-specific AIF. Bayesian probability theory-based parameter estimation and model selection were used to compare tracer kinetic modeling employing either the measured remote-AIF (R-AIF, i.e., the traditional approach) or an inferred cL-AIF against both in silico DCE-MRI data and clinical, cervical cancer DCE-MRI data. When the data model included the cL-AIF, tracer kinetic parameters were correctly estimated from in silico data under contrast-to-noise conditions typical of clinical DCE-MRI experiments. Considering the clinical cervical cancer data, Bayesian model selection was performed for all tumor voxels of the 16 patients (35,602 voxels in total). Among those voxels, a tracer kinetic model that employed the voxel-specific cL-AIF was preferred (i.e., had a higher posterior probability) in 80% of the voxels compared to the direct use of a single R-AIF. Maps of spatial variation in voxel-specific AIF bolus amplitude and arrival time for heterogeneous tissues, such as cervical cancer, are accessible with the cL-AIF approach. The cL-AIF method, which estimates unique local-AIF amplitude and arrival time for each voxel within the tissue of interest, provides better modeling of DCE-MRI data than the use of a single, measured R-AIF. The Bayesian-based data analysis described herein affords estimates of uncertainties for each model parameter, via posterior probability density functions, and voxel-wise comparison across methods/models, via model selection in data modeling.

  15. Distribution of Marburg virus in Africa: An evolutionary approach.

    PubMed

    Zehender, Gianguglielmo; Sorrentino, Chiara; Veo, Carla; Fiaschi, Lisa; Gioffrè, Sonia; Ebranati, Erika; Tanzi, Elisabetta; Ciccozzi, Massimo; Lai, Alessia; Galli, Massimo

    2016-10-01

    The aim of this study was to investigate the origin and geographical dispersion of Marburg virus, the first member of the Filoviridae family to be discovered. Seventy-three complete genome sequences of Marburg virus isolated from animals and humans were retrieved from public databases and analysed using a Bayesian phylogeographical framework. The phylogenetic tree of the Marburg virus data set showed two significant evolutionary lineages: Ravn virus (RAVV) and Marburg virus (MARV). MARV divided into two main clades; clade A included isolates from Uganda (five from the European epidemic in 1967), Kenya (1980) and Angola (from the epidemic of 2004-2005); clade B included most of the isolates obtained during the 1999-2000 epidemic in the Democratic Republic of the Congo (DRC) and a group of Ugandan isolates obtained in 2007-2009. The estimated mean evolutionary rate of the whole genome was 3.3×10⁻⁴ substitutions/site/year (credibility interval 2.0-4.8). The MARV strain had a mean root time of the most recent common ancestor of 177.9 years ago (YA) (95% highest posterior density 87-284), thus indicating that it probably originated in the mid-19th century, whereas the RAVV strain had a later origin, dating back to a mean of 33.8 YA. The most probable location of the MARV ancestor was Uganda (state posterior probability, spp=0.41), whereas that of the RAVV ancestor was Kenya (spp=0.71). There were significant migration rates from Uganda to the DRC (Bayes Factor, BF=42.0) and in the opposite direction (BF=5.7). Our data suggest that Uganda may have been the cradle of Marburg virus in Africa. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. The evolutionary history of vertebrate cranial placodes II. Evolution of ectodermal patterning.

    PubMed

    Schlosser, Gerhard; Patthey, Cedric; Shimeld, Sebastian M

    2014-05-01

    Cranial placodes are evolutionary innovations of vertebrates. However, they most likely evolved by redeployment, rewiring and diversification of preexisting cell types and patterning mechanisms. In the second part of this review we compare vertebrates with other animal groups to elucidate the evolutionary history of ectodermal patterning. We show that several transcription factors have ancient bilaterian roles in dorsoventral and anteroposterior regionalisation of the ectoderm. Evidence from amphioxus suggests that ancestral chordates then concentrated neurosecretory cells in the anteriormost non-neural ectoderm. This anterior proto-placodal domain subsequently gave rise to the oral siphon primordia in tunicates (with neurosecretory cells being lost) and anterior (adenohypophyseal, olfactory, and lens) placodes of vertebrates. Likewise, tunicate atrial siphon primordia and posterior (otic, lateral line, and epibranchial) placodes of vertebrates probably evolved from a posterior proto-placodal region in the tunicate-vertebrate ancestor. Since both siphon primordia in tunicates give rise to sparse populations of sensory cells, both proto-placodal domains probably also gave rise to some sensory receptors in the tunicate-vertebrate ancestor. However, proper cranial placodes, which give rise to high-density arrays of specialised sensory receptors and neurons, evolved from these domains only in the vertebrate lineage. We propose that this may have involved rewiring of the regulatory network upstream and downstream of Six1/2 and Six4/5 transcription factors and their Eya family cofactors. These proteins, which play ancient roles in neuronal differentiation, were first recruited to the dorsal non-neural ectoderm in the tunicate-vertebrate ancestor but subsequently probably acquired new target genes in the vertebrate lineage, allowing them to adopt new functions in regulating proliferation and patterning of neuronal progenitors. Copyright © 2014 Elsevier Inc. All rights reserved.

  17. Post-hoc simulation study to adopt a computerized adaptive testing (CAT) for a Korean Medical License Examination.

    PubMed

    Seo, Dong Gi; Choi, Jeongwook

    2018-05-17

    Computerized adaptive testing (CAT) has been adopted in licensing examinations because of its test efficiency and measurement accuracy, and much research on CAT has been published demonstrating both. This simulation study investigated scoring methods and item selection methods for implementing CAT in the Korean Medical License Examination (KMLE). The study used a post-hoc (real data) simulation design. The item bank comprised all items from a 2017 KMLE, and all CAT algorithms were implemented with the 'catR' package in R. In terms of accuracy, the Rasch and two-parameter logistic (2PL) models performed better than the 3PL model. Modal a posteriori (MAP) and expected a posteriori (EAP) scoring provided more accurate estimates than maximum likelihood estimation (MLE) and weighted likelihood estimation (WLE). Furthermore, maximum posterior weighted information (MPWI) and minimum expected posterior variance (MEPV) performed better than other item selection methods. In terms of efficiency, the Rasch model is recommended to reduce test length. A simulation study should be performed under varied test conditions, with specific scoring and item selection methods predetermined, before implementing a live CAT.
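    As an illustration of one scoring method compared in the study, the sketch below computes an EAP ability estimate under a 2PL model with a standard-normal prior by grid quadrature. Item parameters and responses are invented; the study itself used the 'catR' package in R.

      # EAP ability estimation under a 2PL IRT model via grid quadrature.
      import numpy as np
      from scipy.stats import norm

      a = np.array([1.2, 0.8, 1.5, 1.0])   # discriminations of administered items
      b = np.array([-0.5, 0.0, 0.7, 1.2])  # difficulties
      u = np.array([1, 1, 0, 1])           # observed responses (1 = correct)

      theta = np.linspace(-4, 4, 161)      # quadrature grid over ability
      p = 1.0 / (1.0 + np.exp(-a[:, None] * (theta[None, :] - b[:, None])))
      like = np.prod(np.where(u[:, None] == 1, p, 1.0 - p), axis=0)
      post = like * norm.pdf(theta)        # N(0, 1) prior on ability
      post /= post.sum()                   # normalise over the grid

      eap = (theta * post).sum()
      psd = np.sqrt((((theta - eap) ** 2) * post).sum())
      print(f"EAP ability estimate: {eap:.2f} (posterior SD {psd:.2f})")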

  18. Topography of acute stroke in a sample of 439 right brain damaged patients.

    PubMed

    Sperber, Christoph; Karnath, Hans-Otto

    2016-01-01

    Knowledge of the typical lesion topography and volumetry is important for clinical stroke diagnosis as well as for anatomo-behavioral lesion mapping analyses. Here we used modern lesion analysis techniques to examine the naturally occurring lesion patterns caused by ischemic and by hemorrhagic infarcts in a large, representative acute stroke patient sample. Acute MR and CT imaging of 439 consecutively admitted right-hemispheric stroke patients from a well-defined catchment area suffering from ischemia (n = 367) or hemorrhage (n = 72) were normalized and mapped in reference to stereotaxic anatomical atlases. For ischemic infarcts, highest frequencies of stroke were observed in the insula, putamen, operculum and superior temporal cortex, as well as the inferior and superior occipito-frontal fascicles, superior longitudinal fascicle, uncinate fascicle, and the acoustic radiation. The maximum overlay of hemorrhages was located more posteriorly and more medially, involving posterior areas of the insula, Heschl's gyrus, and putamen. Lesion size was largest in frontal and anterior areas and lowest in subcortical and posterior areas. The large and unbiased sample of stroke patients used in the present study accumulated the different sub-patterns to identify the global topographic and volumetric pattern of right hemisphere stroke in humans.

  19. Estimating the periodic components of a biomedical signal through inverse problem modelling and Bayesian inference with sparsity enforcing prior

    NASA Astrophysics Data System (ADS)

    Dumitru, Mircea; Djafari, Ali-Mohammad

    2015-01-01

    The recent developments in chronobiology require an analysis of variation in the periodic components of signals expressing biological rhythms. A precise estimation of the periodic components vector is required. The classical approaches, based on FFT methods, are inefficient given the particularities of the data (short length). In this paper we propose a new method using sparsity prior information (a reduced number of non-zero components). The considered law is the Student-t distribution, viewed as the marginal distribution of an Infinite Gaussian Scale Mixture (IGSM) defined via a hidden variable representing the inverse variances and modelled by a Gamma distribution. The hyperparameters are modelled using conjugate priors, i.e. Inverse Gamma distributions. The expression of the joint posterior law of the unknown periodic components vector, hidden variables and hyperparameters is obtained, and the unknowns are then estimated via Joint Maximum A Posteriori (JMAP) and the Posterior Mean (PM). For the PM estimator, the expression of the posterior law is approximated by a separable one via the Bayesian Variational Approximation (BVA), using the Kullback-Leibler (KL) divergence. Finally we show results on synthetic data in cancer treatment applications.
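    The scale-mixture construction described above can be stated compactly. Assuming the standard parameterisation, a Student-t density arises by marginalising a zero-mean Gaussian over a Gamma-distributed precision τ (in LaTeX notation):

      \mathcal{St}(x \mid \nu) \;=\; \int_0^\infty \mathcal{N}\!\left(x \mid 0, \tau^{-1}\right)\, \mathcal{G}\!\left(\tau \mid \tfrac{\nu}{2}, \tfrac{\nu}{2}\right) \mathrm{d}\tau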

  20. Evaluation of hip fracture risk in relation to fall direction.

    PubMed

    Nankaku, Manabu; Kanzaki, Hideto; Tsuboyama, Tadao; Nakamura, Takashi

    2005-11-01

    The purpose of this study is to evaluate hip fracture risk in relation to fall direction and to elucidate factors that influence the impact force in falls on the hip. Eight healthy volunteers performed deliberate falls in three directions (lateral, posterolateral and posterior) on a force platform covered by a mattress of 13 cm thickness. Fall descent motions and impact postures were examined by a three-dimensional analyzer. The maximum ground reaction force, the velocity of the greater trochanter at impact, and the activity of the quadriceps and gluteus medius were measured. In all trials of lateral and posterolateral falls, but not of posterior falls, the subjects hit their greater trochanter directly on the mattress. The impact forces were between 2,000 N and 4,000 N. Posterolateral falls showed significantly higher velocity at impact than posterior falls. Height and lower limb length exhibited positive correlations with the impact force in all fall directions. In the lateral fall, there was a positive correlation between quadriceps activity and the impact force. In view of the impact point, force, and velocity, the posterolateral fall appears to carry the highest risk of hip fracture.

  1. Maximum likelihood estimation for predicting the probability of obtaining variable shortleaf pine regeneration densities

    Treesearch

    Thomas B. Lynch; Jean Nkouka; Michael M. Huebschmann; James M. Guldin

    2003-01-01

    A logistic equation is the basis for a model that predicts the probability of obtaining regeneration at specified densities. The density of regeneration (trees/ha) for which an estimate of probability is desired can be specified by means of independent variables in the model. When estimating parameters, the dependent variable is set to 1 if the regeneration density (...
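    A minimal sketch of the kind of logistic model described, fit by maximum likelihood: the probability of achieving a specified regeneration density is a logistic function of covariates. The covariates and data below are invented; the paper's actual independent variables are only partly described in the (truncated) abstract above.

      # Logistic regression by direct maximum likelihood: the probability
      # of obtaining regeneration at a specified density given covariates.
      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(4)
      n = 200
      X = np.column_stack([np.ones(n),
                           rng.uniform(0, 40, n),      # e.g. overstory basal area
                           rng.uniform(0, 5000, n)])   # e.g. target density (trees/ha)
      true_beta = np.array([2.0, -0.08, -0.0004])
      y = rng.binomial(1, 1 / (1 + np.exp(-X @ true_beta)))  # 1 = density achieved

      def neg_log_like(beta):
          z = X @ beta
          # stable Bernoulli negative log-likelihood
          return np.sum(np.logaddexp(0.0, z)) - np.sum(y * z)

      fit = minimize(neg_log_like, x0=np.zeros(3), method="BFGS")
      print("ML logistic coefficients:", np.round(fit.x, 4))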

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sigeti, David E.; Pelak, Robert A.

    We present a Bayesian statistical methodology for identifying improvement in predictive simulations, including an analysis of the number of (presumably expensive) simulations that will need to be made in order to establish with a given level of confidence that an improvement has been observed. Our analysis assumes the ability to predict (or postdict) the same experiments with legacy and new simulation codes and uses a simple binomial model for the probability, θ, that, in an experiment chosen at random, the new code will provide a better prediction than the old. This model makes it possible to do statistical analysis with an absolute minimum of assumptions about the statistics of the quantities involved, at the price of discarding some potentially important information in the data. In particular, the analysis depends only on whether or not the new code predicts better than the old in any given experiment, and not on the magnitude of the improvement. We show how the posterior distribution for θ may be used, in a kind of Bayesian hypothesis testing, both to decide if an improvement has been observed and to quantify our confidence in that decision. We quantify the predictive probability that should be assigned, prior to taking any data, to the possibility of achieving a given level of confidence, as a function of sample size. We show how this predictive probability depends on the true value of θ and, in particular, how there will always be a region around θ = 1/2 where it is highly improbable that we will be able to identify an improvement in predictive capability, although the width of this region will shrink to zero as the sample size goes to infinity. We show how the posterior standard deviation may be used, as a kind of 'plan B metric' in the case that the analysis shows that θ is close to 1/2, and argue that such a plan B should generally be part of hypothesis testing. All the analysis presented in the paper is done with a general beta-function prior for θ, enabling sequential analysis in which a small number of new simulations may be done and the resulting posterior for θ used as a prior to inform the next stage of power analysis.
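    The core of this binomial analysis is easy to reproduce. With a Beta(α, β) prior on θ, the posterior after the new code wins k of n paired comparisons is Beta(α + k, β + n − k), and the evidence for improvement is the posterior mass above θ = 1/2. The counts below are invented for illustration:

      # Beta-binomial posterior and the probability of improvement.
      from scipy.stats import beta

      alpha0, beta0 = 1.0, 1.0        # flat Beta(1, 1) prior on theta
      k, n = 14, 20                   # illustrative: new code better in 14 of 20 runs

      posterior = beta(alpha0 + k, beta0 + n - k)
      p_improved = posterior.sf(0.5)  # P(theta > 1/2 | data)
      print(f"P(improvement) = {p_improved:.3f}, "
            f"posterior sd = {posterior.std():.3f}")

    Because the beta family is conjugate, the posterior from one batch of simulations can serve directly as the prior for the next batch, which is exactly the sequential use described in the abstract.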

  3. Identification of BRCA1 missense substitutions that confer partial functional activity: potential moderate risk variants?

    PubMed

    Lovelock, Paul K; Spurdle, Amanda B; Mok, Myth T S; Farrugia, Daniel J; Lakhani, Sunil R; Healey, Sue; Arnold, Stephen; Buchanan, Daniel; Couch, Fergus J; Henderson, Beric R; Goldgar, David E; Tavtigian, Sean V; Chenevix-Trench, Georgia; Brown, Melissa A

    2007-01-01

    Many of the DNA sequence variants identified in the breast cancer susceptibility gene BRCA1 remain unclassified in terms of their potential pathogenicity. Both multifactorial likelihood analysis and functional approaches have been proposed as a means to elucidate likely clinical significance of such variants, but analysis of the comparative value of these methods for classifying all sequence variants has been limited. We have compared the results from multifactorial likelihood analysis with those from several functional analyses for the four BRCA1 sequence variants A1708E, G1738R, R1699Q, and A1708V. Our results show that multifactorial likelihood analysis, which incorporates sequence conservation, co-inheritance, segregation, and tumour immunohistochemical analysis, may improve classification of variants. For A1708E, previously shown to be functionally compromised, analysis of oestrogen receptor, cytokeratin 5/6, and cytokeratin 14 tumour expression data significantly strengthened the prediction of pathogenicity, giving a posterior probability of pathogenicity of 99%. For G1738R, shown to be functionally defective in this study, immunohistochemistry analysis confirmed previous findings of inconsistent 'BRCA1-like' phenotypes for the two tumours studied, and the posterior probability for this variant was 96%. The posterior probabilities of R1699Q and A1708V were 54% and 69%, respectively, only moderately suggestive of increased risk. Interestingly, results from functional analyses suggest that both of these variants have only partial functional activity. R1699Q was defective in foci formation in response to DNA damage and displayed intermediate transcriptional transactivation activity but showed no evidence for centrosome amplification. In contrast, A1708V displayed an intermediate transcriptional transactivation activity and a normal foci formation response in response to DNA damage but induced centrosome amplification. These data highlight the need for a range of functional studies to be performed in order to identify variants with partially compromised function. The results also raise the possibility that A1708V and R1699Q may be associated with a low or moderate risk of cancer. While data pooling strategies may provide more information for multifactorial analysis to improve the interpretation of the clinical significance of these variants, it is likely that the development of current multifactorial likelihood approaches and the consideration of alternative statistical approaches will be needed to determine whether these individually rare variants do confer a low or moderate risk of breast cancer.
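    The multifactorial likelihood combination itself is a simple application of Bayes' rule in odds form: posterior odds equal prior odds times the product of the likelihood ratios contributed by the independent evidence sources. The prior and likelihood-ratio values below are invented for illustration and are not those used for the BRCA1 variants above.

      # Posterior probability of pathogenicity from combined likelihood ratios.
      import numpy as np

      prior_p = 0.1                        # assumed prior probability of pathogenicity
      lrs = {"sequence conservation": 4.0,
             "co-segregation": 2.5,
             "tumour IHC profile": 8.0}    # hypothetical likelihood ratios

      posterior_odds = (prior_p / (1 - prior_p)) * np.prod(list(lrs.values()))
      posterior_p = posterior_odds / (1 + posterior_odds)
      print(f"posterior probability of pathogenicity: {posterior_p:.3f}")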

  4. Bronchoscopic lung-volume reduction with Exhale airway stents for emphysema (EASE trial): randomised, sham-controlled, multicentre trial.

    PubMed

    Shah, P L; Slebos, D-J; Cardoso, P F G; Cetti, E; Voelker, K; Levine, B; Russell, M E; Goldin, J; Brown, M; Cooper, J D; Sybrecht, G W

    2011-09-10

    Airway bypass is a bronchoscopic lung-volume reduction procedure for emphysema whereby transbronchial passages into the lung are created to release trapped air, supported with paclitaxel-coated stents to ease the mechanics of breathing. The aim of the EASE (Exhale airway stents for emphysema) trial was to evaluate safety and efficacy of airway bypass in people with severe homogeneous emphysema. We undertook a randomised, double-blind, sham-controlled study in 38 specialist respiratory centres worldwide. We recruited 315 patients who had severe hyperinflation (ratio of residual volume [RV] to total lung capacity of ≥0·65). By computer using a random number generator, we randomly allocated participants (in a 2:1 ratio) to either airway bypass (n=208) or sham control (107). We divided investigators into team A (masked), who completed pre-procedure and post-procedure assessments, and team B (unmasked), who only did bronchoscopies without further interaction with patients. Participants were followed up for 12 months. The 6-month co-primary efficacy endpoint required 12% or greater improvement in forced vital capacity (FVC) and 1 point or greater decrease in the modified Medical Research Council dyspnoea score from baseline. The composite primary safety endpoint incorporated five severe adverse events. We did Bayesian analysis to show the posterior probability that airway bypass was superior to sham control (success threshold, 0·965). Analysis was by intention to treat. This study is registered with ClinicalTrials.gov, number NCT00391612. All recruited patients were included in the analysis. At 6 months, no difference between treatment arms was noted with respect to the co-primary efficacy endpoint (30 of 208 for airway bypass vs 12 of 107 for sham control; posterior probability 0·749, below the Bayesian success threshold of 0·965). The 6-month composite primary safety endpoint was 14·4% (30 of 208) for airway bypass versus 11·2% (12 of 107) for sham control (judged non-inferior, with a posterior probability of 1·00 [Bayesian success threshold >0·95]). Although our findings showed safety and transient improvements, no sustainable benefit was recorded with airway bypass in patients with severe homogeneous emphysema. Broncus Technologies. Copyright © 2011 Elsevier Ltd. All rights reserved.
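    A generic Bayesian superiority check on the reported co-primary efficacy counts (30/208 vs 12/107) can be sketched with independent flat Beta priors and Monte Carlo sampling. This illustrates the type of calculation involved; it is not the trial's prespecified Bayesian model, so the number it produces need not match the reported posterior probability of 0.749.

      # Monte Carlo posterior probability that one response rate exceeds another.
      import numpy as np

      rng = np.random.default_rng(5)
      n_draws = 200_000
      p_bypass = rng.beta(1 + 30, 1 + 208 - 30, n_draws)   # 30 responders of 208
      p_sham = rng.beta(1 + 12, 1 + 107 - 12, n_draws)     # 12 responders of 107

      posterior_prob = np.mean(p_bypass > p_sham)
      print(f"P(bypass response rate > sham) ~ {posterior_prob:.3f}")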

  5. Bit Error Probability for Maximum Likelihood Decoding of Linear Block Codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Fossorier, Marc P. C.; Rhee, Dojun

    1996-01-01

    In this paper, the bit error probability P_b for maximum likelihood decoding of binary linear codes is investigated. The contribution of each information bit to P_b is considered. For randomly generated codes, it is shown that the conventional approximation at high SNR, P_b ≈ (d_H/N)·P_s, where P_s represents the block error probability, holds for systematic encoding only. Also, systematic encoding provides the minimum P_b when the inverse mapping corresponding to the generator matrix of the code is used to retrieve the information sequence. The bit error performances corresponding to other generator matrix forms are also evaluated. Although derived for codes with a randomly generated generator matrix, these results are shown to provide good approximations for codes used in practice. Finally, for decoding methods which require a generator matrix with a particular structure, such as trellis decoding or algebraic-based soft-decision decoding, equivalent schemes that reduce the bit error probability are discussed.
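    A worked instance of the quoted high-SNR approximation, with invented code parameters (not values from the paper):

      # P_b ~ (d_H / N) * P_s for an illustrative code.
      N = 127        # assumed block length
      d_H = 15       # assumed distance parameter entering the approximation
      P_s = 1e-4     # assumed block error probability at the operating SNR

      P_b = (d_H / N) * P_s
      print(f"approximate bit error probability: {P_b:.2e}")  # ~1.18e-05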

  6. Covariance Based Pre-Filters and Screening Criteria for Conjunction Analysis

    NASA Astrophysics Data System (ADS)

    George, E.; Chan, K.

    2012-09-01

    Several relationships are developed relating object size, initial covariance and range at closest approach to probability of collision. These relationships address the following questions: - Given the objects' initial covariance and combined hard body size, what is the maximum possible value of the probability of collision (Pc)? - Given the objects' initial covariance, what is the maximum combined hard body radius for which the probability of collision does not exceed the tolerance limit? - Given the objects' initial covariance and the combined hard body radius, what is the minimum miss distance for which the probability of collision does not exceed the tolerance limit? - Given the objects' initial covariance and the miss distance, what is the maximum combined hard body radius for which the probability of collision does not exceed the tolerance limit? The first relationship above allows the elimination of object pairs from conjunction analysis (CA) on the basis of the initial covariance and hard-body sizes of the objects. The application of this pre-filter to present day catalogs with estimated covariance results in the elimination of approximately 35% of object pairs as unable to ever conjunct with a probability of collision exceeding 1×10⁻⁶. Because Pc is directly proportional to object size and inversely proportional to covariance size, this pre-filter will have a significantly larger impact on future catalogs, which are expected to contain a much larger fraction of small debris tracked only by a limited subset of available sensors. This relationship also provides a mathematically rigorous basis for eliminating objects from analysis entirely based on element set age or quality - a practice commonly done by rough rules of thumb today. Further, these relations can be used to determine the required geometric screening radius for all objects. This analysis reveals the screening volumes for small objects are much larger than needed, while the screening volumes for pairs of large objects may be inadequate. These relationships may also form the basis of an important metric for catalog maintenance by defining the maximum allowable covariance size for effective conjunction analysis. The application of these techniques promises to greatly improve the efficiency and completeness of conjunction analysis.
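    The first pre-filter question can be sketched numerically: for a fixed encounter-plane covariance and combined hard-body radius, the probability of collision is largest at zero miss distance, so that value bounds Pc for the pair. The covariance, radius and tolerance below are illustrative, and an operational implementation would use an analytic two-dimensional integral rather than Monte Carlo (which cannot resolve probabilities near 1e-6 with modest sample counts).

      # Monte Carlo sketch: maximum possible Pc occurs at zero miss
      # distance for a fixed encounter-plane covariance.
      import numpy as np

      rng = np.random.default_rng(6)
      cov = np.diag([200.0**2, 100.0**2])   # assumed encounter-plane covariance, m^2
      R = 20.0                              # assumed combined hard-body radius, m
      rel = rng.multivariate_normal([0.0, 0.0], cov, 400_000)

      def collision_probability(miss):
          d = rel - np.array([miss, 0.0])   # offset by the nominal miss vector
          return np.mean(np.hypot(d[:, 0], d[:, 1]) < R)

      max_pc = collision_probability(0.0)   # zero miss: best case for collision
      print(f"maximum possible Pc ~ {max_pc:.2e}")
      if max_pc < 1e-6:                     # example tolerance from the abstract
          print("pair can be screened out of conjunction analysis")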

  7. Femoral Component External Rotation Affects Knee Biomechanics: A Computational Model of Posterior-stabilized TKA.

    PubMed

    Kia, Mohammad; Wright, Timothy M; Cross, Michael B; Mayman, David J; Pearle, Andrew D; Sculco, Peter K; Westrich, Geoffrey H; Imhauser, Carl W

    2018-01-01

    The correct amount of external rotation of the femoral component during TKA is controversial because the resulting changes in biomechanical knee function associated with varying degrees of femoral component rotation are not well understood. We addressed this question using a computational model, which allowed us to isolate the biomechanical impact of geometric factors including bony shapes, location of ligament insertions, and implant size across three different knees after posterior-stabilized (PS) TKA. Using a computational model of the tibiofemoral joint, we asked: (1) Does external rotation unload the medial collateral ligament (MCL) and what is the effect on lateral collateral ligament tension? (2) How does external rotation alter tibiofemoral contact loads and kinematics? (3) Does 3° external rotation relative to the posterior condylar axis align the component to the surgical transepicondylar axis (sTEA) and what anatomic factors of the femoral condyle explain variations in maximum MCL tension among knees? We incorporated a PS TKA into a previously developed computational knee model applied to three neutrally aligned, nonarthritic, male cadaveric knees. The computational knee model was previously shown to corroborate coupled motions and ligament loading patterns of the native knee through a range of flexion. Implant geometries were virtually installed using hip-to-ankle CT scans through measured resection and anterior referencing surgical techniques. Collateral ligament properties were standardized across each knee model by defining stiffness and slack lengths based on the healthy population. The femoral component was externally rotated from 0° to 9° relative to the posterior condylar axis in 3° increments. At each increment, the knee was flexed under 500 N compression from 0° to 90° simulating an intraoperative examination. The computational model predicted collateral ligament forces, compartmental contact forces, and tibiofemoral internal/external and varus-valgus rotation through the flexion range. The computational model predicted that femoral component external rotation relative to the posterior condylar axis unloads the MCL and the medial compartment; however, these effects were inconsistent from knee to knee. When the femoral component was externally rotated by 9° rather than 0° in knees one, two, and three, the maximum force carried by the MCL decreased a respective 55, 88, and 297 N; the medial contact forces decreased at most a respective 90, 190, and 570 N; external tibial rotation in early flexion increased by a respective 4.6°, 1.1°, and 3.3°; and varus angulation of the tibia relative to the femur in late flexion increased by 8.4°, 8.0°, and 7.9°, respectively. With 3° of femoral component external rotation relative to the posterior condylar axis, the femoral component was still externally rotated by up to 2.7° relative to the sTEA in these three neutrally aligned knees. Variations in MCL force from knee to knee with 3° of femoral component external rotation were related to the ratio of the distances from the femoral insertion of the MCL to the posterior and distal cuts of the implant; the closer this ratio was to 1, the more uniform were the MCL tensions from 0° to 90° flexion. A larger ratio of distances from the femoral insertion of the MCL to the posterior and distal cuts may cause clinically relevant increases in both MCL tension and compartmental contact forces. 
To obtain more consistent ligament tensions through flexion, it may be important to locate the posterior and distal aspects of the femoral component with respect to the proximal insertion of the MCL such that a ratio of 1 is achieved.

  8. Is there a difference between the effects of single and triple indirect moxibustion stimulations on skin temperature changes of the posterior trunk surface?

    PubMed

    Mori, Hidetoshi; Kuge, Hiroshi; Tanaka, Tim Hideaki; Taniwaki, Eiichi; Ohsawa, Hideo

    2011-06-01

    To determine whether any difference exists in responses to indirect moxibustion (IM) relative to thermal stimulation duration. In experiment 1, 9 subjects attended two experimental sessions consisting of single stimulation with IM or triple stimulation with IM, using a crossover design. A K-type thermocouple temperature probe was fixed on the skin surface at the GV14 acupuncture point. IM stimulation was administered to the top of the probe in order to measure the temperature curve. In addition, each subject evaluated his or her subjective feeling of heat on a visual analogue scale after each stimulation. Experiment 2 was conducted on 42 participants, divided into three groups according to the envelope allocation method: single stimulation with IM (n=20), triple stimulation with IM (n=11) and a control group (n=11). A thermograph was used to obtain the skin temperature on the posterior trunk of the participant. To analyse skin temperature, four arbitrary frames (the scapular, interscapular, lumbar and vertebral regions) were made on the posterior trunk. In experiment 1, no significant difference in maximum temperature was found in IM and subjective feeling of heat intensity between single and triple stimulation with IM. In experiment 2, increases in skin temperature occurred on the posterior trunk, but no differences in skin temperature occurred between the groups receiving single and triple stimulation with IM. No difference exists in the skin temperature response to moxibustion between the single and triple stimulation with IM.

  9. Contact Kinematics Correlates to Tibial Component Migration Following Single Radius Posterior Stabilized Knee Replacement.

    PubMed

    Teeter, Matthew G; Perry, Kevin I; Yuan, Xunhua; Howard, James L; Lanting, Brent A

    2018-03-01

    Contact kinematics between total knee arthroplasty components is thought to affect implant migration; however, the interaction between kinematics and tibial component migration has not been thoroughly examined in a modern implant system. A total of 24 knees from 23 patients undergoing total knee arthroplasty with a single radius, posterior stabilized implant were examined. Patients underwent radiostereometric analysis at 2 and 6 weeks, 3 and 6 months, and 1 and 2 years to measure migration of the tibial component in all planes. At 1 year, patients also had standing radiostereometric analysis examinations acquired in 0°, 20°, 40°, and 60° of flexion, and the location of contact and magnitude of any condylar liftoff was measured for each flexion angle. Regression analysis was performed between kinematic variables and migration at 1 year. The average magnitude of maximum total point motion across all patients was 0.671 ± 0.270 mm at 1 year and 0.608 ± 0.359 mm at 2 years (P = .327). Four implants demonstrated continuous migration of >0.2 mm between the first and second year of implantation. There were correlations between the location of contact and tibial component anterior-posterior tilt, varus-valgus tilt, and anterior-posterior translation. The patients with continuous migration demonstrated atypical kinematics and condylar liftoff in some instances. Kinematics can influence tibial component migration, likely through alterations of force transmission. Abnormal kinematics may play a role in long-term implant loosening. Copyright © 2017 Elsevier Inc. All rights reserved.

  10. Embedding the results of focussed Bayesian fusion into a global context

    NASA Astrophysics Data System (ADS)

    Sander, Jennifer; Heizmann, Michael

    2014-05-01

    Bayesian statistics offers a well-founded and powerful fusion methodology, also for the fusion of heterogeneous information sources. However, except in special cases, the needed posterior distribution is not analytically derivable. As a consequence, Bayesian fusion may cause unacceptably high computational and storage costs in practice. Local Bayesian fusion approaches aim at reducing the complexity of the Bayesian fusion methodology significantly. This is done by concentrating the actual Bayesian fusion on the potentially most task-relevant parts of the domain of the Properties of Interest. Our research on these approaches is motivated by an analogy to criminal investigations, where criminalists also pursue clues only locally. This publication follows previous publications on a special local Bayesian fusion technique called focussed Bayesian fusion, in which the actual calculation of the posterior distribution is completely restricted to a suitably chosen local context. As a result, the global posterior distribution is not completely determined, and strategies for using the results of a focussed Bayesian analysis appropriately are needed. In this publication, we primarily contrast different ways of embedding the results of focussed Bayesian fusion explicitly into a global context. To obtain a unique global posterior distribution, we analyze the application of the Maximum Entropy Principle, which has been shown to be successfully applicable in metrology and in several other areas. To address the special need for making further decisions subsequent to the actual fusion task, we further analyze criteria for decision making under partial information.

  11. Effect of load, cadence, and fatigue on tibio-femoral joint force during a half squat.

    PubMed

    Hattin, H C; Pierrynowski, M R; Ball, K A

    1989-10-01

    Ten male university student volunteers were selected to investigate the 3D articular force at the tibio-femoral joint during a half squat exercise, as affected by cadence, different barbell loads, and fatigue. Each subject was required to perform a half squat exercise with a barbell weight centered across the shoulders at two different cadences (1 and 2 s intervals) and three different loads (15, 22 and 30% of the one repetition maximum). Fifty repetitions at each experimental condition were recorded with an active optoelectronic kinematic data capture system (WATSMART) and a force plate (Kistler). Processing the data involved a photogrammetric technique to obtain subject tailored anthropometric data. The findings of this study were: 1) the maximal antero-posterior shear and compressive force consistently occurred at the lowest position of the weight, and the forces were very symmetrically disposed on either side of this halfway point; 2) the medio-lateral shear forces were small over the squat cycle with few peaks and troughs; 3) cadence increased the antero-posterior shear (50%) and the compressive forces (28%); 4) as a subject fatigues, load had a significant effect on the antero-posterior shear force; 5) fatigue increased all articular force components but it did not manifest itself until about halfway through the 50 repetitions of the exercise; 6) the antero-posterior shear force was most affected by fatigue; 7) cadence had a significant effect on fatigue for the medio-lateral shear and compressive forces.

  12. Trichuris colobae n. sp. (Nematoda: Trichuridae), a new species of Trichuris from Colobus guereza kikuyensis.

    PubMed

    Cutillas, Cristina; de Rojas, Manuel; Zurita, Antonio; Oliveros, Rocío; Callejón, Rocío

    2014-07-01

    In the present work, a morphological and biometrical study of whipworms Trichuris Roederer, 1761 (Nematoda: Trichuridae) parasitizing Colobus guereza kikuyensis has been carried out. Biometrical and statistical data showed that the mean values of individual variables between Trichuris suis and Trichuris sp. from C. g. kikuyensis differed significantly (P < 0.001) when Student's t test was performed: seven male variables (width of esophageal region of body, maximum width of posterior region of body, width in the place of junction of esophagus and the intestine, length of bacillary stripes, length of spicule, length of ejaculatory duct, and distance between posterior part of testis and tail end of body) and three female variables (width of posterior region of body, length of bacillary stripes, and distance of tail end of body and posterior fold of seminal receptacle). The combination of these characters permitted the discrimination of T. suis with respect to Trichuris sp. from C. g. kikuyensis, suggesting a new species of Trichuris. Furthermore, males of Trichuris sp. from C. g. kikuyensis showed a typical subterminal pericloacal papillae associated to a cluster of small papillae that were absent in males of T. suis, while females of Trichuris from Colobus appeared with a vulval region elevated/over-mounted showing a crater-like appearance. The everted vagina showed typical triangular sharp spines by optical microscopy and SEM. Thus, the existence of a new species of Trichuris parasitizing C. g. kikuyensis has been proposed.

  13. Objective estimates based on experimental data and initial and final knowledge

    NASA Technical Reports Server (NTRS)

    Rosenbaum, B. M.

    1972-01-01

    An extension of the method of Jaynes, whereby least biased probability estimates are obtained, permits such estimates to be made which account for experimental data on hand as well as prior and posterior knowledge. These estimates can be made for both discrete and continuous sample spaces. The method allows a simple interpretation of Laplace's two rules: the principle of insufficient reason and the rule of succession. Several examples are analyzed by way of illustration.
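    For reference, the rule of succession mentioned above follows from a uniform prior on the success probability θ: after observing s successes in n trials (in LaTeX notation),

      P(\text{success on trial } n+1 \mid s, n) \;=\; \int_0^1 \theta\, p(\theta \mid s, n)\,\mathrm{d}\theta \;=\; \frac{s+1}{n+2}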

  14. [Convergence nystagmus and vertical gaze palsy of vascular origin].

    PubMed

    Jouvent, E; Benisty, S; Fenelon, G; Créange, A; Pierrot-Deseilligny, C

    2005-05-01

    A case of convergence-retraction nystagmus with upward vertical gaze paralysis and skew deviation (right hypotropia), without any other neurological signs, is reported. The probably vascular lesion was located at the mesodiencephalic junction, lying between the right border of the posterior commissure, the right interstitial nucleus of Cajal and the periaqueductal grey matter, accounting for the three ocular motor signs. The particular interest of this case is due to the relative smallness of the lesion.

  15. Unified Description of Scattering and Propagation FY15 Annual Report

    DTIC Science & Technology

    2015-09-30

    the Texas coast. For both cases a conditional posterior probability distribution (PPD) is formed for a parameter space that includes both geoacoustic... for this second application of ME. For each application of ME it is important to note that a new likelihood function and thus PPD is computed. One... the 50-700 Hz band. These data offered a means by which the results of using the ship radiated noise could be partially validated. The conditional PPD

  16. A Meta-Analysis and Multisite Time-Series Analysis of the Differential Toxicity of Major Fine Particulate Matter Constituents

    PubMed Central

    Levy, Jonathan I.; Diez, David; Dou, Yiping; Barr, Christopher D.; Dominici, Francesca

    2012-01-01

    Health risk assessments of particulate matter less than 2.5 μm in diameter (PM2.5) often assume that all constituents of PM2.5 are equally toxic. While investigators in previous epidemiologic studies have evaluated health risks from various PM2.5 constituents, few have conducted the analyses needed to directly inform risk assessments. In this study, the authors performed a literature review and conducted a multisite time-series analysis of hospital admissions and exposure to PM2.5 constituents (elemental carbon, organic carbon matter, sulfate, and nitrate) in a population of 12 million US Medicare enrollees for the period 2000–2008. The literature review illustrated a general lack of multiconstituent models or insight about probabilities of differential impacts per unit of concentration change. Consistent with previous results, the multisite time-series analysis found statistically significant associations between short-term changes in elemental carbon and cardiovascular hospital admissions. Posterior probabilities from multiconstituent models provided evidence that some individual constituents were more toxic than others, and posterior parameter estimates coupled with correlations among these estimates provided necessary information for risk assessment. Ratios of constituent toxicities, commonly used in risk assessment to describe differential toxicity, were extremely uncertain for all comparisons. These analyses emphasize the subtlety of the statistical techniques and epidemiologic studies necessary to inform risk assessments of particle constituents. PMID:22510275

  17. Objectively combining AR5 instrumental period and paleoclimate climate sensitivity evidence

    NASA Astrophysics Data System (ADS)

    Lewis, Nicholas; Grünwald, Peter

    2018-03-01

    Combining instrumental period evidence regarding equilibrium climate sensitivity with largely independent paleoclimate proxy evidence should enable a more constrained sensitivity estimate to be obtained. Previous, subjective Bayesian approaches involved selection of a prior probability distribution reflecting the investigators' beliefs about climate sensitivity. Here a recently developed approach employing two different statistical methods—objective Bayesian and frequentist likelihood-ratio—is used to combine instrumental period and paleoclimate evidence based on data presented and assessments made in the IPCC Fifth Assessment Report. Probabilistic estimates from each source of evidence are represented by posterior probability density functions (PDFs) of physically-appropriate form that can be uniquely factored into a likelihood function and a noninformative prior distribution. The three-parameter form is shown accurately to fit a wide range of estimated climate sensitivity PDFs. The likelihood functions relating to the probabilistic estimates from the two sources are multiplicatively combined and a prior is derived that is noninformative for inference from the combined evidence. A posterior PDF that incorporates the evidence from both sources is produced using a single-step approach, which avoids the order-dependency that would arise if Bayesian updating were used. Results are compared with an alternative approach using the frequentist signed root likelihood ratio method. Results from these two methods are effectively identical, and provide a 5-95% range for climate sensitivity of 1.1-4.05 K (median 1.87 K).
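    The single-step combination can be illustrated on a grid: multiply the two likelihood functions and apply one noninformative prior, rather than updating sequentially. The two likelihoods and the simple 1/S prior below are synthetic stand-ins; the paper's likelihoods and its objective prior differ.

      # Grid-based single-step combination of two likelihoods for climate
      # sensitivity S, with a noninformative prior applied once.
      import numpy as np
      from scipy.stats import lognorm

      S = np.linspace(0.1, 10.0, 2000)               # sensitivity grid, K
      like_inst = lognorm.pdf(S, s=0.5, scale=1.8)   # synthetic instrumental likelihood
      like_paleo = lognorm.pdf(S, s=0.4, scale=2.4)  # synthetic paleo likelihood
      prior = 1.0 / S                                # simple noninformative stand-in

      post = like_inst * like_paleo * prior
      cdf = np.cumsum(post)
      cdf /= cdf[-1]                                 # normalise the posterior CDF

      lo, med, hi = np.interp([0.05, 0.5, 0.95], cdf, S)
      print(f"combined 5-95% range: {lo:.2f}-{hi:.2f} K (median {med:.2f} K)")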

  18. Scheduling structural health monitoring activities for optimizing life-cycle costs and reliability of wind turbines

    NASA Astrophysics Data System (ADS)

    Hanish Nithin, Anu; Omenzetter, Piotr

    2017-04-01

    Optimization of the life-cycle costs and reliability of offshore wind turbines (OWTs) is an area of immense interest due to the widespread increase in wind power generation across the world. Most of the existing studies have used structural reliability and the Bayesian pre-posterior analysis for optimization. This paper proposes an extension to the previous approaches in a framework for probabilistic optimization of the total life-cycle costs and reliability of OWTs by combining the elements of structural reliability/risk analysis (SRA), the Bayesian pre-posterior analysis with optimization through a genetic algorithm (GA). The SRA techniques are adopted to compute the probabilities of damage occurrence and failure associated with the deterioration model. The probabilities are used in the decision tree and are updated using the Bayesian analysis. The output of this framework would determine the optimal structural health monitoring and maintenance schedules to be implemented during the life span of OWTs while maintaining a trade-off between the life-cycle costs and risk of the structural failure. Numerical illustrations with a generic deterioration model for one monitoring exercise in the life cycle of a system are demonstrated. Two case scenarios, namely to build initially an expensive and robust or a cheaper but more quickly deteriorating structures and to adopt expensive monitoring system, are presented to aid in the decision-making process.

  19. A Unified Approach to Genotype Imputation and Haplotype-Phase Inference for Large Data Sets of Trios and Unrelated Individuals

    PubMed Central

    Browning, Brian L.; Browning, Sharon R.

    2009-01-01

    We present methods for imputing data for ungenotyped markers and for inferring haplotype phase in large data sets of unrelated individuals and parent-offspring trios. Our methods make use of known haplotype phase when it is available, and our methods are computationally efficient so that the full information in large reference panels with thousands of individuals is utilized. We demonstrate that substantial gains in imputation accuracy accrue with increasingly large reference panel sizes, particularly when imputing low-frequency variants, and that unphased reference panels can provide highly accurate genotype imputation. We place our methodology in a unified framework that enables the simultaneous use of unphased and phased data from trios and unrelated individuals in a single analysis. For unrelated individuals, our imputation methods produce well-calibrated posterior genotype probabilities and highly accurate allele-frequency estimates. For trios, our haplotype-inference method is four orders of magnitude faster than the gold-standard PHASE program and has excellent accuracy. Our methods enable genotype imputation to be performed with unphased trio or unrelated reference panels, thus accounting for haplotype-phase uncertainty in the reference panel. We present a useful measure of imputation accuracy, allelic R2, and show that this measure can be estimated accurately from posterior genotype probabilities. Our methods are implemented in version 3.0 of the BEAGLE software package. PMID:19200528

  20. MAP Estimators for Piecewise Continuous Inversion

    DTIC Science & Technology

    2016-08-08

    MAP estimators for piecewise continuous inversion. M M Dunlop and A M Stuart, Mathematics Institute, University of Warwick, Coventry, CV4 7AL, UK. Published 8 August 2016. Abstract: We study the inverse problem of estimating a field ua from data comprising a finite set of nonlinear functionals of ua... It is then natural to study maximum a posteriori (MAP) estimators. Recently (Dashti et al 2013 Inverse Problems 29 095017) it has been shown that MAP
