The Estimation of Tree Posterior Probabilities Using Conditional Clade Probability Distributions
Larget, Bret
2013-01-01
In this article I introduce the idea of conditional independence of separated subtrees as a principle by which to estimate the posterior probability of trees using conditional clade probability distributions rather than simple sample relative frequencies. I describe an algorithm for these calculations and software which implements these ideas. I show that these alternative calculations are very similar to simple sample relative frequencies for high probability trees but are substantially more accurate for relatively low probability trees. The method allows the posterior probability of unsampled trees to be calculated when these trees contain only clades that are in other sampled trees. Furthermore, the method can be used to estimate the total probability of the set of sampled trees which provides a measure of the thoroughness of a posterior sample. [Bayesian phylogenetics; conditional clade distributions; improved accuracy; posterior probabilities of trees.] PMID:23479066
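The conditional clade estimator described above can be sketched in a few lines. The sketch below is illustrative, not Larget's software: it assumes rooted trees encoded as nested tuples, and `tree_splits` and `ccd_estimate` are hypothetical names. A tree's probability is approximated as the product, over its internal nodes, of the sample frequency of each clade's split conditional on its parent clade.

```python
from collections import Counter

def tree_splits(tree):
    """Yield (parent_clade, split) pairs for a nested-tuple rooted tree.
    A clade is the frozenset of tips below a node; a split is the
    frozenset pair of child clades."""
    pairs = []
    def clade(node):
        if isinstance(node, tuple):
            left, right = (clade(c) for c in node)
            pairs.append((left | right, frozenset([left, right])))
            return left | right
        return frozenset([node])
    clade(tree)
    return pairs

def ccd_estimate(sample, tree):
    """Estimate the posterior probability of `tree` as a product of
    conditional clade frequencies taken from `sample`."""
    parent_counts, split_counts = Counter(), Counter()
    for t in sample:
        for parent, split in tree_splits(t):
            parent_counts[parent] += 1
            split_counts[(parent, split)] += 1
    p = 1.0
    for parent, split in tree_splits(tree):
        if parent_counts[parent] == 0:
            return 0.0  # tree contains a clade never seen in the sample
        p *= split_counts[(parent, split)] / parent_counts[parent]
    return p
```

Because the estimate multiplies conditional frequencies rather than counting whole-tree matches, it can assign nonzero probability to a tree never sampled, provided all of its clades appear in sampled trees.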
Fancher, Chris M.; Han, Zhen; Levin, Igor; Page, Katharine; Reich, Brian J.; Smith, Ralph C.; Wilson, Alyson G.; Jones, Jacob L.
2016-01-01
A Bayesian inference method for refining crystallographic structures is presented. The distribution of model parameters is stochastically sampled using Markov chain Monte Carlo. Posterior probability distributions are constructed for all model parameters to properly quantify uncertainty by appropriately modeling the heteroskedasticity and correlation of the error structure. The proposed method is demonstrated by analyzing a National Institute of Standards and Technology silicon standard reference material. The results obtained by Bayesian inference are compared with those determined by Rietveld refinement. Posterior probability distributions of model parameters provide both estimates and uncertainties. The new method better estimates the true uncertainties in the model as compared to the Rietveld method. PMID:27550221
Yang, Ziheng; Zhu, Tianqi
2018-02-20
The Bayesian method is noted to produce spuriously high posterior probabilities for phylogenetic trees in analysis of large datasets, but the precise reasons for this overconfidence are unknown. In general, the performance of Bayesian selection of misspecified models is poorly understood, even though this is of great scientific interest since models are never true in real data analysis. Here we characterize the asymptotic behavior of Bayesian model selection and show that when the competing models are equally wrong, Bayesian model selection exhibits surprising and polarized behaviors in large datasets, supporting one model with full force while rejecting the others. If one model is slightly less wrong than the other, the less wrong model will eventually win when the amount of data increases, but the method may become overconfident before it becomes reliable. We suggest that this extreme behavior may be a major factor behind the spuriously high posterior probabilities for evolutionary trees. The philosophical implications of our results for the application of Bayesian model selection to evaluate opposing scientific hypotheses are yet to be explored, as are the behaviors of non-Bayesian methods in similar situations.
An agglomerative hierarchical clustering approach to visualisation in Bayesian clustering problems
Dawson, Kevin J.; Belkhir, Khalid
2009-01-01
Clustering problems (including the clustering of individuals into outcrossing populations, hybrid generations, full-sib families and selfing lines) have recently received much attention in population genetics. In these clustering problems, the parameter of interest is a partition of the set of sampled individuals: the sample partition. In a fully Bayesian approach to clustering problems of this type, our knowledge about the sample partition is represented by a probability distribution on the space of possible sample partitions. Since the number of possible partitions grows very rapidly with the sample size, we cannot visualise this probability distribution in its entirety, unless the sample is very small. As a solution to this visualisation problem, we recommend using an agglomerative hierarchical clustering algorithm, which we call the exact linkage algorithm. This algorithm is a special case of the maximin clustering algorithm that we introduced previously. The exact linkage algorithm is now implemented in our software package Partition View. The exact linkage algorithm takes the posterior co-assignment probabilities as input, and yields as output a rooted binary tree or, more generally, a forest of such trees. Each node of this forest defines a set of individuals, and the node height is the posterior co-assignment probability of this set. This provides a useful visual representation of the uncertainty associated with the assignment of individuals to categories. It is also a useful starting point for a more detailed exploration of the posterior distribution in terms of the co-assignment probabilities. PMID:19337306
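The exact linkage algorithm itself is defined in the paper; the sketch below is a simplified stand-in, assuming (for illustration only) that the co-assignment probability of a merged set can be approximated by the minimum pairwise co-assignment probability within it. It greedily merges the pair of clusters whose union has the greatest such height.

```python
def exact_linkage_like(P):
    """Agglomerative clustering on a posterior co-assignment matrix P.
    At each step, merge the pair of clusters whose combined set has the
    highest 'height' (here: minimum pairwise co-assignment probability
    within the set, an assumption for this sketch).
    Returns the merge history as (merged_set, height) pairs."""
    n = len(P)
    clusters = [frozenset([i]) for i in range(n)]
    history = []
    while len(clusters) > 1:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                s = clusters[i] | clusters[j]
                h = min(P[a][b] for a in s for b in s if a != b)
                if best is None or h > best[0]:
                    best = (h, i, j, s)
        h, i, j, s = best
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)] + [s]
        history.append((s, h))
    return history
```

The merge heights decrease monotonically along the history, so the output can be drawn as the rooted tree (or forest) described in the abstract, with node heights equal to co-assignment probabilities.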
Multiclass Posterior Probability Twin SVM for Motor Imagery EEG Classification.
She, Qingshan; Ma, Yuliang; Meng, Ming; Luo, Zhizeng
2015-01-01
Motor imagery electroencephalography is widely used in brain-computer interface systems. Due to the inherent characteristics of electroencephalography signals, accurate and real-time multiclass classification is always challenging. To solve this problem, a multiclass posterior probability solution for the twin SVM is proposed in this paper, based on ranking continuous outputs and pairwise coupling. First, a two-class posterior probability model is constructed to approximate the posterior probability using the ranking continuous output technique and Platt's estimating method. Second, a solution for multiclass probabilistic outputs of the twin SVM is provided by combining every pair of class probabilities according to the method of pairwise coupling. Finally, the proposed method is compared with multiclass SVM and twin SVM via voting, and with multiclass posterior probability SVM using different coupling approaches. The efficacy of the proposed method, in both classification accuracy and time complexity, is demonstrated on the UCI benchmark datasets and on real-world EEG data from BCI Competition IV Dataset 2a.
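The pairwise-coupling step can be sketched with Price et al.'s simple coupling rule, one of the several coupling approaches the abstract alludes to (the rule and the function name are choices for this sketch, not necessarily the paper's). Given pairwise posteriors r[i][j] = P(class i | class i or j), it recovers multiclass probabilities exactly when the pairwise inputs are mutually consistent.

```python
def pairwise_coupling(r):
    """Combine pairwise posteriors r[i][j] = P(class i | i or j) into
    multiclass probabilities via Price et al.'s coupling rule:
    p_i proportional to 1 / (sum_{j != i} 1/r[i][j] - (K - 2))."""
    k = len(r)
    p = []
    for i in range(k):
        denom = sum(1.0 / r[i][j] for j in range(k) if j != i) - (k - 2)
        p.append(1.0 / denom)
    z = sum(p)               # renormalize to guard against inconsistency
    return [x / z for x in p]
```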
BAT - The Bayesian analysis toolkit
NASA Astrophysics Data System (ADS)
Caldwell, Allen; Kollár, Daniel; Kröninger, Kevin
2009-11-01
We describe the development of a new toolkit for data analysis. The analysis package is based on Bayes' Theorem, and is realized with the use of Markov Chain Monte Carlo. This gives access to the full posterior probability distribution. Parameter estimation, limit setting and uncertainty propagation are implemented in a straightforward manner.
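The core machinery of such a toolkit is a Markov chain Monte Carlo sampler over the posterior. Below is a generic random-walk Metropolis sketch, not BAT's actual API: given any log-posterior density (up to a constant), it returns a chain whose empirical distribution approximates the full posterior, from which parameter estimates and limits follow.

```python
import math
import random

def metropolis(log_post, x0, steps, scale=1.0, seed=1):
    """Random-walk Metropolis sampler for a 1-D posterior given by its
    log-density `log_post` (known only up to an additive constant)."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(steps):
        y = x + rng.gauss(0.0, scale)       # symmetric proposal
        lq = log_post(y)
        if math.log(rng.random()) < lq - lp:  # accept with prob min(1, q/p)
            x, lp = y, lq
        chain.append(x)
    return chain

# Demo: sample a standard-normal posterior.
chain = metropolis(lambda t: -0.5 * t * t, 0.0, 20000)
```

Parameter estimation (posterior mean), limit setting (quantiles of the chain), and uncertainty propagation (pushing the chain through derived quantities) all reduce to summaries of `chain`.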
Serang, Oliver; MacCoss, Michael J.; Noble, William Stafford
2010-01-01
The problem of identifying proteins from a shotgun proteomics experiment has not been definitively solved. Identifying the proteins in a sample requires ranking them, ideally with interpretable scores. In particular, “degenerate” peptides, which map to multiple proteins, have made such a ranking difficult to compute. The problem of computing posterior probabilities for the proteins, which can be interpreted as confidence in a protein’s presence, has been especially daunting. Previous approaches have either ignored the peptide degeneracy problem completely, addressed it by computing a heuristic set of proteins or heuristic posterior probabilities, or estimated the posterior probabilities with sampling methods. We present a probabilistic model for protein identification in tandem mass spectrometry that recognizes peptide degeneracy. We then introduce graph-transforming algorithms that facilitate efficient computation of protein probabilities, even for large data sets. We evaluate our identification procedure on five different well-characterized data sets and demonstrate our ability to efficiently compute high-quality protein posteriors. PMID:20712337
A Bayesian pick-the-winner design in a randomized phase II clinical trial.
Chen, Dung-Tsa; Huang, Po-Yu; Lin, Hui-Yi; Chiappori, Alberto A; Gabrilovich, Dmitry I; Haura, Eric B; Antonia, Scott J; Gray, Jhanelle E
2017-10-24
Many phase II clinical trials evaluate unique experimental drugs/combinations through multi-arm design to expedite the screening process (early termination of ineffective drugs) and to identify the most effective drug (pick the winner) to warrant a phase III trial. Various statistical approaches have been developed for the pick-the-winner design but have been criticized for lack of objective comparison among the drug agents. We developed a Bayesian pick-the-winner design by integrating a Bayesian posterior probability with Simon two-stage design in a randomized two-arm clinical trial. The Bayesian posterior probability, as the rule to pick the winner, is defined as probability of the response rate in one arm higher than in the other arm. The posterior probability aims to determine the winner when both arms pass the second stage of the Simon two-stage design. When both arms are competitive (i.e., both passing the second stage), the Bayesian posterior probability performs better to correctly identify the winner compared with the Fisher exact test in the simulation study. In comparison to a standard two-arm randomized design, the Bayesian pick-the-winner design has a higher power to determine a clear winner. In application to two studies, the approach is able to perform statistical comparison of two treatment arms and provides a winner probability (Bayesian posterior probability) to statistically justify the winning arm. We developed an integrated design that utilizes Bayesian posterior probability, Simon two-stage design, and randomization into a unique setting. It gives objective comparisons between the arms to determine the winner.
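The winner probability at the heart of the design, P(response rate in arm 1 > response rate in arm 2), is straightforward to estimate by Monte Carlo under independent Beta posteriors. The sketch below is illustrative, assuming uniform Beta(1, 1) priors; the paper's exact priors and integration may differ.

```python
import random

def prob_arm1_beats_arm2(x1, n1, x2, n2, a=1.0, b=1.0, draws=20000, seed=7):
    """Posterior probability that arm 1's response rate exceeds arm 2's,
    given x responders out of n per arm, under independent Beta(a, b)
    priors, estimated by Monte Carlo over the Beta posteriors."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(draws):
        p1 = rng.betavariate(a + x1, b + n1 - x1)
        p2 = rng.betavariate(a + x2, b + n2 - x2)
        wins += p1 > p2
    return wins / draws
```

For example, 15/30 responders versus 5/30 yields a winner probability near 1, while identical arms yield a probability near 0.5, giving the objective between-arm comparison the abstract describes.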
HELP: XID+, the probabilistic de-blender for Herschel SPIRE maps
NASA Astrophysics Data System (ADS)
Hurley, P. D.; Oliver, S.; Betancourt, M.; Clarke, C.; Cowley, W. I.; Duivenvoorden, S.; Farrah, D.; Griffin, M.; Lacey, C.; Le Floc'h, E.; Papadopoulos, A.; Sargent, M.; Scudder, J. M.; Vaccari, M.; Valtchanov, I.; Wang, L.
2017-01-01
We have developed a new prior-based source extraction tool, XID+, to carry out photometry in the Herschel SPIRE (Spectral and Photometric Imaging Receiver) maps at the positions of known sources. XID+ is developed within a probabilistic Bayesian framework that provides a natural way to include prior information, and uses the Bayesian inference tool Stan to obtain the full posterior probability distribution on flux estimates. In this paper, we discuss the details of XID+ and demonstrate the basic capabilities and performance by running it on simulated SPIRE maps resembling the COSMOS field, and comparing to the current prior-based source extraction tool DESPHOT. Not only do we show that XID+ performs better on metrics such as flux accuracy and flux uncertainty accuracy, but we also illustrate how obtaining the posterior probability distribution can help overcome some of the issues inherent with maximum-likelihood-based source extraction routines. We run XID+ on the COSMOS SPIRE maps from Herschel Multi-Tiered Extragalactic Survey using a 24-μm catalogue as a positional prior, and a uniform flux prior ranging from 0.01 to 1000 mJy. We show the marginalized SPIRE colour-colour plot and marginalized contribution to the cosmic infrared background at the SPIRE wavelengths. XID+ is a core tool arising from the Herschel Extragalactic Legacy Project (HELP) and we discuss how additional work within HELP providing prior information on fluxes can and will be utilized. The software is available at https://github.com/H-E-L-P/XID_plus. We also provide the data product for COSMOS. We believe this is the first time that the full posterior probability of galaxy photometry has been provided as a data product.
XID+: Next generation XID development
NASA Astrophysics Data System (ADS)
Hurley, Peter
2017-04-01
XID+ is a prior-based source extraction tool which carries out photometry in the Herschel SPIRE (Spectral and Photometric Imaging Receiver) maps at the positions of known sources. It uses a probabilistic Bayesian framework that provides a natural framework in which to include prior information, and uses the Bayesian inference tool Stan to obtain the full posterior probability distribution on flux estimates.
Nonlinear Demodulation and Channel Coding in EBPSK Scheme
Chen, Xianqing; Wu, Lenan
2012-01-01
The extended binary phase shift keying (EBPSK) is an efficient modulation technique, and a special impacting filter (SIF) is used in its demodulator to improve the bit error rate (BER) performance. However, the conventional threshold decision cannot achieve the optimum performance, and the SIF brings more difficulty in obtaining the posterior probability for LDPC decoding. In this paper, we concentrate not only on reducing the BER of demodulation, but also on providing accurate posterior probability estimates (PPEs). A new approach for the nonlinear demodulation based on the support vector machine (SVM) classifier is introduced. The SVM method, which selects only a few sampling points from the filter output, was used for getting PPEs. The simulation results show that accurate posterior probabilities can be obtained with this method and that the BER performance can be improved significantly by applying LDPC codes. Moreover, we analyzed the effect of getting the posterior probability with different methods and different sampling rates. We show that the SVM method has greater advantages under poor channel conditions and is less sensitive to the sampling rate than other methods. Thus, SVM is an effective method for EBPSK demodulation and for getting the posterior probability for LDPC decoding. PMID:23213281
Bayes factor and posterior probability: Complementary statistical evidence to p-value.
Lin, Ruitao; Yin, Guosheng
2015-09-01
As a convention, a p-value is often computed in hypothesis testing and compared with the nominal level of 0.05 to determine whether to reject the null hypothesis. Although the smaller the p-value, the more significant the statistical test, it is difficult to perceive the p-value on a probability scale and quantify it as the strength of the data against the null hypothesis. In contrast, the Bayesian posterior probability of the null hypothesis has an explicit interpretation of how strongly the data support the null. We make a comparison of the p-value and the posterior probability by considering a recent clinical trial. The results show that even when we reject the null hypothesis, there is still a substantial probability (around 20%) that the null is true. Not only should we examine whether the data would have rarely occurred under the null hypothesis, but we also need to know whether the data would be rare under the alternative. As a result, the p-value only provides one side of the information, for which the Bayes factor and posterior probability may offer complementary evidence.
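The contrast can be made concrete with a worked binomial example (illustrative numbers, not the trial analyzed in the paper). For x successes in n trials, compare H0: p = 1/2 against an alternative with a uniform prior on p; the posterior probability of H0 follows from the two marginal likelihoods.

```python
from math import comb

def posterior_prob_null(x, n, prior_null=0.5):
    """Posterior probability of H0: p = 1/2 given x successes in n
    Bernoulli trials, with a uniform Beta(1, 1) prior on p under H1."""
    m0 = comb(n, x) * 0.5 ** n   # marginal likelihood under H0
    m1 = 1.0 / (n + 1)           # integral of the binomial likelihood over p
    bf01 = m0 / m1               # Bayes factor in favor of H0
    odds = bf01 * prior_null / (1 - prior_null)
    return odds / (1 + odds)
```

With 61 successes in 100 trials, the two-sided p-value is below 0.05, yet the posterior probability of the null is roughly 0.4, echoing the paper's point that rejection at the 0.05 level can coexist with substantial support for the null.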
NASA Technical Reports Server (NTRS)
Sidik, S. M.
1972-01-01
The error variance of the process, the prior multivariate normal distributions of the parameters of the models, and the prior probabilities of the models being correct are assumed to be specified. A rule for termination of sampling is proposed. Upon termination, the model with the largest posterior probability is chosen as correct. If sampling is not terminated, posterior probabilities of the models and posterior distributions of the parameters are computed, and an experiment is chosen to maximize the expected Kullback-Leibler information function. Monte Carlo simulation experiments were performed to investigate the large- and small-sample behavior of the sequential adaptive procedure.
Markolf, K L; Kochan, A; Amstutz, H C
1984-02-01
Thirty-five patients with documented absence of the anterior cruciate ligament were tested on the University of California, Los Angeles, instrumented clinical knee-testing apparatus and we measured the response curves for the following testing modes: anterior-posterior force versus displacement at full extension and at 20 and 90 degrees of flexion; varus-valgus moment versus angulation at full extension and 20 degrees of flexion; and tibial torque versus rotation at 20 degrees of flexion. Absolute values of stiffness and laxity and right-left differences for these injured knees were compared with identical quantities measured previously for a control population of forty-nine normal subjects with no history of treatment for injury to the knee. For both the uninjured knees and the knees without an anterior cruciate ligament, at 20 and 90 degrees of flexion the anterior-posterior laxity was greatest at approximately 15 degrees of external rotation of the foot. The injured knees demonstrated significantly increased total anterior-posterior laxity and decreased anterior stiffness when compared with the uninjured knees in all tested positions of the foot and knee. The mean increase in paired anterior-posterior laxity for the injured knees in this group of patients at +/- 200 newtons of applied anterior-posterior force was 3.1 millimeters (+39 per cent) at full extension, 5.5 millimeters (+57 per cent) at 20 degrees of flexion, and 2.5 millimeters (+34 per cent) at 90 degrees of flexion. The mean reduction in anterior stiffness for injured knees was also greatest (-54 per cent) at 20 degrees of knee flexion. Only slight reduction in posterior stiffness (-16 per cent) was measured at 20 degrees of flexion, and this probably reflected the presence of associated capsular and meniscal injuries. 
In the group of anterior cruciate-deficient knees, the patients with an absent medial meniscus showed greater total anterior-posterior laxity in all three positions of knee flexion than did the patients with an intact or torn meniscus. Varus-valgus laxity at full extension increased an average of 1.7 degrees (+36 per cent) for the injured knees, while varus and valgus stiffness decreased 21 per cent and 24 per cent. Absence of the medial meniscus (in a knee with absence of the anterior cruciate ligament) increased varus-valgus laxity at zero and 20 degrees of flexion.(ABSTRACT TRUNCATED AT 400 WORDS)
Learning about Posterior Probability: Do Diagrams and Elaborative Interrogation Help?
ERIC Educational Resources Information Center
Clinton, Virginia; Alibali, Martha Wagner; Nathan, Mitchel J.
2016-01-01
To learn from a text, students must make meaningful connections among related ideas in that text. This study examined the effectiveness of two methods of improving connections--elaborative interrogation and diagrams--in written lessons about posterior probability. Undergraduate students (N = 198) read a lesson in one of three questioning…
NASA Astrophysics Data System (ADS)
Zhang, Feng-Liang; Ni, Yan-Chun; Au, Siu-Kui; Lam, Heung-Fai
2016-03-01
The identification of modal properties from field testing of civil engineering structures is becoming economically viable, thanks to the advent of modern sensor and data acquisition technology. Its demand is driven by innovative structural designs and increased performance requirements of dynamic-prone structures that call for a close cross-checking or monitoring of their dynamic properties and responses. Existing instrumentation capabilities and modal identification techniques allow structures to be tested under free vibration, forced vibration (known input) or ambient vibration (unknown broadband loading). These tests can be considered complementary rather than competing as they are based on different modeling assumptions in the identification model and have different implications on costs and benefits. Uncertainty arises naturally in the dynamic testing of structures due to measurement noise, sensor alignment error, modeling error, etc. This is especially relevant in field vibration tests because the test condition in the field environment can hardly be controlled. In this work, a Bayesian statistical approach is developed for modal identification using the free vibration response of structures. A frequency domain formulation is proposed that makes statistical inference based on the Fast Fourier Transform (FFT) of the data in a selected frequency band. This significantly simplifies the identification model because only the modes dominating the frequency band need to be included. It also legitimately ignores the information in the excluded frequency bands that are either irrelevant or difficult to model, thereby significantly reducing modeling error risk. The posterior probability density function (PDF) of the modal parameters is derived rigorously from modeling assumptions and Bayesian probability logic. 
Computational difficulties associated with calculating the posterior statistics, including the most probable value (MPV) and the posterior covariance matrix, are addressed. Fast computational algorithms for determining the MPV are proposed so that the method can be practically implemented. In the companion paper (Part II), analytical formulae are derived for the posterior covariance matrix so that it can be evaluated without resorting to finite difference method. The proposed method is verified using synthetic data. It is also applied to modal identification of full-scale field structures.
2009-01-01
Background Marginal posterior genotype probabilities need to be computed for genetic analyses such as genetic counseling in humans and selective breeding in animal and plant species. Methods In this paper, we describe a peeling based, deterministic, exact algorithm to efficiently compute genotype probabilities for every member of a pedigree with loops without recourse to junction-tree methods from graph theory. The efficiency in computing the likelihood by peeling comes from storing intermediate results in multidimensional tables called cutsets. Computing marginal genotype probabilities for individual i requires recomputing the likelihood for each of the possible genotypes of individual i. This can be done efficiently by storing intermediate results in two types of cutsets called anterior and posterior cutsets and reusing these intermediate results to compute the likelihood. Examples A small example is used to illustrate the theoretical concepts discussed in this paper, and marginal genotype probabilities are computed at a monogenic disease locus for every member in a real cattle pedigree. PMID:19958551
Cumulative probability of neodymium:YAG laser posterior capsulotomy after phacoemulsification.
Ando, Hiroshi; Ando, Nobuyo; Oshika, Tetsuro
2003-11-01
To retrospectively analyze the cumulative probability of neodymium:YAG (Nd:YAG) laser posterior capsulotomy after phacoemulsification and to evaluate the risk factors. Ando Eye Clinic, Kanagawa, Japan. In 3997 eyes that had phacoemulsification with an intact continuous curvilinear capsulorhexis, the cumulative probability of posterior capsulotomy was computed by Kaplan-Meier survival analysis and risk factors were analyzed using the Cox proportional hazards regression model. The variables tested were sex; age; type of cataract; preoperative best corrected visual acuity (BCVA); presence of diabetes mellitus, diabetic retinopathy, or retinitis pigmentosa; type of intraocular lens (IOL); and the year the operation was performed. The IOLs were categorized as 3-piece poly(methyl methacrylate) (PMMA), 1-piece PMMA, 3-piece silicone, and acrylic foldable. The cumulative probability of capsulotomy after cataract surgery was 1.95%, 18.50%, and 32.70% at 1, 3, and 5 years, respectively. Positive risk factors included a better preoperative BCVA (P =.0005; risk ratio [RR], 1.7; 95% confidence interval [CI], 1.3-2.5) and the presence of retinitis pigmentosa (P<.0001; RR, 6.6; 95% CI, 3.7-11.6). Women had a significantly greater probability of Nd:YAG laser posterior capsulotomy (P =.016; RR, 1.4; 95% CI, 1.1-1.8). The type of IOL was significantly related to the probability of Nd:YAG laser capsulotomy, with the foldable acrylic IOL having a significantly lower probability of capsulotomy. The 1-piece PMMA IOL had a significantly higher risk than 3-piece PMMA and 3-piece silicone IOLs. The probability of Nd:YAG laser capsulotomy was higher in women, in eyes with a better preoperative BCVA, and in patients with retinitis pigmentosa. The foldable acrylic IOL had a significantly lower probability of capsulotomy.
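The Kaplan-Meier estimate used above can be sketched compactly (toy data, not the study's; `kaplan_meier` is a name chosen for this sketch). Each eye contributes a follow-up time and an indicator of whether capsulotomy occurred or the eye was censored; the cumulative probability is one minus the product-limit survival estimate.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier estimate of cumulative event probability.
    times: follow-up time per subject; events: 1 = event, 0 = censored.
    Returns (time, cumulative event probability) at each event time."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    surv = 1.0
    steps = []
    i = 0
    while i < len(order):
        t = times[order[i]]
        d = n = 0
        while i < len(order) and times[order[i]] == t:
            d += events[order[i]]   # events at this time
            n += 1                  # subjects leaving the risk set
            i += 1
        if d:
            surv *= 1 - d / at_risk  # product-limit update
            steps.append((t, 1 - surv))
        at_risk -= n
    return steps
```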
optBINS: Optimal Binning for histograms
NASA Astrophysics Data System (ADS)
Knuth, Kevin H.
2018-03-01
optBINS (optimal binning) determines the optimal number of bins in a uniform bin-width histogram by deriving the posterior probability for the number of bins in a piecewise-constant density model after assigning a multinomial likelihood and a non-informative prior. The maximum of the posterior probability occurs at a point where the prior probability and the joint likelihood are balanced. The interplay between these opposing factors effectively implements Occam's razor by selecting the simplest model that best describes the data.
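A compact implementation of this posterior is sketched below (function names are choices for this sketch; the published relative log posterior for m equal-width bins over n points with bin counts n_k is n log m + ln Γ(m/2) − m ln Γ(1/2) − ln Γ(n + m/2) + Σ_k ln Γ(n_k + 1/2)).

```python
from math import lgamma, log

def log_posterior_bins(data, m):
    """Relative log posterior for m equal-width bins (Knuth's model:
    multinomial likelihood with a non-informative Jeffreys-type prior)."""
    n = len(data)
    lo, hi = min(data), max(data)
    counts = [0] * m
    for x in data:
        k = min(int((x - lo) / (hi - lo) * m), m - 1)
        counts[k] += 1
    return (n * log(m) + lgamma(m / 2) - m * lgamma(0.5)
            - lgamma(n + m / 2) + sum(lgamma(c + 0.5) for c in counts))

def optimal_bin_count(data, max_bins=50):
    """Number of bins maximizing the posterior."""
    return max(range(1, max_bins + 1),
               key=lambda m: log_posterior_bins(data, m))
```

Strongly structured data (for example, two well-separated clusters) drive the posterior toward more bins, while featureless data are described most simply, which is the Occam's-razor behavior described above.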
Determining X-ray source intensity and confidence bounds in crowded fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Primini, F. A.; Kashyap, V. L., E-mail: fap@head.cfa.harvard.edu
We present a rigorous description of the general problem of aperture photometry in high-energy astrophysics photon-count images, in which the statistical noise model is Poisson, not Gaussian. We compute the full posterior probability density function for the expected source intensity for various cases of interest, including the important cases in which both source and background apertures contain contributions from the source, and when multiple source apertures partially overlap. A Bayesian approach offers the advantages of allowing one to (1) include explicit prior information on source intensities, (2) propagate posterior distributions as priors for future observations, and (3) use Poisson likelihoods, making the treatment valid in the low-counts regime. Elements of this approach have been implemented in the Chandra Source Catalog.
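The simplest case, a single aperture with a known background expectation, can be sketched on a grid (an illustrative reduction of the paper's general treatment, with hypothetical function names). With n observed counts, a Poisson likelihood, and a flat prior on the source intensity s ≥ 0, the posterior density is proportional to (s + b)^n e^−(s + b).

```python
import math

def source_posterior(n, b, s_grid):
    """Grid-normalized posterior density for source intensity s given
    n observed counts, known background expectation b, a Poisson
    likelihood, and a flat prior on s >= 0."""
    logp = [n * math.log(s + b) - (s + b) for s in s_grid]
    mx = max(logp)                      # subtract max for numerical safety
    w = [math.exp(lp - mx) for lp in logp]
    z = sum(w)
    return [x / z for x in w]
```

The posterior mode sits at s = n − b (here 8 for n = 10, b = 2), and credible bounds follow by accumulating the normalized grid weights, which remains valid in the low-counts regime where Gaussian error bars fail.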
ERIC Educational Resources Information Center
Dardick, William R.; Mislevy, Robert J.
2016-01-01
A new variant of the iterative "data = fit + residual" data-analytical approach described by Mosteller and Tukey is proposed and implemented in the context of item response theory psychometric models. Posterior probabilities from a Bayesian mixture model of a Rasch item response theory model and an unscalable latent class are expressed…
Learn-as-you-go acceleration of cosmological parameter estimates
NASA Astrophysics Data System (ADS)
Aslanyan, Grigor; Easther, Richard; Price, Layne C.
2015-09-01
Cosmological analyses can be accelerated by approximating slow calculations using a training set, which is either precomputed or generated dynamically. However, this approach is only safe if the approximations are well understood and controlled. This paper surveys issues associated with the use of machine-learning based emulation strategies for accelerating cosmological parameter estimation. We describe a learn-as-you-go algorithm that is implemented in the Cosmo++ code and (1) trains the emulator while simultaneously estimating posterior probabilities; (2) identifies unreliable estimates, computing the exact numerical likelihoods if necessary; and (3) progressively learns and updates the error model as the calculation progresses. We explicitly describe and model the emulation error and show how this can be propagated into the posterior probabilities. We apply these techniques to the Planck likelihood and the calculation of ΛCDM posterior probabilities. The computation is significantly accelerated without a pre-defined training set and uncertainties in the posterior probabilities are subdominant to statistical fluctuations. We have obtained a speedup factor of 6.5 for Metropolis-Hastings and 3.5 for nested sampling. Finally, we discuss the general requirements for a credible error model and show how to update them on-the-fly.
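The learn-as-you-go idea can be sketched with a zeroth-order nearest-neighbor emulator (a deliberate simplification of the paper's error-modeled emulation; `make_emulated` and its tolerance rule are assumptions for this sketch, not the Cosmo++ implementation). The wrapper emulates when a nearby training point exists, falls back to the exact calculation otherwise, and grows its training set as the chain explores.

```python
def make_emulated(exact_loglike, tol=0.5):
    """Wrap a slow log-likelihood with a nearest-neighbor emulator.
    The exact function is evaluated (and its result stored as training
    data) whenever no training point lies within `tol`, so unreliable
    estimates are never used silently."""
    training = []            # list of (theta, loglike) pairs
    calls = {"exact": 0}     # count of exact evaluations
    def loglike(theta):
        if training:
            t0, v0 = min(training, key=lambda tv: abs(tv[0] - theta))
            if abs(t0 - theta) < tol:
                return v0    # emulated (zeroth-order) estimate
        calls["exact"] += 1
        v = exact_loglike(theta)
        training.append((theta, v))
        return v
    return loglike, calls
```

In the paper's scheme the emulation error is additionally modeled and propagated into the posterior; this sketch only captures the train-while-sampling and exact-fallback structure.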
NASA Astrophysics Data System (ADS)
Krumholz, Mark R.; Adamo, Angela; Fumagalli, Michele; Wofford, Aida; Calzetti, Daniela; Lee, Janice C.; Whitmore, Bradley C.; Bright, Stacey N.; Grasha, Kathryn; Gouliermis, Dimitrios A.; Kim, Hwihyun; Nair, Preethi; Ryon, Jenna E.; Smith, Linda J.; Thilker, David; Ubeda, Leonardo; Zackrisson, Erik
2015-10-01
We investigate a novel Bayesian analysis method, based on the Stochastically Lighting Up Galaxies (slug) code, to derive the masses, ages, and extinctions of star clusters from integrated light photometry. Unlike many analysis methods, slug correctly accounts for incomplete initial mass function (IMF) sampling, and returns full posterior probability distributions rather than simply probability maxima. We apply our technique to 621 visually confirmed clusters in two nearby galaxies, NGC 628 and NGC 7793, that are part of the Legacy Extragalactic UV Survey (LEGUS). LEGUS provides Hubble Space Telescope photometry in the NUV, U, B, V, and I bands. We analyze the sensitivity of the derived cluster properties to choices of prior probability distribution, evolutionary tracks, IMF, metallicity, treatment of nebular emission, and extinction curve. We find that slug's results for individual clusters are insensitive to most of these choices, but that the posterior probability distributions we derive are often quite broad, and sometimes multi-peaked and quite sensitive to the choice of priors. In contrast, the properties of the cluster population as a whole are relatively robust against all of these choices. We also compare our results from slug to those derived with a conventional non-stochastic fitting code, Yggdrasil. We show that slug's stochastic models are generally a better fit to the observations than the deterministic ones used by Yggdrasil. However, the overall properties of the cluster populations recovered by both codes are qualitatively similar.
Multiple model cardinalized probability hypothesis density filter
NASA Astrophysics Data System (ADS)
Georgescu, Ramona; Willett, Peter
2011-09-01
The Probability Hypothesis Density (PHD) filter propagates the first-moment approximation to the multi-target Bayesian posterior distribution while the Cardinalized PHD (CPHD) filter propagates both the posterior likelihood of (an unlabeled) target state and the posterior probability mass function of the number of targets. Extensions of the PHD filter to the multiple model (MM) framework have been published and were implemented either with a Sequential Monte Carlo or a Gaussian Mixture approach. In this work, we introduce the multiple model version of the more elaborate CPHD filter. We present the derivation of the prediction and update steps of the MMCPHD particularized for the case of two target motion models and proceed to show that in the case of a single model, the new MMCPHD equations reduce to the original CPHD equations.
Asking better questions: How presentation formats influence information search.
Wu, Charley M; Meder, Björn; Filimon, Flavia; Nelson, Jonathan D
2017-08-01
While the influence of presentation formats has been widely studied in Bayesian reasoning tasks, we present the first systematic investigation of how presentation formats influence information search decisions. Four experiments were conducted across different probabilistic environments, where subjects (N = 2,858) chose between 2 possible search queries, each with binary probabilistic outcomes, with the goal of maximizing classification accuracy. We studied 14 different numerical and visual formats for presenting information about the search environment, constructed across 6 design features that have been prominently related to improvements in Bayesian reasoning accuracy (natural frequencies, posteriors, complement, spatial extent, countability, and part-to-whole information). The posterior variants of the icon array and bar graph formats led to the highest proportion of correct responses, and were substantially better than the standard probability format. Results suggest that presenting information in terms of posterior probabilities and visualizing natural frequencies using spatial extent (a perceptual feature) were especially helpful in guiding search decisions, although environments with a mixture of probabilistic and certain outcomes were challenging across all formats. Subjects who made more accurate probability judgments did not perform better on the search task, suggesting that simple decision heuristics may be used to make search decisions without explicitly applying Bayesian inference to compute probabilities. We propose a new take-the-difference (TTD) heuristic that identifies the accuracy-maximizing query without explicit computation of posterior probabilities. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Hepatitis disease detection using Bayesian theory
NASA Astrophysics Data System (ADS)
Maseleno, Andino; Hidayati, Rohmah Zahroh
2017-02-01
This paper presents hepatitis disease diagnosis using Bayesian theory, for a better understanding of the theory. In this research, we used Bayesian theory to detect hepatitis disease and to display the result of the diagnosis process. Bayesian inference, rediscovered and perfected by Laplace, starts from known prior probabilities and conditional probability densities and applies Bayes' theorem to calculate the corresponding posterior probability, which is then used for inference and decision making. Bayesian methods combine existing knowledge, the prior probabilities, with additional knowledge derived from new data, the likelihood function. The initial symptoms of hepatitis include malaise, fever, and headache; the method computes the probability of hepatitis given the presence of these symptoms. The results revealed that Bayesian theory successfully identified the existence of hepatitis disease.
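The Bayes-theorem calculation this abstract describes can be sketched in a few lines. The prior and the symptom likelihoods below are made-up illustrative values, not numbers from the paper:

```python
# Hypothetical prior and likelihoods for illustrating Bayes' theorem.
# None of these numbers come from the paper.
p_hep = 0.05                      # prior probability of hepatitis
p_symptoms_given_hep = 0.80       # P(malaise, fever, headache | hepatitis)
p_symptoms_given_not = 0.10       # P(malaise, fever, headache | no hepatitis)

# Bayes' theorem: posterior = prior * likelihood / evidence
evidence = p_hep * p_symptoms_given_hep + (1 - p_hep) * p_symptoms_given_not
posterior = p_hep * p_symptoms_given_hep / evidence
print(round(posterior, 3))  # → 0.296
```

Even with all three symptoms present, the posterior stays well below 1 here because the assumed prior is low, which is the point of combining prior knowledge with the likelihood.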
Tentori, Katya; Chater, Nick; Crupi, Vincenzo
2016-04-01
Inductive reasoning requires exploiting links between evidence and hypotheses. This can be done focusing either on the posterior probability of the hypothesis when updated on the new evidence or on the impact of the new evidence on the credibility of the hypothesis. But are these two cognitive representations equally reliable? This study investigates this question by comparing probability and impact judgments on the same experimental materials. The results indicate that impact judgments are more consistent in time and more accurate than probability judgments. Impact judgments also predict the direction of errors in probability judgments. These findings suggest that human inductive reasoning relies more on estimating evidential impact than on posterior probability. Copyright © 2015 Cognitive Science Society, Inc.
Estimating the Probability of Traditional Copying, Conditional on Answer-Copying Statistics.
Allen, Jeff; Ghattas, Andrew
2016-06-01
Statistics for detecting copying on multiple-choice tests produce p values measuring the probability of a value at least as large as that observed, under the null hypothesis of no copying. The posterior probability of copying is arguably more relevant than the p value, but cannot be derived from Bayes' theorem unless the population probability of copying and probability distribution of the answer-copying statistic under copying are known. In this article, the authors develop an estimator for the posterior probability of copying that is based on estimable quantities and can be used with any answer-copying statistic. The performance of the estimator is evaluated via simulation, and the authors demonstrate how to apply the formula using actual data. Potential uses, generalizability to other types of cheating, and limitations of the approach are discussed.
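The estimator itself is the paper's contribution, but the Bayes'-theorem structure it builds on can be sketched as follows. The function name and all inputs are hypothetical stand-ins for the estimable quantities the abstract mentions: the population copying rate and the densities of the answer-copying statistic under copying and under no copying:

```python
def posterior_copying(prior, dens_copy, dens_nocopy):
    """Posterior probability of copying given an observed answer-copying
    statistic, via Bayes' theorem. dens_copy and dens_nocopy are the
    densities of the statistic at the observed value under copying and
    under no copying. All inputs here are hypothetical stand-ins."""
    num = prior * dens_copy
    return num / (num + (1 - prior) * dens_nocopy)

# e.g., a statistic value 50x more likely under copying, with a 1% base rate
print(round(posterior_copying(0.01, 0.50, 0.01), 3))  # → 0.336
```

This also illustrates the abstract's contrast with p values: a small p value alone does not fix the posterior, because the posterior depends on the base rate and on the statistic's distribution under copying.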
Inference with minimal Gibbs free energy in information field theory.
Ensslin, Torsten A; Weig, Cornelius
2010-11-01
Non-linear and non-Gaussian signal inference problems are difficult to tackle. Renormalization techniques permit us to construct good estimators for the posterior signal mean within information field theory (IFT), but the approximations and assumptions made are not very obvious. Here we introduce the simple concept of minimal Gibbs free energy to IFT, and show that previous renormalization results emerge naturally. They can be understood as the Gaussian approximation to the full posterior probability that has maximal cross information with it. We derive optimized estimators for three applications, to illustrate the usage of the framework: (i) reconstruction of a log-normal signal from Poissonian data with background counts and point spread function, as needed for gamma-ray astronomy and for cosmography using photometric galaxy redshifts, (ii) inference of a Gaussian signal with unknown spectrum, and (iii) inference of a Poissonian log-normal signal with unknown spectrum, the combination of (i) and (ii). Finally, we explain how Gaussian knowledge states constructed by the minimal Gibbs free energy principle at different temperatures can be combined into a more accurate surrogate of the non-Gaussian posterior.
Back to Normal! Gaussianizing posterior distributions for cosmological probes
NASA Astrophysics Data System (ADS)
Schuhmann, Robert L.; Joachimi, Benjamin; Peiris, Hiranya V.
2014-05-01
We present a method to map multivariate non-Gaussian posterior probability densities into Gaussian ones via nonlinear Box-Cox transformations, and generalizations thereof. This is analogous to the search for normal parameters in the CMB, but can in principle be applied to any probability density that is continuous and unimodal. The search for the optimally Gaussianizing transformation amongst the Box-Cox family is performed via a maximum likelihood formalism. We can judge the quality of the found transformation a posteriori: qualitatively via statistical tests of Gaussianity, and more illustratively by how well it reproduces the credible regions. The method permits an analytical reconstruction of the posterior from a sample, e.g. a Markov chain, and simplifies the subsequent joint analysis with other experiments. Furthermore, it permits the characterization of a non-Gaussian posterior in a compact and efficient way. The expression for the non-Gaussian posterior can be employed to find analytic formulae for the Bayesian evidence, and consequently be used for model comparison.
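A minimal sketch of the maximum-likelihood search for a Gaussianizing Box-Cox parameter, using a coarse grid over λ rather than the paper's formalism, and a simulated log-normal sample (for which the optimal λ should sit near 0, the log transform):

```python
import math, random

def boxcox(x, lam):
    # Box-Cox transform: (x^lam - 1)/lam, with the lam -> 0 limit log(x).
    return [math.log(v) if lam == 0 else (v ** lam - 1) / lam for v in x]

def boxcox_llf(x, lam):
    # Profile log-likelihood of lam under a Gaussian model for the
    # transformed data (Jacobian term plus -n/2 * log of the MLE variance).
    y = boxcox(x, lam)
    n = len(y)
    mean = sum(y) / n
    var = sum((v - mean) ** 2 for v in y) / n
    return (lam - 1) * sum(math.log(v) for v in x) - 0.5 * n * math.log(var)

random.seed(0)
# Log-normal sample: the Gaussianizing lambda should be close to 0.
data = [math.exp(random.gauss(0, 1)) for _ in range(500)]
grid = [i / 10 for i in range(-20, 21)]
best = max(grid, key=lambda lam: boxcox_llf(data, lam))
print(best)
```

In practice one would use an optimizer (e.g. `scipy.stats.boxcox` performs this maximization) and, as the abstract notes, check the quality of the Gaussianization a posteriori.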
NASA Astrophysics Data System (ADS)
Morton, Timothy D.; Bryson, Stephen T.; Coughlin, Jeffrey L.; Rowe, Jason F.; Ravichandran, Ganesh; Petigura, Erik A.; Haas, Michael R.; Batalha, Natalie M.
2016-05-01
We present astrophysical false positive probability calculations for every Kepler Object of Interest (KOI)—the first large-scale demonstration of a fully automated transiting planet validation procedure. Out of 7056 KOIs, we determine that 1935 have probabilities <1% of being astrophysical false positives, and thus may be considered validated planets. Of these, 1284 have not yet been validated or confirmed by other methods. In addition, we identify 428 KOIs that are likely to be false positives, but have not yet been identified as such, though some of these may be a result of unidentified transit timing variations. A side product of these calculations is full stellar property posterior samplings for every host star, modeled as single, binary, and triple systems. These calculations use vespa, a publicly available Python package that can easily be applied to any transiting exoplanet candidate.
Naive Probability: A Mental Model Theory of Extensional Reasoning.
ERIC Educational Resources Information Center
Johnson-Laird, P. N.; Legrenzi, Paolo; Girotto, Vittorio; Legrenzi, Maria Sonino; Caverni, Jean-Paul
1999-01-01
Outlines a theory of naive probability in which individuals who are unfamiliar with the probability calculus can infer the probabilities of events in an "extensional" way. The theory accommodates reasoning based on numerical premises, and explains how naive reasoners can infer posterior probabilities without relying on Bayes's theorem.…
Bayesian inference based on stationary Fokker-Planck sampling.
Berrones, Arturo
2010-06-01
A novel formalism for Bayesian learning in the context of complex inference models is proposed. The method is based on the use of the stationary Fokker-Planck (SFP) approach to sample from the posterior density. Stationary Fokker-Planck sampling generalizes the Gibbs sampler algorithm for arbitrary and unknown conditional densities. By the SFP procedure, approximate analytical expressions for the conditionals and marginals of the posterior can be constructed. At each stage of SFP, the approximate conditionals are used to define a Gibbs sampling process, which is convergent to the full joint posterior. From the analytical marginals, efficient learning methods for artificial neural networks are outlined. Offline and incremental Bayesian inference and maximum likelihood estimation from the posterior are performed in classification and regression examples. A comparison of SFP with other Monte Carlo strategies in the general problem of sampling from arbitrary densities is also presented. It is shown that SFP is able to jump large low-probability regions without the need of careful tuning of any step-size parameter. In fact, the SFP method requires only a small set of meaningful parameters that can be selected following clear, problem-independent guidelines. The computational cost of SFP, measured in terms of loss function evaluations, grows linearly with the given model's dimension.
Generalized Wishart Mixtures for Unsupervised Classification of PolSAR Data
NASA Astrophysics Data System (ADS)
Li, Lan; Chen, Erxue; Li, Zengyuan
2013-01-01
This paper presents an unsupervised clustering algorithm based upon the expectation maximization (EM) algorithm for finite mixture modelling, using the complex Wishart probability density function (PDF) for the class-conditional probabilities. The mixture model makes it possible to represent heterogeneous thematic classes that are not well fitted by a unimodal Wishart distribution. To make the computation fast and robust, we use the recently proposed generalized gamma distribution (GΓD) for the single-polarization intensity data to form the initial partition. We then use the Wishart PDF of the corresponding sample covariance matrix to calculate the posterior class probabilities for each pixel. These posterior class probabilities provide the prior probability estimates for each class and the weights for all class-parameter updates. The proposed method is evaluated and compared with the Wishart H-Alpha-A classification. Preliminary results show that the proposed method performs better.
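The posterior class probabilities at the heart of this scheme are the EM E-step. A sketch of that step follows, with a 1-D Gaussian density standing in for the complex Wishart PDF so the example stays self-contained; all parameter values are hypothetical:

```python
import math

def norm_pdf(x, mu, sd):
    # 1-D Gaussian density, standing in for the complex Wishart PDF.
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def posterior_class_probs(x, priors, mus, sds):
    """E-step of an EM mixture: posterior class probabilities for one
    observation, proportional to prior * class density. All parameter
    values here are hypothetical."""
    joint = [p * norm_pdf(x, m, s) for p, m, s in zip(priors, mus, sds)]
    total = sum(joint)
    return [j / total for j in joint]

# An observation near the second component gets most of its posterior mass.
probs = posterior_class_probs(1.8, [0.5, 0.5], [0.0, 2.0], [1.0, 1.0])
print([round(p, 3) for p in probs])
```

In the paper's setting the same normalized products, computed per pixel, supply both the updated class priors and the weights for the parameter updates in the M-step.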
Calibration of micromechanical parameters for DEM simulations by using the particle filter
NASA Astrophysics Data System (ADS)
Cheng, Hongyang; Shuku, Takayuki; Thoeni, Klaus; Yamamoto, Haruyuki
2017-06-01
The calibration of DEM models is typically accomplished by trial and error. However, this procedure lacks objectivity and involves several uncertainties. To deal with these issues, the particle filter is employed as a novel approach to calibrate DEM models of granular soils. The posterior probability distribution of the micro-parameters that give numerical results in good agreement with the experimental response of a Toyoura sand specimen is approximated by independent model trajectories, referred to as `particles', based on Monte Carlo sampling. The soil specimen is modeled by polydisperse packings with different numbers of spherical grains. Prepared in `stress-free' states, the packings are subjected to triaxial quasistatic loading. Given the experimental data, the posterior probability distribution is incrementally updated until convergence is reached. The resulting `particles' with higher weights are identified as the calibration results. The evolutions of the weighted averages and the posterior probability distribution of the micro-parameters are plotted to show the advantage of using a particle filter, i.e., multiple solutions are identified for each parameter, with known probabilities of reproducing the experimental response.
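A toy version of this reweighting scheme can be sketched with a trivial one-parameter linear model standing in for the expensive DEM simulation, and a Gaussian error model; every number below is hypothetical:

```python
import math, random

random.seed(1)
true_param, noise_sd = 2.0, 0.1

def model(p, t):
    # Toy stand-in for an expensive DEM simulation of the specimen response.
    return p * t

# Synthetic "experimental" observations along the loading path.
times = [0.1 * i for i in range(1, 11)]
obs = [model(true_param, t) + random.gauss(0, noise_sd) for t in times]

# Sample particles (candidate micro-parameter values) from a broad prior.
particles = [random.uniform(0.0, 5.0) for _ in range(2000)]
weights = [1.0] * len(particles)

# Incremental Bayesian update: reweight every particle by the likelihood
# of each new observation under the Gaussian error model.
for t, y in zip(times, obs):
    for i, p in enumerate(particles):
        weights[i] *= math.exp(-0.5 * ((y - model(p, t)) / noise_sd) ** 2)

total = sum(weights)
posterior_mean = sum(w * p for w, p in zip(particles, particles) if False) if False else \
    sum(w * p for w, p in zip(weights, particles)) / total
print(round(posterior_mean, 2))
```

The high-weight particles concentrate around the parameter value that reproduces the observations, mirroring how the paper identifies calibrated micro-parameters (and, unlike this unimodal toy, possibly several distinct solutions).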
Bayesian model checking: A comparison of tests
NASA Astrophysics Data System (ADS)
Lucy, L. B.
2018-06-01
Two procedures for checking Bayesian models are compared using a simple test problem based on the local Hubble expansion. Over four orders of magnitude, p-values derived from a global goodness-of-fit criterion for posterior probability density functions agree closely with posterior predictive p-values. The former can therefore serve as an effective proxy for the difficult-to-calculate posterior predictive p-values.
Effect of posterior crown margin placement on gingival health.
Reitemeier, Bernd; Hänsel, Kristina; Walter, Michael H; Kastner, Christian; Toutenburg, Helge
2002-02-01
The clinical impact of posterior crown margin placement on gingival health has not been thoroughly quantified. This study evaluated the effect of posterior crown margin placement with multivariate analysis. Ten general dentists reviewed 240 patients with 480 metal-ceramic crowns in a prospective clinical trial. The alloy was randomly selected from 2 high gold, 1 low gold, and 1 palladium alloy. Variables were the alloy used, oral hygiene index score before treatment, location of crown margins at baseline, and plaque index and sulcus bleeding index scores recorded for restored and control teeth after 1 year. The effect of crown margin placement on sulcular bleeding and plaque accumulation was analyzed with regression models (P<.05). The probability of plaque at 1 year increased with increasing oral hygiene index score before treatment. The lingual surfaces demonstrated the highest probability of plaque. The risk of bleeding at intrasulcular posterior crown margins was approximately twice that at supragingival margins. Poor oral hygiene before treatment and plaque also were associated with sulcular bleeding. Facial sites exhibited a lower probability of sulcular bleeding than lingual surfaces. Type of alloy did not influence sulcular bleeding. In this study, placement of crown margins was one of several parameters that affected gingival health.
Pig Data and Bayesian Inference on Multinomial Probabilities
ERIC Educational Resources Information Center
Kern, John C.
2006-01-01
Bayesian inference on multinomial probabilities is conducted based on data collected from the game Pass the Pigs[R]. Prior information on these probabilities is readily available from the instruction manual, and is easily incorporated in a Dirichlet prior. Posterior analysis of the scoring probabilities quantifies the discrepancy between empirical…
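The conjugate Dirichlet-multinomial update behind such an analysis is short; the scoring categories and all counts below are illustrative, not the paper's data:

```python
# Conjugate Dirichlet-multinomial update: posterior = Dirichlet(alpha + counts).
# Categories and all numbers are illustrative, not the paper's data.
alpha = {"side": 7.0, "razorback": 3.0, "trotter": 2.0}   # Dirichlet prior
counts = {"side": 62, "razorback": 21, "trotter": 17}     # observed rolls

post = {k: alpha[k] + counts[k] for k in alpha}           # posterior parameters
total = sum(post.values())
post_mean = {k: v / total for k, v in post.items()}       # posterior means
print({k: round(v, 3) for k, v in post_mean.items()})
```

Because the Dirichlet is conjugate to the multinomial, the prior pseudo-counts from the instruction manual simply add to the observed counts, and posterior summaries follow in closed form.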
Random Partition Distribution Indexed by Pairwise Information
Dahl, David B.; Day, Ryan; Tsai, Jerry W.
2017-01-01
We propose a random partition distribution indexed by pairwise similarity information such that partitions compatible with the similarities are given more probability. The use of pairwise similarities, in the form of distances, is common in some clustering algorithms (e.g., hierarchical clustering), but we show how to use this type of information to define a prior partition distribution for flexible Bayesian modeling. A defining feature of the distribution is that it allocates probability among partitions within a given number of subsets, but it does not shift probability among sets of partitions with different numbers of subsets. Our distribution places more probability on partitions that group similar items yet keeps the total probability of partitions with a given number of subsets constant. The distribution of the number of subsets (and its moments) is available in closed-form and is not a function of the similarities. Our formulation has an explicit probability mass function (with a tractable normalizing constant) so the full suite of MCMC methods may be used for posterior inference. We compare our distribution with several existing partition distributions, showing that our formulation has attractive properties. We provide three demonstrations to highlight the features and relative performance of our distribution. PMID:29276318
Danaei, Goodarz; Finucane, Mariel M; Lin, John K; Singh, Gitanjali M; Paciorek, Christopher J; Cowan, Melanie J; Farzadfar, Farshad; Stevens, Gretchen A; Lim, Stephen S; Riley, Leanne M; Ezzati, Majid
2011-02-12
Data for trends in blood pressure are needed to understand the effects of its dietary, lifestyle, and pharmacological determinants; set intervention priorities; and evaluate national programmes. However, few worldwide analyses of trends in blood pressure have been done. We estimated worldwide trends in population mean systolic blood pressure (SBP). We estimated trends and their uncertainties in mean SBP for adults 25 years and older in 199 countries and territories. We obtained data from published and unpublished health examination surveys and epidemiological studies (786 country-years and 5·4 million participants). For each sex, we used a Bayesian hierarchical model to estimate mean SBP by age, country, and year, accounting for whether a study was nationally representative. In 2008, age-standardised mean SBP worldwide was 128·1 mm Hg (95% uncertainty interval 126·7-129·4) in men and 124·4 mm Hg (123·0-125·9) in women. Globally, between 1980 and 2008, SBP decreased by 0·8 mm Hg per decade (-0·4 to 2·2, posterior probability of being a true decline=0·90) in men and 1·0 mm Hg per decade (-0·3 to 2·3, posterior probability=0·93) in women. Female SBP decreased by 3·5 mm Hg or more per decade in western Europe and Australasia (posterior probabilities ≥0·999). Male SBP fell most in high-income North America, by 2·8 mm Hg per decade (1·3-4·5, posterior probability >0·999), followed by Australasia and western Europe where it decreased by more than 2·0 mm Hg per decade (posterior probabilities >0·98). SBP rose in Oceania, east Africa, and south and southeast Asia for both sexes, and in west Africa for women, with the increases ranging 0·8-1·6 mm Hg per decade in men (posterior probabilities 0·72-0·91) and 1·0-2·7 mm Hg per decade for women (posterior probabilities 0·75-0·98). Female SBP was highest in some east and west African countries, with means of 135 mm Hg or greater. 
Male SBP was highest in Baltic and east and west African countries, where mean SBP reached 138 mm Hg or more. Men and women in western Europe had the highest SBP in high-income regions. On average, global population SBP decreased slightly since 1980, but trends varied significantly across regions and countries. SBP is currently highest in low-income and middle-income countries. Effective population-based and personal interventions should be targeted towards low-income and middle-income countries. Funding: Bill & Melinda Gates Foundation and WHO. Copyright © 2011 Elsevier Ltd. All rights reserved.
Chang, Edward F; Breshears, Jonathan D; Raygor, Kunal P; Lau, Darryl; Molinaro, Annette M; Berger, Mitchel S
2017-01-01
OBJECTIVE Functional mapping using direct cortical stimulation is the gold standard for the prevention of postoperative morbidity during resective surgery in dominant-hemisphere perisylvian regions. Its role is necessitated by the significant interindividual variability that has been observed for essential language sites. The aim in this study was to determine the statistical probability distribution of eliciting aphasic errors for any given stereotactically based cortical position in a patient cohort and to quantify the variability at each cortical site. METHODS Patients undergoing awake craniotomy for dominant-hemisphere primary brain tumor resection between 1999 and 2014 at the authors' institution were included in this study, which included counting and picture-naming tasks during dense speech mapping via cortical stimulation. Positive and negative stimulation sites were collected using an intraoperative frameless stereotactic neuronavigation system and were converted to Montreal Neurological Institute coordinates. Data were iteratively resampled to create mean and standard deviation probability maps for speech arrest and anomia. Patients were divided into groups with a "classic" or an "atypical" location of speech function, based on the resultant probability maps. Patient and clinical factors were then assessed for their association with an atypical location of speech sites by univariate and multivariate analysis. RESULTS Across 102 patients undergoing speech mapping, the overall probabilities of speech arrest and anomia were 0.51 and 0.33, respectively. Speech arrest was most likely to occur with stimulation of the posterior inferior frontal gyrus (maximum probability from individual bin = 0.025), and variance was highest in the dorsal premotor cortex and the posterior superior temporal gyrus. 
In contrast, stimulation within the posterior perisylvian cortex resulted in the maximum mean probability of anomia (maximum probability = 0.012), with large variance in the regions surrounding the posterior superior temporal gyrus, including the posterior middle temporal, angular, and supramarginal gyri. Patients with atypical speech localization were far more likely to have tumors in canonical Broca's or Wernicke's areas (OR 7.21, 95% CI 1.67-31.09, p < 0.01) or to have multilobar tumors (OR 12.58, 95% CI 2.22-71.42, p < 0.01), than were patients with classic speech localization. CONCLUSIONS This study provides statistical probability distribution maps for aphasic errors during cortical stimulation mapping in a patient cohort. Thus, the authors provide an expected probability of inducing speech arrest and anomia from specific 10-mm² cortical bins in an individual patient. In addition, they highlight key regions of interindividual mapping variability that should be considered preoperatively. They believe these results will aid surgeons in their preoperative planning of eloquent cortex resection.
ERIC Educational Resources Information Center
Satake, Eiki; Amato, Philip P.
2008-01-01
This paper presents an alternative version of formulas of conditional probabilities and Bayes' rule that demonstrate how the truth table of elementary mathematical logic applies to the derivations of the conditional probabilities of various complex, compound statements. This new approach is used to calculate the prior and posterior probabilities…
NASA Astrophysics Data System (ADS)
Yoon, Ilsang; Weinberg, Martin D.; Katz, Neal
2011-06-01
We introduce a new galaxy image decomposition tool, GALPHAT (GALaxy PHotometric ATtributes), which is a front-end application of the Bayesian Inference Engine (BIE), a parallel Markov chain Monte Carlo package, to provide full posterior probability distributions and reliable confidence intervals for all model parameters. The BIE relies on GALPHAT to compute the likelihood function. GALPHAT generates scale-free cumulative image tables for the desired model family with precise error control. Interpolation of this table yields accurate pixellated images with any centre, scale and inclination angle. GALPHAT then rotates the image by position angle using a Fourier shift theorem, yielding high-speed, accurate likelihood computation. We benchmark this approach using an ensemble of simulated Sérsic model galaxies over a wide range of observational conditions: the signal-to-noise ratio S/N, the ratio of galaxy size to the point spread function (PSF) and the image size, and errors in the assumed PSF; and a range of structural parameters: the half-light radius re and the Sérsic index n. We characterize the strength of parameter covariance in the Sérsic model, which increases with S/N and n, and the results strongly motivate the need for the full posterior probability distribution in galaxy morphology analyses and later inferences. The test results for simulated galaxies successfully demonstrate that, with a careful choice of Markov chain Monte Carlo algorithms and fast model image generation, GALPHAT is a powerful analysis tool for reliably inferring morphological parameters from a large ensemble of galaxies over a wide range of different observational conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jasra, Ajay; Law, Kody J. H.; Zhou, Yan
2016-01-01
Our paper considers uncertainty quantification for an elliptic nonlocal equation. In particular, it is assumed that the parameters which define the kernel in the nonlocal operator are uncertain and a priori distributed according to a probability measure. It is shown that the induced probability measure on some quantities of interest arising from functionals of the solution to the equation with random inputs is well-defined, as is the posterior distribution on parameters given observations. As the elliptic nonlocal equation cannot be solved exactly, approximate posteriors are constructed. The multilevel Monte Carlo (MLMC) and multilevel sequential Monte Carlo (MLSMC) sampling algorithms are used for a priori and a posteriori estimation, respectively, of quantities of interest. Furthermore, these algorithms reduce the amount of work to estimate posterior expectations, for a given level of error, relative to Monte Carlo and i.i.d. sampling from the posterior at a given level of approximation of the solution of the elliptic nonlocal equation.
Bayesian approach to inverse statistical mechanics.
Habeck, Michael
2014-05-01
Inverse statistical mechanics aims to determine particle interactions from ensemble properties. This article looks at this inverse problem from a Bayesian perspective and discusses several statistical estimators to solve it. In addition, a sequential Monte Carlo algorithm is proposed that draws the interaction parameters from their posterior probability distribution. The posterior probability involves an intractable partition function that is estimated along with the interactions. The method is illustrated for inverse problems of varying complexity, including the estimation of a temperature, the inverse Ising problem, maximum entropy fitting, and the reconstruction of molecular interaction potentials.
Screening for SNPs with Allele-Specific Methylation based on Next-Generation Sequencing Data.
Hu, Bo; Ji, Yuan; Xu, Yaomin; Ting, Angela H
2013-05-01
Allele-specific methylation (ASM) has long been studied but mainly documented in the context of genomic imprinting and X chromosome inactivation. Taking advantage of the next-generation sequencing technology, we conduct a high-throughput sequencing experiment with four prostate cell lines to survey the whole genome and identify single nucleotide polymorphisms (SNPs) with ASM. A Bayesian approach is proposed to model the counts of short reads for each SNP conditional on its genotypes of multiple subjects, leading to a posterior probability of ASM. We flag SNPs with high posterior probabilities of ASM by accounting for multiple comparisons based on posterior false discovery rates. Applying the Bayesian approach to the in-house prostate cell line data, we identify 269 SNPs as candidates of ASM. A simulation study is carried out to demonstrate the quantitative performance of the proposed approach.
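The flagging step based on posterior false discovery rates can be sketched with the standard Bayesian-FDR rule: sort by posterior probability of ASM and keep adding SNPs while the running mean of (1 − probability) stays at or below the target rate. The function name and threshold are illustrative and may differ from the paper's exact procedure:

```python
def flag_by_posterior_fdr(post_probs, q=0.05):
    """Flag items (e.g. SNPs) while controlling the posterior (Bayesian) FDR:
    visit items in decreasing posterior probability and stop once the
    cumulative expected false-discovery proportion exceeds q."""
    order = sorted(range(len(post_probs)), key=lambda i: -post_probs[i])
    flagged, cum_err = [], 0.0
    for rank, i in enumerate(order, start=1):
        cum_err += 1.0 - post_probs[i]  # expected false discoveries so far
        if cum_err / rank > q:
            break
        flagged.append(i)
    return flagged
```

With posteriors [0.99, 0.5, 0.98] and q = 0.05, the two high-probability SNPs are flagged and the third is not.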
Posterior probability of linkage and maximal lod score.
Génin, E; Martinez, M; Clerget-Darpoux, F
1995-01-01
To detect linkage between a trait and a marker, Morton (1955) proposed to calculate the lod score z(theta 1) at a given value theta 1 of the recombination fraction. If z(theta 1) reaches +3, linkage is concluded. In practice, however, lod scores are calculated for different values of the recombination fraction between 0 and 0.5 and the test is based on the maximum value of the lod score, Zmax. We document here the impact of this deviation on the probability that linkage does not in fact exist when linkage was concluded. This posterior probability of no linkage can be derived using Bayes' theorem. It is less than 5% when the lod score at a predetermined theta 1 is used for the test. But for a Zmax of +3, we showed that it can reach 16.4%. Thus, considering a composite alternative hypothesis instead of a single one decreases the reliability of the test. The reliability decreases rapidly when Zmax is less than +3: given a Zmax of +2.5, there is a 33% chance that linkage does not exist. Moreover, the posterior probability depends not only on the value of Zmax but also jointly on the family structures and on the genetic model. For a given Zmax, the chance that linkage exists may therefore vary.
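The Bayes'-theorem calculation behind these numbers can be sketched as follows; the prior probability of linkage (0.02) is a hypothetical value, and treating 10^Z as the likelihood ratio deliberately ignores the maximization effect over theta that the abstract quantifies:

```python
def posterior_no_linkage(lod, prior_linkage=0.02):
    """Posterior probability of no linkage given a lod score, treating
    10**lod as the likelihood ratio for linkage (fixed-theta sketch;
    the prior of 0.02 is an illustrative assumption)."""
    likelihood_ratio = 10.0 ** lod
    prior_odds = prior_linkage / (1.0 - prior_linkage)
    posterior_odds = likelihood_ratio * prior_odds  # odds of linkage
    return 1.0 / (1.0 + posterior_odds)
```

Under this hypothetical prior, a lod of +3 yields a posterior of no linkage just under 5%, in the spirit of the fixed-theta case; a smaller lod gives a larger posterior of no linkage, consistent with the abstract's message.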
Al-Mezaine, Hani S
2010-01-01
We report a 55-year-old man with unusually dense, unilateral central posterior capsule pigmentation associated with the characteristic clinical features of pigment dispersion syndrome, including a Krukenberg's spindle and dense trabecular pigmentation in both eyes. A history of an old blunt ocular trauma probably caused separation of the anterior hyaloid from the back of the lens, thereby creating an avenue by which pigment could reach the potential space of Berger's from the posterior chamber. PMID:20534930
Browning, Brian L.; Browning, Sharon R.
2009-01-01
We present methods for imputing data for ungenotyped markers and for inferring haplotype phase in large data sets of unrelated individuals and parent-offspring trios. Our methods make use of known haplotype phase when it is available, and our methods are computationally efficient so that the full information in large reference panels with thousands of individuals is utilized. We demonstrate that substantial gains in imputation accuracy accrue with increasingly large reference panel sizes, particularly when imputing low-frequency variants, and that unphased reference panels can provide highly accurate genotype imputation. We place our methodology in a unified framework that enables the simultaneous use of unphased and phased data from trios and unrelated individuals in a single analysis. For unrelated individuals, our imputation methods produce well-calibrated posterior genotype probabilities and highly accurate allele-frequency estimates. For trios, our haplotype-inference method is four orders of magnitude faster than the gold-standard PHASE program and has excellent accuracy. Our methods enable genotype imputation to be performed with unphased trio or unrelated reference panels, thus accounting for haplotype-phase uncertainty in the reference panel. We present a useful measure of imputation accuracy, allelic R2, and show that this measure can be estimated accurately from posterior genotype probabilities. Our methods are implemented in version 3.0 of the BEAGLE software package. PMID:19200528
Generative adversarial networks for brain lesion detection
NASA Astrophysics Data System (ADS)
Alex, Varghese; Safwan, K. P. Mohammed; Chennamsetty, Sai Saketh; Krishnamurthi, Ganapathy
2017-02-01
Manual segmentation of brain lesions from Magnetic Resonance Images (MRI) is cumbersome and introduces errors due to inter-rater variability. This paper introduces a semi-supervised technique for detection of brain lesions from MRI using Generative Adversarial Networks (GANs). A GAN comprises a Generator network and a Discriminator network that are trained simultaneously, each with the objective of bettering the other. The networks were trained using non-lesion patches (n=13,000) from 4 different MR sequences. The network was trained on the BraTS dataset, with patches extracted from regions excluding the tumor. The Generator generates data by modeling the underlying probability distribution of the training data, P(Data). The Discriminator learns the posterior probability P(Label | Data) by classifying training data and generated data as "Real" or "Fake", respectively. Upon learning the joint distribution, the Generator produces images/patches on which the Discriminator's performance is random, i.e. P(Label | Data = GeneratedData) = 0.5. During testing, the Discriminator assigns posterior probability values close to 0.5 to patches from non-lesion regions, while patches centered on lesions arise from a different distribution (PLesion) and hence are assigned lower posterior probability values by the Discriminator. On the test set (n=14), the proposed technique achieves a whole-tumor dice score of 0.69, sensitivity of 91% and specificity of 59%. Additionally, the Generator network was capable of generating non-lesion patches from various MR sequences.
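The test-time rule described above (non-lesion patches score near 0.5, lesion-centered patches score lower) can be sketched as a simple threshold on the Discriminator's outputs; the function name and the margin of 0.15 are illustrative assumptions, not values from the paper:

```python
def flag_lesion_patches(disc_posteriors, margin=0.15):
    """Return indices of patches whose Discriminator posterior falls well
    below 0.5, i.e. patches that look unlike the non-lesion training
    distribution (illustrative margin, not the paper's decision rule)."""
    threshold = 0.5 - margin
    return [i for i, p in enumerate(disc_posteriors) if p < threshold]

flagged = flag_lesion_patches([0.52, 0.48, 0.10, 0.30])
```

Patches scoring 0.10 and 0.30 fall below the 0.35 threshold and are flagged as lesion candidates.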
Graphical methods for the sensitivity analysis in discriminant analysis
Kim, Youngil; Anderson-Cook, Christine M.; Dae-Heung, Jang
2015-09-30
Similar to regression, many measures have been developed to detect influential data points in discriminant analysis, most following principles similar to the diagnostic measures used in linear regression. Here we focus on the impact on the predicted classification posterior probability when a data point is omitted. The new method is intuitive and easily interpretable compared to existing methods. We also propose a graphical display to show the individual movement of the posterior probability of other data points when a specific data point is omitted. This enables the summaries to capture the overall pattern of the change.
A Bayesian Approach to Evaluating Consistency between Climate Model Output and Observations
NASA Astrophysics Data System (ADS)
Braverman, A. J.; Cressie, N.; Teixeira, J.
2010-12-01
Like other scientific and engineering problems that involve physical modeling of complex systems, climate models can be evaluated and diagnosed by comparing their output to observations of similar quantities. Though the global remote sensing data record is relatively short by climate research standards, these data offer opportunities to evaluate model predictions in new ways. For example, remote sensing data are spatially and temporally dense enough to provide distributional information that goes beyond simple moments to allow quantification of temporal and spatial dependence structures. In this talk, we propose a new method for exploiting these rich data sets using a Bayesian paradigm. For a collection of climate models, we calculate posterior probabilities that its members best represent the physical system each seeks to reproduce. The posterior probability is based on the likelihood that a chosen summary statistic, computed from observations, would be obtained when the model's output is considered as a realization from a stochastic process. By exploring how posterior probabilities change with different statistics, we may paint a more quantitative and complete picture of the strengths and weaknesses of the models relative to the observations. We demonstrate our method using model output from the CMIP archive, and observations from NASA's Atmospheric Infrared Sounder.
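The posterior-probability computation over a collection of models reduces to Bayes' theorem applied to the likelihood of the observed summary statistic under each model; a minimal sketch with a uniform model prior by default (the function name is hypothetical):

```python
def model_posteriors(likelihoods, priors=None):
    """Posterior probability that each model best represents the system,
    given the likelihood of the observed summary statistic under each
    model (uniform prior over models unless supplied)."""
    n = len(likelihoods)
    if priors is None:
        priors = [1.0 / n] * n  # uniform model prior
    unnorm = [lik * pri for lik, pri in zip(likelihoods, priors)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

post = model_posteriors([2.0, 1.0, 1.0])
```

A model under which the observed statistic is twice as likely receives posterior 0.5 against two equally plausible competitors; repeating the calculation with different statistics gives the exploration the abstract describes.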
Topics in inference and decision-making with partial knowledge
NASA Technical Reports Server (NTRS)
Safavian, S. Rasoul; Landgrebe, David
1990-01-01
Two essential elements needed in the process of inference and decision-making are prior probabilities and likelihood functions. When both of these components are known accurately and precisely, the Bayesian approach provides a consistent and coherent solution to the problems of inference and decision-making. In many situations, however, either one or both of the above components may not be known, or at least may not be known precisely. This problem of partial knowledge about prior probabilities and likelihood functions is addressed. There are at least two ways to cope with this lack of precise knowledge: robust methods, and interval-valued methods. First, ways of modeling imprecision and indeterminacies in prior probabilities and likelihood functions are examined; then how imprecision in the above components carries over to the posterior probabilities is examined. Finally, the problem of decision making with imprecise posterior probabilities and the consequences of such actions are addressed. Application areas where the above problems may occur are in statistical pattern recognition problems, for example, the problem of classification of high-dimensional multispectral remote sensing image data.
A Bayesian predictive two-stage design for phase II clinical trials.
Sambucini, Valeria
2008-04-15
In this paper, we propose a Bayesian two-stage design for phase II clinical trials, which represents a predictive version of the single threshold design (STD) recently introduced by Tan and Machin. The STD two-stage sample sizes are determined specifying a minimum threshold for the posterior probability that the true response rate exceeds a pre-specified target value and assuming that the observed response rate is slightly higher than the target. Unlike the STD, we do not refer to a fixed experimental outcome, but take into account the uncertainty about future data. In both stages, the design aims to control the probability of getting a large posterior probability that the true response rate exceeds the target value. Such a probability is expressed in terms of prior predictive distributions of the data. The performance of the design is based on the distinction between analysis and design priors, recently introduced in the literature. The properties of the method are studied when all the design parameters vary.
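The design's key quantity, the prior predictive probability of ending up with a large posterior probability that the response rate exceeds the target, can be sketched by nested Monte Carlo with a conjugate Beta prior; all names, the Beta(1, 1) default and the simulation sizes are illustrative, and the STD's actual sample-size search is not reproduced:

```python
import random

def predictive_success_prob(n, p0, eta, a=1.0, b=1.0, n_sim=1000, seed=0):
    """Estimate the prior predictive probability that, after observing x
    responses out of n, the posterior P(rate > p0) exceeds eta.
    Beta(a, b) prior on the response rate; the inner posterior probability
    is itself estimated from draws of the conjugate Beta posterior."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sim):
        p = rng.betavariate(a, b)                    # rate drawn from the prior
        x = sum(rng.random() < p for _ in range(n))  # prior predictive data
        # posterior P(rate > p0 | x) via draws from Beta(a + x, b + n - x)
        post = sum(rng.betavariate(a + x, b + n - x) > p0
                   for _ in range(100)) / 100
        hits += post > eta
    return hits / n_sim
```

Raising the target response rate p0 can only lower this predictive probability, which is the handle the two-stage design uses to control it.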
Bayesian operational modal analysis with asynchronous data, Part II: Posterior uncertainty
NASA Astrophysics Data System (ADS)
Zhu, Yi-Chen; Au, Siu-Kui
2018-01-01
A Bayesian modal identification method has been proposed in the companion paper that allows the most probable values of modal parameters to be determined using asynchronous ambient vibration data. This paper investigates the identification uncertainty of modal parameters in terms of their posterior covariance matrix. Computational issues are addressed. Analytical expressions are derived to allow the posterior covariance matrix to be evaluated accurately and efficiently. Synthetic, laboratory and field data examples are presented to verify the consistency, investigate potential modelling error and demonstrate practical applications.
NASA Astrophysics Data System (ADS)
Sun, J.; Shen, Z.; Burgmann, R.; Liang, F.
2012-12-01
We develop a three-step maximum a posteriori (MAP) method for coseismic rupture inversion, which aims at maximizing the a posteriori probability density function (PDF) of elastic solutions of earthquake rupture. The method originates from the Fully Bayesian Inversion (FBI) and Mixed linear-nonlinear Bayesian Inversion (MBI) methods, shares the same a posteriori PDF with them and keeps most of their merits, while overcoming their convergence difficulty when large numbers of low-quality data are used and greatly improving the convergence rate through optimization procedures. A highly efficient global optimization algorithm, Adaptive Simulated Annealing (ASA), is used to search for the maximum posterior probability in the first step. The non-slip parameters are determined by the global optimization method, and the slip parameters are inverted for using the least squares method, initially without a positivity constraint and then damped to a physically reasonable range. This first-step MAP inversion brings the inversion close to the 'true' solution quickly and jumps over local maximum regions in high-dimensional parameter space. The second-step inversion approaches the 'true' solution further, with positivity constraints subsequently applied to the slip parameters using the Monte Carlo Inversion (MCI) technique, with all parameters obtained from step one as the initial solution. The slip artifacts are then eliminated from the slip models in the third-step MAP inversion with fault geometry parameters fixed. We first used a designed model with a 45-degree dipping angle and oblique slip, and corresponding synthetic InSAR data sets, to validate the efficiency and accuracy of the method. We then applied the method to four recent large earthquakes in Asia, namely the 2010 Yushu, China earthquake, the 2011 Burma earthquake, the 2011 New Zealand earthquake and the 2008 Qinghai, China earthquake, and compared our results with those from other groups. Our results show the effectiveness of the method in earthquake studies and several advantages over other methods. Details will be reported at the meeting.
DOE Office of Scientific and Technical Information (OSTI.GOV)
La Russa, D
Purpose: The purpose of this project is to develop a robust method of parameter estimation for a Poisson-based TCP model using Bayesian inference. Methods: Bayesian inference was performed using the PyMC3 probabilistic programming framework written in Python. A Poisson-based TCP regression model that accounts for clonogen proliferation was fit to observed rates of local relapse as a function of equivalent dose in 2 Gy fractions for a population of 623 stage-I non-small-cell lung cancer patients. The Slice Markov Chain Monte Carlo sampling algorithm was used to sample the posterior distributions, and was initiated using the maximum of the posterior distributions found by optimization. The calculation of TCP with each sample step required integration over the free parameter α, which was performed using an adaptive 24-point Gauss-Legendre quadrature. Convergence was verified via inspection of the trace plot and posterior distribution for each of the fit parameters, as well as with comparisons of the most probable parameter values with their respective maximum likelihood estimates. Results: Posterior distributions for α, the standard deviation of α (σ), the average tumour cell-doubling time (Td), and the repopulation delay time (Tk), were generated assuming α/β = 10 Gy and a fixed clonogen density of 10^7 cm^-3. Posterior predictive plots generated from samples from these posterior distributions are in excellent agreement with the observed rates of local relapse used in the Bayesian inference. The most probable values of the model parameters also agree well with maximum likelihood estimates. Conclusion: A robust method of performing Bayesian inference of TCP data using a complex TCP model has been established.
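The abstract's sampling step (PyMC3's Slice sampler on the TCP posterior) is not reproduced here, but the underlying MCMC idea can be sketched with a generic random-walk Metropolis sampler over a log-posterior; the function name, the standard-normal toy target and the tuning constants are illustrative assumptions:

```python
import math
import random

def metropolis(logpost, x0, n_samples=5000, step=1.0, seed=1):
    """Generic 1-D random-walk Metropolis sampler: propose a Gaussian
    perturbation and accept with probability min(1, posterior ratio)."""
    rng = random.Random(seed)
    x, lp = x0, logpost(x0)
    chain = []
    for _ in range(n_samples):
        xp = x + rng.gauss(0.0, step)
        lpp = logpost(xp)
        if math.log(rng.random()) < lpp - lp:  # accept/reject in log space
            x, lp = xp, lpp
        chain.append(x)
    return chain

# toy log-posterior: standard normal (stand-in for a real TCP posterior)
samples = metropolis(lambda t: -0.5 * t * t, 0.0)
```

After discarding a burn-in, the sample mean and variance approximate those of the target; in practice one would use a mature sampler (Slice, NUTS) exactly as the abstract does.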
Statistical Inference in Hidden Markov Models Using k-Segment Constraints
Titsias, Michalis K.; Holmes, Christopher C.; Yau, Christopher
2016-01-01
Hidden Markov models (HMMs) are one of the most widely used statistical methods for analyzing sequence data. However, the reporting of output from HMMs has largely been restricted to the presentation of the most-probable (MAP) hidden state sequence, found via the Viterbi algorithm, or the sequence of most probable marginals using the forward–backward algorithm. In this article, we expand the amount of information we could obtain from the posterior distribution of an HMM by introducing linear-time dynamic programming recursions that, conditional on a user-specified constraint in the number of segments, allow us to (i) find MAP sequences, (ii) compute posterior probabilities, and (iii) simulate sample paths. We collectively call these recursions k-segment algorithms and illustrate their utility using simulated and real examples. We also highlight the prospective and retrospective use of k-segment constraints for fitting HMMs or exploring existing model fits. Supplementary materials for this article are available online. PMID:27226674
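The posterior marginals mentioned above come from the classic forward-backward recursions, which the k-segment algorithms extend; a minimal unconstrained sketch (hypothetical names, no k-segment constraint):

```python
def posterior_marginals(pi, A, B, obs):
    """Forward-backward for a discrete HMM: pi is the initial distribution,
    A[i][j] the transition probability, B[j][o] the emission probability.
    Returns P(state_t = j | obs) for every t and j."""
    n, k = len(obs), len(pi)
    # forward pass: alpha[t][j] = P(obs[0..t], state_t = j)
    alpha = [[0.0] * k for _ in range(n)]
    for j in range(k):
        alpha[0][j] = pi[j] * B[j][obs[0]]
    for t in range(1, n):
        for j in range(k):
            alpha[t][j] = B[j][obs[t]] * sum(
                alpha[t - 1][i] * A[i][j] for i in range(k))
    # backward pass: beta[t][i] = P(obs[t+1..] | state_t = i)
    beta = [[1.0] * k for _ in range(n)]
    for t in range(n - 2, -1, -1):
        for i in range(k):
            beta[t][i] = sum(A[i][j] * B[j][obs[t + 1]] * beta[t + 1][j]
                             for j in range(k))
    # combine and normalize per time step
    post = []
    for t in range(n):
        g = [alpha[t][j] * beta[t][j] for j in range(k)]
        z = sum(g)
        post.append([x / z for x in g])
    return post
```

For a sticky two-state chain with reliable emissions, observing three 0-symbols concentrates each marginal on state 0, as expected.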
Elastic K-means using posterior probability.
Zheng, Aihua; Jiang, Bo; Li, Yan; Zhang, Xuehan; Ding, Chris
2017-01-01
The widely used K-means clustering is a hard clustering algorithm. Here we propose an Elastic K-means clustering model (EKM) using posterior probability, with the soft capability that each data point can belong to multiple clusters fractionally, and show the benefit of the proposed Elastic K-means. Furthermore, in many applications, besides vector attribute information, pairwise relations (graph information) are also available. Thus we integrate EKM with Normalized Cut graph clustering into a single clustering formulation. Finally, we provide several matrix inequalities which are useful for matrix formulations of learning models. Based on these results, we prove the correctness and the convergence of EKM algorithms. Experimental results on six benchmark datasets demonstrate the effectiveness of the proposed EKM and its integrated model.
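The soft-membership idea, each point belonging to multiple clusters fractionally with posterior-style probabilities, can be sketched for 1-D data as follows; the exponential weighting and the `beta` parameter are illustrative assumptions, not the paper's exact EKM objective:

```python
import math

def soft_assignments(points, centers, beta=2.0):
    """Soft cluster memberships: each point receives a posterior-style
    probability over centers, proportional to exp(-beta * squared
    distance). Larger beta approaches hard K-means assignment."""
    out = []
    for x in points:
        w = [math.exp(-beta * (x - c) ** 2) for c in centers]
        z = sum(w)
        out.append([v / z for v in w])  # normalize to a distribution
    return out

memberships = soft_assignments([0.1, 2.9], [0.0, 3.0])
```

Each membership row is a proper probability distribution, so points near a cluster boundary split their mass instead of being forced into one cluster.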
On the use of Bayesian Monte-Carlo in evaluation of nuclear data
NASA Astrophysics Data System (ADS)
De Saint Jean, Cyrille; Archier, Pascal; Privas, Edwin; Noguere, Gilles
2017-09-01
As model parameters, necessary ingredients of theoretical models, are not always predicted by theory, a formal mathematical framework associated with the evaluation work is needed to obtain the best set of parameters (resonance parameters, optical models, fission barriers, average widths, multigroup cross sections) by Bayesian statistical inference, comparing theory to experiment. The formal rule of this methodology is to estimate the posterior probability density function of a set of parameters by solving an equation of the following type: pdf(posterior) ∝ pdf(prior) × a likelihood function. A fitting procedure can thus be seen as an estimation of the posterior probability density of a set of parameters (referred to as x⃗) given prior information on these parameters and a likelihood which gives the probability density function of observing a data set given x⃗. To solve this problem, two major paths can be taken: add approximations and hypotheses and obtain an equation to be solved numerically (minimum of a cost function, or the Generalized Least Squares method, referred to as GLS), or use Monte Carlo sampling of all prior distributions and estimate the final posterior distribution. Monte Carlo methods are a natural solution for Bayesian inference problems. They avoid approximations (present in traditional adjustment procedures based on chi-square minimization) and offer alternatives in the choice of probability density distributions for priors and likelihoods. This paper proposes the use of what we call Bayesian Monte Carlo (referred to as BMC in the rest of the manuscript) over the whole energy range, from thermal through resonance to continuum, for all nuclear reaction models at these energies. Algorithms are presented based on Monte Carlo sampling and Markov chains.
The objectives of BMC are to propose a reference calculation for validating the GLS calculations and approximations, to test the effects of probability density distributions, and to provide a framework for finding the global minimum if several local minima exist. Applications to resolved resonance, unresolved resonance and continuum evaluation, as well as multigroup cross section data assimilation, will be presented.
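The relation pdf(posterior) ∝ pdf(prior) × likelihood suggests the simplest possible BMC-style sketch: draw parameters from the prior and weight them by the likelihood (self-normalized importance sampling). This omits the Markov-chain machinery the paper proposes, and all names and the Gaussian toy problem are illustrative:

```python
import math
import random

def prior_weighted_posterior_mean(log_likelihood, prior_sampler,
                                  n=20000, seed=0):
    """Self-normalized importance sampling from the prior:
    posterior expectation of x estimated as sum(w_i x_i) / sum(w_i)
    with w_i = likelihood(x_i) and x_i drawn from the prior."""
    rng = random.Random(seed)
    xs = [prior_sampler(rng) for _ in range(n)]
    logw = [log_likelihood(x) for x in xs]
    m = max(logw)                                  # stabilize exponentials
    w = [math.exp(lw - m) for lw in logw]
    return sum(x * wi for x, wi in zip(xs, w)) / sum(w)

# toy: N(0, 1) prior, unit-variance likelihood centered at 1
mean = prior_weighted_posterior_mean(lambda x: -0.5 * (x - 1.0) ** 2,
                                     lambda rng: rng.gauss(0.0, 1.0))
```

For this conjugate toy problem the weighted mean approaches the analytical posterior mean of 0.5; for broad priors or sharp likelihoods the weights degenerate, which is exactly why Markov-chain variants are attractive.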
NASA Astrophysics Data System (ADS)
Park, K.-R.; Kim, K.-h.; Kwak, S.; Svensson, J.; Lee, J.; Ghim, Y.-c.
2017-11-01
A feasibility study of direct spectral measurements of Thomson-scattered photons for fusion-grade plasmas is performed based on a forward model of the KSTAR Thomson scattering system. Expected spectra in the forward model are calculated using the Selden function, including the relativistic polarization correction. Noise in the signal is modeled with photon noise and Gaussian electrical noise. Electron temperature and density are inferred using Bayesian probability theory. Based on the bias error, full width at half maximum and entropy of the posterior distributions, spectral measurements are found to be feasible. Comparisons between spectrometer-based and polychromator-based Thomson scattering systems are performed with varying quantum efficiency and electrical noise levels.
Wijeysundera, Duminda N; Austin, Peter C; Hux, Janet E; Beattie, W Scott; Laupacis, Andreas
2009-01-01
Randomized trials generally use "frequentist" statistics based on P-values and 95% confidence intervals. Frequentist methods have limitations that might be overcome, in part, by Bayesian inference. To illustrate these advantages, we re-analyzed randomized trials published in four general medical journals during 2004. We used Medline to identify randomized superiority trials with two parallel arms, individual-level randomization and dichotomous or time-to-event primary outcomes. Studies with P<0.05 in favor of the intervention were deemed "positive"; otherwise, they were "negative." We used several prior distributions and exact conjugate analyses to calculate Bayesian posterior probabilities for clinically relevant effects. Of 88 included studies, 39 were positive using a frequentist analysis. Although the Bayesian posterior probabilities of any benefit (relative risk or hazard ratio<1) were high in positive studies, these probabilities were lower and variable for larger benefits. The positive studies had only moderate probabilities for exceeding the effects that were assumed for calculating the sample size. By comparison, there were moderate probabilities of any benefit in negative studies. Bayesian and frequentist analyses complement each other when interpreting the results of randomized trials. Future reports of randomized trials should include both.
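The exact conjugate analyses described above can be sketched for a two-arm trial with dichotomous outcomes: independent Beta posteriors per arm and a Monte Carlo estimate of the posterior probability of any benefit (intervention event rate below control). The Beta(1, 1) priors, function name and simulation size are illustrative:

```python
import random

def prob_benefit(events_t, n_t, events_c, n_c,
                 a=1.0, b=1.0, n_sim=5000, seed=0):
    """Posterior probability that the intervention reduces the event rate,
    with independent Beta(a, b) priors on each arm's rate; conjugate
    posteriors are Beta(a + events, b + n - events), sampled directly."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sim):
        pt = rng.betavariate(a + events_t, b + n_t - events_t)
        pc = rng.betavariate(a + events_c, b + n_c - events_c)
        hits += pt < pc  # benefit: lower event rate in the treated arm
    return hits / n_sim
```

With 10 events in 100 treated versus 30 in 100 controls, the posterior probability of any benefit is close to 1; swapping the arms drives it close to 0, mirroring the "positive"/"negative" contrast the abstract discusses.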
Assessment of accident severity in the construction industry using the Bayesian theorem.
Alizadeh, Seyed Shamseddin; Mortazavi, Seyed Bagher; Mehdi Sepehri, Mohammad
2015-01-01
Construction is a major source of employment in many countries. In construction, workers perform a great diversity of activities, each one with a specific associated risk. The aim of this paper is to identify workers who are at risk of accidents with severe consequences and classify these workers to determine appropriate control measures. We defined 48 groups of workers and used Bayes' theorem to estimate posterior probabilities for the severity of accidents at the level of individuals in the construction sector. First, the posterior probabilities of injuries based on four variables were provided. Then the probabilities of injury for the 48 groups of workers were determined. With regard to the marginal frequency of injury, slight injury (0.856), fatal injury (0.086) and severe injury (0.058) had the highest probabilities of occurrence. It was observed that workers with <1 year's work experience (0.168) had the highest probability of injury occurrence. The first group of workers, who were extensively exposed to the risk of severe and fatal accidents, involved workers ≥ 50 years old, married, with 1-5 years' work experience, who had no past accident experience. The findings provide a direction for more effective safety strategies and occupational accident prevention and emergency programmes.
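The Bayes'-theorem step, updating the marginal severity frequencies by how likely a worker group is under each severity class, can be sketched as follows; the marginal frequencies are taken from the abstract, while the likelihood values and the function name are hypothetical:

```python
def severity_posterior(likelihoods, priors):
    """Bayes' theorem over accident-severity classes:
    P(severity | group) proportional to P(group | severity) * P(severity)."""
    unnorm = {s: likelihoods[s] * priors[s] for s in priors}
    z = sum(unnorm.values())
    return {s: v / z for s, v in unnorm.items()}

# marginal severity frequencies reported in the abstract
priors = {"slight": 0.856, "fatal": 0.086, "severe": 0.058}
# hypothetical likelihoods of one worker group under each severity class
post = severity_posterior({"slight": 0.2, "fatal": 0.6, "severe": 0.5}, priors)
```

A group seen disproportionately often in fatal and severe accidents has its posterior shifted away from the dominant "slight" category, which is how high-risk groups are flagged.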
Odegård, J; Jensen, J; Madsen, P; Gianola, D; Klemetsdal, G; Heringstad, B
2003-11-01
The distribution of somatic cell scores could be regarded as a mixture of at least two components depending on a cow's udder health status. A heteroscedastic two-component Bayesian normal mixture model with random effects was developed and implemented via Gibbs sampling. The model was evaluated using datasets consisting of simulated somatic cell score records. Somatic cell score was simulated as a mixture representing two alternative udder health statuses ("healthy" or "diseased"). Animals were assigned randomly to the two components according to the probability of group membership (Pm). Random effects (additive genetic and permanent environment), when included, had identical distributions across mixture components. Posterior probabilities of putative mastitis were estimated for all observations, and model adequacy was evaluated using measures of sensitivity, specificity, and posterior probability of misclassification. Fitting different residual variances in the two mixture components caused some bias in estimation of parameters. When the components were difficult to disentangle, so were their residual variances, causing bias in estimation of Pm and of location parameters of the two underlying distributions. When all variance components were identical across mixture components, the mixture model analyses returned parameter estimates essentially without bias and with a high degree of precision. Including random effects in the model increased the probability of correct classification substantially. No sizable differences in probability of correct classification were found between models in which a single cow effect (ignoring relationships) was fitted and models where this effect was split into genetic and permanent environmental components, utilizing relationship information. When genetic and permanent environmental effects were fitted, the between-replicate variance of estimates of posterior means was smaller because the model accounted for random genetic drift.
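The posterior probability of putative mastitis for a single record follows directly from the two-component normal mixture; a minimal equal-variance sketch (the component means, mixing probability and function name are illustrative, and the paper's model additionally carries random effects and, in one variant, heteroscedastic residuals):

```python
import math

def mastitis_posterior(y, p_m, mu_healthy, mu_diseased, sd=1.0):
    """Posterior probability that a somatic cell score record y comes from
    the 'diseased' component of a two-component normal mixture with
    mixing probability p_m and common standard deviation sd."""
    def normpdf(x, mu, s):
        return math.exp(-0.5 * ((x - mu) / s) ** 2) / (s * math.sqrt(2 * math.pi))
    d = p_m * normpdf(y, mu_diseased, sd)        # diseased component
    h = (1.0 - p_m) * normpdf(y, mu_healthy, sd) # healthy component
    return d / (d + h)
```

With illustrative means of 2 (healthy) and 5 (diseased) and p_m = 0.3, a high score is classified as diseased with near certainty and a low score as healthy, which is the classification whose sensitivity and specificity the abstract evaluates.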
Musella, Vincenzo; Rinaldi, Laura; Lagazio, Corrado; Cringoli, Giuseppe; Biggeri, Annibale; Catelan, Dolores
2014-09-15
Model-based geostatistics and Bayesian approaches are appropriate in the context of veterinary epidemiology when point data have been collected by valid study designs. The aim is to predict a continuous infection risk surface. Little work has been done on the use of predictive infection probabilities at the farm unit level. In this paper we show how to use the predictive infection probability and related uncertainty from a Bayesian kriging model to draw informative samples from the 8794 geo-referenced sheep farms of the Campania region (southern Italy). Parasitological data come from a first cross-sectional survey carried out to study the spatial distribution of selected helminths in sheep farms. A grid sampling was performed to select the farms for coprological examination. Faecal samples were collected from 121 sheep farms and the presence of 21 different helminths was investigated using the FLOTAC technique. The 21 responses are very different in terms of geographical distribution and prevalence of infection. The observed prevalence ranges from 0.83% to 96.69%. The distributions of the posterior predictive probabilities for all 21 parasites are very heterogeneous. We show how the results of the Bayesian kriging model can be used to plan a second-wave survey. Several alternatives can be chosen depending on the purposes of the second survey: weight by posterior predictive probabilities, by their uncertainty, or by a combination of both. The proposed Bayesian kriging model is simple, and the proposed sampling strategy represents a useful tool to address targeted infection control treatments and surveillance campaigns. It is easily extendable to other fields of research. Copyright © 2014 Elsevier B.V. All rights reserved.
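The proposed second-wave strategy, weighting farms by predictive infection probability, by its uncertainty, or by both, can be sketched as weighted sampling without replacement; the linear combination with weight `alpha` and all names are illustrative assumptions, not the paper's exact scheme:

```python
import random

def weighted_second_wave(farm_ids, pred_prob, uncertainty, k,
                         alpha=0.5, seed=0):
    """Draw k farms with inclusion weights combining predictive infection
    probability and its uncertainty (alpha=1: probability only,
    alpha=0: uncertainty only). Simple sequential weighted sampling
    without replacement."""
    weights = [alpha * p + (1.0 - alpha) * u
               for p, u in zip(pred_prob, uncertainty)]
    rng = random.Random(seed)
    ids, chosen = list(farm_ids), set()
    for _ in range(min(k, len(ids))):
        i = rng.choices(range(len(ids)), weights=weights)[0]
        chosen.add(ids.pop(i))
        weights.pop(i)
    return chosen

ids = ["f1", "f2", "f3", "f4", "f5"]
sample = weighted_second_wave(ids, [0.9, 0.1, 0.8, 0.2, 0.5],
                              [0.1, 0.9, 0.2, 0.8, 0.5], k=3)
```

Varying `alpha` reproduces the three alternatives the abstract lists: exploit high predicted risk, explore high uncertainty, or trade off between the two.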
Abdul-Latiff, Muhammad Abu Bakar; Ruslin, Farhani; Fui, Vun Vui; Abu, Mohd-Hashim; Rovie-Ryan, Jeffrine Japning; Abdul-Patah, Pazil; Lakim, Maklarin; Roos, Christian; Yaakop, Salmah; Md-Zain, Badrul Munir
2014-01-01
Phylogenetic relationships among Malaysia’s long-tailed macaques have yet to be established, despite abundant genetic studies of the species worldwide. The aims of this study are to examine the phylogenetic relationships of Macaca fascicularis in Malaysia and to test its classification as a morphological subspecies. A total of 25 genetic samples of M. fascicularis yielding 383 bp of Cytochrome b (Cyt b) sequences were used in phylogenetic analysis along with one sample each of M. nemestrina and M. arctoides used as outgroups. Sequence character analysis reveals that Cyt b locus is a highly conserved region with only 23% parsimony informative character detected among ingroups. Further analysis indicates a clear separation between populations originating from different regions; the Malay Peninsula versus Borneo Insular, the East Coast versus West Coast of the Malay Peninsula, and the island versus mainland Malay Peninsula populations. Phylogenetic trees (NJ, MP and Bayesian) portray a consistent clustering paradigm as Borneo’s population was distinguished from Peninsula’s population (99% and 100% bootstrap value in NJ and MP respectively and 1.00 posterior probability in Bayesian trees). The East coast population was separated from other Peninsula populations (64% in NJ, 66% in MP and 0.53 posterior probability in Bayesian). West coast populations were divided into 2 clades: the North-South (47%/54% in NJ, 26/26% in MP and 1.00/0.80 posterior probability in Bayesian) and Island-Mainland (93% in NJ, 90% in MP and 1.00 posterior probability in Bayesian). The results confirm the previous morphological assignment of 2 subspecies, M. f. fascicularis and M. f. argentimembris, in the Malay Peninsula. These populations should be treated as separate genetic entities in order to conserve the genetic diversity of Malaysia’s M. fascicularis. These findings are crucial in aiding the conservation management and translocation process of M. fascicularis populations in Malaysia. PMID:24899832
A Bayesian Method for Evaluating and Discovering Disease Loci Associations
Jiang, Xia; Barmada, M. Michael; Cooper, Gregory F.; Becich, Michael J.
2011-01-01
Background A genome-wide association study (GWAS) typically involves examining representative SNPs in individuals from some population. A GWAS data set can contain a million SNPs and may soon contain billions. Researchers investigate the association of each SNP individually with a disease, and it is becoming increasingly commonplace to also analyze multi-SNP associations. Techniques for handling so many hypotheses include the Bonferroni correction and recently developed Bayesian methods. These methods can encounter problems. Most importantly, they are not applicable to a complex multi-locus hypothesis which has several competing hypotheses rather than only a null hypothesis. A method that computes the posterior probability of complex hypotheses is a pressing need. Methodology/Findings We introduce the Bayesian network posterior probability (BNPP) method, which addresses these difficulties. The method represents the relationship between a disease and SNPs using a directed acyclic graph (DAG) model, and computes the likelihood of such models using a Bayesian network scoring criterion. The posterior probability of a hypothesis is computed based on the likelihoods of all competing hypotheses. The BNPP can not only be used to evaluate a hypothesis that has previously been discovered or suspected, but also to discover new disease loci associations. The results of experiments using simulated and real data sets are presented. Our results concerning simulated data sets indicate that the BNPP exhibits both better evaluation and discovery performance than does a p-value based method. For the real data sets, previous findings in the literature are confirmed and additional associations are identified. Conclusions/Significance We conclude that the BNPP resolves a pressing problem by providing a way to compute the posterior probability of complex multi-locus hypotheses. A researcher can use the BNPP to determine the expected utility of investigating a hypothesis further. Furthermore, we conclude that the BNPP is a promising method for discovering disease loci associations. PMID:21853025
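The normalization step the BNPP abstract describes, computing each hypothesis's posterior probability from the likelihoods of all competing hypotheses, can be sketched in a few lines (a minimal illustration only; the hypothesis names, likelihood values and uniform prior below are hypothetical, not taken from the paper):

```python
from fractions import Fraction

# Hypothetical likelihoods of three competing hypotheses about one locus
likelihoods = {"no_assoc": Fraction(1, 100),
               "snp1": Fraction(4, 100),
               "snp1_snp2": Fraction(5, 100)}
priors = {h: Fraction(1, 3) for h in likelihoods}   # uniform prior

# Posterior: prior-weighted likelihood, normalized over all hypotheses
z = sum(likelihoods[h] * priors[h] for h in likelihoods)
posterior = {h: likelihoods[h] * priors[h] / z for h in likelihoods}
```

With exact fractions the posteriors here come out to 1/10, 2/5 and 1/2, and they necessarily sum to one, which is what makes the competing-hypothesis formulation more informative than a single null-hypothesis p-value.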
Supernova Cosmology Inference with Probabilistic Photometric Redshifts (SCIPPR)
NASA Astrophysics Data System (ADS)
Peters, Christina; Malz, Alex; Hlozek, Renée
2018-01-01
The Bayesian Estimation Applied to Multiple Species (BEAMS) framework employs probabilistic supernova type classifications to do photometric SN cosmology. This work extends BEAMS to replace high-confidence spectroscopic redshifts with photometric redshift probability density functions, a capability that will be essential in the era of the Large Synoptic Survey Telescope and other next-generation photometric surveys, where it will not be possible to perform spectroscopic follow-up on every SN. We present the Supernova Cosmology Inference with Probabilistic Photometric Redshifts (SCIPPR) Bayesian hierarchical model for constraining the cosmological parameters from photometric lightcurves and host galaxy photometry, which includes selection effects and is extensible to uncertainty in the redshift-dependent supernova type proportions. We create a pair of realistic mock catalogs of joint posteriors over supernova type, redshift, and distance modulus informed by photometric supernova lightcurves, and over redshift from simulated host galaxy photometry. We perform inference under our model to obtain a joint posterior probability distribution over the cosmological parameters and compare our results with other methods, namely: using a spectroscopic subset, using a subset of high-probability photometrically classified supernovae, and reducing each photometric redshift probability to a single measurement and error bar.
ERIC Educational Resources Information Center
Sueiro, Manuel J.; Abad, Francisco J.
2011-01-01
The distance between nonparametric and parametric item characteristic curves has been proposed as an index of goodness of fit in item response theory in the form of a root integrated squared error index. This article proposes to use the posterior distribution of the latent trait as the nonparametric model and compares the performance of an index…
Nested Sampling for Bayesian Model Comparison in the Context of Salmonella Disease Dynamics
Dybowski, Richard; McKinley, Trevelyan J.; Mastroeni, Pietro; Restif, Olivier
2013-01-01
Understanding the mechanisms underlying the observed dynamics of complex biological systems requires the statistical assessment and comparison of multiple alternative models. Although this has traditionally been done using maximum likelihood-based methods such as Akaike's Information Criterion (AIC), Bayesian methods have gained in popularity because they provide more informative output in the form of posterior probability distributions. However, comparison between multiple models in a Bayesian framework is made difficult by the computational cost of numerical integration over large parameter spaces. A new, efficient method for the computation of posterior probabilities has recently been proposed and applied to complex problems from the physical sciences. Here we demonstrate how nested sampling can be used for inference and model comparison in biological sciences. We present a reanalysis of data from experimental infection of mice with Salmonella enterica showing the distribution of bacteria in liver cells. In addition to confirming the main finding of the original analysis, which relied on AIC, our approach provides: (a) integration across the parameter space, (b) estimation of the posterior parameter distributions (with visualisations of parameter correlations), and (c) estimation of the posterior predictive distributions for goodness-of-fit assessments of the models. The goodness-of-fit results suggest that alternative mechanistic models and a relaxation of the quasi-stationary assumption should be considered. PMID:24376528
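The core bookkeeping of nested sampling, repeatedly discarding the worst live point, crediting it with the prior volume shed, and replacing it with a prior draw above the hard likelihood threshold, can be illustrated on a toy one-dimensional problem (a minimal sketch under stated assumptions, not the implementation used in the paper; the prior, likelihood and tuning constants are hypothetical):

```python
import math
import random

rng = random.Random(0)

# Toy problem: uniform prior on [0, 1], narrow Gaussian likelihood at 0.5,
# so the true evidence integral is close to 1.
SIGMA = 0.05

def loglike(theta):
    return (-0.5 * ((theta - 0.5) / SIGMA) ** 2
            - math.log(SIGMA * math.sqrt(2 * math.pi)))

def logaddexp(a, b):
    if a == -math.inf:
        return b
    m = max(a, b)
    return m + math.log1p(math.exp(min(a, b) - m))

N, iters = 100, 700                       # live points, iterations
live = [rng.random() for _ in range(N)]
logZ, x_prev = -math.inf, 1.0
for i in range(1, iters + 1):
    worst = min(live, key=loglike)
    x = math.exp(-i / N)                  # expected remaining prior volume
    logZ = logaddexp(logZ, loglike(worst) + math.log(x_prev - x))
    x_prev = x
    # rejection-sample the prior subject to the hard likelihood constraint
    while True:
        cand = rng.random()
        if loglike(cand) > loglike(worst):
            live[live.index(worst)] = cand
            break

# residual contribution of the final live points is neglected for brevity
evidence = math.exp(logZ)
```

The estimated evidence should land near the true value of 1; real implementations replace the naive rejection step with smarter constrained sampling and add a proper termination criterion.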
Bayesian inference of nonlinear unsteady aerodynamics from aeroelastic limit cycle oscillations
NASA Astrophysics Data System (ADS)
Sandhu, Rimple; Poirel, Dominique; Pettit, Chris; Khalil, Mohammad; Sarkar, Abhijit
2016-07-01
A Bayesian model selection and parameter estimation algorithm is applied to investigate the influence of nonlinear and unsteady aerodynamic loads on the limit cycle oscillation (LCO) of a pitching airfoil in the transitional Reynolds number regime. At small angles of attack, laminar boundary layer trailing edge separation causes negative aerodynamic damping leading to the LCO. The fluid-structure interaction of the rigid, but elastically mounted, airfoil and nonlinear unsteady aerodynamics is represented by two coupled nonlinear stochastic ordinary differential equations containing uncertain parameters and model approximation errors. Several plausible aerodynamic models with increasing complexity are proposed to describe the aeroelastic system leading to LCO. The likelihood in the posterior parameter probability density function (pdf) is available semi-analytically using the extended Kalman filter for the state estimation of the coupled nonlinear structural and unsteady aerodynamic model. The posterior parameter pdf is sampled using a parallel and adaptive Markov Chain Monte Carlo (MCMC) algorithm. The posterior probability of each model is estimated using the Chib-Jeliazkov method that directly uses the posterior MCMC samples for evidence (marginal likelihood) computation. The Bayesian algorithm is validated through a numerical study and then applied to model the nonlinear unsteady aerodynamic loads using wind-tunnel test data at various Reynolds numbers.
Elastic K-means using posterior probability
Zheng, Aihua; Jiang, Bo; Li, Yan; Zhang, Xuehan; Ding, Chris
2017-01-01
The widely used K-means clustering is a hard clustering algorithm. Here we propose an Elastic K-means clustering model (EKM) based on posterior probabilities, with a soft-assignment capability in which each data point can belong to multiple clusters fractionally, and we show the benefits of the proposed model. Furthermore, in many applications, pairwise relations (graph information) are available in addition to vector attributes. We therefore integrate EKM with Normalized Cut graph clustering into a single clustering formulation. Finally, we provide several matrix inequalities that are useful for matrix formulations of learning models; based on these results, we prove the correctness and convergence of the EKM algorithms. Experimental results on six benchmark datasets demonstrate the effectiveness of the proposed EKM and its integrated model. PMID:29240756
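The abstract does not give the EKM update equations, but the underlying idea of soft assignment via posterior probabilities can be illustrated with a standard responsibility computation, a softmax over negative squared distances (a minimal sketch, not the authors' EKM; the points, centers and inverse temperature are hypothetical):

```python
import numpy as np

def soft_assign(X, centers, beta=1.0):
    """Posterior probability that each point belongs to each cluster,
    via a softmax over negative squared distances (sharpness beta)."""
    # squared Euclidean distances, shape (n_points, n_clusters)
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    logits = -beta * d2
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)

X = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
centers = np.array([[0.0, 0.0], [5.0, 5.0]])
P = soft_assign(X, centers)
```

Each row of `P` sums to one: points near a center get probability close to 1 for that cluster, while intermediate points are split fractionally, which is exactly the elastic behavior hard K-means lacks.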
Stan: A Probabilistic Programming Language
Carpenter, Bob; Gelman, Andrew; Hoffman, Matthew D.; ...
2017-01-01
Stan is a probabilistic programming language for specifying statistical models. A Stan program imperatively defines a log probability function over parameters conditioned on specified data and constants. As of version 2.14.0, Stan provides full Bayesian inference for continuous-variable models through Markov chain Monte Carlo methods such as the No-U-Turn sampler, an adaptive form of Hamiltonian Monte Carlo sampling. Penalized maximum likelihood estimates are calculated using optimization methods such as the limited memory Broyden-Fletcher-Goldfarb-Shanno algorithm. Stan is also a platform for computing log densities and their gradients and Hessians, which can be used in alternative algorithms such as variational Bayes, expectation propagation, and marginal inference using approximate integration. To this end, Stan is set up so that the densities, gradients, and Hessians, along with intermediate quantities of the algorithm such as acceptance probabilities, are easily accessible. Stan can also be called from the command line using the cmdstan package, through R using the rstan package, and through Python using the pystan package. All three interfaces support sampling and optimization-based inference with diagnostics and posterior analysis. rstan and pystan also provide access to log probabilities, gradients, Hessians, parameter transforms, and specialized plotting.
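Stan's central abstraction, a log probability function over parameters conditioned on data, can be mimicked in plain Python for a conjugate model where the answer is known in closed form (a minimal sketch of the concept only; real Stan programs are written in Stan's own modeling language, and the data and prior below are hypothetical):

```python
import math

# Hypothetical data: observations assumed ~ N(mu, 1), prior mu ~ N(0, 10^2)
y = [1.2, 0.8, 1.5, 0.9, 1.1]

def log_prob(mu):
    """Log posterior density up to a constant, in the spirit of a Stan
    program's log probability function: log likelihood + log prior."""
    loglik = sum(-0.5 * (yi - mu) ** 2 for yi in y)
    logprior = -0.5 * mu ** 2 / 100.0
    return loglik + logprior

# Penalized maximum likelihood (MAP) by simple gradient ascent
mu, lr = 0.0, 0.05
for _ in range(500):
    grad = sum(yi - mu for yi in y) - mu / 100.0
    mu += lr * grad

# Conjugate closed form for comparison: posterior mode = sum(y) / (n + 1/100)
mu_exact = sum(y) / (len(y) + 0.01)
```

The gradient ascent converges to the analytic posterior mode, illustrating the optimization-based inference path that Stan exposes alongside MCMC sampling.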
NASA Astrophysics Data System (ADS)
Zhang, D.; Liao, Q.
2016-12-01
Bayesian inference provides a convenient framework for solving statistical inverse problems. In this method, the parameters to be identified are treated as random variables. The prior knowledge, the system nonlinearity, and the measurement errors can be directly incorporated into the posterior probability density function (PDF) of the parameters. The Markov chain Monte Carlo (MCMC) method is a powerful tool for generating samples from the posterior PDF. However, since MCMC usually requires thousands or even millions of forward simulations, it can be a computationally intensive endeavor, particularly when faced with large-scale flow and transport models. To address this issue, we construct a surrogate system for the model responses in the form of polynomials by the stochastic collocation method. In addition, we employ interpolation based on nested sparse grids and take into account the differing importance of the parameters, which is essential when the stochastic space is high-dimensional. Furthermore, in cases of low regularity, such as a discontinuous or unsmooth relation between the input parameters and the output responses, we introduce an additional transform process to improve the accuracy of the surrogate model. Once the surrogate system is built, we may evaluate the likelihood with very little computational cost. We analyzed the convergence rate of the forward solution and the surrogate posterior by the Kullback-Leibler divergence, which quantifies the difference between probability distributions; fast convergence of the forward solution implies fast convergence of the surrogate posterior to the true posterior. We also tested the proposed algorithm on water-flooding two-phase flow reservoir examples. The posterior PDF calculated from a very long chain with direct forward simulation is assumed to be accurate. The posterior PDF calculated using the surrogate model is in reasonable agreement with this reference, revealing a great improvement in terms of computational efficiency.
Estimation from incomplete multinomial data. Ph.D. Thesis - Harvard Univ.
NASA Technical Reports Server (NTRS)
Credeur, K. R.
1978-01-01
The vector of multinomial cell probabilities was estimated from incomplete data, incomplete in that it contained partially classified observations. Each such partially classified observation was observed to fall in one of two or more selected categories but was not classified further into a single category. The data were assumed to be incomplete at random. The estimation criterion was minimization of risk under quadratic loss. The estimators were the classical maximum likelihood estimate, the Bayesian posterior mode, and the posterior mean. An approximation was developed for the posterior mean. The Dirichlet distribution, the conjugate prior for the multinomial distribution, was assumed as the prior.
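The conjugate Dirichlet-multinomial update underlying the thesis is easy to state concretely for fully classified counts (a minimal sketch; the counts and uniform prior below are hypothetical, and this omits the partially classified observations that the thesis actually addresses):

```python
from fractions import Fraction

def dirichlet_posterior_mean(counts, alpha):
    """Posterior mean of multinomial cell probabilities under a
    Dirichlet(alpha) prior: (n_k + alpha_k) / (n + sum(alpha))."""
    total = sum(counts) + sum(alpha)
    return [Fraction(n + a, total) for n, a in zip(counts, alpha)]

# Hypothetical fully classified counts with a uniform Dirichlet(1,1,1) prior
post = dirichlet_posterior_mean([3, 1, 1], [1, 1, 1])
```

For counts (3, 1, 1) and a uniform prior this gives the posterior mean (1/2, 1/4, 1/4); the partially classified case requires iterating such updates (e.g. via EM) rather than a single closed form.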
Little Bayesians or Little Einsteins? Probability and Explanatory Virtue in Children's Inferences
ERIC Educational Resources Information Center
Johnston, Angie M.; Johnson, Samuel G. B.; Koven, Marissa L.; Keil, Frank C.
2017-01-01
Like scientists, children seek ways to explain causal systems in the world. But are children scientists in the strict Bayesian tradition of maximizing posterior probability? Or do they attend to other explanatory considerations, as laypeople and scientists--such as Einstein--do? Four experiments support the latter possibility. In particular, we…
Creation of the BMA ensemble for SST using a parallel processing technique
NASA Astrophysics Data System (ADS)
Kim, Kwangjin; Lee, Yang Won
2013-10-01
Although they serve the same purpose, satellite products differ in value because of their inescapable uncertainties. Moreover, satellite products have accumulated over long periods, and their variety and volume are enormous, so efforts to reduce uncertainty and to handle very large data volumes are necessary. In this paper, we create an ensemble Sea Surface Temperature (SST) product using MODIS Aqua, MODIS Terra and COMS (Communication, Ocean and Meteorological Satellite). We use Bayesian Model Averaging (BMA) as the ensemble method. The principle of BMA is to synthesize the conditional probability density functions (PDFs) of the members, using posterior probabilities as weights; the posterior probabilities are estimated with the EM algorithm, and the BMA PDF is obtained as the weighted average. The ensemble SST showed the lowest RMSE and MAE, which demonstrates the applicability of BMA to satellite data ensembles. As future work, parallel processing techniques based on the Hadoop framework will be adopted for more efficient computation of very large satellite datasets.
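The weighted-average step of BMA is simple to state once the posterior model weights are in hand (a minimal sketch; in the paper the weights come from EM, whereas the SST values, weights and member spread below are hypothetical):

```python
import numpy as np

# Hypothetical SST estimates (deg C) at one location from three products
members = np.array([18.2, 18.6, 17.9])   # e.g. Aqua, Terra, COMS
weights = np.array([0.5, 0.3, 0.2])      # posterior model probabilities
sigma = 0.4                              # assumed common member spread

# BMA predictive mean: posterior-weighted average of member predictions
bma_mean = float(weights @ members)

def bma_pdf(x):
    """BMA predictive density: mixture of Gaussian member PDFs
    weighted by the posterior model probabilities."""
    comps = (np.exp(-0.5 * ((x - members) / sigma) ** 2)
             / (sigma * np.sqrt(2 * np.pi)))
    return float(weights @ comps)
```

The predictive density is a mixture, so it is wider than any single member's PDF, which is how BMA accounts for between-product uncertainty rather than just averaging it away.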
Comparison of sampling techniques for Bayesian parameter estimation
NASA Astrophysics Data System (ADS)
Allison, Rupert; Dunkley, Joanna
2014-02-01
The posterior probability distribution for a set of model parameters encodes all that the data have to tell us in the context of a given model; it is the fundamental quantity for Bayesian parameter estimation. In order to infer the posterior probability distribution we have to decide how to explore parameter space. Here we compare three prescriptions for how parameter space is navigated, discussing their relative merits. We consider Metropolis-Hasting sampling, nested sampling and affine-invariant ensemble Markov chain Monte Carlo (MCMC) sampling. We focus on their performance on toy-model Gaussian likelihoods and on a real-world cosmological data set. We outline the sampling algorithms themselves and elaborate on performance diagnostics such as convergence time, scope for parallelization, dimensional scaling, requisite tunings and suitability for non-Gaussian distributions. We find that nested sampling delivers high-fidelity estimates for posterior statistics at low computational cost, and should be adopted in favour of Metropolis-Hastings in many cases. Affine-invariant MCMC is competitive when computing clusters can be utilized for massive parallelization. Affine-invariant MCMC and existing extensions to nested sampling naturally probe multimodal and curving distributions.
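Of the three samplers compared, Metropolis-Hastings is the simplest to write down; a minimal random-walk version for a toy one-dimensional Gaussian target can be sketched as follows (an illustration of the algorithm class only, not the authors' benchmark code; the step size and target are hypothetical):

```python
import math
import random

def metropolis_hastings(log_target, x0, n_steps, step=1.0, seed=0):
    """Random-walk Metropolis-Hastings sampler for a 1-D target density."""
    rng = random.Random(seed)
    x, lp = x0, log_target(x0)
    chain = []
    for _ in range(n_steps):
        prop = x + rng.gauss(0.0, step)
        lp_prop = log_target(prop)
        # accept with probability min(1, target(prop) / target(x))
        if lp_prop >= lp or rng.random() < math.exp(lp_prop - lp):
            x, lp = prop, lp_prop
        chain.append(x)
    return chain

# Toy target: standard normal, log density up to an additive constant
chain = metropolis_hastings(lambda t: -0.5 * t * t, 0.0, 20000)
```

The chain's sample mean and variance approach the target's 0 and 1; the tunings the abstract mentions (here, the proposal step size) directly control the acceptance rate and convergence time.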
Efficient Posterior Probability Mapping Using Savage-Dickey Ratios
Penny, William D.; Ridgway, Gerard R.
2013-01-01
Statistical Parametric Mapping (SPM) is the dominant paradigm for mass-univariate analysis of neuroimaging data. More recently, a Bayesian approach termed Posterior Probability Mapping (PPM) has been proposed as an alternative. PPM offers two advantages: (i) inferences can be made about effect size thus lending a precise physiological meaning to activated regions, (ii) regions can be declared inactive. This latter facility is most parsimoniously provided by PPMs based on Bayesian model comparisons. To date these comparisons have been implemented by an Independent Model Optimization (IMO) procedure which separately fits null and alternative models. This paper proposes a more computationally efficient procedure based on Savage-Dickey approximations to the Bayes factor, and Taylor-series approximations to the voxel-wise posterior covariance matrices. Simulations show the accuracy of this Savage-Dickey-Taylor (SDT) method to be comparable to that of IMO. Results on fMRI data show excellent agreement between SDT and IMO for second-level models, and reasonable agreement for first-level models. This Savage-Dickey test is a Bayesian analogue of the classical SPM-F and allows users to implement model comparison in a truly interactive manner. PMID:23533640
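The Savage-Dickey ratio itself is easy to demonstrate in a conjugate toy model, where the Bayes factor for a point null is the posterior density at the null value divided by the prior density there (a minimal sketch, not the paper's voxel-wise fMRI implementation; the data, variances and prior are hypothetical):

```python
import math

def norm_pdf(x, mean, var):
    return math.exp(-0.5 * (x - mean) ** 2 / var) / math.sqrt(2 * math.pi * var)

def savage_dickey_bf01(y, sigma2=1.0, tau2=10.0):
    """Bayes factor for H0: theta = 0 vs H1: theta free, via the
    Savage-Dickey ratio posterior(0) / prior(0), in a conjugate
    normal model with known noise variance sigma2 and prior N(0, tau2)."""
    n = len(y)
    post_var = 1.0 / (n / sigma2 + 1.0 / tau2)
    post_mean = post_var * sum(y) / sigma2
    return norm_pdf(0.0, post_mean, post_var) / norm_pdf(0.0, 0.0, tau2)

bf_null_like = savage_dickey_bf01([0.1, -0.2, 0.05])   # data near zero
bf_far = savage_dickey_bf01([5.0, 5.2, 4.9])           # data far from zero
```

Data consistent with the null concentrate the posterior at zero and push the ratio above 1 (evidence for inactivity, the "regions can be declared inactive" facility), while data far from zero drive it toward 0.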
NASA Astrophysics Data System (ADS)
Zeng, X.
2015-12-01
A large number of model executions are required to obtain alternative conceptual models' predictions and their posterior probabilities in Bayesian model averaging (BMA). The posterior model probability is estimated from a model's marginal likelihood and prior probability. The heavy computational burden hinders the implementation of BMA prediction, especially for elaborate marginal likelihood estimators. To overcome this burden, an adaptive sparse grid (SG) stochastic collocation method is used to build surrogates for alternative conceptual models in a numerical experiment with a synthetic groundwater model. Because BMA predictions depend on the posterior model weights (or marginal likelihoods), this study also evaluated four marginal likelihood estimators: the arithmetic mean estimator (AME), harmonic mean estimator (HME), stabilized harmonic mean estimator (SHME), and thermodynamic integration estimator (TIE). The results demonstrate that TIE is accurate in estimating conceptual models' marginal likelihoods, and BMA-TIE has better predictive performance than the other BMA predictions. TIE is also highly stable: repeated estimates of a conceptual model's marginal likelihood by TIE have significantly less variability than those from the other estimators. In addition, the SG surrogates efficiently facilitate BMA predictions, especially for BMA-TIE. The number of model executions needed for building surrogates is 4.13%, 6.89%, 3.44%, and 0.43% of the model executions required by BMA-AME, BMA-HME, BMA-SHME, and BMA-TIE, respectively.
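Two of the estimators compared above, the arithmetic mean estimator (likelihood averaged over prior draws) and the harmonic mean estimator (harmonic mean of the likelihood over posterior draws), can be checked on a conjugate toy model where the marginal likelihood is known analytically (a minimal sketch, not the paper's groundwater model; the data value and priors are hypothetical):

```python
import math
import random

rng = random.Random(1)

# Toy model: theta ~ N(0, 1) prior; y | theta ~ N(theta, 1); observed y = 1.
# Analytic marginal likelihood: y ~ N(0, 2).
y = 1.0
true_ml = math.exp(-0.25 * y * y) / math.sqrt(4 * math.pi)

def lik(theta):
    return math.exp(-0.5 * (y - theta) ** 2) / math.sqrt(2 * math.pi)

# Arithmetic mean estimator: average likelihood over prior draws
prior_draws = [rng.gauss(0.0, 1.0) for _ in range(200000)]
ame = sum(lik(t) for t in prior_draws) / len(prior_draws)

# Harmonic mean estimator: harmonic mean of likelihood over posterior draws
# (for this conjugate toy model the posterior is N(y/2, 1/2), so we can
# draw from it directly instead of running MCMC)
post_draws = [rng.gauss(y / 2, math.sqrt(0.5)) for _ in range(200000)]
hme = len(post_draws) / sum(1.0 / lik(t) for t in post_draws)
```

Both estimators converge to the analytic value here, but the harmonic mean estimator is known to have heavy-tailed (often infinite-variance) error in harder problems, which motivates the stabilized and thermodynamic-integration alternatives evaluated in the paper.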
Chmielewska, Daria; Stania, Magdalena; Słomka, Kajetan; Błaszczak, Edward; Taradaj, Jakub; Dolibog, Patrycja; Juras, Grzegorz
2017-11-01
This case-control study was designed to compare static postural stability between women with stress urinary incontinence and continent women and it was hypothesized that women with incontinence aged around 50 years also have balance disorders. Eighteen women with incontinence and twelve women without incontinence aged 50-55 years participated in two 60-s trials of each of four different testing conditions: eyes open/full bladder, eyes open/empty bladder, eyes closed/full bladder, eyes closed/empty bladder. The center of foot pressure (COP): sway range, root mean square, velocity (in the antero-posterior and medio-lateral directions), and COP area were recorded. The stabilograms were decomposed into rambling and trembling components. The groups of women with and without incontinence differed during the full bladder condition in antero-posterior COP sway range, COP area, and rambling trajectory (range in the antero-posterior and medio-lateral directions, root mean square in the antero-posterior and medio-lateral directions and velocity in the antero-posterior direction). The women with incontinence had more difficulty controlling their postural balance than continent women while standing with a full bladder. Therefore, developing therapeutic management focused on strengthening the women's core muscles and improving their postural balance seems advisable. © 2017 Wiley Periodicals, Inc.
Bayesian analysis of the astrobiological implications of life’s early emergence on Earth
Spiegel, David S.; Turner, Edwin L.
2012-01-01
Life arose on Earth sometime in the first few hundred million years after the young planet had cooled to the point that it could support water-based organisms on its surface. The early emergence of life on Earth has been taken as evidence that the probability of abiogenesis is high, if starting from young Earth-like conditions. We revisit this argument quantitatively in a Bayesian statistical framework. By constructing a simple model of the probability of abiogenesis, we calculate a Bayesian estimate of its posterior probability, given the data that life emerged fairly early in Earth’s history and that, billions of years later, curious creatures noted this fact and considered its implications. We find that, given only this very limited empirical information, the choice of Bayesian prior for the abiogenesis probability parameter has a dominant influence on the computed posterior probability. Although terrestrial life's early emergence provides evidence that life might be abundant in the universe if early-Earth-like conditions are common, the evidence is inconclusive and indeed is consistent with an arbitrarily low intrinsic probability of abiogenesis for plausible uninformative priors. Finding a single case of life arising independently of our lineage (on Earth, elsewhere in the solar system, or on an extrasolar planet) would provide much stronger evidence that abiogenesis is not extremely rare in the universe. PMID:22198766
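The paper's central point, that the prior on the abiogenesis probability parameter dominates the posterior, can be reproduced numerically in a toy version of the argument (a sketch under stated assumptions, not the authors' model: an exponential waiting-time likelihood on a hypothetical grid of abiogenesis rates, with two illustrative priors):

```python
import math

# Log-spaced grid over the abiogenesis rate lambda (events per Gyr)
lams = [10 ** (k / 10) for k in range(-80, 21)]   # 1e-8 ... 1e2

def lik(lam, t=0.5):
    """Probability that life has emerged by time t under an
    exponential waiting-time model with rate lam."""
    return 1.0 - math.exp(-lam * t)

def posterior_mass_high(prior_weights):
    """Posterior probability that lambda > 1 under the given prior weights."""
    w = [p * lik(l) for p, l in zip(prior_weights, lams)]
    z = sum(w)
    return sum(wi for wi, l in zip(w, lams) if l > 1.0) / z

# Uniform-in-lambda prior (weight each log-spaced point by lambda)
p_high_flat = posterior_mass_high([l for l in lams])
# Uniform-in-log(lambda) prior (equal weights across decades)
p_high_log = posterior_mass_high([1.0 for _ in lams])
```

The same "life emerged early" datum yields different posterior confidence that abiogenesis is fast depending purely on the prior, with the flat-in-rate prior essentially forcing a high rate, which is the sensitivity the abstract describes.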
Meuwissen, Theo H E; Indahl, Ulf G; Ødegård, Jørgen
2017-12-27
Non-linear Bayesian genomic prediction models such as BayesA/B/C/R involve iteration and mostly Markov chain Monte Carlo (MCMC) algorithms, which are computationally expensive, especially when whole-genome sequence (WGS) data are analyzed. Singular value decomposition (SVD) of the genotype matrix can facilitate genomic prediction in large datasets, and can be used to estimate marker effects and their prediction error variances (PEV) in a computationally efficient manner. Here, we developed, implemented, and evaluated a direct, non-iterative method for the estimation of marker effects for the BayesC genomic prediction model. The BayesC model assumes a priori that markers have normally distributed effects with probability π and no effect with probability (1 - π). Marker effects and their PEV are estimated by using SVD, and the posterior probability of each marker having a non-zero effect is calculated. These posterior probabilities are used to obtain marker-specific effect variances, which are subsequently used to approximate BayesC estimates of marker effects in a linear model. A computer simulation study was conducted to compare alternative genomic prediction methods, where a single reference generation was used to estimate marker effects, which were subsequently used for 10 generations of forward prediction, for which accuracies were evaluated. SVD-based posterior probabilities of markers having non-zero effects were generally lower than MCMC-based posterior probabilities, but for some regions the opposite occurred, resulting in clear signals for QTL-rich regions. The accuracies of breeding values estimated using SVD- and MCMC-based BayesC analyses were similar across the 10 generations of forward prediction.
For an intermediate number of generations (2 to 5) of forward prediction, accuracies obtained with the BayesC model tended to be slightly higher than accuracies obtained using the best linear unbiased prediction of SNP effects (SNP-BLUP model). When reducing marker density from WGS data to 30 K, SNP-BLUP tended to yield the highest accuracies, at least in the short term. Based on SVD of the genotype matrix, we developed a direct method for the calculation of BayesC estimates of marker effects. Although SVD- and MCMC-based marker effects differed slightly, their prediction accuracies were similar. Assuming that the SVD of the marker genotype matrix is already performed for other reasons (e.g. for SNP-BLUP), computation times for the BayesC predictions were comparable to those of SNP-BLUP.
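The SVD route to marker-effect estimation is easiest to see for the SNP-BLUP (ridge) baseline the abstract compares against (a minimal sketch with simulated data, not the paper's BayesC approximation; the dimensions and shrinkage parameter are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical genotype matrix (n animals x m markers) and phenotypes
n, m, lam = 50, 200, 10.0
Z = rng.integers(0, 3, size=(n, m)).astype(float)   # 0/1/2 allele counts
Z -= Z.mean(axis=0)                                 # center columns
y = rng.normal(size=n)

# SNP-BLUP / ridge estimate of marker effects via the SVD Z = U S V^T:
#   beta = V diag(s / (s^2 + lam)) U^T y
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
beta_svd = Vt.T @ ((s / (s ** 2 + lam)) * (U.T @ y))

# Direct solve for comparison: (Z^T Z + lam I) beta = Z^T y
beta_direct = np.linalg.solve(Z.T @ Z + lam * np.eye(m), Z.T @ y)
```

The two solutions coincide, and once the SVD is computed it can be reused for different shrinkage levels, which is the computational advantage the paper exploits when extending the approach toward BayesC with marker-specific variances.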
The known unknowns: neural representation of second-order uncertainty, and ambiguity
Bach, Dominik R.; Hulme, Oliver; Penny, William D.; Dolan, Raymond J.
2011-01-01
Predictions provided by action-outcome probabilities entail a degree of (first-order) uncertainty. However, these probabilities themselves can be imprecise and embody second-order uncertainty. Tracking second-order uncertainty is important for optimal decision making and reinforcement learning. Previous functional magnetic resonance imaging investigations of second-order uncertainty in humans have drawn on an economic concept of ambiguity, where action-outcome associations in a gamble are either known (unambiguous) or completely unknown (ambiguous). Here, we relaxed the constraints associated with a purely categorical concept of ambiguity and varied the second-order uncertainty of gambles continuously, quantified as entropy over second-order probabilities. We show that second-order uncertainty influences decisions in a pessimistic way by biasing second-order probabilities, and that second-order uncertainty is negatively correlated with posterior cingulate cortex activity. The category of ambiguous (compared to non-ambiguous) gambles also biased choice in a similar direction, but was associated with distinct activation of a posterior parietal cortical area; an activation that we show reflects a different computational mechanism. Our findings indicate that behavioural and neural responses to second-order uncertainty are distinct from those associated with ambiguity and may call for a reappraisal of previous data. PMID:21451019
Fractional Gaussian model in global optimization
NASA Astrophysics Data System (ADS)
Dimri, V. P.; Srivastava, R. P.
2009-12-01
The Earth system is inherently non-linear and can be characterized well only if we incorporate non-linearity in the formulation and solution of the problem. The general tool often used for characterization of the Earth system is inversion. Traditionally, inverse problems are solved using least-squares based inversion after linearizing the formulation. The initial model in such inversion schemes is often assumed to follow a Gaussian posterior probability distribution. It is now well established that most physical properties of the Earth follow a power law (fractal distribution). Thus, selecting the initial model from a power-law probability distribution will provide a more realistic solution. We present a new method which can draw samples of the posterior probability density function very efficiently using fractal-based statistics. The application of the method is demonstrated by inverting band-limited seismic data with well control. We use a fractal-based probability density function, parameterized by the mean, variance and Hurst coefficient of the model space, to draw the initial model. This initial model is then used in a global optimization inversion scheme. Inversion results using initial models generated by our method give higher-resolution estimates of the model parameters than the hitherto used gradient-based linear inversion method.
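A minimal sketch of drawing a fractal (power-law) model realization from a mean, standard deviation and Hurst coefficient via spectral synthesis; the spectral exponent convention beta = 2H + 1 and all names are assumptions for illustration, not the authors' exact parameterization:

```python
import numpy as np

def fractal_model(n, hurst, mean, std, rng):
    """Draw a 1-D model realization whose power spectrum follows a
    power law P(k) ~ k^(-beta), with beta = 2*hurst + 1 (a common 1-D
    convention), then rescale to the target mean and std."""
    beta = 2.0 * hurst + 1.0
    k = np.fft.rfftfreq(n)
    amp = np.zeros_like(k)
    amp[1:] = k[1:] ** (-beta / 2.0)               # power-law amplitude spectrum
    phase = rng.uniform(0.0, 2.0 * np.pi, size=k.size)
    x = np.fft.irfft(amp * np.exp(1j * phase), n=n)
    x = (x - x.mean()) / x.std()                   # rescale to target moments
    return mean + std * x

model = fractal_model(256, hurst=0.7, mean=2.5, std=0.3,
                      rng=np.random.default_rng(1))
```

Such realizations can serve as initial models for the global optimization step, replacing draws from a Gaussian distribution.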
Congenital brainstem disconnection associated with a syrinx of the brainstem.
Barth, P G; de Vries, L S; Nikkels, P G J; Troost, D
2008-02-01
We report a case of congenital brainstem disconnection including the second detailed autopsy. A full-term newborn presented with irreversible apnoea and died on the fifth day. MRI revealed disconnection of the brainstem. The autopsy included a series of transverse sections of the mesencephalon, medulla oblongata and bridging tissue fragments. A fragile tube walled by mature brainstem tissue could be reconstructed. It enveloped a cylinder of fluid within the ventral pons extending to the mesencephalon and the lower brainstem. The aqueduct was patent and outside the lesion. The basilar artery was represented by a tiny median vessel. The ventral and lateral parts of the posterior brainstem were surrounded by heterotopic glial tissue. The olivary nucleus was absent and the cerebellar dentate nucleus was dysplastic. Considering the maturity of the remaining parts of the pons, the onset of structural decline is likely to be close to the time of birth. Probable causes are progressively insufficient perfusion through a hypoplastic basilar artery, and obstructed venous drainage through an abnormal glial barrier surrounding the posterior brainstem. The morphological findings can be characterized as a syrinx, known from disorders in which the brainstem or spinal cord is damaged by a combination of mechanical and circulatory factors.
Dettmer, Jan; Dosso, Stan E; Holland, Charles W
2008-03-01
This paper develops a joint time/frequency-domain inversion for high-resolution single-bounce reflection data, with the potential to resolve fine-scale profiles of sediment velocity, density, and attenuation over small seafloor footprints (approximately 100 m). The approach utilizes sequential Bayesian inversion of time- and frequency-domain reflection data, employing ray-tracing inversion for reflection travel times and a layer-packet stripping method for spherical-wave reflection-coefficient inversion. Posterior credibility intervals from the travel-time inversion are passed on as prior information to the reflection-coefficient inversion. Within the reflection-coefficient inversion, parameter information is passed from one layer packet inversion to the next in terms of marginal probability distributions rotated into principal components, providing an efficient approach to (partially) account for multi-dimensional parameter correlations with one-dimensional, numerical distributions. Quantitative geoacoustic parameter uncertainties are provided by a nonlinear Gibbs sampling approach employing full data error covariance estimation (including nonstationary effects) and accounting for possible biases in travel-time picks. Posterior examination of data residuals shows the importance of including data covariance estimates in the inversion. The joint inversion is applied to data collected on the Malta Plateau during the SCARAB98 experiment.
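The principal-component rotation used to pass marginal information between layer-packet inversions can be sketched as follows; the toy correlated samples below stand in for posterior samples of geoacoustic parameters:

```python
import numpy as np

def rotate_to_principal_components(samples):
    """Rotate samples into the principal components of their sample
    covariance; in this frame the components are uncorrelated, so 1-D
    numerical marginals (partially) capture multi-dimensional
    parameter correlations."""
    mu = samples.mean(axis=0)
    _, evecs = np.linalg.eigh(np.cov(samples, rowvar=False))
    return (samples - mu) @ evecs

# Toy "posterior samples" of two correlated parameters
rng = np.random.default_rng(2)
L = np.array([[1.0, 0.0], [0.8, 0.3]])
samples = rng.normal(size=(5000, 2)) @ L.T
rotated = rotate_to_principal_components(samples)
```

Histograms of each rotated column then provide the one-dimensional numerical distributions that are passed as priors to the next layer-packet inversion.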
Aerosol-type retrieval and uncertainty quantification from OMI data
NASA Astrophysics Data System (ADS)
Kauppi, Anu; Kolmonen, Pekka; Laine, Marko; Tamminen, Johanna
2017-11-01
We discuss uncertainty quantification for aerosol-type selection in satellite-based atmospheric aerosol retrieval. The retrieval procedure uses precalculated aerosol microphysical models stored in look-up tables (LUTs), together with top-of-atmosphere (TOA) spectral reflectance measurements, to solve for the aerosol characteristics. The forward model approximations cause systematic differences between the modelled and observed reflectance. Acknowledging this model discrepancy as a source of uncertainty allows us to produce more realistic uncertainty estimates and assists in selecting the most appropriate LUTs for each individual retrieval. This paper focuses on aerosol microphysical model selection and on characterising the uncertainty in the retrieved aerosol type and aerosol optical depth (AOD). The concept of model evidence is used as a tool for model comparison. The method is based on a Bayesian inference approach, in which all uncertainties are described by a posterior probability distribution. When there is no single best-matching aerosol microphysical model, we use a statistical technique based on Bayesian model averaging to combine the AOD posterior probability densities of the best-fitting models to obtain an averaged AOD estimate. We also determine the shared evidence of the best-matching models of a certain main aerosol type in order to quantify how plausible it is that it represents the underlying atmospheric aerosol conditions. The developed method is applied to Ozone Monitoring Instrument (OMI) measurements using a multiwavelength approach for retrieving the aerosol type and an AOD estimate with uncertainty quantification for cloud-free over-land pixels. Several larger pixel-set areas were studied in order to investigate the robustness of the developed method. We evaluated the retrieved AOD by comparison with ground-based measurements at example sites.
We found that the uncertainty of AOD expressed by the posterior probability distribution reflects the difficulty of model selection. The posterior probability distribution can provide a comprehensive characterisation of the uncertainty in this kind of aerosol-type selection problem. As a result, the proposed method can account for the model error and also include the model selection uncertainty in the total uncertainty budget.
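A minimal sketch of the Bayesian model averaging step, assuming each candidate aerosol model's AOD posterior has been evaluated on a common grid and that relative model evidences are available (all values below are toy numbers):

```python
import numpy as np

aod = np.linspace(0.0, 2.0, 401)          # common AOD grid
dx = aod[1] - aod[0]

def gaussian_pdf(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

# Per-model AOD posteriors for two best-fitting aerosol models (toy)
pdfs = np.array([gaussian_pdf(aod, 0.6, 0.10),
                 gaussian_pdf(aod, 0.8, 0.15)])
evidences = np.array([0.7, 0.3])          # relative model evidences (toy)

weights = evidences / evidences.sum()     # posterior model probabilities
bma_pdf = weights @ pdfs                  # model-averaged AOD posterior
```

The averaged density spreads over both candidate models, so its width reflects model-selection uncertainty in addition to each model's own retrieval uncertainty.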
Seismic imaging of Q structures by a trans-dimensional coda-wave analysis
NASA Astrophysics Data System (ADS)
Takahashi, Tsutomu
2017-04-01
Wave scattering and intrinsic attenuation are important processes for describing the incoherent and complex wave trains of high-frequency seismic waves (> 1 Hz). The multiple lapse time window analysis (MLTWA) has been used to estimate scattering and intrinsic Q values by assuming constant Q in a study area (e.g., Hoshiba 1993). This study generalizes MLTWA to estimate lateral variations of Q values under a Bayesian framework in a dimension-variable space. The study area is partitioned into small areas by means of Voronoi tessellation, with scattering and intrinsic Q constant within each small area. We define a misfit function for spatiotemporal variations of wave energy as in the original MLTWA, and maximize the posterior probability by changing not only the Q values but also the number and spatial layout of the Voronoi cells. This maximization is conducted by means of reversible jump Markov chain Monte Carlo (rjMCMC) (Green 1995), since the number of unknown parameters (i.e., the dimension of the posterior probability) is variable. After convergence, we estimate Q structures from ensemble averages of the MCMC samples around the maximum posterior probability. Synthetic tests showed stable reconstruction of input structures with reasonable error distributions. We applied this method to seismic waveform data recorded by ocean-bottom seismographs in the outer-rise area off Tohoku, and estimated Q values at 4-8 Hz, 8-16 Hz and 16-32 Hz. Intrinsic Q is nearly constant in all frequency bands, while scattering Q shows two distinct strong-scattering regions: the petit-spot area and a high-seismicity area. These strong-scattering regions are probably related to magma inclusions and fractured structures, respectively. The difference between the two areas becomes clearer at high frequencies, which implies that the scale dependence of inhomogeneities, or smaller-scale inhomogeneity, is important for discussing medium properties and the origins of structural variations.
While the generalized MLTWA is based on classical waveform modeling in a constant-Q medium, this method can serve as a fundamental basis for Q-structure imaging in the crust.
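The Voronoi parameterization at the heart of the trans-dimensional scheme can be sketched with a nearest-neighbour lookup; the cell centres and Q values below are toy numbers, and the rjMCMC birth/death moves themselves are not shown:

```python
import numpy as np
from scipy.spatial import cKDTree

def q_model(points, centres, q_values):
    """Evaluate a Voronoi-parameterized Q structure: each query point
    takes the Q value of its nearest cell centre, i.e. Q is constant
    within each Voronoi cell."""
    _, idx = cKDTree(centres).query(points)
    return np.asarray(q_values)[idx]

centres = np.array([[0.0, 0.0], [10.0, 10.0], [0.0, 10.0]])  # cell nuclei (toy)
q_scat = np.array([200.0, 50.0, 400.0])                      # scattering Q per cell
q = q_model(np.array([[1.0, 1.0], [9.0, 9.0]]), centres, q_scat)
```

An rjMCMC sampler would propose moving centres, perturbing Q values, and adding or deleting cells, accepting each proposal with the reversible-jump acceptance probability.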
PARTS: Probabilistic Alignment for RNA joinT Secondary structure prediction
Harmanci, Arif Ozgun; Sharma, Gaurav; Mathews, David H.
2008-01-01
A novel method is presented for joint prediction of alignment and common secondary structures of two RNA sequences. The joint consideration of common secondary structures and alignment is accomplished by structural alignment over a search space defined by the newly introduced motif called matched helical regions. The matched helical region formulation generalizes previously employed constraints for structural alignment and thereby better accommodates the structural variability within RNA families. A probabilistic model based on pseudo free energies obtained from precomputed base pairing and alignment probabilities is utilized for scoring structural alignments. Maximum a posteriori (MAP) common secondary structures, sequence alignment and joint posterior probabilities of base pairing are obtained from the model via a dynamic programming algorithm called PARTS. The advantage of the more general structural alignment of PARTS is seen in secondary structure predictions for the RNase P family. For this family, the PARTS MAP predictions of secondary structures and alignment perform significantly better than prior methods that utilize a more restrictive structural alignment model. For the tRNA and 5S rRNA families, the richer structural alignment model of PARTS does not offer a benefit, and the method therefore performs comparably with existing alternatives. For all RNA families studied, the posterior probability estimates obtained from PARTS offer an improvement over posterior probability estimates from single sequence prediction. When considering the base pairings predicted above a threshold value of confidence, the combination of sensitivity and positive predictive value is superior for PARTS compared with single sequence prediction. PARTS source code is available for download under the GNU public license at http://rna.urmc.rochester.edu. PMID:18304945
Efficient pairwise RNA structure prediction using probabilistic alignment constraints in Dynalign
2007-01-01
Background: Joint alignment and secondary structure prediction of two RNA sequences can significantly improve the accuracy of the structural predictions. Methods addressing this problem, however, are forced to employ constraints that reduce computation by restricting the alignments and/or structures (i.e. folds) that are permissible. In this paper, a new methodology is presented for the purpose of establishing alignment constraints based on nucleotide alignment and insertion posterior probabilities. Using a hidden Markov model, posterior probabilities of alignment and insertion are computed for all possible pairings of nucleotide positions from the two sequences. These alignment and insertion posterior probabilities are additively combined to obtain probabilities of co-incidence for nucleotide position pairs. A suitable alignment constraint is obtained by thresholding the co-incidence probabilities. The constraint is integrated with Dynalign, a free energy minimization algorithm for joint alignment and secondary structure prediction. The resulting method is benchmarked against the previous version of Dynalign and against other programs for pairwise RNA structure prediction. Results: The proposed technique eliminates manual parameter selection in Dynalign and yields significant computational time savings compared with the prior constraints in Dynalign, while simultaneously providing a small improvement in structural prediction accuracy. Savings are also realized in memory. In experiments on a 5S RNA dataset with an average sequence length of approximately 120 nucleotides, the method reduces computation by a factor of 2. The method performs favorably in comparison to other programs for pairwise RNA structure prediction, yielding better accuracy on average while requiring significantly fewer computational resources.
Conclusion: Probabilistic analysis can be utilized to automate the determination of alignment constraints for pairwise RNA structure prediction methods in a principled fashion. These constraints can reduce the computational and memory requirements of these methods while maintaining or improving their accuracy of structural prediction. This extends the practical reach of these methods to longer sequences. The revised Dynalign code is freely available for download. PMID:17445273
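A sketch of how alignment and insertion posteriors might be combined into co-incidence probabilities and thresholded into a boolean constraint mask; the simple additive combination below illustrates the idea only and is not necessarily the paper's exact formula:

```python
import numpy as np

def alignment_constraint(p_aln, p_ins1, p_ins2, threshold):
    """Combine alignment and insertion posterior probabilities for each
    position pair (i, j) into co-incidence probabilities, then keep
    only pairs whose co-incidence exceeds the threshold."""
    co = p_aln + p_ins1 + p_ins2     # additive combination (sketch)
    return co >= threshold

# Toy posteriors for two 2-nucleotide sequences
p_aln = np.array([[0.9, 0.05], [0.05, 0.8]])
p_ins1 = np.array([[0.0, 0.0], [0.0, 0.1]])
p_ins2 = np.array([[0.0, 0.0], [0.0, 0.0]])
mask = alignment_constraint(p_aln, p_ins1, p_ins2, threshold=0.5)
```

The mask restricts the Dynalign dynamic program to (i, j) pairs that are plausibly co-incident, which is where the computational savings come from.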
Gómez Toledo, Verónica; Gutiérrez Farfán, Ileana; Verduzco-Mendoza, Antonio; Arch-Tirado, Emilio
Tinnitus is defined as the conscious perception of a sensation of sound that occurs in the absence of an external stimulus. This audiological symptom affects 7% to 19% of the adult population. The aim of this study is to describe the associated comorbidities present in patients with tinnitus using joint and conditional probability analysis. Patients of both genders, diagnosed with unilateral or bilateral tinnitus, aged between 20 and 45 years, and with a full computerised medical record, were selected. Study groups were formed on the basis of the following clinical aspects: 1) audiological findings; 2) vestibular findings; 3) comorbidities such as temporomandibular dysfunction, tubal dysfunction and otosclerosis; and 4) triggering factors of tinnitus: noise exposure, respiratory tract infection, and use of ototoxic and/or other drugs. Of the patients with tinnitus, 27 (65%) reported hearing loss, 11 (26.19%) temporomandibular dysfunction, and 11 (26.19%) vestibular disorders. When performing the joint probability analysis, it was found that the probability of a patient with tinnitus having hearing loss was 27/42 (0.65), and 20/42 (0.47) for the bilateral type; the result was P(A ∩ B) = 30%. Bayes' theorem, P(Ai|B) = P(Ai ∩ B)/P(B), was used and various probabilities were calculated. For patients with temporomandibular dysfunction and vestibular disorders, a posterior probability of P(Ai|B) = 31.44% was calculated. Consideration should be given to the joint and conditional probability approach as a tool for the study of different pathologies. Copyright © 2016 Academia Mexicana de Cirugía A.C. Publicado por Masson Doyma México S.A. All rights reserved.
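The joint and conditional probability calculations above amount to relative frequencies and Bayes' theorem; a minimal sketch using the sample counts from the abstract (note that 27/42 ≈ 0.643, reported in the abstract as 0.65):

```python
def cond_prob(p_joint, p_b):
    """Conditional probability via Bayes' theorem: P(A|B) = P(A ∩ B) / P(B)."""
    return p_joint / p_b

n = 42                       # tinnitus patients in the sample
p_hearing_loss = 27 / n      # joint relative frequency of tinnitus and hearing loss
p_bilateral = 20 / n         # bilateral type
```

With the joint probability of two comorbidities and the marginal probability of the conditioning event, `cond_prob` returns the posterior probability of the kind reported (e.g. P(Ai|B) = 31.44%).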
NASA Astrophysics Data System (ADS)
Siripatana, Adil; Mayo, Talea; Sraj, Ihab; Knio, Omar; Dawson, Clint; Le Maitre, Olivier; Hoteit, Ibrahim
2017-08-01
Bayesian estimation/inversion is commonly used to quantify and reduce modeling uncertainties in coastal ocean models, especially in the framework of parameter estimation. Based on Bayes' rule, the posterior probability distribution function (pdf) of the estimated quantities is obtained conditioned on available data. It can be computed either directly, using a Markov chain Monte Carlo (MCMC) approach, or by sequentially processing the data following a data assimilation approach, which is heavily exploited in large-dimensional state estimation problems. The advantage of data assimilation schemes over MCMC-type methods arises from their ability to algorithmically accommodate a large number of uncertain quantities without a significant increase in computational requirements. However, only approximate estimates are generally obtained by this approach, due to the restrictive Gaussian prior and noise assumptions generally imposed in these methods. This contribution aims at evaluating the effectiveness of an ensemble Kalman-based data assimilation method for parameter estimation of a coastal ocean model against an MCMC polynomial chaos (PC)-based scheme. We focus on quantifying the uncertainties of a coastal ocean ADvanced CIRCulation (ADCIRC) model with respect to the Manning's n coefficients. Based on a realistic framework of observing system simulation experiments (OSSEs), we apply an ensemble Kalman filter and the MCMC method employing a surrogate of ADCIRC constructed by a non-intrusive PC expansion for evaluating the likelihood, and test both approaches under identical scenarios. We study the sensitivity of the estimated posteriors with respect to the parameters of the inference methods, including ensemble size, inflation factor, and PC order.
A full analysis of both methods, in the context of coastal ocean models, suggests that an ensemble Kalman filter with an appropriate ensemble size and well-tuned inflation provides reliable mean estimates and uncertainties of Manning's n coefficients compared to the full posterior distributions inferred by MCMC.
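A minimal stochastic ensemble Kalman filter analysis step for parameter estimation, of the kind compared against MCMC above; the linear toy "model" and all sizes are assumptions for illustration, not the ADCIRC setup:

```python
import numpy as np

def enkf_parameter_update(params, preds, obs, obs_std, rng):
    """One stochastic EnKF analysis step for parameter estimation.
    params: (Ne, Np) prior parameter ensemble (e.g. Manning's n values)
    preds:  (Ne, No) model-predicted observations per ensemble member
    obs:    (No,) observation vector with error std obs_std."""
    ne = params.shape[0]
    A = params - params.mean(axis=0)          # parameter anomalies
    Y = preds - preds.mean(axis=0)            # predicted-obs anomalies
    R = (obs_std ** 2) * np.eye(obs.size)
    gain = A.T @ Y @ np.linalg.inv(Y.T @ Y + (ne - 1) * R)   # Kalman gain
    d = obs + rng.normal(0.0, obs_std, size=(ne, obs.size))  # perturbed obs
    return params + (d - preds) @ gain.T

rng = np.random.default_rng(3)
theta = rng.normal(0.0, 1.0, size=(2000, 1))  # prior ensemble, truth = 1.0
preds = 2.0 * theta                           # toy linear "model": y = 2*theta
posterior = enkf_parameter_update(theta, preds, np.array([2.0]), 0.1, rng)
```

One update of the whole ensemble replaces the many forward-model evaluations an MCMC chain would require, which is the computational advantage discussed above; the Gaussian assumptions are the price.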
Stochastic static fault slip inversion from geodetic data with non-negativity and bounds constraints
NASA Astrophysics Data System (ADS)
Nocquet, J.-M.
2018-04-01
Although surface displacements observed by geodesy are linear combinations of slip at faults in an elastic medium, determining the spatial distribution of fault slip remains an ill-posed inverse problem. A widely used approach to circumvent the ill-posedness of the inversion is to add regularization constraints in terms of smoothing and/or damping so that the linear system becomes invertible. However, the choice of regularization parameters is often arbitrary, and sometimes leads to significantly different results. Furthermore, the resolution analysis is usually empirical and cannot be made independently of the regularization. The stochastic approach to inverse problems (Tarantola & Valette 1982; Tarantola 2005) provides a rigorous framework where the a priori information about the searched parameters is combined with the observations in order to derive posterior probabilities of the unknown parameters. Here, I investigate an approach where the prior probability density function (pdf) is a multivariate Gaussian function, with single truncation to impose positivity of slip, or double truncation to impose positivity and upper bounds on slip for interseismic modeling. I show that the joint posterior pdf is similar to the linear untruncated Gaussian case and can be expressed as a Truncated Multi-Variate Normal (TMVN) distribution. The TMVN form can then be used to obtain semi-analytical formulas for the single, two-dimensional or n-dimensional marginal pdfs. The semi-analytical formula involves the product of a Gaussian by an integral term that can be evaluated using recent developments in TMVN probability calculations (e.g. Genz & Bretz 2009). Posterior mean and covariance can also be efficiently derived. I show that the Maximum A Posteriori (MAP) solution can be obtained using a Non-Negative Least-Squares algorithm (Lawson & Hanson 1974) for the single-truncated case, or using the Bounded-Variable Least-Squares algorithm (Stark & Parker 1995) for the double-truncated case.
I show that the case of independent uniform priors can be approximated using a TMVN. The numerical equivalence to Bayesian inversions using Markov chain Monte Carlo (MCMC) sampling is shown for a synthetic example and for a real case of interseismic modeling in Central Peru. The TMVN method overcomes several limitations of the Bayesian approach based on MCMC sampling. First, the need for computing power is greatly reduced. Second, unlike MCMC-based Bayesian approaches, the marginal pdfs, mean, variance and covariance are obtained independently of one another. Third, the probability and cumulative density functions can be obtained with any density of points. Finally, determining the Maximum A Posteriori (MAP) solution is extremely fast.
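For the single-truncated case with a flat prior, the MAP reduces to non-negative least squares, which SciPy exposes directly; the toy Green's function matrix below is hypothetical, and folding in the Gaussian prior term of the full TMVN case would require whitening and stacking prior equations, not shown here:

```python
import numpy as np
from scipy.optimize import nnls

# Toy linear slip problem: data d = G m + noise, with slip m >= 0
rng = np.random.default_rng(4)
G = rng.normal(size=(40, 5))                  # toy Green's functions
m_true = np.array([0.0, 1.0, 0.0, 2.0, 0.5])  # true slip on 5 patches
d = G @ m_true + 0.01 * rng.normal(size=40)

# MAP under non-negativity = non-negative least squares (Lawson & Hanson)
m_map, residual_norm = nnls(G, d)
```

For the double-truncated (bounded) case, `scipy.optimize.lsq_linear` with box bounds plays the role of the Bounded-Variable Least-Squares algorithm cited above.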
Inferring probabilistic stellar rotation periods using Gaussian processes
NASA Astrophysics Data System (ADS)
Angus, Ruth; Morton, Timothy; Aigrain, Suzanne; Foreman-Mackey, Daniel; Rajpaul, Vinesh
2018-02-01
Variability in the light curves of spotted, rotating stars is often non-sinusoidal and quasi-periodic - spots move on the stellar surface and have finite lifetimes, causing stellar flux variations to slowly shift in phase. A strictly periodic sinusoid therefore cannot accurately model a rotationally modulated stellar light curve. Physical models of stellar surfaces have many drawbacks preventing effective inference, such as highly degenerate or high-dimensional parameter spaces. In this work, we test an appropriate effective model: a Gaussian Process with a quasi-periodic covariance kernel function. This highly flexible model allows sampling of the posterior probability density function of the periodic parameter, marginalizing over the other kernel hyperparameters using a Markov Chain Monte Carlo approach. To test the effectiveness of this method, we infer rotation periods from 333 simulated stellar light curves, demonstrating that the Gaussian process method produces periods that are more accurate than both a sine-fitting periodogram and an autocorrelation function method. We also demonstrate that it works well on real data, by inferring rotation periods for 275 Kepler stars with previously measured periods. We provide a table of rotation periods for these and many more, altogether 1102 Kepler objects of interest, and their posterior probability density function samples. Because this method delivers posterior probability density functions, it will enable hierarchical studies involving stellar rotation, particularly those involving population modelling, such as inferring stellar ages, obliquities in exoplanet systems, or characterizing star-planet interactions. The code used to implement this method is available online.
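A common quasi-periodic kernel choice (an exp-sine-squared periodic term multiplied by a squared-exponential decay) can be written in a few lines; the parameter names here are generic illustrations, not the paper's exact hyperparameterization:

```python
import numpy as np

def quasi_periodic_kernel(t1, t2, amp, period, gamma, evol_time):
    """Quasi-periodic covariance for rotationally modulated light
    curves: a periodic exp-sine-squared term times a squared-exponential
    decay that lets the signal drift in phase as active regions evolve."""
    dt = t1[:, None] - t2[None, :]
    periodic = np.exp(-gamma * np.sin(np.pi * dt / period) ** 2)
    decay = np.exp(-0.5 * (dt / evol_time) ** 2)
    return amp ** 2 * periodic * decay

t = np.linspace(0.0, 90.0, 120)   # observation times in days (toy)
K = quasi_periodic_kernel(t, t, amp=1.0, period=25.0, gamma=2.0, evol_time=50.0)
```

In an MCMC run, `period` is the parameter of interest and the remaining hyperparameters are marginalized over, yielding the posterior period samples the paper tabulates.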
A Bayesian Framework of Uncertainties Integration in 3D Geological Model
NASA Astrophysics Data System (ADS)
Liang, D.; Liu, X.
2017-12-01
3D geological models can describe complicated geological phenomena in an intuitive way, but their application may be limited by uncertain factors. Although great progress has been made over the years, many studies decompose the uncertainties of a geological model and analyze them item by item from each source, ignoring the comprehensive impact of multi-source uncertainties. To evaluate the synthetical uncertainty, we choose probability distributions to quantify uncertainty and propose a Bayesian framework of uncertainty integration. With this framework, we integrate data errors, spatial randomness, and cognitive information into a posterior distribution to evaluate the synthetical uncertainty of a geological model. Uncertainties propagate and accumulate in the modeling process, and the gradual integration of multi-source uncertainty is a kind of simulation of this uncertainty propagation. Bayesian inference accomplishes uncertainty updating in the modeling process. The maximum entropy principle is effective for estimating the prior probability distribution, ensuring that the prior is subject to the constraints supplied by the given information with minimum prejudice. In the end, we obtain a posterior distribution that represents the synthetical impact of all the uncertain factors on the spatial structure of the geological model. The framework provides a solution for evaluating the synthetical impact of multi-source uncertainties on a geological model and an approach to studying the uncertainty propagation mechanism in geological modeling.
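The gradual integration of uncertainty sources can be sketched as repeated grid-based Bayesian updates starting from a maximum-entropy (uniform) prior; the depth variable and likelihood below are toy stand-ins for one uncertain element of a geological model:

```python
import numpy as np

def bayes_update(prior, likelihood):
    """Grid-based Bayesian update: posterior ∝ prior × likelihood,
    renormalized. Applying it once per uncertainty source (data error,
    spatial randomness, cognitive information) accumulates all sources
    into a single posterior distribution."""
    post = prior * likelihood
    return post / post.sum()

depth = np.linspace(100.0, 200.0, 201)                    # horizon depth grid (m)
prior = np.full(depth.size, 1.0 / depth.size)             # maximum-entropy prior
lik_data = np.exp(-0.5 * ((depth - 150.0) / 5.0) ** 2)    # toy borehole constraint
posterior = bayes_update(prior, lik_data)
```

Further sources (e.g. an expert-knowledge likelihood) would be folded in by calling `bayes_update` again on the returned posterior, mimicking the propagation of uncertainty through the modeling workflow.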
Shi, Haolun; Yin, Guosheng
2018-02-21
Simon's two-stage design is one of the most commonly used methods in phase II clinical trials with binary endpoints. The design tests the null hypothesis that the response rate is less than an uninteresting level, versus the alternative hypothesis that the response rate is greater than a desirable target level. From a Bayesian perspective, we compute the posterior probabilities of the null and alternative hypotheses given that a promising result is declared in Simon's design. Our study reveals that, because the frequentist hypothesis testing framework places its focus on the null hypothesis, a potentially efficacious treatment identified by rejecting the null under Simon's design could have less than 10% posterior probability of attaining the desirable target level. Due to the indifference region between the null and alternative, rejecting the null does not necessarily mean that the drug achieves the desirable response level. To clarify such ambiguity, we propose a Bayesian enhancement two-stage (BET) design, which guarantees a high posterior probability of the response rate reaching the target level, while allowing for early termination and sample size savings in case the drug's response rate is below the clinically uninteresting level. Moreover, the BET design can be naturally adapted to accommodate survival endpoints. We conduct extensive simulation studies to examine the empirical performance of our design and present two trial examples as applications. © 2018, The International Biometric Society.
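The posterior probability calculation behind this comparison can be sketched with a conjugate Beta-Binomial model; the uniform prior and the trial numbers below (13 responses in 43 patients, target rate 0.4) are illustrative assumptions, not the paper's exact design:

```python
from scipy.stats import beta

def posterior_prob_exceeds(x, n, target, a=1.0, b=1.0):
    """With a Beta(a, b) prior and x responses among n patients, the
    posterior of the response rate p is Beta(a + x, b + n - x);
    return the posterior probability that p exceeds `target`."""
    return beta.sf(target, a + x, b + n - x)

# Hypothetical numbers: 13/43 responses may clear a frequentist boundary,
# yet the posterior probability that p > 0.4 remains small.
p_post = posterior_prob_exceeds(13, 43, 0.4)
```

A BET-style rule would instead require `p_post` to exceed a prespecified threshold before declaring the treatment promising.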
Reversible posterior leucoencephalopathy syndrome associated with bone marrow transplantation.
Teive, H A; Brandi, I V; Camargo, C H; Bittencourt, M A; Bonfim, C M; Friedrich, M L; de Medeiros, C R; Werneck, L C; Pasquini, R
2001-09-01
Reversible posterior leucoencephalopathy syndrome (RPLS) has previously been described in patients who have renal insufficiency, eclampsia, hypertensive encephalopathy and patients receiving immunosuppressive therapy. The mechanism by which immunosuppressive agents can cause this syndrome is not clear, but it is probably related to cytotoxic effects of these agents on the vascular endothelium. We report eight patients who received cyclosporine A (CSA) after allogeneic bone marrow transplantation or as treatment for severe aplastic anemia (SAA) and who developed posterior leucoencephalopathy. The most common signs and symptoms were seizures and headache. Neurological dysfunction occurred preceded by or concomitant with high blood pressure and some degree of acute renal failure in six patients. Computerized tomography studies showed low-density white matter lesions involving the posterior areas of the cerebral hemispheres. Symptoms and neuroimaging abnormalities were reversible and improvement occurred in all patients when given lower doses of CSA or when the drug was withdrawn. RPLS may be considered an expression of CSA neurotoxicity.
Posterior semicircular canal dehiscence: value of VEMP and multidetector CT.
Vanspauwen, R; Salembier, L; Van den Hauwe, L; Parizel, P; Wuyts, F L; Van de Heyning, P H
2006-01-01
To illustrate that posterior semicircular canal dehiscence can present similarly to superior semicircular canal dehiscence. The symptomatology initially presented as probable Menière's disease evolving into a mixed conductive hearing loss with a Carhart notch-type perceptive component suggestive of otosclerosis-type stapes fixation. A small hole stapedotomy resulted in a dead ear and a horizontal semicircular canal hypofunction. Recurrent incapacitating vertigo attacks developed. Vestibular evoked myogenic potential (VEMP) testing demonstrated intact vestibulocollic reflexes. Additional evaluation with high resolution multidetector computed tomography (MDCT) of the temporal bone showed a dehiscence of the left posterior semicircular canal. Besides superior semicircular canal dehiscence, posterior semicircular canal dehiscence has to be included in the differential diagnosis of atypical Menière's disease and/or low tone conductive hearing loss. The value of performing MDCT before otosclerosis-type surgery is stressed. VEMP might contribute to establishing the differential diagnosis.
NASA Astrophysics Data System (ADS)
Ma, Yuan-Zhuo; Li, Hong-Shuang; Yao, Wei-Xing
2018-05-01
The evaluation of probabilistic constraints in reliability-based design optimization (RBDO) problems has always been a significant and challenging task, which strongly affects the performance of RBDO methods. This article deals with RBDO problems using a recently developed generalized subset simulation (GSS) method and a posterior approximation approach. The posterior approximation approach is used to transform all the probabilistic constraints into ordinary constraints, as in deterministic optimization. The assessment of the multiple failure probabilities required by the posterior approximation approach is achieved by GSS in a single run at all supporting points, which are selected by a proper experimental design scheme combining Sobol' sequences and Bucher's design. Subsequently, the transformed deterministic design optimization problem can be solved by optimization algorithms, for example, the sequential quadratic programming method. Three optimization problems are used to demonstrate the efficiency and accuracy of the proposed method.
Kinematic analysis of anterior cruciate ligament reconstruction in total knee arthroplasty
Liu, Hua-Wei; Ni, Ming; Zhang, Guo-Qiang; Li, Xiang; Chen, Hui; Zhang, Qiang; Chai, Wei; Zhou, Yong-Gang; Chen, Ji-Ying; Liu, Yu-Liang; Cheng, Cheng-Kung; Wang, Yan
2016-01-01
Background: This study aims to retain normal knee kinematics after knee replacement surgery by reconstructing the anterior cruciate ligament during total knee arthroplasty. Method: We used computational simulation tools to establish four dynamic knee models: a normal knee model, a posterior cruciate ligament-retaining knee model, a posterior cruciate ligament-substituting knee model, and an anterior cruciate ligament-reconstructing knee model. Our proposed method utilizes magnetic resonance images to reconstruct solid bones and ligament attachments, and assembles femoral and tibial components according to representative literature and operational specifications. Dynamic data on axial tibial rotation and femoral translation from full extension to 135° were measured to analyze the motion of the knee models. Findings: The computational simulation results show that, compared with the posterior cruciate ligament-retained and posterior cruciate ligament-substituted knee models, reconstructing the anterior cruciate ligament improves the posterior movement of the lateral condyle and medial condyle and the tibial internal rotation through a full range of flexion. The maximum posterior translations of the lateral condyle and medial condyle and the maximum tibial internal rotation of the anterior cruciate ligament-reconstructed knee are 15.3 mm, 4.6 mm and 20.6° at 135° of flexion. Interpretation: Reconstructing the anterior cruciate ligament in total knee arthroplasty was shown to be a more efficient way of maintaining normal knee kinematics compared with posterior cruciate ligament-retained and posterior cruciate ligament-substituted total knee arthroplasty. PMID:27347334
Pierce, Jordan E; McDowell, Jennifer E
2016-02-01
Cognitive control supports flexible behavior adapted to meet current goals and can be modeled through investigation of saccade tasks with varying cognitive demands. Basic prosaccades (rapid glances toward a newly appearing stimulus) are supported by neural circuitry, including occipital and posterior parietal cortex, frontal and supplementary eye fields, and basal ganglia. These trials can be contrasted with complex antisaccades (glances toward the mirror image location of a stimulus), which are characterized by greater functional magnetic resonance imaging (fMRI) blood oxygenation level-dependent (BOLD) signal in the aforementioned regions and recruitment of additional regions such as dorsolateral prefrontal cortex. The current study manipulated the cognitive demands of these saccade tasks by presenting three rapid event-related runs of mixed saccades with a varying probability of antisaccade vs. prosaccade trials (25, 50, or 75%). Behavioral results showed an effect of trial-type probability on reaction time, with slower responses in runs with a high antisaccade probability. Imaging results exhibited an effect of probability in bilateral pre- and postcentral gyrus, bilateral superior temporal gyrus, and medial frontal gyrus. Additionally, the interaction between saccade trial type and probability revealed a strong probability effect for prosaccade trials, showing a linear increase in activation parallel to antisaccade probability in bilateral temporal/occipital, posterior parietal, medial frontal, and lateral prefrontal cortex. In contrast, antisaccade trials showed elevated activation across all runs. Overall, this study demonstrated that improbable performance of a typically simple prosaccade task led to augmented BOLD signal to support changing cognitive control demands, resulting in activation levels similar to the more complex antisaccade task.
NASA Astrophysics Data System (ADS)
Sheng, Zheng
2013-02-01
The estimation of lower atmospheric refractivity from radar sea clutter (RFC) is a complicated nonlinear optimization problem. This paper deals with the RFC problem in a Bayesian framework. It uses the unbiased Markov chain Monte Carlo (MCMC) sampling technique, which can provide accurate posterior probability distributions of the estimated refractivity parameters by using an electromagnetic split-step fast Fourier transform terrain parabolic equation propagation model within a Bayesian inversion framework. In contrast to global optimization algorithms, the Bayesian-MCMC approach can obtain not only approximate solutions, but also the probability distributions of the solutions, that is, uncertainty analyses of the solutions. The Bayesian-MCMC algorithm is applied to both simulated and real radar sea-clutter data. Reference data are taken from the simulations, and reference refractivity profiles are obtained from helicopter soundings. The inversion algorithm is assessed (i) by comparing the estimated refractivity profiles with the assumed simulation and helicopter sounding data, and (ii) by examining the one-dimensional (1D) and two-dimensional (2D) posterior probability distributions of the solutions.
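A minimal random-walk Metropolis sampler illustrates the MCMC machinery referred to above, applied to a toy one-parameter target rather than the parabolic-equation propagation model; every name and number in this sketch is an assumption for illustration:

```python
import math
import random


def metropolis(log_post, x0, n_samples=5000, step=1.0, seed=7):
    """Random-walk Metropolis sampler: draws from a posterior known only up
    to a normalizing constant, which is all an inversion like this needs."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_samples):
        x_new = x + rng.gauss(0.0, step)
        lp_new = log_post(x_new)
        # Accept the proposal with probability min(1, posterior ratio).
        if math.log(rng.random()) < lp_new - lp:
            x, lp = x_new, lp_new
        samples.append(x)
    return samples


# Toy target: a standard-normal posterior for a single "refractivity" parameter.
draws = metropolis(lambda x: -0.5 * x * x, x0=0.0)
mean = sum(draws) / len(draws)
```

The histogram of `draws` approximates the 1D posterior probability distribution; marginalizing pairs of parameters in the same way gives the 2D distributions the abstract mentions.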
Manual rotation to decrease operative delivery in posterior or transverse positions.
Le Ray, Camille; Deneux-Tharaux, Catherine; Khireddine, Imane; Dreyfus, Michel; Vardon, Delphine; Goffinet, François
2013-09-01
To assess the effect of a policy of manual rotation on the mode of delivery of fetuses in posterior or transverse positions at full dilatation. This was a prospective study to compare two policies of management for posterior and transverse positions in two different hospitals (Hospital 1: no manual rotation and Hospital 2: manual rotation). We used univariable and multivariable analyses to study the association between the management policy for posterior and transverse positions at full dilatation in these hospitals and maternal and neonatal outcomes. The principal end point was operative delivery (ie, cesarean or instrumental vaginal delivery). All factors associated with the risk of operative delivery in the univariable analysis (P<.1) were included in the logistic regression models. We then specifically studied whether manual rotation was independently associated with a reduction in operative deliveries. The rate of posterior or transverse positions at full dilatation was 15.9% (n=111) in Hospital 1 and 15.3% (n=220) in Hospital 2 (P=.75). Of the 172 attempts of manual rotation in Hospital 2, 155 (90.1%) were successful. The rate of operative delivery was significantly lower in Hospital 2, which performed manual rotations (23.2% compared with 38.7% in Hospital 1, adjusted odds ratio [OR] 0.52, 95% confidence interval [CI] 0.28-0.95). After multivariable analysis, manual rotation remained significantly associated with a reduction in the risk of operative delivery (adjusted OR 0.45, 95% CI 0.25-0.85). Five-minute Apgar score and arterial pH at birth were similar in the two hospitals. For fetuses in posterior or transverse positions at full dilatation, a strategy of manual rotation is associated with a reduction in the rate of operative delivery. III.
Downregulation of the posterior medial frontal cortex prevents social conformity.
Klucharev, Vasily; Munneke, Moniek A M; Smidts, Ale; Fernández, Guillén
2011-08-17
We often change our behavior to conform to real or imagined group pressure. Social influence on our behavior has been extensively studied in social psychology, but its neural mechanisms have remained largely unknown. Here we demonstrate that the transient downregulation of the posterior medial frontal cortex by theta-burst transcranial magnetic stimulation reduces conformity, as indicated by reduced conformal adjustments in line with group opinion. Both the extent and probability of conformal behavioral adjustments decreased significantly relative to a sham and a control stimulation over another brain area. The posterior part of the medial frontal cortex has previously been implicated in behavioral and attitudinal adjustments. Here, we provide the first interventional evidence of its critical role in social influence on human behavior.
Spectral likelihood expansions for Bayesian inference
NASA Astrophysics Data System (ADS)
Nagel, Joseph B.; Sudret, Bruno
2016-03-01
A spectral approach to Bayesian inference is presented. It pursues the emulation of the posterior probability density. The starting point is a series expansion of the likelihood function in terms of orthogonal polynomials. From this spectral likelihood expansion all statistical quantities of interest can be calculated semi-analytically. The posterior is formally represented as the product of a reference density and a linear combination of polynomial basis functions. Both the model evidence and the posterior moments are related to the expansion coefficients. This formulation avoids Markov chain Monte Carlo simulation and allows one to make use of linear least squares instead. The pros and cons of spectral Bayesian inference are discussed and demonstrated on the basis of simple applications from classical statistics and inverse modeling.
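A one-dimensional caricature of this construction: expand a toy likelihood in Legendre polynomials, which are orthogonal with respect to a uniform reference density on [-1, 1]; the model evidence is then the zeroth coefficient, and the posterior mean follows from a ratio of coefficients. The quadrature projection and the target below are illustrative stand-ins for the paper's regression-based expansion:

```python
import math

# Legendre polynomials P_0..P_3, orthogonal on [-1, 1].
legendre = [
    lambda x: 1.0,
    lambda x: x,
    lambda x: 0.5 * (3.0 * x * x - 1.0),
    lambda x: 0.5 * (5.0 * x ** 3 - 3.0 * x),
]


def coefficient(f, k, n=2000):
    """c_k = (2k+1)/2 * integral of f(x) * P_k(x) over [-1, 1] (trapezoid rule)."""
    h = 2.0 / n
    total = 0.0
    for i in range(n + 1):
        x = -1.0 + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * f(x) * legendre[k](x)
    return 0.5 * (2 * k + 1) * total * h


# Toy likelihood for one parameter, with uniform reference density pi(x) = 1/2.
likelihood = lambda x: math.exp(-4.0 * (x - 0.3) ** 2)
coeffs = [coefficient(likelihood, k) for k in range(4)]

# Evidence Z = integral of L * pi = c_0; posterior mean = c_1 / (3 * c_0),
# since integral of x * L * pi dx = c_1 / 3 for this basis.
evidence = coeffs[0]
post_mean = coeffs[1] / (3.0 * coeffs[0])
```

This is the semi-analytic flavor of the method: once the coefficients are known, evidence and moments come from simple algebra on them, with no MCMC.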
Phan, Thanh G; Fong, Ashley C; Donnan, Geoffrey; Reutens, David C
2007-06-01
Knowledge of the extent and distribution of infarcts of the posterior cerebral artery (PCA) may give insight into the limits of the arterial territory and infarct mechanism. We describe the creation of a digital atlas of PCA infarcts associated with PCA branch and trunk occlusion by magnetic resonance imaging techniques. Infarcts were manually segmented on T2-weighted magnetic resonance images obtained >24 hours after stroke onset. The images were linearly registered into a common stereotaxic coordinate space. The segmented images were averaged to yield the probability of involvement by infarction at each voxel. Comparisons were made with existing maps of the PCA territory. Thirty patients with a median age of 61 years (range, 22 to 86 years) were studied. In the digital atlas of the PCA, the highest frequency of infarction was within the medial temporal lobe and lingual gyrus (probability=0.60 to 0.70). The mean and maximal PCA infarct volumes were 55.1 and 128.9 cm³, respectively. Comparison with published maps showed greater agreement in the anterior and medial boundaries of the PCA territory compared with its posterior and lateral boundaries. We have created a probabilistic digital atlas of the PCA based on subacute magnetic resonance scans. This approach is useful for establishing the spatial distribution of strokes in a given cerebral arterial territory and determining the regions within the arterial territory that are at greatest risk of infarction.
Iterative updating of model error for Bayesian inversion
NASA Astrophysics Data System (ADS)
Calvetti, Daniela; Dunlop, Matthew; Somersalo, Erkki; Stuart, Andrew
2018-02-01
In computational inverse problems, it is common that a detailed and accurate forward model is approximated by a computationally less challenging substitute. The model reduction may be necessary to meet constraints in computing time when optimization algorithms are used to find a single estimate, or to speed up Markov chain Monte Carlo (MCMC) calculations in the Bayesian framework. The use of an approximate model introduces a discrepancy, or modeling error, that may have a detrimental effect on the solution of the ill-posed inverse problem, or it may severely distort the estimate of the posterior distribution. In the Bayesian paradigm, the modeling error can be considered as a random variable, and by using an estimate of the probability distribution of the unknown, one may estimate the probability distribution of the modeling error and incorporate it into the inversion. We introduce an algorithm which iterates this idea to update the distribution of the model error, leading to a sequence of posterior distributions that are demonstrated empirically to capture the underlying truth with increasing accuracy. Since the algorithm is not based on rejections, it requires only limited full model evaluations. We show analytically that, in the linear Gaussian case, the algorithm converges geometrically fast with respect to the number of iterations when the data is finite dimensional. For more general models, we introduce particle approximations of the iteratively generated sequence of distributions; we also prove that each element of the sequence converges in the large particle limit under a simplifying assumption. We show numerically that, as in the linear case, rapid convergence occurs with respect to the number of iterations. Additionally, we show through computed examples that point estimates obtained from this iterative algorithm are superior to those obtained by neglecting the model error.
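The iteration can be caricatured in a scalar linear-Gaussian setting: an accurate model F is replaced by a cheap surrogate f, and the mean of the modeling error m(x) = F(x) - f(x) is re-estimated from the current posterior mean at each pass. The models and numbers below are invented for illustration and stand in for the full distributional update:

```python
# Accurate forward model F and cheap surrogate f (both linear, scalar).
F = lambda x: 1.2 * x          # "full" model
f = lambda x: 1.0 * x          # reduced model; discrepancy m(x) = F(x) - f(x)

x_true, sigma2, tau2 = 1.5, 0.01, 4.0   # truth, noise variance, prior variance
y = F(x_true)                            # noise-free data, for clarity

# Naive inversion with the surrogate and no error model is badly biased:
x_naive = tau2 * y / (tau2 + sigma2)

# Iteratively update the modeling-error mean from the current estimate:
m_mean = 0.0
for _ in range(30):
    # Gaussian posterior mean of x under the model y = f(x) + m_mean + noise:
    x_hat = tau2 * (y - m_mean) / (tau2 + sigma2)
    # Push the estimate through F - f to refresh the error estimate:
    m_mean = F(x_hat) - f(x_hat)
```

In this contraction-mapping toy, the sequence of posteriors homes in on the truth while only the cheap model appears inside the inversion step, mirroring the geometric convergence the authors prove for the linear Gaussian case.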
Bayesian functional integral method for inferring continuous data from discrete measurements.
Heuett, William J; Miller, Bernard V; Racette, Susan B; Holloszy, John O; Chow, Carson C; Periwal, Vipul
2012-02-08
Inference of the insulin secretion rate (ISR) from C-peptide measurements as a quantification of pancreatic β-cell function is clinically important in diseases related to reduced insulin sensitivity and insulin action. ISR derived from C-peptide concentration is an example of nonparametric Bayesian model selection where a proposed ISR time-course is considered to be a "model". An inferred value of inaccessible continuous variables from discrete observable data is often problematic in biology and medicine, because it is a priori unclear how robust the inference is to the deletion of data points, and a closely related question, how much smoothness or continuity the data actually support. Predictions weighted by the posterior distribution can be cast as functional integrals as used in statistical field theory. Functional integrals are generally difficult to evaluate, especially for nonanalytic constraints such as positivity of the estimated parameters. We propose a computationally tractable method that uses the exact solution of an associated likelihood function as a prior probability distribution for a Markov-chain Monte Carlo evaluation of the posterior for the full model. As a concrete application of our method, we calculate the ISR from actual clinical C-peptide measurements in human subjects with varying degrees of insulin sensitivity. Our method demonstrates the feasibility of functional integral Bayesian model selection as a practical method for such data-driven inference, allowing the data to determine the smoothing timescale and the width of the prior probability distribution on the space of models. In particular, our model comparison method determines the discrete time-step for interpolation of the unobservable continuous variable that is supported by the data. Attempts to go to finer discrete time-steps lead to less likely models.
Ting, Chih-Chung; Yu, Chia-Chen; Maloney, Laurence T.
2015-01-01
In Bayesian decision theory, knowledge about the probabilities of possible outcomes is captured by a prior distribution and a likelihood function. The prior reflects past knowledge and the likelihood summarizes current sensory information. The two combined (integrated) form a posterior distribution that allows estimation of the probability of different possible outcomes. In this study, we investigated the neural mechanisms underlying Bayesian integration using a novel lottery decision task in which both prior knowledge and likelihood information about reward probability were systematically manipulated on a trial-by-trial basis. Consistent with Bayesian integration, as sample size increased, subjects tended to weigh likelihood information more compared with prior information. Using fMRI in humans, we found that the medial prefrontal cortex (mPFC) correlated with the mean of the posterior distribution, a statistic that reflects the integration of prior knowledge and likelihood of reward probability. Subsequent analysis revealed that both prior and likelihood information were represented in mPFC and that the neural representations of prior and likelihood in mPFC reflected changes in the behaviorally estimated weights assigned to these different sources of information in response to changes in the environment. Together, these results establish the role of mPFC in prior-likelihood integration and highlight its involvement in representing and integrating these distinct sources of information. PMID:25632152
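Prior-likelihood integration of the kind manipulated in this task has a closed form in the conjugate Beta-binomial case, where the posterior mean is a weighted mix of the prior mean and the sample mean; as the number of trials grows, the likelihood dominates, just as subjects' weighting shifted with sample size. The prior and data below are illustrative:

```python
def posterior_mean(a, b, successes, n):
    """Beta(a, b) prior + binomial likelihood -> Beta(a+k, b+n-k) posterior.
    Returns the posterior mean, a precision-weighted mix of the prior mean
    a/(a+b) and the sample mean successes/n."""
    return (a + successes) / (a + b + n)


# Prior belief about reward probability: Beta(6, 2), prior mean 0.75.
# Observed reward rate in every sample: 25% (k = n/4 successes).
means = [posterior_mean(6, 2, n // 4, n) for n in (4, 16, 64)]
# As n grows, the posterior mean moves from the prior mean toward the data.
```

With 4 trials the posterior mean stays near the prior (7/12); with 64 trials it has moved most of the way to the observed 0.25, mirroring the increased weight on likelihood information reported above.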
NASA Astrophysics Data System (ADS)
Ghattas, O.; Petra, N.; Cui, T.; Marzouk, Y.; Benjamin, P.; Willcox, K.
2016-12-01
Model-based projections of the dynamics of the polar ice sheets play a central role in anticipating future sea level rise. However, a number of mathematical and computational challenges place significant barriers on improving predictability of these models. One such challenge is caused by the unknown model parameters (e.g., in the basal boundary conditions) that must be inferred from heterogeneous observational data, leading to an ill-posed inverse problem and the need to quantify uncertainties in its solution. In this talk we discuss the problem of estimating the uncertainty in the solution of (large-scale) ice sheet inverse problems within the framework of Bayesian inference. Computing the general solution of the inverse problem--i.e., the posterior probability density--is intractable with current methods on today's computers, due to the expense of solving the forward model (3D full Stokes flow with nonlinear rheology) and the high dimensionality of the uncertain parameters (which are discretizations of the basal sliding coefficient field). To overcome these twin computational challenges, it is essential to exploit problem structure (e.g., sensitivity of the data to parameters, the smoothing property of the forward model, and correlations in the prior). To this end, we present a data-informed approach that identifies low-dimensional structure in both parameter space and the forward model state space. This approach exploits the fact that the observations inform only a low-dimensional parameter space and allows us to construct a parameter-reduced posterior. Sampling this parameter-reduced posterior still requires multiple evaluations of the forward problem, therefore we also aim to identify a low dimensional state space to reduce the computational cost. 
To this end, we apply a proper orthogonal decomposition (POD) approach to approximate the state using a low-dimensional manifold constructed using "snapshots" from the parameter-reduced posterior, and the discrete empirical interpolation method (DEIM) to approximate the nonlinearity in the forward problem. We show that using only a limited number of forward solves, the resulting subspaces lead to an efficient method to explore the high-dimensional posterior.
Schubert, Michael; Yu, Jr-Kai; Holland, Nicholas D; Escriva, Hector; Laudet, Vincent; Holland, Linda Z
2005-01-01
In the invertebrate chordate amphioxus, as in vertebrates, retinoic acid (RA) specifies position along the anterior/posterior axis with elevated RA signaling in the middle third of the endoderm setting the posterior limit of the pharynx. Here we show that AmphiHox1 is also expressed in the middle third of the developing amphioxus endoderm and is activated by RA signaling. Knockdown of AmphiHox1 function with an antisense morpholino oligonucleotide shows that AmphiHox1 mediates the role of RA signaling in setting the posterior limit of the pharynx by repressing expression of pharyngeal markers in the posterior foregut/midgut endoderm. The spatiotemporal expression of these endodermal genes in embryos treated with RA or the RA antagonist BMS009 indicates that Pax1/9, Pitx and Notch are probably more upstream than Otx and Nodal in the hierarchy of genes repressed by RA signaling. This work highlights the potential of amphioxus, a genomically simple, vertebrate-like invertebrate chordate, as a paradigm for understanding gene hierarchies similar to the more complex ones of vertebrates.
Turgut, Burak; Türkçüoğlu, Peykan; Deniz, Nurettin; Catak, Onur
2008-12-01
To report annular and central heavy pigment deposition on the posterior lens capsule in a case of pigment dispersion syndrome. Case report. A 36-year-old female with bilateral pigment dispersion syndrome presented with progressive decrease in visual acuity in the right eye over the past 1-2 years. Clinical examination revealed the typical findings of pigment dispersion syndrome including bilateral Krukenberg spindles, iris transillumination defects, and dense trabecular meshwork pigmentation. Remarkably, annular and central dense pigmentation of the posterior lens capsule was noted in the right eye. Annular pigment deposition on the posterior lens capsule may be a rare finding associated with pigment dispersion syndrome. Such a finding suggests that there may be aqueous flow into the retrolental space in some patients with this condition. The central pigmentation probably reflects entry of aqueous into Berger's space; in our case, it is probable that spontaneous detachment of the anterior hyaloid membrane aided this entry.
Joint Segmentation and Deformable Registration of Brain Scans Guided by a Tumor Growth Model
Gooya, Ali; Pohl, Kilian M.; Bilello, Michel; Biros, George; Davatzikos, Christos
2011-01-01
This paper presents an approach for joint segmentation and deformable registration of brain scans of glioma patients to a normal atlas. The proposed method is based on the Expectation Maximization (EM) algorithm that incorporates a glioma growth model for atlas seeding, a process which modifies the normal atlas into one with a tumor and edema. The modified atlas is registered into the patient space and utilized for the posterior probability estimation of various tissue labels. EM iteratively refines the estimates of the registration parameters, the posterior probabilities of tissue labels and the tumor growth model parameters. We have applied this approach to 10 glioma scans acquired with four Magnetic Resonance (MR) modalities (T1, T1-CE, T2 and FLAIR) and validated the result by comparing them to manual segmentations by clinical experts. The resulting segmentations look promising and quantitatively match well with the expert provided ground truth. PMID:21995070
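The E-step/M-step loop at the heart of such methods can be sketched on a one-dimensional two-component Gaussian mixture, where the E-step computes exactly the posterior probabilities of the labels; this toy omits the atlas, registration, and growth-model components and uses invented data:

```python
import math
import random


def em_two_gaussians(data, iters=50):
    """EM for a 1-D two-component Gaussian mixture. The E-step computes the
    posterior probability of each label; the M-step re-estimates parameters."""
    mu = [min(data), max(data)]
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point.
        resp = []
        for x in data:
            w = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-0.5 * (x - mu[k]) ** 2 / var[k]) for k in (0, 1)]
            s = w[0] + w[1]
            resp.append((w[0] / s, w[1] / s))
        # M-step: update mixture weights, means, and variances.
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2
                         for r, x in zip(resp, data)) / nk + 1e-6
    return mu, var, pi, resp


rng = random.Random(0)
data = ([rng.gauss(0.0, 1.0) for _ in range(200)]
        + [rng.gauss(6.0, 1.0) for _ in range(200)])
mu, var, pi, resp = em_two_gaussians(data)
```

In the full method, the same responsibilities play the role of voxel-wise tissue-label posteriors, and the M-step additionally refines registration and growth-model parameters.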
Empty sella syndrome secondary to intrasellar cyst in adolescence.
Raiti, S; Albrink, M J; Maclaren, N K; Chadduck, W M; Gabriele, O F; Chou, S M
1976-09-01
A 15-year-old boy had growth failure and failure of sexual development. The probable onset was at age 10. Endocrine studies showed hypopituitarism with deficiency of growth hormone and follicle-stimulating hormone, an abnormal response to metyrapone, and deficiency of thyroid function. Luteinizing hormone level was in the low-normal range. Posterior pituitary function was normal. Roentgenogram showed a large sella with some destruction of the posterior clinoids. Transsphenoidal exploration was carried out. The sella was empty except for a whitish membrane; no pituitary tissue was seen. The sella was packed with muscle. Recovery was uneventful, and the patient was given replacement therapy. On histologic examination, the cyst wall showed low pseudostratified cuboidal epithelium and occasional squamous metaplasia. Hemosiderin-filled phagocytes and acinar structures were also seen. The diagnosis was probable rupture of an intrasellar epithelial cyst, leading to empty sella syndrome.
Log-Linear Models for Gene Association
Hu, Jianhua; Joshi, Adarsh; Johnson, Valen E.
2009-01-01
We describe a class of log-linear models for the detection of interactions in high-dimensional genomic data. This class of models leads to a Bayesian model selection algorithm that can be applied to data that have been reduced to contingency tables using ranks of observations within subjects, and discretization of these ranks within gene/network components. Many normalization issues associated with the analysis of genomic data are thereby avoided. A prior density based on Ewens’ sampling distribution is used to restrict the number of interacting components assigned high posterior probability, and the calculation of posterior model probabilities is expedited by approximations based on the likelihood ratio statistic. Simulation studies are used to evaluate the efficiency of the resulting algorithm for known interaction structures. Finally, the algorithm is validated in a microarray study for which it was possible to obtain biological confirmation of detected interactions. PMID:19655032
Brůžek, Jaroslav; Santos, Frédéric; Dutailly, Bruno; Murail, Pascal; Cunha, Eugenia
2017-10-01
A new tool for skeletal sex estimation based on measurements of the human os coxae is presented using skeletons from a metapopulation of identified adult individuals from twelve independent population samples. For reliable sex estimation, a posterior probability greater than 0.95 was considered to be the classification threshold: below this value, estimates are considered indeterminate. By providing free software, we aim to develop an even more widely disseminated method for sex estimation. Ten metric variables collected from 2,040 ossa coxa of adult subjects of known sex were recorded between 1986 and 2002 (reference sample). To test both validity and reliability, a target sample consisting of two series of adult ossa coxa of known sex (n = 623) was used. The DSP2 software (Diagnose Sexuelle Probabiliste v2) is based on Linear Discriminant Analysis, and the posterior probabilities are calculated using an R script. For the reference sample, any combination of four dimensions provides a correct sex estimate in at least 99% of cases. The percentage of individuals for whom sex can be estimated depends on the number of dimensions; for all ten variables it is higher than 90%. Those results are confirmed in the target sample. Our posterior probability threshold of 0.95 for sex estimation corresponds to the traditional sectioning point used in osteological studies. DSP2 replaces the former version, which should no longer be used. DSP2 is a robust, reliable, and user-friendly technique for sexing adult ossa coxae.
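The decision rule can be sketched for a single hypothetical dimension: a two-group linear discriminant with equal within-group variance yields a posterior probability, and any estimate below the 0.95 threshold for both sexes is reported as indeterminate. The group means and variance below are invented for illustration, not DSP2's actual coefficients:

```python
import math


def lda_posterior(x, mu_f, mu_m, pooled_var, prior_f=0.5):
    """Posterior probability that a measurement comes from the female group,
    for a 1-D linear discriminant with equal within-group variance."""
    def loglik(mu):
        return -0.5 * (x - mu) ** 2 / pooled_var
    wf = prior_f * math.exp(loglik(mu_f))
    wm = (1.0 - prior_f) * math.exp(loglik(mu_m))
    return wf / (wf + wm)


def classify(p_female, threshold=0.95):
    """Apply the 0.95 decision rule: when neither posterior reaches the
    threshold, the estimate is reported as indeterminate."""
    if p_female >= threshold:
        return "F"
    if p_female <= 1.0 - threshold:
        return "M"
    return "indeterminate"


# Hypothetical pelvic dimension (mm): female mean 46, male mean 40, sd ~2.
label = classify(lda_posterior(45.5, mu_f=46.0, mu_m=40.0, pooled_var=4.0))
```

A measurement near the midpoint of the two group means produces a posterior near 0.5 and is therefore left indeterminate, which is exactly how the 0.95 threshold trades coverage for reliability.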
Fisher, Charles K; Mehta, Pankaj
2015-06-01
Feature selection, identifying a subset of variables that are relevant for predicting a response, is an important and challenging component of many methods in statistics and machine learning. Feature selection is especially difficult and computationally intensive when the number of variables approaches or exceeds the number of samples, as is often the case for many genomic datasets. Here, we introduce a new approach, the Bayesian Ising Approximation (BIA), to rapidly calculate posterior probabilities for feature relevance in L2 penalized linear regression. In the regime where the regression problem is strongly regularized by the prior, we show that computing the marginal posterior probabilities for features is equivalent to computing the magnetizations of an Ising model with weak couplings. Using a mean field approximation, we show it is possible to rapidly compute the feature selection path described by the posterior probabilities as a function of the L2 penalty. We present simulations and analytical results illustrating the accuracy of the BIA on some simple regression problems. Finally, we demonstrate the applicability of the BIA to high-dimensional regression by analyzing a gene expression dataset with nearly 30,000 features. These results also highlight the impact of correlations between features on Bayesian feature selection. An implementation of the BIA in C++, along with data for reproducing our gene expression analyses, is freely available at http://physics.bu.edu/~pankajm/BIACode.
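In the strongly regularized limit, the leading-order behavior of these posterior probabilities is governed by the marginal feature-response correlations, so the zeroth-order ranking can be sketched with nothing more than a correlation computation; the full BIA refines this with mean-field Ising corrections that account for feature-feature couplings. The data below are invented:

```python
def correlation(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5


# Invented data: the response depends strongly on x1, weakly on x2; x3 is noise.
features = {
    "x1": [0.0, 1.0, 2.0, 3.0, 4.0, 5.0],
    "x2": [1.0, 0.0, 1.0, 0.0, 1.0, 0.0],
    "x3": [2.0, 1.0, 3.0, 1.0, 2.0, 3.0],
}
response = [2.0 * a - b for a, b in zip(features["x1"], features["x2"])]

# Zeroth-order relevance ranking by |correlation with the response|.
scores = {name: abs(correlation(vals, response)) for name, vals in features.items()}
ranked = sorted(scores, key=scores.get, reverse=True)
```

The impact of feature-feature correlations noted in the abstract is precisely what this zeroth-order picture misses and the Ising couplings restore.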
Stochastic static fault slip inversion from geodetic data with non-negativity and bound constraints
NASA Astrophysics Data System (ADS)
Nocquet, J.-M.
2018-07-01
Although surface displacements observed by geodesy are linear combinations of slip on faults in an elastic medium, determining the spatial distribution of fault slip remains an ill-posed inverse problem. A widely used approach to circumvent the ill-posedness of the inversion is to add regularization constraints in terms of smoothing and/or damping so that the linear system becomes invertible. However, the choice of regularization parameters is often arbitrary, and sometimes leads to significantly different results. Furthermore, the resolution analysis is usually empirical and cannot be made independently of the regularization. The stochastic approach to inverse problems provides a rigorous framework where the a priori information about the searched parameters is combined with the observations in order to derive posterior probabilities of the unknown parameters. Here, I investigate an approach where the prior probability density function (pdf) is a multivariate Gaussian function, with single truncation to impose positivity of slip or double truncation to impose positivity and upper bounds on slip for interseismic modelling. I show that the joint posterior pdf is similar to the linear untruncated Gaussian case and can be expressed as a truncated multivariate normal (TMVN) distribution. The TMVN form can then be used to obtain semi-analytical formulae for the single, 2-D or n-D marginal pdf. The semi-analytical formula involves the product of a Gaussian by an integral term that can be evaluated using recent developments in TMVN probability calculations. Posterior mean and covariance can also be efficiently derived. I show that the maximum posterior (MAP) can be obtained using a non-negative least-squares algorithm for the single truncated case or using the bounded-variable least-squares algorithm for the double truncated case. I show that the case of independent uniform priors can be approximated using TMVN.
The numerical equivalence to Bayesian inversions using Markov chain Monte Carlo (MCMC) sampling is shown for a synthetic example and a real case of interseismic modelling in Central Peru. The TMVN method overcomes several limitations of the Bayesian approach using MCMC sampling. First, the need for computing power is greatly reduced. Second, unlike the Bayesian MCMC-based approach, the marginal pdf, mean, variance or covariance are obtained independently of one another. Third, the probability and cumulative density functions can be obtained with any density of points. Finally, determining the MAP is extremely fast.
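The single-truncation MAP problem mentioned above is a non-negative least-squares fit. As a stand-in for a dedicated NNLS solver, a projected-gradient sketch on an invented two-patch example shows how the positivity prior truncates the unconstrained solution:

```python
def nnls_map(A, b, iters=5000, lr=0.01):
    """MAP slip estimate under a non-negativity constraint, via projected
    gradient descent on the least-squares misfit ||A x - b||^2. (The article
    uses a dedicated non-negative least-squares algorithm; this is a minimal
    stand-in suitable only for small, well-conditioned problems.)"""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # Residual r = A x - b and gradient g = 2 A^T r.
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        g = [2.0 * sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        # Gradient step, then projection onto the non-negative orthant.
        x = [max(0.0, x[j] - lr * g[j]) for j in range(n)]
    return x


# Toy two-patch "fault": the unconstrained least-squares solution would be
# (1, -1); the positivity prior truncates the second component to zero.
A = [[1.0, 0.0], [0.0, 1.0]]
b = [1.0, -1.0]
x_map = nnls_map(A, b)
```

The truncation at zero is exactly the behavior the TMVN prior formalizes: mass that the untruncated Gaussian would place at negative slip is concentrated on the boundary.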
Yoshihara, Hiroyuki
2014-07-01
Numerous surgical procedures and instrumentation techniques for lumbosacral fusion (LSF) have been developed. This is probably because of its high mechanical demand and unique anatomy. Surgical options include anterior column support (ACS) and posterior stabilization procedures. Biomechanical studies have been performed to verify the stability of those options. Each option has its own advantages and disadvantages. This review article reports the surgical options for lumbosacral fusion, their biomechanical stability, their advantages and disadvantages, and the factors affecting option selection. Review of literature. LSF offers many options for both ACS and posterior stabilization procedures. Combinations of posterior stabilization procedures are one option; combinations of ACS and posterior stabilization procedures are others. It is difficult to make a recommendation or treatment algorithm for LSF from the current literature. However, it is important to know all aspects of the options, and the choice of surgical option for LSF needs to be tailored to each patient, considering factors such as biomechanical stress and osteoporosis.
Posterior error probability in the Mu-II Sequential Ranging System
NASA Technical Reports Server (NTRS)
Coyle, C. W.
1981-01-01
An expression is derived for the posterior error probability in the Mu-II Sequential Ranging System. An algorithm is developed which closely bounds the exact answer and can be implemented in the machine software. A computer simulation is provided to illustrate the improved level of confidence in a ranging acquisition using this figure of merit as compared to that using only the prior probabilities. In a simulation of 20,000 acquisitions with an experimentally determined threshold setting, the algorithm detected 90% of the actual errors and made false indications of errors on 0.2% of the acquisitions.
Al-Ali, Firas; Barrow, Tom; Duan, Li; Jefferson, Anne; Louis, Susan; Luke, Kim; Major, Kevin; Smoker, Sandy; Walker, Sarah; Yacobozzi, Margaret
2011-09-01
Although atherosclerotic plaque in the carotid and coronary arteries is accepted as a cause of ischemia, vertebral artery ostium (VAO) atherosclerotic plaque is not widely recognized as a source of ischemic stroke. We seek to demonstrate its implication in some posterior circulation ischemia. This is a nonrandomized, prospective, single-center registry on consecutive patients presenting with posterior circulation ischemia who underwent VAO stenting for significant atherosclerotic stenosis. Diagnostic evaluation and imaging studies determined the likelihood of this lesion as the symptom source (highly likely, probable, or highly unlikely). Patients were divided into 4 groups in decreasing order of severity of clinical presentation (ischemic stroke, TIA then stroke, TIA, asymptomatic), which were compared with the morphological and hemodynamic characteristics of the VAO plaque. Clinical follow-up 1 year after stenting assessed symptom recurrence. One hundred fourteen patients underwent stenting of 127 lesions; 35% of the lesions were highly likely the source of symptoms, 53% were probable, and 12% were highly unlikely. Clinical presentation correlated directly with plaque irregularity and presence of clot at the VAO, as did bilateral lesions and presence of tandem lesions. Symptom recurrence at 1 year was 2%. Thirty-five percent of the lesions were highly likely the source of the symptoms. A direct relationship between some morphological/hemodynamic characteristics and the severity of clinical presentation was also found. Finally, patients had a very low rate of symptom recurrence after treatment. These 3 observations point strongly to VAO plaque as a potential source of some posterior circulation stroke.
Photopoulos, Christos Demetris; ElAttrache, Neal S.; Doermann, Alex; Akeda, Masaki; McGarry, Michelle H.; Lee, Thay Q.
2017-01-01
Objectives: The rotator cuff cable has been postulated to be the primary load-bearing substructure of the superior part of the rotator cuff. Tears of the posterior rotator cable are frequently seen in overhead throwing athletes. Although the biomechanical significance of the anterior rotator cable has been well described, our current understanding of the relevance of the posterior cable is limited. The purpose of this study was to examine how partial-thickness tears and full-thickness tears of the posterior rotator cable would alter glenohumeral biomechanics and kinematics in cadaveric shoulder models. Methods: Eight fresh-frozen cadaveric shoulder specimens were prepared and tested. To simulate the sequence of glenohumeral positions during the throwing motion, specimens were mounted on a custom shoulder testing system with the humerus positioned at 90° of abduction (30° scapular upward rotation, 60° glenohumeral abduction) and tested at 30, 60, 90, and 120 degrees of external rotation. After a circumferential capsulotomy was performed, rotator cuff muscles were loaded based on physiologic cross-sectional area ratios, and testing for each specimen was performed under each of the following three conditions: intact posterior cable, partial-thickness (50%) articular-sided posterior cable tear, and full-thickness posterior cable tear. Primary outcome measures tested for each condition under the various degrees of glenohumeral rotation were: 1) anterior and total glenohumeral translation after application of a 30N anterior force; 2) path of glenohumeral articulation; 3) glenohumeral joint force. Results: With a 30N anterior force at 120° of external rotation, there was a significant increase in anterior glenohumeral translation between intact and full-thickness tear specimens (7.28±2.00mm and 17.49±3.75mm, respectively; p<0.05). 
Similarly, total joint translation at 120° of external rotation significantly increased between intact and full-thickness tear specimens (10.37±3.18mm and 23.37±5.05mm, respectively; p<0.05). No significant differences were apparent at other degrees of rotation (30, 60, 90 degrees), or with partial-thickness tears. Changes in the path of glenohumeral articulation were likewise most evident at 120° of external rotation, with a progressively anterior, inferior, and lateral shift in articulation with sequential sectioning of the posterior cable. Lastly, with regard to alterations in glenohumeral joint force, no significant changes were seen in any of the conditions. Conclusion: In this cadaveric model of the throwing shoulder, tears of the posterior rotator cuff cable led to altered glenohumeral biomechanics and kinematics. These changes were most profound at 120° of external rotation, suggesting the importance of an intact posterior cable as a potential stabilizer during the late-cocking phase of throwing.
Hey, Jody; Nielsen, Rasmus
2007-01-01
In 1988, Felsenstein described a framework for assessing the likelihood of a genetic data set in which all of the possible genealogical histories of the data are considered, each in proportion to their probability. Although the likelihood is not analytically solvable, several approaches, including Markov chain Monte Carlo methods, have been developed to find approximate solutions. Here, we describe an approach in which Markov chain Monte Carlo simulations are used to integrate over the space of genealogies, whereas other parameters are integrated out analytically. The result is an approximation to the full joint posterior density of the model parameters. For many purposes, this function can be treated as a likelihood, thereby permitting likelihood-based analyses, including likelihood ratio tests of nested models. Several examples, including an application to the divergence of chimpanzee subspecies, are provided. PMID:17301231
Bayesian structural inference for hidden processes.
Strelioff, Christopher C; Crutchfield, James P
2014-04-01
We introduce a Bayesian approach to discovering patterns in structurally complex processes. The proposed method of Bayesian structural inference (BSI) relies on a set of candidate unifilar hidden Markov model (uHMM) topologies for inference of process structure from a data series. We employ a recently developed exact enumeration of topological ε-machines. (A sequel then removes the topological restriction.) This subset of the uHMM topologies has the added benefit that inferred models are guaranteed to be ε-machines, irrespective of estimated transition probabilities. Properties of ε-machines and uHMMs allow for the derivation of analytic expressions for estimating transition probabilities, inferring start states, and comparing the posterior probability of candidate model topologies, despite process internal structure being only indirectly present in data. We demonstrate BSI's effectiveness in estimating a process's randomness, as reflected by the Shannon entropy rate, and its structure, as quantified by the statistical complexity. We also compare using the posterior distribution over candidate models and the single, maximum a posteriori model for point estimation and show that the former more accurately reflects uncertainty in estimated values. We apply BSI to in-class examples of finite- and infinite-order Markov processes, as well as to an out-of-class, infinite-state hidden process.
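Comparing the posterior probability of candidate model topologies reduces, under a uniform prior, to normalizing the data likelihoods across candidates. A toy sketch with two hand-rolled candidates (an order-0 i.i.d. model versus a first-order "mostly alternating" model) standing in for the uHMM enumeration; the sequence and parameters are invented:

```python
import math

def log_likelihood_iid(seq, p1):
    # candidate topology 1: symbols i.i.d. with P('1') = p1
    n1 = seq.count('1')
    return n1 * math.log(p1) + (len(seq) - n1) * math.log(1 - p1)

def log_likelihood_alternating(seq, p_stay):
    # candidate topology 2: first-order model with probability p_stay
    # of repeating the previous symbol
    ll = 0.0
    for prev, cur in zip(seq, seq[1:]):
        ll += math.log(p_stay if cur == prev else 1 - p_stay)
    return ll

seq = '0101010110101010'
logls = {'iid': log_likelihood_iid(seq, 0.5),
         'alternating': log_likelihood_alternating(seq, 0.1)}

# Uniform prior over candidates: posterior by normalizing likelihoods
# (log-sum-exp trick for numerical stability).
m = max(logls.values())
z = sum(math.exp(v - m) for v in logls.values())
posterior = {k: math.exp(v - m) / z for k, v in logls.items()}
```

On this mostly-alternating sequence the posterior concentrates almost entirely on the alternating topology, illustrating how structure only indirectly present in the data is recovered through the model comparison.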
Bayesian structural inference for hidden processes
NASA Astrophysics Data System (ADS)
Strelioff, Christopher C.; Crutchfield, James P.
2014-04-01
We introduce a Bayesian approach to discovering patterns in structurally complex processes. The proposed method of Bayesian structural inference (BSI) relies on a set of candidate unifilar hidden Markov model (uHMM) topologies for inference of process structure from a data series. We employ a recently developed exact enumeration of topological ɛ-machines. (A sequel then removes the topological restriction.) This subset of the uHMM topologies has the added benefit that inferred models are guaranteed to be ɛ-machines, irrespective of estimated transition probabilities. Properties of ɛ-machines and uHMMs allow for the derivation of analytic expressions for estimating transition probabilities, inferring start states, and comparing the posterior probability of candidate model topologies, despite process internal structure being only indirectly present in data. We demonstrate BSI's effectiveness in estimating a process's randomness, as reflected by the Shannon entropy rate, and its structure, as quantified by the statistical complexity. We also compare using the posterior distribution over candidate models and the single, maximum a posteriori model for point estimation and show that the former more accurately reflects uncertainty in estimated values. We apply BSI to in-class examples of finite- and infinite-order Markov processes, as well as to an out-of-class, infinite-state hidden process.
Eom, Youngsub; Ryu, Dongok; Kim, Dae Wook; Yang, Seul Ki; Song, Jong Suk; Kim, Sug-Whan; Kim, Hyo Myung
2016-10-01
To evaluate the toric intraocular lens (IOL) calculation considering posterior corneal astigmatism, incision-induced posterior corneal astigmatism, and effective lens position (ELP). Two thousand samples of corneal parameters with keratometric astigmatism ≥ 1.0 D were obtained using bootstrap methods. The probability distributions for incision-induced keratometric and posterior corneal astigmatisms, as well as ELP, were estimated from a literature review. The predicted residual astigmatism error using method D with an IOL add power calculator (IAPC) was compared with those derived using methods A, B, and C through Monte-Carlo simulation. Method A considered keratometric astigmatism and incision-induced keratometric astigmatism; method B considered posterior corneal astigmatism in addition to method A; method C considered incision-induced posterior corneal astigmatism in addition to method B; and method D considered ELP in addition to method C. To verify the IAPC used in this study, the predicted toric IOL cylinder power and its axis using the IAPC were compared with ray-tracing simulation results. The median magnitude of the predicted residual astigmatism error using method D (0.25 diopters [D]) was smaller than that derived using methods A (0.42 D), B (0.38 D), and C (0.28 D). Linear regression analysis indicated that the predicted toric IOL cylinder power and its axis had excellent goodness-of-fit between the IAPC and ray-tracing simulation. The IAPC is a simple but accurate method for predicting the toric IOL cylinder power and its axis considering posterior corneal astigmatism, incision-induced posterior corneal astigmatism, and ELP.
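The bootstrap step above can be sketched as resampling a measured sample with replacement and recording a statistic of each resample. Only the resampling mechanics follow the abstract; the astigmatism values below are hypothetical:

```python
import random

def bootstrap_medians(sample, n_boot=2000, seed=7):
    """Draw bootstrap resamples (sampling with replacement) and return the
    median of each resample, as one way to build a synthetic population of
    corneal parameters from a measured sample."""
    rng = random.Random(seed)
    n = len(sample)
    meds = []
    for _ in range(n_boot):
        resample = sorted(rng.choice(sample) for _ in range(n))
        meds.append(resample[n // 2])  # upper median for even n
    return meds

# Hypothetical keratometric astigmatism magnitudes (diopters), all >= 1.0 D
astig = [1.0, 1.2, 1.1, 1.5, 2.0, 1.3, 1.8, 1.1, 1.4, 1.6]
meds = bootstrap_medians(astig)
```

Each bootstrap statistic necessarily stays within the range of the observed sample, which is why bootstrap populations inherit the measured data's spread.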
Deep convolutional networks for automated detection of posterior-element fractures on spine CT
NASA Astrophysics Data System (ADS)
Roth, Holger R.; Wang, Yinong; Yao, Jianhua; Lu, Le; Burns, Joseph E.; Summers, Ronald M.
2016-03-01
Injuries of the spine, and its posterior elements in particular, are a common occurrence in trauma patients, with potentially devastating consequences. Computer-aided detection (CADe) could assist in the detection and classification of spine fractures. Furthermore, CAD could help assess the stability and chronicity of fractures, as well as facilitate research into optimization of treatment paradigms. In this work, we apply deep convolutional networks (ConvNets) for the automated detection of posterior element fractures of the spine. First, the vertebra bodies of the spine with its posterior elements are segmented in spine CT using multi-atlas label fusion. Then, edge maps of the posterior elements are computed. These edge maps serve as candidate regions for predicting a set of probabilities for fractures along the image edges using ConvNets in a 2.5D fashion (three orthogonal patches in axial, coronal and sagittal planes). We explore three different methods for training the ConvNet using 2.5D patches along the edge maps of `positive', i.e. fractured posterior-elements and `negative', i.e. non-fractured elements. An experienced radiologist retrospectively marked the location of 55 displaced posterior-element fractures in 18 trauma patients. We randomly split the data into training and testing cases. In testing, we achieve an area-under-the-curve of 0.857. This corresponds to 71% or 81% sensitivities at 5 or 10 false-positives per patient, respectively. Analysis of our set of trauma patients demonstrates the feasibility of detecting posterior-element fractures in spine CT images using computer vision techniques such as deep convolutional networks.
Ensemble Kalman filtering in presence of inequality constraints
NASA Astrophysics Data System (ADS)
van Leeuwen, P. J.
2009-04-01
Kalman filtering in the presence of constraints is an active area of research. Based on the Gaussian assumption for the probability-density functions, it looks hard to bring extra constraints into the formalism. On the other hand, in geophysical systems we often encounter constraints related to e.g. the underlying physics or chemistry, which are violated by the Gaussian assumption. For instance, concentrations are always non-negative, model layers have non-negative thickness, and sea-ice concentration is between 0 and 1. Several methods to bring inequality constraints into the Kalman-filter formalism have been proposed. One of them is probability density function (pdf) truncation, in which the Gaussian mass from the non-allowed part of the variables is just equally distributed over the pdf where the variables are allowed, as proposed by Shimada et al. 1998. However, a problem with this method is that the probability that e.g. the sea-ice concentration is zero, is zero! The new method proposed here does not have this drawback. It assumes that the probability-density function is a truncated Gaussian, but the truncated mass is not distributed equally over all allowed values of the variables; instead it is put into a delta distribution at the truncation point. This delta distribution can easily be handled in Bayes' theorem, leading to posterior probability density functions that are also truncated Gaussians with delta distributions at the truncation location. In this way a much better representation of the system is obtained, while still keeping most of the benefits of the Kalman-filter formalism. The full Kalman-filter formalism is prohibitively expensive in large-scale systems, but efficient implementation is possible in ensemble variants of the Kalman filter. Applications to low-dimensional systems and large-scale systems will be discussed.
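For a scalar state with Gaussian observation noise, the posterior weight of the delta component can be computed in closed form. A minimal sketch of that one-dimensional Bayes update (not the full ensemble filter); all numbers are illustrative:

```python
import math

def npdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def ncdf(x, mu, sd):
    return 0.5 * (1 + math.erf((x - mu) / (sd * math.sqrt(2))))

def update_truncated_prior(mu, sigma, y, r):
    """One Bayesian update for a non-negative state (e.g. sea-ice
    concentration): the prior is N(mu, sigma) truncated at 0 with the
    truncated mass placed in a delta at 0; the observation y has noise
    standard deviation r. Returns the posterior mass of the delta at 0 and
    the mean and sd of the continuous (Gaussian) part."""
    w0 = ncdf(0.0, mu, sigma)                  # prior mass in the delta at 0
    a = w0 * npdf(y, 0.0, r)                   # delta weight times likelihood
    s2 = 1.0 / (1.0 / sigma**2 + 1.0 / r**2)   # standard Gaussian update
    m = s2 * (mu / sigma**2 + y / r**2)
    s = math.sqrt(s2)
    # continuous weight: marginal likelihood times mass remaining above 0
    b = npdf(y, mu, math.sqrt(sigma**2 + r**2)) * (1 - ncdf(0.0, m, s))
    return a / (a + b), m, s

w_post, m, s = update_truncated_prior(mu=-0.5, sigma=1.0, y=0.2, r=0.5)
```

Unlike pdf truncation with redistributed mass, the posterior here retains a genuinely nonzero probability that the state sits exactly at the bound.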
NASA Astrophysics Data System (ADS)
Sun, Jianbao; Shen, Zheng-Kang; Bürgmann, Roland; Wang, Min; Chen, Lichun; Xu, Xiwei
2013-08-01
We develop a three-step maximum a posteriori probability method for coseismic rupture inversion, which aims at maximizing the a posteriori probability density function (PDF) of elastic deformation solutions of earthquake rupture. The method originates from the fully Bayesian inversion and mixed linear-nonlinear Bayesian inversion methods and shares the same posterior PDF with them, while overcoming difficulties with convergence when large numbers of low-quality data are used and greatly improving the convergence rate using optimization procedures. A highly efficient global optimization algorithm, adaptive simulated annealing, is used to search for the maximum of the posterior PDF ("mode" in statistics) in the first step. The second-step inversion approaches the "true" solution further using the Monte Carlo inversion technique with positivity constraints, with all parameters obtained from the first step as the initial solution. Then slip artifacts are eliminated from the slip models in the third step using the same procedure as the second step, with the fault geometry parameters fixed. We first design a fault model with a 45° dip angle and oblique slip, and produce corresponding synthetic interferometric synthetic aperture radar (InSAR) data sets to validate the reliability and efficiency of the new method. We then apply this method to InSAR data inversion for the coseismic slip distribution of the 14 April 2010 Mw 6.9 Yushu, China earthquake. Our preferred slip model is composed of three segments, with most of the slip occurring within 15 km depth and a maximum slip of 1.38 m at the surface. The released seismic moment is estimated to be 2.32 × 10^19 Nm, consistent with the seismic estimate of 2.50 × 10^19 Nm.
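The first step, searching for the mode of the posterior PDF, can be illustrated with a generic simulated annealing loop on a toy one-dimensional objective; the adaptive cooling schedule of the paper is replaced here by simple geometric cooling, and the objective is invented:

```python
import math
import random

def neg_log_posterior(x):
    # toy multimodal objective standing in for -log posterior of rupture
    # parameters: two basins near x = +2 and x = -2, tilted so the global
    # minimum lies near x = -2
    return (x**2 - 4) ** 2 + 0.5 * x

def simulated_annealing(f, x0, t0=5.0, cooling=0.995, steps=4000, seed=3):
    """Minimize f by simulated annealing: propose a random step, always
    accept improvements, accept worsenings with probability exp(-delta/T),
    and cool T geometrically. Tracks the best point seen."""
    rng = random.Random(seed)
    x, fx, t = x0, f(x0), t0
    best_x, best_f = x, fx
    for _ in range(steps):
        xn = x + rng.gauss(0, 0.5)
        fn = f(xn)
        if fn < fx or rng.random() < math.exp(-(fn - fx) / t):
            x, fx = xn, fn
            if fx < best_f:
                best_x, best_f = x, fx
        t *= cooling
    return best_x, best_f

x_map, f_map = simulated_annealing(neg_log_posterior, x0=0.0)
```

Starting at a high temperature lets the chain escape the local basin at x ≈ +2 before cooling pins it in a minimum, which is the role the annealing step plays before the Monte Carlo refinement stages.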
DOE Office of Scientific and Technical Information (OSTI.GOV)
Passos de Figueiredo, Leandro, E-mail: leandrop.fgr@gmail.com; Grana, Dario; Santos, Marcio
We propose a Bayesian approach for seismic inversion to estimate acoustic impedance, porosity and lithofacies within the reservoir conditioned on post-stack seismic and well data. The link between elastic and petrophysical properties is given by a joint prior distribution for the logarithm of impedance and porosity, based on a rock-physics model. The well conditioning is performed through a background model obtained by well log interpolation. Two different approaches are presented: in the first approach, the prior is defined by a single Gaussian distribution, whereas in the second approach it is defined by a Gaussian mixture to represent the well data multimodal distribution and link the Gaussian components to different geological lithofacies. The forward model is based on a linearized convolutional model. For the single Gaussian case, we obtain an analytical expression for the posterior distribution, resulting in a fast algorithm to compute the solution of the inverse problem, i.e. the posterior distribution of acoustic impedance and porosity as well as the facies probability given the observed data. For the Gaussian mixture prior, it is not possible to obtain the distributions analytically, hence we propose a Gibbs algorithm to perform the posterior sampling and obtain several reservoir model realizations, allowing an uncertainty analysis of the estimated properties and lithofacies. Both methodologies are applied to a real seismic dataset with three wells to obtain 3D models of acoustic impedance, porosity and lithofacies. The methodologies are validated through a blind well test and compared to a standard Bayesian inversion approach. Using the probability of the reservoir lithofacies, we also compute a 3D isosurface probability model of the main oil reservoir in the studied field.
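For the single-Gaussian prior with a linear forward operator, the posterior is Gaussian with a closed-form mean and covariance. A numerical sketch of that standard result with a made-up operator and data, not the paper's actual convolutional model:

```python
import numpy as np

# d = G m + e, prior m ~ N(mu_pr, C_pr), noise e ~ N(0, C_e):
#   C_post  = (G^T C_e^{-1} G + C_pr^{-1})^{-1}
#   mu_post = C_post (G^T C_e^{-1} d + C_pr^{-1} mu_pr)
rng = np.random.default_rng(0)
n, k = 12, 4
G = rng.normal(size=(n, k))              # stand-in linear forward operator
m_true = np.array([1.0, -0.5, 0.3, 0.8])
sigma = 0.3
d = G @ m_true + rng.normal(scale=sigma, size=n)

mu_pr = np.zeros(k)
C_pr = np.eye(k)
C_e = sigma**2 * np.eye(n)

Ce_inv = np.linalg.inv(C_e)
Cpr_inv = np.linalg.inv(C_pr)
C_post = np.linalg.inv(G.T @ Ce_inv @ G + Cpr_inv)
mu_post = C_post @ (G.T @ Ce_inv @ d + Cpr_inv @ mu_pr)
```

Because the whole posterior is available in closed form, no sampling is needed; this is the source of the "fast algorithm" in the single-Gaussian case, while the mixture prior forces the Gibbs sampler.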
Brain tumor segmentation from multimodal magnetic resonance images via sparse representation.
Li, Yuhong; Jia, Fucang; Qin, Jing
2016-10-01
Accurately segmenting and quantifying brain gliomas from magnetic resonance (MR) images remains a challenging task because of the large spatial and structural variability among brain tumors. To develop a fully automatic and accurate brain tumor segmentation algorithm, we present a probabilistic model of multimodal MR brain tumor segmentation. This model combines sparse representation and the Markov random field (MRF) to address the spatial and structural variability. We formulate tumor segmentation as a multi-class labeling task, assigning each voxel the label with maximum posterior probability. We estimate the maximum a posteriori (MAP) probability by introducing the sparse representation into the likelihood and an MRF into the prior. Because MAP estimation is an NP-hard problem, we convert it into a minimum energy optimization problem and employ graph cuts to find the solution. Our method is evaluated using the Brain Tumor Segmentation Challenge 2013 database (BRATS 2013) and obtained Dice coefficient values of 0.85, 0.75, and 0.69 on the high-grade Challenge data set, 0.73, 0.56, and 0.54 on the high-grade Challenge LeaderBoard data set, and 0.84, 0.54, and 0.57 on the low-grade Challenge data set for the complete, core, and enhancing regions. The experimental results show that the proposed algorithm is valid and ranks 2nd compared with the state-of-the-art tumor segmentation algorithms in the MICCAI BRATS 2013 challenge. Copyright © 2016 Elsevier B.V. All rights reserved.
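The energy being minimized combines unary terms (negative log-likelihoods) with pairwise MRF smoothness terms. Graph cuts solve this exactly for suitable energies; the sketch below substitutes iterated conditional modes (ICM) on a 1-D chain of voxels purely to illustrate the energy structure, with invented unary costs:

```python
def icm_map(unary, beta, sweeps=10):
    """Minimize E(l) = sum_i unary[i][l_i] + beta * #{i : l_i != l_{i+1}}
    on a 1-D chain by iterated conditional modes: repeatedly set each label
    to its locally cheapest value given its neighbours. A simple local
    stand-in for the exact graph-cut minimization."""
    labels = [min(range(len(u)), key=u.__getitem__) for u in unary]
    for _ in range(sweeps):
        for i in range(len(unary)):
            def energy(l, i=i):
                e = unary[i][l]
                if i > 0:
                    e += beta * (l != labels[i - 1])
                if i < len(unary) - 1:
                    e += beta * (l != labels[i + 1])
                return e
            labels[i] = min(range(len(unary[i])), key=energy)
    return labels

# Unary terms: negative log-likelihoods for labels {0: background, 1: tumor}
# (made-up numbers). The noisy middle voxel is corrected by the MRF prior.
unary = [[0.1, 2.0], [0.2, 1.8], [1.5, 0.4], [0.3, 1.6], [0.2, 1.9]]
labels_smooth = icm_map(unary, beta=1.0)
labels_no_prior = icm_map(unary, beta=0.0)
```

With beta=0 the labeling reduces to per-voxel maximum likelihood and keeps the isolated "tumor" voxel; the smoothness term flips it, which is exactly the regularizing role the MRF prior plays in the MAP estimate.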
The effect of business improvement districts on the incidence of violent crimes
Golinelli, Daniela; Stokes, Robert J; Bluthenthal, Ricky
2010-01-01
Objective To examine whether business improvement districts (BID) contributed to greater than expected declines in the incidence of violent crimes in affected neighbourhoods. Method A Bayesian hierarchical model was used to assess the changes in the incidence of violent crimes between 1994 and 2005 and the implementation of 30 BID in Los Angeles neighbourhoods. Results The implementation of BID was associated with a 12% reduction in the incidence of robbery (95% posterior probability interval −2 to 24) and an 8% reduction in the total incidence of violent crimes (95% posterior probability interval −5 to 21). The strength of the effect of BID on robbery crimes varied by location. Conclusion These findings indicate that the implementation of BID can reduce the incidence of violent crimes likely to result in injury to individuals. The findings also indicate that the establishment of a BID by itself is not a panacea, and highlight the importance of targeting BID efforts to crime prevention interventions that reduce violence exposure associated with criminal behaviours. PMID:20587814
The effect of business improvement districts on the incidence of violent crimes.
MacDonald, John; Golinelli, Daniela; Stokes, Robert J; Bluthenthal, Ricky
2010-10-01
To examine whether business improvement districts (BID) contributed to greater than expected declines in the incidence of violent crimes in affected neighbourhoods. A Bayesian hierarchical model was used to assess the changes in the incidence of violent crimes between 1994 and 2005 and the implementation of 30 BID in Los Angeles neighbourhoods. The implementation of BID was associated with a 12% reduction in the incidence of robbery (95% posterior probability interval -2 to 24) and an 8% reduction in the total incidence of violent crimes (95% posterior probability interval -5 to 21). The strength of the effect of BID on robbery crimes varied by location. These findings indicate that the implementation of BID can reduce the incidence of violent crimes likely to result in injury to individuals. The findings also indicate that the establishment of a BID by itself is not a panacea, and highlight the importance of targeting BID efforts to crime prevention interventions that reduce violence exposure associated with criminal behaviours.
Realistic respiratory motion margins for external beam partial breast irradiation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Conroy, Leigh; Quirk, Sarah; Department of Physics and Astronomy, University of Calgary, Calgary, Alberta T2N 1N4
Purpose: Respiratory margins for partial breast irradiation (PBI) have been largely based on geometric observations, which may overestimate the margin required for dosimetric coverage. In this study, dosimetric population-based respiratory margins and margin formulas for external beam partial breast irradiation are determined. Methods: Volunteer respiratory data and anterior–posterior (AP) dose profiles from 28 clinical 3D conformal radiotherapy (3DCRT) PBI treatment plans were used to determine population-based respiratory margins. The peak-to-peak amplitudes (A) of realistic respiratory motion data from healthy volunteers were scaled from A = 1 to 10 mm to create respiratory motion probability density functions. Dose profiles were convolved with the respiratory probability density functions to produce blurred dose profiles accounting for respiratory motion. The required margins were found by measuring the distance between the simulated treatment and original dose profiles at the 95% isodose level. Results: The symmetric dosimetric respiratory margins to cover 90%, 95%, and 100% of the simulated treatment population were 1.5, 2, and 4 mm, respectively. With patient set up at end exhale, the required margins were larger in the anterior direction than the posterior. For respiratory amplitudes less than 5 mm, the population-based margins can be expressed as a fraction of the extent of respiratory motion. The derived formulas in the anterior/posterior directions for 90%, 95%, and 100% simulated population coverage were 0.45A/0.25A, 0.50A/0.30A, and 0.70A/0.40A. The differences in formulas for different population coverage criteria demonstrate that respiratory trace shape and baseline drift characteristics affect individual respiratory margins even for the same average peak-to-peak amplitude. 
Conclusions: A methodology for determining population-based respiratory margins using real respiratory motion patterns and dose profiles in the AP direction was described. It was found that the currently used respiratory margin of 5 mm in partial breast irradiation may be overly conservative for many 3DCRT PBI patients. Amplitude alone was found to be insufficient to determine patient-specific margins: individual respiratory trace shape and baseline drift both contributed to the dosimetric target coverage. With respiratory coaching, individualized respiratory margins smaller than the full extent of motion could reduce planning target volumes while ensuring adequate coverage under respiratory motion.
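The convolution-based margin estimate can be sketched in a few lines: blur a 1-D AP dose profile with a motion probability density and measure how far the 95% isodose retreats. The profile shape and motion parameters below are illustrative, not the study's data:

```python
import numpy as np

dx = 0.5                                        # mm grid spacing
x = np.arange(-40, 40 + dx, dx)
dose = np.clip((25 - np.abs(x)) / 5, 0, 1)      # flat top, 5 mm linear penumbra

amp = 5.0                                       # peak-to-peak amplitude (mm)
# motion pdf skewed toward one side, mimicking an end-exhale baseline
motion = np.exp(-0.5 * ((x + amp / 4) / (amp / 4)) ** 2)
motion /= motion.sum()

# blurred profile = static profile convolved with the motion pdf
blurred = np.convolve(dose, motion, mode='same')

def edge_at(profile, level=0.95):
    # outermost position on the positive side where profile >= level
    idx = np.where(profile >= level)[0]
    return x[idx[-1]]

# inward retreat of the 95% isodose = required respiratory margin
margin = edge_at(dose) - edge_at(blurred)
```

Because the motion pdf is asymmetric, the two sides of the profile erode unequally, which is why the study reports different anterior and posterior margin formulas for the same amplitude.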
Multinomial mixture model with heterogeneous classification probabilities
Holland, M.D.; Gray, B.R.
2011-01-01
Royle and Link (Ecology 86(9):2505-2512, 2005) proposed an analytical method that allowed estimation of multinomial distribution parameters and classification probabilities from categorical data measured with error. While useful, we demonstrate algebraically and by simulations that this method yields biased multinomial parameter estimates when the probabilities of correct category classifications vary among sampling units. We address this shortcoming by treating these probabilities as logit-normal random variables within a Bayesian framework. We use Markov chain Monte Carlo to compute Bayes estimates from a simulated sample from the posterior distribution. Based on simulations, this elaborated Royle-Link model yields nearly unbiased estimates of multinomial parameters and correct classification probabilities when classification probabilities are allowed to vary according to the normal distribution on the logit scale or according to the Beta distribution. The method is illustrated using categorical submersed aquatic vegetation data. © 2010 Springer Science+Business Media, LLC.
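The heterogeneous classification probabilities are modeled as logit-normal random variables, i.e. a normal draw on the logit scale pushed through the inverse logit. A small simulation sketch of that distribution; the parameter values are arbitrary:

```python
import math
import random

def sample_classification_prob(mu_logit, sd_logit, rng):
    """Per-sampling-unit probability of correct classification, drawn as a
    logit-normal random variable: logistic(N(mu_logit, sd_logit))."""
    z = rng.gauss(mu_logit, sd_logit)
    return 1.0 / (1.0 + math.exp(-z))

rng = random.Random(42)
probs = [sample_classification_prob(1.5, 0.8, rng) for _ in range(5000)]
mean_p = sum(probs) / len(probs)
```

Every draw lies strictly between 0 and 1 while the units differ substantially from one another; it is exactly this unit-to-unit variation that biases the original estimator assuming a single shared classification probability.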
The utility of Bayesian predictive probabilities for interim monitoring of clinical trials
Connor, Jason T.; Ayers, Gregory D; Alvarez, JoAnn
2014-01-01
Background Bayesian predictive probabilities can be used for interim monitoring of clinical trials to estimate the probability of observing a statistically significant treatment effect if the trial were to continue to its predefined maximum sample size. Purpose We explore settings in which Bayesian predictive probabilities are advantageous for interim monitoring compared to Bayesian posterior probabilities, p-values, conditional power, or group sequential methods. Results For interim analyses that address prediction hypotheses, such as futility monitoring and efficacy monitoring with lagged outcomes, only predictive probabilities properly account for the amount of data remaining to be observed in a clinical trial and have the flexibility to incorporate additional information via auxiliary variables. Limitations Computational burdens limit the feasibility of predictive probabilities in many clinical trial settings. The specification of prior distributions brings additional challenges for regulatory approval. Conclusions The use of Bayesian predictive probabilities enables the choice of logical interim stopping rules that closely align with the clinical decision making process. PMID:24872363
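For a single-arm binomial trial, the predictive probability integrates the binomial outcomes of the remaining patients over the current Beta posterior, giving a beta-binomial tail sum. A sketch with a simplified success criterion (a fixed success count at the maximum sample size, rather than a posterior-probability threshold); all trial numbers are invented:

```python
import math

def log_beta(a, b):
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def beta_binomial_pmf(k, m, a, b):
    # P(k successes among m future patients | posterior Beta(a, b))
    return math.exp(math.lgamma(m + 1) - math.lgamma(k + 1)
                    - math.lgamma(m - k + 1)
                    + log_beta(a + k, b + m - k) - log_beta(a, b))

def predictive_probability(s, n, n_max, cutoff, a0=1.0, b0=1.0):
    """Given s successes in n patients so far and a Beta(a0, b0) prior,
    the probability the trial ends with at least `cutoff` successes out of
    n_max patients: a beta-binomial tail sum over the remaining m = n_max - n
    outcomes."""
    a, b = a0 + s, b0 + (n - s)
    m = n_max - n
    need = max(0, cutoff - s)
    return sum(beta_binomial_pmf(k, m, a, b) for k in range(need, m + 1))

pp = predictive_probability(s=12, n=20, n_max=40, cutoff=25)
```

Unlike a posterior probability computed on the interim data alone, this quantity explicitly accounts for the 20 patients still to be observed, which is the distinction the abstract draws for futility and lagged-outcome monitoring.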
McClelland, James L.
2013-01-01
This article seeks to establish a rapprochement between explicitly Bayesian models of contextual effects in perception and neural network models of such effects, particularly the connectionist interactive activation (IA) model of perception. The article is in part an historical review and in part a tutorial, reviewing the probabilistic Bayesian approach to understanding perception and how it may be shaped by context, and also reviewing ideas about how such probabilistic computations may be carried out in neural networks, focusing on the role of context in interactive neural networks, in which both bottom-up and top-down signals affect the interpretation of sensory inputs. It is pointed out that connectionist units that use the logistic or softmax activation functions can exactly compute Bayesian posterior probabilities when the bias terms and connection weights affecting such units are set to the logarithms of appropriate probabilistic quantities. Bayesian concepts such as the prior, likelihood, (joint and marginal) posterior, probability matching and maximizing, and calculating vs. sampling from the posterior are all reviewed and linked to neural network computations. Probabilistic and neural network models are explicitly linked to the concept of a probabilistic generative model that describes the relationship between the underlying target of perception (e.g., the word intended by a speaker or other source of sensory stimuli) and the sensory input that reaches the perceiver for use in inferring the underlying target. It is shown how a new version of the IA model called the multinomial interactive activation (MIA) model can sample correctly from the joint posterior of a proposed generative model for perception of letters in words, indicating that interactive processing is fully consistent with principled probabilistic computation. Ways in which these computations might be realized in real neural systems are also considered. PMID:23970868
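The equivalence noted above is easy to verify numerically: a softmax over net inputs equal to log prior plus log likelihood reproduces Bayes' rule exactly. A two-hypothesis sketch with made-up numbers:

```python
import math

def softmax(zs):
    m = max(zs)  # subtract the max for numerical stability
    es = [math.exp(z - m) for z in zs]
    s = sum(es)
    return [e / s for e in es]

# Two hypotheses (e.g. candidate words) with priors and likelihoods of the
# observed sensory input under each; the numbers are invented.
priors = [0.8, 0.2]
likelihoods = [0.05, 0.4]

# Explicit Bayes' rule...
joint = [p * l for p, l in zip(priors, likelihoods)]
posterior_bayes = [j / sum(joint) for j in joint]

# ...equals a softmax unit whose bias is the log prior and whose remaining
# net input is the log likelihood.
net = [math.log(p) + math.log(l) for p, l in zip(priors, likelihoods)]
posterior_softmax = softmax(net)
```

The unlikely-a-priori hypothesis wins here because its likelihood dominates, and the softmax path gives the identical answer to the explicit normalization, illustrating the article's point that such units compute posteriors exactly under the stated weight settings.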
McClelland, James L
2013-01-01
This article seeks to establish a rapprochement between explicitly Bayesian models of contextual effects in perception and neural network models of such effects, particularly the connectionist interactive activation (IA) model of perception. The article is in part an historical review and in part a tutorial, reviewing the probabilistic Bayesian approach to understanding perception and how it may be shaped by context, and also reviewing ideas about how such probabilistic computations may be carried out in neural networks, focusing on the role of context in interactive neural networks, in which both bottom-up and top-down signals affect the interpretation of sensory inputs. It is pointed out that connectionist units that use the logistic or softmax activation functions can exactly compute Bayesian posterior probabilities when the bias terms and connection weights affecting such units are set to the logarithms of appropriate probabilistic quantities. Bayesian concepts such as the prior, likelihood, (joint and marginal) posterior, probability matching and maximizing, and calculating vs. sampling from the posterior are all reviewed and linked to neural network computations. Probabilistic and neural network models are explicitly linked to the concept of a probabilistic generative model that describes the relationship between the underlying target of perception (e.g., the word intended by a speaker or other source of sensory stimuli) and the sensory input that reaches the perceiver for use in inferring the underlying target. It is shown how a new version of the IA model called the multinomial interactive activation (MIA) model can sample correctly from the joint posterior of a proposed generative model for perception of letters in words, indicating that interactive processing is fully consistent with principled probabilistic computation. Ways in which these computations might be realized in real neural systems are also considered.
Progression of Brain Network Alterations in Cerebral Amyloid Angiopathy.
Reijmer, Yael D; Fotiadis, Panagiotis; Riley, Grace A; Xiong, Li; Charidimou, Andreas; Boulouis, Gregoire; Ayres, Alison M; Schwab, Kristin; Rosand, Jonathan; Gurol, M Edip; Viswanathan, Anand; Greenberg, Steven M
2016-10-01
We recently showed that cerebral amyloid angiopathy (CAA) is associated with functionally relevant brain network impairments, in particular affecting posterior white matter connections. Here we examined how these brain network impairments progress over time. Thirty-three patients with probable CAA underwent multimodal brain magnetic resonance imaging at 2 time points (mean follow-up time: 1.3±0.4 years). Brain networks of the hemisphere free of intracerebral hemorrhages were reconstructed using fiber tractography and graph theory. The global efficiency of the network and mean fractional anisotropies of posterior-posterior, frontal-frontal, and posterior-frontal network connections were calculated. Patients with moderate versus severe CAA were defined based on microbleed count, dichotomized at the median (median=35). Global efficiency of the intracerebral hemorrhage-free hemispheric network declined from baseline to follow-up (-0.008±0.003; P=0.029). The decline in global efficiency was most pronounced for patients with severe CAA (group×time interaction P=0.03). The decline in global network efficiency was associated with worse executive functioning (β=0.46; P=0.03). Examination of subgroups of network connections revealed a decline in fractional anisotropies of posterior-posterior connections at both levels of CAA severity (-0.006±0.002; P=0.017; group×time interaction P=0.16). The fractional anisotropies of posterior-frontal and frontal-frontal connections declined in patients with severe but not moderate CAA (group×time interaction P=0.007 and P=0.005). Associations were independent of change in white matter hyperintensity volume. Brain network impairment in patients with CAA worsens measurably over just 1.3 years of follow-up and seems to progress from posterior to frontal connections with increasing disease severity. © 2016 American Heart Association, Inc.
Flores-Gutiérrez, Enrique O; Díaz, José-Luis; Barrios, Fernando A; Guevara, Miguel Angel; Del Río-Portilla, Yolanda; Corsi-Cabrera, María
2009-01-01
Potential sex differences in EEG coherent activity during pleasant and unpleasant musical emotions were investigated. Musical excerpts by Mahler, Bach, and Prodromidès were played to seven men and seven women, and their subjective emotions were evaluated in relation to alpha band intracortical coherence. Different brain links in specific frequencies were associated with pleasant and unpleasant emotions. Pleasant emotions (Mahler, Bach) increased upper alpha couplings linking left anterior and posterior regions. Unpleasant emotions (Prodromidès) were sustained by posterior midline coherence exclusively in the right hemisphere in men and bilaterally in women. Combined music induced bilateral oscillations among posterior sensory and predominantly left association areas in women. Consistent with their greater positive attributions to music, the coherent network was larger in women, both for musical emotion and for unspecific musical effects. Musical emotion entails specific coupling among cortical regions and involves coherent upper alpha activity between posterior association areas and frontal regions, probably mediating emotional and perceptual integration. The regions linked by combined music suggest a greater working memory contribution in women and a greater attentional contribution in men.
VizieR Online Data Catalog: Wide binaries in Tycho-Gaia: search method (Andrews+, 2017)
NASA Astrophysics Data System (ADS)
Andrews, J. J.; Chaname, J.; Agueros, M. A.
2017-11-01
Our catalogue of wide binaries identified in the Tycho-Gaia Astrometric Solution catalogue. The Gaia source IDs, Tycho IDs, astrometry, posterior probabilities for both the log-flat prior and power-law prior models, and angular separation are presented. (1 data file).
Bayesian models based on test statistics for multiple hypothesis testing problems.
Ji, Yuan; Lu, Yiling; Mills, Gordon B
2008-04-01
We propose a Bayesian method for the problem of multiple hypothesis testing that is routinely encountered in bioinformatics research, such as the differential gene expression analysis. Our algorithm is based on modeling the distributions of test statistics under both null and alternative hypotheses. We substantially reduce the complexity of the process of defining posterior model probabilities by modeling the test statistics directly instead of modeling the full data. Computationally, we apply a Bayesian FDR approach to control the number of rejections of null hypotheses. To check if our model assumptions for the test statistics are valid for various bioinformatics experiments, we also propose a simple graphical model-assessment tool. Using extensive simulations, we demonstrate the performance of our models and the utility of the model-assessment tool. In the end, we apply the proposed methodology to an siRNA screening and a gene expression experiment.
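The idea of modeling the test statistics directly can be sketched as a two-component mixture. The null and alternative densities below (standard normal and N(3,1)) and the mixing weights are illustrative assumptions, not the paper's fitted model:

```python
import math

def normal_pdf(t, mu=0.0, sd=1.0):
    return math.exp(-0.5 * ((t - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

# Assumed mixture: null component N(0,1) with weight p0, alternative
# component N(3,1) with weight p1.
p0, p1 = 0.9, 0.1

def posterior_alt(t):
    """Posterior probability that a test statistic t came from the alternative."""
    f0, f1 = normal_pdf(t), normal_pdf(t, mu=3.0)
    return p1 * f1 / (p0 * f0 + p1 * f1)

# Bayesian FDR flavor: reject tests whose posterior alternative probability
# is high; the average posterior null probability among rejections
# estimates the false discovery rate.
stats = [0.2, 1.1, 2.8, 3.5, 4.1, -0.7]
rejected = [t for t in stats if posterior_alt(t) > 0.8]
est_fdr = sum(1 - posterior_alt(t) for t in rejected) / len(rejected)
print(rejected, round(est_fdr, 3))
```

Because only the statistics are modeled, no likelihood for the full expression data is needed, which is the complexity reduction the abstract highlights.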
Chen, Qiuhong; Zheng, Yu; Li, Ye; Zeng, Ying; Kuang, Jianchao; Hou, Shixiang; Li, Xiaohui
2012-05-01
The aim of the present work was to evaluate the effect of deacetylated gellan gum on delivering a hydrophilic drug to the posterior segment of the eye. An aesculin-containing in situ gel based on deacetylated gellan gum (AG) was prepared and characterized. In vitro corneal permeation of aesculin across isolated rabbit cornea was compared between AG and an aesculin solution (AS). The results showed that deacetylated gellan gum promotes corneal penetration of aesculin. Pharmacokinetics and ocular tissue distribution of aesculin after topical administration in the rabbit eye showed that AG greatly improved aesculin accumulation in the posterior segments relative to AS, which was probably attributable to the conjunctival/scleral pathway. The areas under the curve (AUC) for AG in the aqueous humor, choroid-retina, sclera, and iris-ciliary body were significantly larger than those of AS. AG can be used as a potential carrier for broadening the application of aesculin.
CKS knee prosthesis: biomechanics and clinical results in 42 cases.
Martucci, E; Verni, E; Del Prete, G; Stulberg, S D
1996-01-01
From 1991 to 1993 a total of 42 CKS prostheses were implanted for the following reasons: osteoarthrosis (34 cases), rheumatoid arthritis (7 cases), and tibial necrosis (1 case). At follow-up at 17 to 41 months the results were excellent or good in 41 cases; the only poor result was probably related to excessive tension of the posterior cruciate ligament. 94% of the patients reported complete regression of pain, and 85% were capable of going up and down stairs without support. Mean joint flexion was 105 degrees. Radiologically, the anatomical axis of the knee had a mean valgus of 6 degrees. The prosthetic components were always cemented. The posterior cruciate ligament was removed in 7 knees, in which the prosthesis with "posterior stability" was used. The patella was never prosthetized. One patient complained of peri-patellar pain two months after surgery, which then regressed completely.
Hierarchical Bayes approach for subgroup analysis.
Hsu, Yu-Yi; Zalkikar, Jyoti; Tiwari, Ram C
2017-01-01
In clinical data analysis, both treatment effect estimation and consistency assessment are important for a better understanding of the drug efficacy for the benefit of subjects in individual subgroups. The linear mixed-effects model has been used for subgroup analysis to describe treatment differences among subgroups with great flexibility. The hierarchical Bayes approach has been applied to linear mixed-effects model to derive the posterior distributions of overall and subgroup treatment effects. In this article, we discuss the prior selection for variance components in hierarchical Bayes, estimation and decision making of the overall treatment effect, as well as consistency assessment of the treatment effects across the subgroups based on the posterior predictive p-value. Decision procedures are suggested using either the posterior probability or the Bayes factor. These decision procedures and their properties are illustrated using a simulated example with normally distributed response and repeated measurements.
Kuang, Ling-hao; Xu, Dong; Sun, Ya-wei; Cong, Jie; Tian, Ji-wei; Wang, Lei
2010-09-21
To study the clinical effect of removing the posterior longitudinal ligament (PLL) during anterior cervical surgery using self-made PLL hook pliers and nip pliers, we retrospectively analyzed 73 patients with cervical spondylotic myelopathy treated with anterior cervical surgery. In all patients the PLL was removed with the self-made instruments, and the effect of the operation was evaluated with the JOA score. PLL removal was successful in all patients, the shape of the extradural space was restored, and JOA scores increased ((12.8 ± 3.2) vs (8.3 ± 1.9)). Removal of the PLL improved the completeness of decompression in anterior cervical surgery, and the self-made instruments made the operation safer and more agile and reduced complications.
Serfling, Robert; Ogola, Gerald
2016-02-10
Among men, prostate cancer (CaP) is the most common newly diagnosed cancer and the second leading cause of death from cancer. A major issue of very large scale is avoiding both over-treatment and under-treatment of CaP cases. The central challenge is deciding clinical significance or insignificance when the CaP biopsy results are positive but only marginally so. A related concern is deciding how to increase the number of biopsy cores for larger prostates. As a foundation for improved choice of number of cores and improved interpretation of biopsy results, we develop a probability model for the number of positive cores found in a biopsy, given the total number of cores, the volumes of the tumor nodules, and - very importantly - the prostate volume. Also, three applications are carried out: guidelines for the number of cores as a function of prostate volume, decision rules for insignificant versus significant CaP using number of positive cores, and, using prior distributions on total tumor size, Bayesian posterior probabilities for insignificant CaP and posterior median CaP. The model-based results have generality of application, take prostate volume into account, and provide attractive tradeoffs of specificity versus sensitivity. Copyright © 2015 John Wiley & Sons, Ltd.
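The dependence of positive core counts on prostate volume can be illustrated with a deliberately crude stand-in for the authors' model: treat each core's hit probability as the tumor's volume fraction and the positive count as binomial. All numbers and the independence assumption are illustrative:

```python
import math

def binom_pmf(k, n, p):
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

def prob_positive_cores(k, n_cores, tumor_vol, prostate_vol):
    # Crude assumption: per-core hit probability = tumor fraction of prostate.
    p = min(1.0, tumor_vol / prostate_vol)
    return binom_pmf(k, n_cores, p)

def prob_two_or_more(n_cores, tumor_vol, prostate_vol):
    p01 = sum(prob_positive_cores(k, n_cores, tumor_vol, prostate_vol) for k in (0, 1))
    return 1.0 - p01

# Same 2 cc tumor, 12 cores: a larger prostate dilutes the hit probability,
# which is why the number of cores should scale with prostate volume.
small = prob_two_or_more(12, 2.0, 30.0)
large = prob_two_or_more(12, 2.0, 60.0)
print(round(small, 3), round(large, 3))
```

The drop in detection probability for the larger prostate is the qualitative effect that motivates volume-adjusted core guidelines in the abstract.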
Robust Bayesian Experimental Design for Conceptual Model Discrimination
NASA Astrophysics Data System (ADS)
Pham, H. V.; Tsai, F. T. C.
2015-12-01
A robust Bayesian optimal experimental design under uncertainty is presented to provide firm information for model discrimination, given the least number of pumping and observation wells. Firm information is the maximum information about a system that can be guaranteed from an experimental design. The design is based on the Box-Hill expected entropy decrease (EED) before and after the experiment and the Bayesian model averaging (BMA) framework. A max-min program is introduced to choose the robust design that maximizes the minimal Box-Hill EED subject to the constraint that the highest expected posterior model probability satisfies a desired probability threshold. The EED is calculated by Gauss-Hermite quadrature. The BMA method is used to predict future observations and to quantify future observation uncertainty arising from conceptual and parametric uncertainties when calculating the EED. A Monte Carlo approach is adopted to quantify the uncertainty in the posterior model probabilities. The optimal experimental design is tested on a synthetic 5-layer anisotropic confined aquifer. Nine conceptual groundwater models are constructed to reflect uncertain geological architecture and boundary conditions. High-performance computing is used to enumerate all possible design solutions in order to identify the most plausible groundwater model. Results highlight the impacts of scedasticity in future observation data, as well as of uncertainty sources, on potential pumping and observation locations.
Neural substrates of the impaired effort expenditure decision making in schizophrenia.
Huang, Jia; Yang, Xin-Hua; Lan, Yong; Zhu, Cui-Ying; Liu, Xiao-Qun; Wang, Ye-Fei; Cheung, Eric F C; Xie, Guang-Rong; Chan, Raymond C K
2016-09-01
Unwillingness to expend more effort to pursue high-value rewards has been associated with motivational anhedonia in schizophrenia (SCZ) and abnormal dopamine activity in the nucleus accumbens (NAcc). The authors hypothesized that dysfunction of the NAcc and associated forebrain regions is involved in the impaired effort expenditure decision-making of SCZ. A 2 (reward magnitude: low vs. high) × 3 (probability: 20% vs. 50% vs. 80%) event-related fMRI design in the effort-expenditure for reward task (EEfRT) was used to examine the neural responses of 23 SCZ patients and 23 demographically matched control participants when the participants made effort expenditure decisions to pursue uncertain rewards. SCZ patients were significantly less likely than healthy controls to expend a high level of effort in the medium (50%) and high (80%) probability conditions. The neural responses in the NAcc, the posterior cingulate gyrus, and the left medial frontal gyrus were weaker in SCZ patients than in healthy controls and did not increase linearly with reward magnitude and probability. Moreover, NAcc activity was positively correlated with the willingness to expend high-level effort and with concrete consummatory pleasure experience. NAcc and posterior cingulate dysfunctions in SCZ patients may be involved in their impaired effort expenditure decision-making. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Monteiro, Emiliano C; Tamaki, Fábio K; Terra, Walter R; Ribeiro, Alberto F
2014-03-01
This work presents a detailed morphofunctional study of the digestive system of a phasmid representative, Cladomorphus phyllinus. Cells of the anterior midgut exhibit merocrine secretion, whereas posterior midgut cells show microapocrine secretion. A complex system of midgut tubules is observed in the posterior midgut, which is probably related to the luminal alkalization of this region. Injection of amaranth dye into the haemolymph and oral feeding of dye to insects indicated that the anterior midgut is water-absorbing, whereas the Malpighian tubules are the main site of water secretion. Thus, a putative counter-current flux of fluid from posterior to anterior midgut may propel digestive enzyme recycling, a view supported by the low rate of enzyme excretion. The foregut and anterior midgut present an acidic pH (5.3 and 5.6, respectively), whereas the posterior midgut is highly alkaline (9.1), which may be related to the digestion of hemicelluloses. Most amylase, trypsin and chymotrypsin activities occur in the foregut and anterior midgut. Maltase is found along the midgut associated with the microvillar glycocalix, while aminopeptidase occurs in the middle and posterior midgut in membrane-bound forms. Both amylase and trypsin are secreted mainly by the anterior midgut through an exocytic process, as revealed by immunocytochemical data. Copyright © 2013 Elsevier Ltd. All rights reserved.
Cam impingement of the posterior femoral condyle in medial meniscal tears.
Suganuma, Jun; Mochizuki, Ryuta; Yamaguchi, Kenji; Inoue, Yutaka; Yamabe, Eikou; Ueda, Yoshiyuki; Fujinaka, Tarou
2010-02-01
The aim of this study was to compare the results of meniscal repair of the medial meniscus with or without decompression of the posterior segment of the medial meniscus for the treatment of posteromedial tibiofemoral incongruence at full flexion (PMTFI), which induces deformation of the posterior segment on sagittal magnetic resonance imaging (MRI). For more than 2 years, we followed up 27 patients with PMTFI who were classified into the following 2 groups. Group 1 included 8 patients (5 male joints and 3 female joints) with a medial meniscal tear with instability at the site of the tear who underwent meniscal repair. The mean age was 23.6 years. Group 2 included 19 patients (16 male joints and 3 female joints) who had a meniscal tear with instability at the site of the tear and underwent meniscal repair and decompression. The mean age was 26.5 years. In decompression of the posterior segment, redundant bone tissue on the most proximal part of the medial femoral condyle was excised. The patients were assessed by use of the Lysholm score, sagittal MRI at full flexion, and arthroscopic examination. There were no statistical differences in mean Lysholm score between the 2 groups before surgery, but the mean score in group 2 was significantly higher than that in group 1 after surgery. Meniscal deformation of the posterior segment at full flexion on MRI disappeared in all cases after decompression. On second-look arthroscopy, the rate of complete healing at the site of the tear was 0% in group 1 but 57% in group 2, and it was significantly different between these groups. The addition of decompression of the posterior segment of the medial meniscus to meniscal repair of knee joints with PMTFI allowed more room for the medial meniscus to accommodate and improved both function of the knee joint and the rate of success of repair of isolated medial meniscal tears in patients who regularly performed full knee flexion. (c) 2010 Arthroscopy Association of North America. 
Published by Elsevier Inc. All rights reserved.
Spalled, aerodynamically modified moldavite from Slavice, Moravia, Czechoslovakia
Chao, E.C.T.
1964-01-01
A Czechoslovakian tektite or moldavite shows clear, indirect evidence of aerodynamic ablation. This large tektite has the shape of a teardrop, with a strongly convex, deeply corroded, but clearly identifiable front and a planoconvex, relatively smooth, posterior surface. In spite of much erosion and corrosion, demarcation of the posterior and the anterior part of the specimen (the keel) is clearly preserved locally. This specimen provides the first tangible evidence that moldavites entered the atmosphere cold, probably at a velocity exceeding 5 kilometers per second; the result was selective heating of the anterior face and perhaps ablation during the second melting. This provides evidence of the extraterrestrial origin of moldavites.
Some Simple Formulas for Posterior Convergence Rates
2014-01-01
We derive some simple relations that demonstrate how the posterior convergence rate is related to two driving factors: a “penalized divergence” of the prior, which measures the ability of the prior distribution to propose a nonnegligible set of working models to approximate the true model and a “norm complexity” of the prior, which measures the complexity of the prior support, weighted by the prior probability masses. These formulas are explicit and involve no essential assumptions and are easy to apply. We apply this approach to the case with model averaging and derive some useful oracle inequalities that can optimize the performance adaptively without knowing the true model. PMID:27379278
A new prior for bayesian anomaly detection: application to biosurveillance.
Shen, Y; Cooper, G F
2010-01-01
Bayesian anomaly detection computes posterior probabilities of anomalous events by combining prior beliefs and evidence from data. However, the specification of prior probabilities can be challenging. This paper describes a Bayesian prior in the context of disease outbreak detection. The goal is to provide a meaningful, easy-to-use prior that yields a posterior probability of an outbreak that performs at least as well as a standard frequentist approach. If this goal is achieved, the resulting posterior could be usefully incorporated into a decision analysis about how to act in light of a possible disease outbreak. This paper describes a Bayesian method for anomaly detection that combines learning from data with a semi-informative prior probability over patterns of anomalous events. A univariate version of the algorithm is presented here for ease of illustration of the essential ideas. The paper describes the algorithm in the context of disease-outbreak detection, but it is general and can be used in other anomaly detection applications. For this application, the semi-informative prior specifies that an increased count over baseline is expected for the variable being monitored, such as the number of respiratory chief complaints per day at a given emergency department. The semi-informative prior is derived based on the baseline prior, which is estimated using historical data. The evaluation reported here used semi-synthetic data to assess the detection performance of the proposed Bayesian method and a control chart method, which is a standard frequentist algorithm that is closest to the Bayesian method in terms of the type of data it uses. The disease-outbreak detection performance of the Bayesian method was statistically significantly better than that of the control chart method when proper baseline periods were used to estimate the baseline behavior to avoid seasonal effects.
When using longer baseline periods, the Bayesian method performed as well as the control chart method. The time complexity of the Bayesian algorithm is linear in the number of the observed events being monitored, due to a novel, closed-form derivation that is introduced in the paper. This paper introduces a novel prior probability for Bayesian outbreak detection that is expressive, easy-to-apply, computationally efficient, and performs as well or better than a standard frequentist method.
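The essential Bayesian update for a monitored daily count can be sketched with a two-hypothesis Poisson comparison. The rates and the prior weight below are invented for illustration and do not come from the paper's derivation:

```python
import math

def poisson_pmf(k, lam):
    return lam ** k * math.exp(-lam) / math.factorial(k)

# Illustrative settings: a baseline rate estimated from historical data and
# an elevated outbreak rate, encoding the semi-informative expectation that
# counts *increase* during an outbreak.
baseline_rate = 20.0
outbreak_rate = 30.0
prior_outbreak = 0.01   # prior probability of an outbreak on any given day

def posterior_outbreak(count):
    """Posterior probability of an outbreak given today's complaint count."""
    l0 = poisson_pmf(count, baseline_rate)   # likelihood under no outbreak
    l1 = poisson_pmf(count, outbreak_rate)   # likelihood under outbreak
    num = prior_outbreak * l1
    return num / (num + (1 - prior_outbreak) * l0)

print(round(posterior_outbreak(21), 4), round(posterior_outbreak(35), 4))
```

A near-baseline count leaves the posterior close to the small prior, while an elevated count raises it sharply; the paper's actual prior is placed over richer patterns of anomalous events, but the update has this same structure.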
Value of Weather Information in Cranberry Marketing Decisions.
NASA Astrophysics Data System (ADS)
Morzuch, Bernard J.; Willis, Cleve E.
1982-04-01
Econometric techniques are used to establish a functional relationship between cranberry yields and important precipitation, temperature, and sunshine variables. Crop forecasts are derived from the model and are used to establish posterior probabilities to be used in a Bayesian decision context pertaining to leasing space for the storage of the berries.
Bayesian Estimation of the DINA Model with Gibbs Sampling
ERIC Educational Resources Information Center
Culpepper, Steven Andrew
2015-01-01
A Bayesian model formulation of the deterministic inputs, noisy "and" gate (DINA) model is presented. Gibbs sampling is employed to simulate from the joint posterior distribution of item guessing and slipping parameters, subject attribute parameters, and latent class probabilities. The procedure extends concepts in Béguin and Glas,…
Bayesian Posterior Odds Ratios: Statistical Tools for Collaborative Evaluations
ERIC Educational Resources Information Center
Hicks, Tyler; Rodríguez-Campos, Liliana; Choi, Jeong Hoon
2018-01-01
To begin statistical analysis, Bayesians quantify their confidence in modeling hypotheses with priors. A prior describes the probability of a certain modeling hypothesis apart from the data. Bayesians should be able to defend their choice of prior to a skeptical audience. Collaboration between evaluators and stakeholders could make their choices…
Full-thickness tears of the supraspinatus tendon: A three-dimensional finite element analysis.
Quental, C; Folgado, J; Monteiro, J; Sarmento, M
2016-12-08
Knowledge regarding the likelihood of propagation of supraspinatus tears is important to allow an early identification of patients for whom a conservative treatment is more likely to fail, and consequently, to improve their clinical outcome. The aim of this study was to investigate the potential for propagation of posterior, central, and anterior full-thickness tears of different sizes using the finite element method. A three-dimensional finite element model of the supraspinatus tendon was generated from the Visible Human Project data. The mechanical behaviour of the tendon was fitted from experimental data using a transversely isotropic hyperelastic constitutive model. The full-thickness tears were simulated at the supraspinatus tendon insertion by decreasing the interface area. Tear sizes from 10% to 90%, in 10% increments, of the anteroposterior length of the supraspinatus footprint were considered in the posterior, central, and anterior regions of the tendon. For each tear, three finite element analyses were performed for a supraspinatus force of 100N, 200N, and 400N. Considering a correlation between tendon strain and the risk of tear propagation, the simulated tears were compared qualitatively and quantitatively by evaluating the volume of tendon for which a maximum strain criterion was not satisfied. The finite element analyses showed a significant impact of tear size and location not only on the magnitude, but also on the patterns of the maximum principal strains. The mechanical outcome of the anterior full-thickness tears was consistently, and significantly, more severe than that of the central or posterior full-thickness tears, which suggests that the anterior tears are at greater risk of propagating than the central or posterior tears. Copyright © 2016 Elsevier Ltd. All rights reserved.
[Posterior ceramic bonded partial restorations].
Mainjot, Amélie; Vanheusden, Alain
2006-01-01
Posterior ceramic bonded partial restorations are conservative and esthetic approaches for compromised teeth. Overlays constitute a less invasive alternative to crown preparations in terms of tooth tissue removal. Together with inlays and onlays, they are also indicated in full-arch or quadrant rehabilitations involving several teeth. This article reviews the indications for and clinical realization of this type of restoration.
Development of a Random Field Model for Gas Plume Detection in Multiple LWIR Images.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heasler, Patrick G.
This report develops a random field model that describes gas plumes in LWIR remote sensing images. The random field model serves as a prior distribution that can be combined with LWIR data to produce a posterior that determines the probability that a gas plume exists in the scene and also maps the most probable location of any plume. The random field model is intended to work with a single pixel regression estimator--a regression model that estimates gas concentration on an individual pixel basis.
Ockham's razor and Bayesian analysis. [statistical theory for systems evaluation
NASA Technical Reports Server (NTRS)
Jefferys, William H.; Berger, James O.
1992-01-01
'Ockham's razor', the ad hoc principle enjoining the greatest possible simplicity in theoretical explanations, is presently shown to be justifiable as a consequence of Bayesian inference; Bayesian analysis can, moreover, clarify the nature of the 'simplest' hypothesis consistent with the given data. By choosing the prior probabilities of hypotheses, it becomes possible to quantify the scientific judgment that simpler hypotheses are more likely to be correct. Bayesian analysis also shows that a hypothesis with fewer adjustable parameters intrinsically possesses an enhanced posterior probability, due to the clarity of its predictions.
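The automatic Ockham effect described in this abstract can be seen in a textbook toy example (my own illustration, not the authors' worked case): compare a zero-parameter model of coin flips against a one-parameter model whose marginal likelihood integrates over its bias parameter.

```python
import math

# M0: fair coin, no free parameters.
def marginal_m0(k, n):
    return math.comb(n, k) * 0.5 ** n

# M1: unknown bias p with a uniform prior; its marginal likelihood
# integrates C(n,k) p^k (1-p)^(n-k) over p, which equals 1/(n+1).
def marginal_m1(k, n):
    return 1.0 / (n + 1)

# With near-even data (26 heads in 50 flips) the simpler hypothesis earns
# the larger marginal likelihood; under equal model priors the Bayes factor
# directly gives the posterior odds favoring M0.
bf = marginal_m0(26, 50) / marginal_m1(26, 50)
print(round(bf, 2))
```

M1 spreads its predictive probability over all possible biases, so it is penalized for its adjustable parameter exactly as the abstract describes, with no ad hoc simplicity term added.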
Graves, Stephen; Sedrakyan, Art; Baste, Valborg; Gioe, Terence J; Namba, Robert; Martínez Cruz, Olga; Stea, Susanna; Paxton, Elizabeth; Banerjee, Samprit; Isaacs, Abby J; Robertsson, Otto
2014-12-17
Posterior-stabilized total knee prostheses were introduced to address instability secondary to loss of posterior cruciate ligament function, and they have either fixed or mobile bearings. Mobile bearings were developed to improve the function and longevity of total knee prostheses. In this study, the International Consortium of Orthopaedic Registries used a distributed health data network to study a large cohort of posterior-stabilized prostheses to determine if the outcome of a posterior-stabilized total knee prosthesis differs depending on whether it has a fixed or mobile-bearing design. Aggregated registry data were collected with a distributed health data network that was developed by the International Consortium of Orthopaedic Registries to reduce barriers to participation (e.g., security, proprietary, legal, and privacy issues) that have the potential to occur with the alternate centralized data warehouse approach. A distributed health data network is a decentralized model that allows secure storage and analysis of data from different registries. Each registry provided data on mobile and fixed-bearing posterior-stabilized prostheses implanted between 2001 and 2010. Only prostheses associated with primary total knee arthroplasties performed for the treatment of osteoarthritis were included. Prostheses with all types of fixation were included except for those with the rarely used reverse hybrid (cementless tibial and cemented femoral components) fixation. The use of patellar resurfacing was reported. The outcome of interest was time to first revision (for any reason). Multivariate meta-analysis was performed with linear mixed models with survival probability as the unit of analysis. This study includes 137,616 posterior-stabilized knee prostheses; 62% were in female patients, and 17.6% had a mobile bearing. 
The results of the fixed-effects model indicate that in the first year the mobile-bearing posterior-stabilized prostheses had a significantly higher hazard ratio (1.86) than did the fixed-bearing posterior-stabilized prostheses (95% confidence interval, 1.28 to 2.7; p = 0.001). For all other time intervals, the mobile-bearing posterior-stabilized prostheses had higher hazard ratios; however, these differences were not significant. Mobile-bearing posterior-stabilized prostheses had an increased rate of revision compared with fixed-bearing posterior-stabilized prostheses. This difference was evident in the first year. Copyright © 2014 by The Journal of Bone and Joint Surgery, Incorporated.
Augusto, Kathiane Lustosa; Bezerra, Leonardo Robson Pinheiro Sobreira; Murad-Regadas, Sthela Maria; Vasconcelos Neto, José Ananias; Vasconcelos, Camila Teixeira Moreira; Karbage, Sara Arcanjo Lino; Bilhar, Andreisa Paiva Monteiro; Regadas, Francisco Sérgio Pinheiro
2017-07-01
Pelvic Floor Dysfunction is a complex condition that may be asymptomatic or may involve many symptoms. This study evaluates defecatory dysfunction, fecal incontinence, and quality of life in relation to the presence of posterior vaginal prolapse. 265 patients were divided into two groups according to posterior POP-Q stage: posterior POP-Q stage ≥2 and posterior POP-Q stage <2. The two groups were compared regarding demographic and clinical data; overall POP-Q stage, percentage of patients with defecatory dysfunction, percentage of patients with fecal incontinence, pelvic floor muscle strength, and quality of life scores. The correlation between severity of the prolapse and severity of constipation was calculated using Spearman's ρ (rho). Women with Bp stage ≥2 were significantly older and had significantly higher BMI, numbers of pregnancies and births, and overall POP-Q stage than women with stage <2. No significant differences between the groups were observed regarding proportion of patients with defecatory dysfunction or incontinence, pelvic floor muscle strength, quality of life (ICIQ-SF), or sexual impact (PISQ-12). POP-Q stage did not correlate with severity of constipation and incontinence. General quality of life perception on the SF-36 was significantly worse in patients with POP-Q stage ≥2 than in those with POP-Q stage <2. The lack of a clinically important association between the presence of posterior vaginal prolapse and symptoms of constipation or anal incontinence leads us to agree with the conclusion that posterior vaginal prolapse probably is not an independent cause of defecatory dysfunction or fecal incontinence. Copyright © 2017 Elsevier B.V. All rights reserved.
Bayesian Retrieval of Complete Posterior PDFs of Oceanic Rain Rate From Microwave Observations
NASA Technical Reports Server (NTRS)
Chiu, J. Christine; Petty, Grant W.
2005-01-01
This paper presents a new Bayesian algorithm for retrieving surface rain rate from Tropical Rainfall Measurements Mission (TRMM) Microwave Imager (TMI) over the ocean, along with validations against estimates from the TRMM Precipitation Radar (PR). The Bayesian approach offers a rigorous basis for optimally combining multichannel observations with prior knowledge. While other rain rate algorithms have been published that are based at least partly on Bayesian reasoning, this is believed to be the first self-contained algorithm that fully exploits Bayes' theorem to yield not just a single rain rate, but rather a continuous posterior probability distribution of rain rate. To advance our understanding of theoretical benefits of the Bayesian approach, we have conducted sensitivity analyses based on two synthetic datasets for which the true conditional and prior distributions are known. Results demonstrate that even when the prior and conditional likelihoods are specified perfectly, biased retrievals may occur at high rain rates. This bias is not the result of a defect of the Bayesian formalism but rather represents the expected outcome when the physical constraint imposed by the radiometric observations is weak, due to saturation effects. It is also suggested that the choice of the estimators and the prior information are both crucial to the retrieval. In addition, the performance of our Bayesian algorithm is found to be comparable to that of other benchmark algorithms in real-world applications, while having the additional advantage of providing a complete continuous posterior probability distribution of surface rain rate.
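Retrieving a full posterior PDF rather than a point estimate can be sketched on a one-dimensional grid. The single-channel forward model, noise level, and exponential prior below are invented stand-ins for the algorithm's multichannel TMI observations and empirically derived priors:

```python
import math

# Candidate rain rates (mm/h) on a grid.
rates = [0.1 * i for i in range(1, 301)]

def prior(r):
    # Assumed exponential prior over rain rate, mean 5 mm/h (illustrative).
    return math.exp(-r / 5.0) / 5.0

def likelihood(obs_tb, r):
    # Assumed forward model: brightness temperature rises with rain rate
    # and saturates; Gaussian observation noise with sd = 3 K.
    predicted = 280.0 - 60.0 * math.exp(-r / 10.0)
    return math.exp(-0.5 * ((obs_tb - predicted) / 3.0) ** 2)

obs = 245.0
weights = [prior(r) * likelihood(obs, r) for r in rates]
z = sum(weights)
posterior = [w / z for w in weights]   # a full PDF over rain rate, not one number

# Any estimator (mean, median, mode) can then be read off the posterior.
mean_rate = sum(r * p for r, p in zip(rates, posterior))
print(round(mean_rate, 2))
```

The saturation of the assumed forward model at high rain rates also illustrates the bias mechanism the abstract describes: where the curve flattens, the observation constrains the rate only weakly and the prior dominates.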
ERIC Educational Resources Information Center
Zwick, Rebecca; Lenaburg, Lubella
2009-01-01
In certain data analyses (e.g., multiple discriminant analysis and multinomial log-linear modeling), classification decisions are made based on the estimated posterior probabilities that individuals belong to each of several distinct categories. In the Bayesian network literature, this type of classification is often accomplished by assigning…
New KF-PP-SVM classification method for EEG in brain-computer interfaces.
Yang, Banghua; Han, Zhijun; Zan, Peng; Wang, Qian
2014-01-01
Classification methods are a crucial research direction in the current study of brain-computer interfaces (BCIs). To improve the classification accuracy for electroencephalogram (EEG) signals, a novel KF-PP-SVM (kernel Fisher, posterior probability, and support vector machine) classification method is developed. Its detailed process entails the use of common spatial patterns to obtain features, based on which the within-class scatter is calculated. The scatter is then added into the kernel function of a radial basis function to construct a new kernel function. This new kernel is integrated into the SVM to obtain a new classification model. Finally, the output of the SVM is calculated based on posterior probability and the final recognition result is obtained. To evaluate the effectiveness of the proposed KF-PP-SVM method, EEG data collected in the laboratory are processed with four different classification schemes (KF-PP-SVM, KF-SVM, PP-SVM, and SVM). The results showed that the overall average improvements arising from the use of the KF-PP-SVM scheme as opposed to the KF-SVM, PP-SVM, and SVM schemes are 2.49%, 5.83%, and 6.49%, respectively.
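The "posterior probability" output stage of an SVM is commonly obtained by Platt scaling, i.e. fitting a sigmoid to the SVM decision values. Whether KF-PP-SVM uses exactly this mapping is not stated in the abstract; the coefficients below are assumed for illustration:

```python
import math

# Illustrative sketch of Platt scaling: map an SVM decision value f(x)
# to a posterior probability P(y=+1 | x) via a fitted sigmoid.
# A and B would normally be fitted on held-out data; here they are made up.
def platt_posterior(decision_value, A=-1.5, B=0.0):
    return 1.0 / (1.0 + math.exp(A * decision_value + B))

# A decision value far from the margin maps to a confident posterior;
# a value on the margin maps to a probability of exactly 0.5.
p_far = platt_posterior(2.0)
p_margin = platt_posterior(0.0)
```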
NASA Astrophysics Data System (ADS)
Du, Zhanwei; Yang, Yongjian; Bai, Yuan; Wang, Lijun; Su, Le; Chen, Yong; Li, Xianchang; Zhou, Xiaodong; Jia, Jun; Shen, Aiguo; Hu, Jiming
2013-03-01
The existing methods for early and differential diagnosis of oral cancer are limited owing to inconspicuous early symptoms and imperfect imaging examination methods. In this paper, classification models of oral adenocarcinoma, carcinoma tissues, and a control group using just four features are established by utilizing the hybrid Gaussian process (HGP) classification algorithm, with the introduction of noise-reduction and posterior-probability mechanisms. HGP shows much better performance in the experimental results. During the experimental process, oral tissues were divided into three groups: adenocarcinoma (n = 87), carcinoma (n = 100), and the control group (n = 134). The spectral data for these groups were collected. The prospective application of the proposed HGP classification method improved the diagnostic sensitivity to 56.35% and the specificity to about 70.00%, and resulted in a Matthews correlation coefficient (MCC) of 0.36. The results indicate that utilizing HGP in LRS detection analysis for the diagnosis of oral cancer gives accurate results, and the prospects for application are promising.
Safari, Parviz; Danyali, Syyedeh Fatemeh; Rahimi, Mehdi
2018-06-02
Drought is the main abiotic stress seriously influencing wheat production. Information about the inheritance of drought tolerance is necessary to determine the most appropriate strategy for developing tolerant cultivars and populations. In this study, generation means analysis for identifying the genetic effects controlling grain yield inheritance under water-deficit and normal conditions was treated as a model selection problem in a Bayesian framework. Stochastic search variable selection (SSVS) was applied to identify the most important genetic effects and the best-fitted models using different generations obtained from two crosses under two water regimes in two growing seasons. SSVS evaluates the effect of each variable on the dependent variable via posterior variable inclusion probabilities, and the model with the highest posterior probability is selected as the best model. In this study, grain yield was controlled by main effects (additive and non-additive) and epistatic effects. The results demonstrate that breeding methods such as recurrent selection with a subsequent pedigree method, as well as hybrid production, can be useful for improving grain yield.
Data analysis in emission tomography using emission-count posteriors
NASA Astrophysics Data System (ADS)
Sitek, Arkadiusz
2012-11-01
A novel approach to the analysis of emission tomography data using the posterior probability of the number of emissions per voxel (emission count) conditioned on acquired tomographic data is explored. The posterior is derived from the prior and the Poisson likelihood of the emission-count data by marginalizing voxel activities. Based on emission-count posteriors, examples of Bayesian analysis including estimation and classification tasks in emission tomography are provided. The application of the method to computer simulations of 2D tomography is demonstrated. In particular, the minimum-mean-square-error point estimator of the emission count is demonstrated. The process of finding this estimator can be considered as a tomographic image reconstruction technique since the estimates of the number of emissions per voxel divided by voxel sensitivities and acquisition time are the estimates of the voxel activities. As an example of a classification task, a hypothesis stating that some region of interest (ROI) emitted at least or at most r-times the number of events in some other ROI is tested. The ROIs are specified by the user. The analysis described in this work provides new quantitative statistical measures that can be used in decision making in diagnostic imaging using emission tomography.
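The per-voxel emission-count posterior described above has a simple single-voxel analogue that can be computed exactly on a grid: if n emissions are Poisson with mean lam and each is detected independently with probability s (the voxel sensitivity), the posterior over n given d detected counts follows from Bayes' theorem, and its mean is the MMSE estimator (analytically d + lam·(1−s) by Poisson thinning). The numbers below are illustrative, not the paper's 2D tomography model:

```python
import math

lam, s, d = 50.0, 0.4, 18   # assumed prior mean, sensitivity, observed counts

def log_pois(k, mu):
    """Log of the Poisson pmf, computed in log space to avoid overflow."""
    return -mu + k * math.log(mu) - math.lgamma(k + 1)

# Unnormalized log-posterior over emission counts n >= d:
# log P(n) + log Binomial(d | n, s), truncated at a grid large enough
# that the neglected tail mass is negligible.
ns = list(range(d, 200))
logs = [log_pois(n, lam)
        + math.lgamma(n + 1) - math.lgamma(d + 1) - math.lgamma(n - d + 1)
        + d * math.log(s) + (n - d) * math.log(1 - s)
        for n in ns]

m = max(logs)
w = [math.exp(v - m) for v in logs]       # stabilized exponentiation
Z = sum(w)
post = [x / Z for x in w]                 # posterior P(n | d)

mmse = sum(n * p for n, p in zip(ns, post))   # posterior-mean (MMSE) estimate
activity_est = mmse / s                        # scaled by sensitivity, as in the paper
```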
Alveolar ridge expansion-assisted orthodontic space closure in the mandibular posterior region.
Ozer, Mete; Akdeniz, Berat Serdar; Sumer, Mahmut
2013-12-01
Orthodontic closure of old, edentulous spaces in the mandibular posterior region is a major challenge. In this report, we describe a method of orthodontic closure of edentulous spaces in the mandibular posterior region accelerated by piezoelectric decortication and alveolar ridge expansion. Combined piezosurgical and orthodontic treatments were used to close 14- and 15-mm-wide spaces in the mandibular left and right posterior areas, respectively, of a female patient, aged 18 years and 9 months, diagnosed with skeletal Class III malocclusion, hypodontia, and polydiastemas. After the piezoelectric decortication, segmental and full-arch mechanics were applied in the orthodontic phase. Despite some extent of root resorption and anchorage loss, the edentulous spaces were closed, and adequate function and esthetics were regained without further restorative treatment. Alveolar ridge expansion-assisted orthodontic space closure seems to be an effective and relatively less-invasive treatment alternative for edentulous spaces in the mandibular posterior region.
Gaussianization for fast and accurate inference from cosmological data
NASA Astrophysics Data System (ADS)
Schuhmann, Robert L.; Joachimi, Benjamin; Peiris, Hiranya V.
2016-06-01
We present a method to transform multivariate unimodal non-Gaussian posterior probability densities into approximately Gaussian ones via non-linear mappings, such as Box-Cox transformations and generalizations thereof. This permits an analytical reconstruction of the posterior from a point sample, like a Markov chain, and simplifies the subsequent joint analysis with other experiments. This way, a multivariate posterior density can be reported efficiently, by compressing the information contained in Markov chain Monte Carlo samples. Further, the model evidence integral (i.e., the marginal likelihood) can be computed analytically. This method is analogous to the search for normal parameters in the cosmic microwave background, but is more general. The search for the optimally Gaussianizing transformation is performed computationally through a maximum-likelihood formalism; its quality can be judged by how well the credible regions of the posterior are reproduced. We demonstrate that our method outperforms kernel density estimates in this objective. Further, we select marginal posterior samples from Planck data with several distinct strongly non-Gaussian features, and verify the reproduction of the marginal contours. To demonstrate evidence computation, we Gaussianize the joint distribution of data from weak lensing and baryon acoustic oscillations, for different cosmological models, and find a preference for flat Λ cold dark matter (ΛCDM). Comparing to values computed with the Savage-Dickey density ratio and Population Monte Carlo, we find good agreement of our method within the spread of the other two.
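A one-dimensional sketch of the idea, using the ordinary Box-Cox family and a maximum-likelihood grid search (the paper's transformations are multivariate generalizations of this, and the skewed sample below is synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.lognormal(mean=0.0, sigma=0.7, size=5000)   # skewed stand-in "posterior sample"

def boxcox(x, lam):
    """Box-Cox transform; the lam -> 0 limit is log(x)."""
    return np.log(x) if lam == 0 else (x**lam - 1.0) / lam

def boxcox_loglik(x, lam):
    """Profile log-likelihood of lam under a Gaussian target density."""
    y = boxcox(x, lam)
    n = x.size
    return -0.5 * n * np.log(y.var()) + (lam - 1.0) * np.log(x).sum()

# Maximum-likelihood search for the Gaussianizing parameter on a grid:
lams = np.linspace(-1.0, 2.0, 121)
best = max(lams, key=lambda l: boxcox_loglik(x, l))

def skew(y):
    y = y - y.mean()
    return (y**3).mean() / y.std()**3

s_before, s_after = skew(x), skew(boxcox(x, best))  # skewness should shrink
```

For lognormal data the Gaussianizing parameter is λ = 0 (the log transform), so the grid search should land near zero and the transformed sample should be nearly symmetric.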
General Metropolis-Hastings jump diffusions for automatic target recognition in infrared scenes
NASA Astrophysics Data System (ADS)
Lanterman, Aaron D.; Miller, Michael I.; Snyder, Donald L.
1997-04-01
To locate and recognize ground-based targets in forward-looking IR (FLIR) images, 3D faceted models with associated pose parameters are formulated to accommodate the variability found in FLIR imagery. Taking a Bayesian approach, scenes are simulated from the emissive characteristics of the CAD models and compared with the collected data by a likelihood function based on sensor statistics. This likelihood is combined with a prior distribution defined over the set of possible scenes to form a posterior distribution. To accommodate scenes with variable numbers of targets, the posterior distribution is defined over parameter vectors of varying dimension. An inference algorithm based on Metropolis-Hastings jump-diffusion processes empirically samples from the posterior distribution, generating configurations of templates and transformations that match the collected sensor data with high probability. The jumps accommodate the addition and deletion of targets and the estimation of target identities; diffusions refine the hypotheses by drifting along the gradient of the posterior distribution with respect to the orientation and position parameters. Previous results on jump strategies analogous to the Metropolis acceptance/rejection algorithm, with proposals drawn from the prior and accepted based on the likelihood, are extended to encompass general Metropolis-Hastings proposal densities. In particular, the algorithm proposes moves by drawing from the posterior distribution over computationally tractable subsets of the parameter space. The algorithm is illustrated by an implementation on a Silicon Graphics Onyx/Reality Engine.
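The jump and diffusion machinery aside, the acceptance rule at the heart of any Metropolis-Hastings sampler is compact. A minimal 1-D sketch with a symmetric random-walk proposal and a standard-normal target (standing in for the posterior, which is only needed up to a normalizing constant):

```python
import math
import random

random.seed(1)

def log_post(x):
    """Unnormalized log-density of the target (standard normal here)."""
    return -0.5 * x * x

x, chain = 0.0, []
for _ in range(20000):
    prop = x + random.gauss(0.0, 1.0)       # symmetric random-walk proposal
    # Metropolis-Hastings acceptance: for a symmetric proposal the
    # ratio reduces to the target-density ratio.
    if math.log(random.random()) < log_post(prop) - log_post(x):
        x = prop
    chain.append(x)

burned = chain[2000:]                        # discard burn-in
mean = sum(burned) / len(burned)
var = sum((v - mean) ** 2 for v in burned) / len(burned)
```

After burn-in the chain's sample mean and variance should approximate the target's (0 and 1), which is the same empirical-sampling property the scene-inference algorithm relies on.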
Reidenbach, M M
1995-01-01
The posterior cricothyroid ligament and its topographic relation to the inferior laryngeal nerve were studied in 54 human adult male and female larynges. Fourteen specimens were impregnated with curable polymers and cut into 600-800 µm sections along different planes. Forty formalin-fixed hemilarynges were dissected and various measurements were made. The posterior cricothyroid ligament provides a dorsal strengthening for the joint capsule of the cricothyroid joint. Its fibers spread in a fan-like manner from a small area of origin at the cricoid cartilage to a more extended area of attachment at the inferior thyroid cornu. The ligament consists of one (7.5%) to four (12.5%), in most cases of three (45.0%) or two (35.0%), individual parts oriented from mediocranial to laterocaudal. The inferior laryngeal nerve courses immediately dorsal to the ligament. In 60% it is covered by fibers of the posterior cricoarytenoid muscle; in the remaining 40% it is not. In the latter topographic situation there is almost no soft tissue interposed between the nerve and the hypopharynx. Therefore, the nerve may be exposed to pressure forces exerted from dorsally. It may be pushed against the unyielding posterior cricothyroid ligament and suffer functional or structural impairment. This mechanism probably explains some of the laryngeal nerve lesions described in the literature after insertion of gastric tubes.
DETECTING UNSPECIFIED STRUCTURE IN LOW-COUNT IMAGES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stein, Nathan M.; Dyk, David A. van; Kashyap, Vinay L.
Unexpected structure in images of astronomical sources often presents itself upon visual inspection of the image, but such apparent structure may either correspond to true features in the source or be due to noise in the data. This paper presents a method for testing whether inferred structure in an image with Poisson noise represents a significant departure from a baseline (null) model of the image. To infer image structure, we conduct a Bayesian analysis of a full model that uses a multiscale component to allow flexible departures from the posited null model. As a test statistic, we use a tail probability of the posterior distribution under the full model. This choice of test statistic allows us to estimate a computationally efficient upper bound on a p-value that enables us to draw strong conclusions even when there are limited computational resources that can be devoted to simulations under the null model. We demonstrate the statistical performance of our method on simulated images. Applying our method to an X-ray image of the quasar 0730+257, we find significant evidence against the null model of a single point source and uniform background, lending support to the claim of an X-ray jet.
Posterior labral injury in contact athletes.
Mair, S D; Zarzour, R H; Speer, K P
1998-01-01
Nine athletes (seven football offensive linemen, one defensive lineman, and one lacrosse player) were found at arthroscopy to have posterior labral detachment from the glenoid. In our series, this lesion is specific to contact athletes who engage their opponents with arms in front of the body. All patients had pain with bench pressing and while participating in their sport, diminishing their ability to play effectively. Conservative measures were ineffective in relieving their symptoms. Examination under anesthesia revealed symmetric glenohumeral translation bilaterally, without evidence of posterior instability. Treatment consisted of glenoid rim abradement and posterior labral repair with a bioabsorbable tack. All patients returned to complete at least one full season of contact sports and weightlifting without pain (minimum follow-up, > or = 2 years). Although many injuries leading to subluxation of the glenohumeral joint occur when an unanticipated force is applied, contact athletes ready their shoulder muscles in anticipation of impact with opponents. This leads to a compressive force at the glenohumeral joint. We hypothesize that, in combination with a posteriorly directed force at impact, the resultant vector is a shearing force to the posterior labrum and articular surface. Repeated exposure leads to posterior labral detachment without capsular injury. Posterior labral reattachment provides consistently good results, allowing the athlete to return to competition.
Wu, Shih-Wei; Delgado, Mauricio R.; Maloney, Laurence T.
2011-01-01
In decision under risk, people choose between lotteries that contain a list of potential outcomes paired with their probabilities of occurrence. We previously developed a method for translating such lotteries to mathematically equivalent motor lotteries. The probability of each outcome in a motor lottery is determined by the subject’s noise in executing a movement. In this study, we used functional magnetic resonance imaging in humans to compare the neural correlates of monetary outcome and probability in classical lottery tasks where information about probability was explicitly communicated to the subjects and in mathematically equivalent motor lottery tasks where probability was implicit in the subjects’ own motor noise. We found that activity in the medial prefrontal cortex (mPFC) and the posterior cingulate cortex (PCC) quantitatively represent the subjective utility of monetary outcome in both tasks. For probability, we found that the mPFC significantly tracked the distortion of such information in both tasks. Specifically, activity in mPFC represents probability information but not the physical properties of the stimuli correlated with this information. Together, the results demonstrate that mPFC represents probability from two distinct forms of decision under risk. PMID:21677166
An Empirical Bayes Approach to Spatial Analysis
NASA Technical Reports Server (NTRS)
Morris, C. N.; Kostal, H.
1983-01-01
Multi-channel LANDSAT data are collected in several passes over agricultural areas during the growing season. How empirical Bayes modeling can be used to develop crop identification and discrimination techniques that account for spatial correlation in such data is considered. The approach models the unobservable parameters and the data separately, hoping to take advantage of the fact that the bulk of spatial correlation lies in the parameter process. The problem is then framed in terms of estimating posterior probabilities of crop types for each spatial area. Some empirical Bayes spatial estimation methods are used to estimate the logits of these probabilities.
NASA Astrophysics Data System (ADS)
Linde, N.; Vrugt, J. A.
2009-04-01
Geophysical models are increasingly used in hydrological simulations and inversions, where they are typically treated as an artificial data source with known uncorrelated "data errors". The model appraisal problem in classical deterministic linear and non-linear inversion approaches based on linearization is often addressed by calculating model resolution and model covariance matrices. These measures offer only a limited potential to assign a more appropriate "data covariance matrix" for future hydrological applications, simply because the regularization operators used to construct a stable inverse solution bear a strong imprint on such estimates and because the non-linearity of the geophysical inverse problem is not explored. We present a parallelized Markov Chain Monte Carlo (MCMC) scheme to efficiently derive the posterior spatially distributed radar slowness and water content between boreholes given first-arrival traveltimes. This method is called DiffeRential Evolution Adaptive Metropolis (DREAM_ZS) with snooker updater and sampling from past states. Our inverse scheme does not impose any smoothness on the final solution, and uses uniform prior ranges of the parameters. The posterior distribution of radar slowness is converted into spatially distributed soil moisture values using a petrophysical relationship. To benchmark the performance of DREAM_ZS, we first apply our inverse method to a synthetic two-dimensional infiltration experiment using 9421 traveltimes contaminated with Gaussian errors and 80 different model parameters, corresponding to a model discretization of 0.3 m × 0.3 m. After this, the method is applied to field data acquired in the vadose zone during snowmelt. This work demonstrates that fully non-linear stochastic inversion can be applied with few limiting assumptions to a range of common two-dimensional tomographic geophysical problems. 
The main advantage of DREAM_ZS is that it provides a full view of the posterior distribution of spatially distributed soil moisture, which is key to appropriately treat geophysical parameter uncertainty and infer hydrologic models.
To P or Not to P: Backing Bayesian Statistics.
Buchinsky, Farrel J; Chadha, Neil K
2017-12-01
In biomedical research, it is imperative to differentiate chance variation from truth before we generalize what we see in a sample of subjects to the wider population. For decades, we have relied on null hypothesis significance testing, where we calculate P values for our data to decide whether to reject a null hypothesis. This methodology is subject to substantial misinterpretation and errant conclusions. Instead of working backward by calculating the probability of our data if the null hypothesis were true, Bayesian statistics allow us instead to work forward, calculating the probability of our hypothesis given the available data. This methodology gives us a mathematical means of incorporating our "prior probabilities" from previous study data (if any) to produce new "posterior probabilities." Bayesian statistics tell us how confidently we should believe what we believe. It is time to embrace and encourage their use in our otolaryngology research.
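The forward computation the authors advocate is a one-line application of Bayes' theorem. The prior and likelihood values below are made up purely to show how one study's posterior becomes the next study's prior:

```python
# Sketch of the forward (Bayesian) calculation: update the probability of a
# hypothesis H with the probability of the observed data under H and under
# its complement. All numbers are illustrative.
def posterior(prior, p_data_given_h, p_data_given_not_h):
    num = prior * p_data_given_h
    return num / (num + (1.0 - prior) * p_data_given_not_h)

# A prior of 0.20, with data 4x more likely under H than under not-H:
p1 = posterior(0.20, 0.80, 0.20)
# The posterior from one study serves as the prior for the next:
p2 = posterior(p1, 0.80, 0.20)
```

Here p1 = 0.5 and p2 = 0.8: two successive studies with the same evidence strength move belief from 20% to 80%, which is the "how confidently we should believe what we believe" arithmetic in concrete form.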
NASA Astrophysics Data System (ADS)
McNabb, Ryan P.; Viehland, Christian; Keller, Brenton; Vann, Robin R.; Izatt, Joseph A.; Kuo, Anthony N.
2017-02-01
Optical coherence tomography (OCT) has revolutionized clinical observation of the eye and is an indispensable part of modern ophthalmic practice. Unlike many other ophthalmic imaging techniques, OCT provides three-dimensional information about the imaged eye. However, conventional clinical OCT systems image only the anterior or the posterior eye during a single acquisition. Newer OCT systems have begun to image both during the same acquisition, but with compromises such as a limited field of view in the posterior eye or the need to switch rapidly between the anterior and posterior eye during the scan. We describe here the development and demonstration of an OCT system with truly simultaneous imaging of both the anterior and posterior eye, capable of imaging the full anterior chamber width and 50° on the retina (macula, optic nerve, and arcades). The whole eye OCT system was developed using custom optics and optomechanics. Polarization was utilized to separate the imaging channels. We utilized a 200 kHz swept-source laser (Axsun Technologies) centered at 1040 nm with ±50 nm of bandwidth. The clock signal generated by the laser was interpolated 4× to generate 5504 samples per laser sweep. With the whole eye OCT system, we simultaneously acquired anterior and posterior segments with repeated B-scans as well as three-dimensional volumes from seven volunteers who were healthy other than refractive error. On three of these volunteers, whole eye OCT and partial coherence interferometry (LenStar PCI, Haag-Streit) were used to measure axial eye length. We measured a mean repeatability of ±47 µm with whole eye OCT and a mean difference from PCI of −68 µm.
Boos, Moritz; Seer, Caroline; Lange, Florian; Kopp, Bruno
2016-01-01
Cognitive determinants of probabilistic inference were examined using hierarchical Bayesian modeling techniques. A classic urn-ball paradigm served as the experimental strategy, involving a factorial two (prior probabilities) by two (likelihoods) design. Five computational models of cognitive processes were compared with the observed behavior. Parameter-free Bayesian posterior probabilities and parameter-free base rate neglect provided inadequate models of probabilistic inference. The introduction of distorted subjective probabilities yielded more robust and generalizable results. A general class of (inverted) S-shaped probability weighting functions has been proposed; however, the possibility of large differences in probability distortions not only across experimental conditions, but also across individuals, seems critical for the model's success. It also seems advantageous to consider individual differences in parameters of probability weighting as being sampled from weakly informative prior distributions of individual parameter values. Thus, the results from hierarchical Bayesian modeling converge with previous results in revealing that probability weighting parameters show considerable task dependency and individual differences. Methodologically, this work exemplifies the usefulness of hierarchical Bayesian modeling techniques for cognitive psychology. Theoretically, human probabilistic inference might be best described as the application of individualized strategic policies for Bayesian belief revision. PMID:27303323
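One widely used member of the (inverted) S-shaped family is the Prelec weighting function; the abstract does not say which family was fitted here, and the distortion parameter alpha below is purely illustrative:

```python
import math

# Prelec probability weighting function w(p) = exp(-(-ln p)^alpha).
# With alpha < 1 it is inverse-S-shaped: small probabilities are
# overweighted and large ones underweighted, with a fixed point at p = 1/e.
def prelec(p, alpha=0.65):
    return math.exp(-((-math.log(p)) ** alpha))

w_small = prelec(0.01)   # overweighted relative to 0.01
w_large = prelec(0.99)   # underweighted relative to 0.99
```

Fitting alpha per subject (e.g., with subject-level parameters drawn from a weakly informative group-level prior) is exactly the kind of hierarchical individual-differences modeling the abstract describes.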
Intervals for posttest probabilities: a comparison of 5 methods.
Mossman, D; Berger, J O
2001-01-01
Several medical articles discuss methods of constructing confidence intervals for single proportions and the likelihood ratio, but scant attention has been given to the systematic study of intervals for the posterior odds, or the positive predictive value, of a test. The authors describe 5 methods of constructing confidence intervals for posttest probabilities when estimates of sensitivity, specificity, and the pretest probability of a disorder are derived from empirical data. They then evaluate each method to determine how well the intervals' coverage properties correspond to their nominal value. When the estimates of pretest probabilities, sensitivity, and specificity are derived from more than 80 subjects and are not close to 0 or 1, all methods generate intervals with appropriate coverage properties. When these conditions are not met, however, the best-performing method is an objective Bayesian approach implemented by a simple simulation using a spreadsheet. Physicians and investigators can generate accurate confidence intervals for posttest probabilities in small-sample situations using the objective Bayesian approach.
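The "simple simulation" form of the objective Bayesian interval can be sketched as a Monte Carlo loop: draw sensitivity, specificity, and pretest probability from Jeffreys Beta posteriors given study counts, push each draw through Bayes' theorem, and take the central 95% of the resulting posttest probabilities. The counts below are hypothetical, not from the paper:

```python
import random

random.seed(0)

def beta_jeffreys(successes, failures):
    """Draw from the Jeffreys posterior Beta(s + 1/2, f + 1/2)."""
    return random.betavariate(successes + 0.5, failures + 0.5)

draws = []
for _ in range(20000):
    sens = beta_jeffreys(45, 5)      # e.g., 45/50 true positives
    spec = beta_jeffreys(90, 10)     # e.g., 90/100 true negatives
    prev = beta_jeffreys(20, 80)     # e.g., 20/100 with the disorder
    # Positive predictive value for this joint draw (Bayes' theorem):
    ppv = prev * sens / (prev * sens + (1 - prev) * (1 - spec))
    draws.append(ppv)

draws.sort()
lo, hi = draws[int(0.025 * len(draws))], draws[int(0.975 * len(draws))]
```

With these counts the point estimate of the posttest probability is about 0.69, and the simulated 95% interval brackets it while reflecting uncertainty in all three inputs at once.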
[Isolated severe neurologic disorders in post-partum: posterior reversible encephalopathy syndrome].
Wernet, A; Benayoun, L; Yver, C; Bruno, O; Mantz, J
2007-01-01
Just after Caesarean section for twin pregnancy and fetopelvic disproportion, a woman presented with severe headaches and arterial hypertension, then blurred vision, then generalised seizures. There was no oedematous syndrome, proteinuria was negative, ASAT was 1.5 N, and the platelet count was 120,000/mm³. Cerebral CT-scan was normal. Posterior reversible encephalopathy syndrome (PRES) was diagnosed on MRI. A second MRI performed at day 9 showed complete regression of the cerebral lesions, while the patient was taking anti-hypertensive and antiepileptic drugs. PRES must be considered when central neurological symptoms occur in the post-partum period, even in the absence of classical signs of pre-eclampsia such as proteinuria. PRES and eclampsia probably share common pathophysiological pathways; their management and prognosis seem identical.
Bayesian inference of physiologically meaningful parameters from body sway measurements.
Tietäväinen, A; Gutmann, M U; Keski-Vakkuri, E; Corander, J; Hæggström, E
2017-06-19
The control of human body sway by the central nervous system, muscles, and conscious brain is of interest since body sway carries information about the physiological status of a person. Several models have been proposed to describe body sway in an upright standing position; however, due to the statistical intractability of the more realistic models, no formal parameter inference has previously been conducted, and the expressive power of such models for real human subjects remains unknown. Using the latest advances in Bayesian statistical inference for intractable models, we fitted a nonlinear control model to posturographic measurements, and we showed that it can accurately predict the sway characteristics of both simulated and real subjects. Our method provides a full statistical characterization of the uncertainty related to all model parameters as quantified by posterior probability density functions, which is useful for comparisons across subjects and test settings. The ability to infer intractable control models from sensor data opens new possibilities for monitoring and predicting body status in health applications.
Inferring the post-merger gravitational wave emission from binary neutron star coalescences
NASA Astrophysics Data System (ADS)
Chatziioannou, Katerina; Clark, James Alexander; Bauswein, Andreas; Millhouse, Margaret; Littenberg, Tyson B.; Cornish, Neil
2017-12-01
We present a robust method to characterize the gravitational wave emission from the remnant of a neutron star coalescence. Our approach makes only minimal assumptions about the morphology of the signal and provides a full posterior probability distribution of the underlying waveform. We apply our method on simulated data from a network of advanced ground-based detectors and demonstrate the gravitational wave signal reconstruction. We study the reconstruction quality for different binary configurations and equations of state for the colliding neutron stars. We show how our method can be used to constrain the yet-uncertain equation of state of neutron star matter. The constraints on the equation of state we derive are complementary to measurements of the tidal deformation of the colliding neutron stars during the late inspiral phase. In the case of nondetection of a post-merger signal following a binary neutron star inspiral, we show that we can place upper limits on the energy emitted.
Cole, Shelley A; Voruganti, V Saroja; Cai, Guowen; Haack, Karin; Kent, Jack W; Blangero, John; Comuzzie, Anthony G; McPherson, John D; Gibbs, Richard A
2010-01-01
Background: Melanocortin-4-receptor (MC4R) haploinsufficiency is the most common form of monogenic obesity; however, the frequency of MC4R variants and their functional effects in general populations remain uncertain. Objective: The aim was to identify and characterize the effects of MC4R variants in Hispanic children. Design: MC4R was resequenced in 376 parents, and the identified single nucleotide polymorphisms (SNPs) were genotyped in 613 parents and 1016 children from the Viva la Familia cohort. Measured genotype analysis (MGA) tested associations between SNPs and phenotypes. Bayesian quantitative trait nucleotide (BQTN) analysis was used to infer the most likely functional polymorphisms influencing obesity-related traits. Results: Seven rare SNPs in coding and 18 SNPs in flanking regions of MC4R were identified. MGA showed suggestive associations between MC4R variants and body size, adiposity, glucose, insulin, leptin, ghrelin, energy expenditure, physical activity, and food intake. BQTN analysis identified SNP 1704 in a predicted micro-RNA target sequence in the downstream flanking region of MC4R as a strong, probable functional variant influencing total, sedentary, and moderate activities with posterior probabilities of 1.0. SNP 2132 was identified as a variant with a high probability (1.0) of exerting a functional effect on total energy expenditure and sleeping metabolic rate. SNP rs34114122 was selected as having likely functional effects on the appetite hormone ghrelin, with a posterior probability of 0.81. Conclusion: This comprehensive investigation provides strong evidence that MC4R genetic variants are likely to play a functional role in the regulation of weight, not only through energy intake but through energy expenditure. PMID:19889825
Onai, Takayuki; Lin, Hsiu-Chin; Schubert, Michael; Koop, Demian; Osborne, Peter W; Alvarez, Susana; Alvarez, Rosana; Holland, Nicholas D; Holland, Linda Z
2009-08-15
A role for Wnt/beta-catenin signaling in axial patterning has been demonstrated in animals as basal as cnidarians, while roles in axial patterning for retinoic acid (RA) probably evolved in the deuterostomes and may be chordate-specific. In vertebrates, these two pathways interact both directly and indirectly. To investigate the evolutionary origins of interactions between these two pathways, we manipulated Wnt/beta-catenin and RA signaling in the basal chordate amphioxus during the gastrula stage, which is the RA-sensitive period for anterior/posterior (A/P) patterning. The results show that Wnt/beta-catenin and RA signaling have distinctly different roles in patterning the A/P axis of the amphioxus gastrula. Wnt/beta-catenin specifies the identity of the ends of the embryo (high Wnt = posterior; low Wnt = anterior) but not intervening positions. Thus, upregulation of Wnt/beta-catenin signaling induces ectopic expression of posterior markers at the anterior tip of the embryo. In contrast, RA specifies position along the A/P axis, but not the identity of the ends of the embryo: increased RA signaling strongly affects the domains of Hox expression along the A/P axis but has little or no effect on the expression of either anterior or posterior markers. Although the two pathways may both influence such things as specification of neuronal identity, interactions between them in A/P patterning appear to be minimal.
O'Reilly, Joseph E; Donoghue, Philip C J
2018-03-01
Consensus trees are required to summarize trees obtained through MCMC sampling of a posterior distribution, providing an overview of the distribution of estimated parameters such as topology, branch lengths, and divergence times. Numerous consensus tree construction methods are available, each presenting a different interpretation of the tree sample. The rise of morphological clock and sampled-ancestor methods of divergence time estimation, in which times and topology are coestimated, has increased the popularity of the maximum clade credibility (MCC) consensus tree method. The MCC method assumes that the sampled, fully resolved topology with the highest clade credibility is an adequate summary of the most probable clades, with parameter estimates from compatible sampled trees used to obtain the marginal distributions of parameters such as clade ages and branch lengths. Using both simulated and empirical data, we demonstrate that MCC trees, and trees constructed using the similar maximum a posteriori (MAP) method, often include poorly supported and incorrect clades when summarizing diffuse posterior samples of trees. We demonstrate that the paucity of information in morphological data sets contributes to the inability of MCC and MAP trees to accurately summarize the posterior distribution. Conversely, majority-rule consensus (MRC) trees represent a lower proportion of incorrect nodes when summarizing the same posterior samples of trees. Thus, we advocate the use of MRC trees, in place of MCC or MAP trees, in attempts to summarize the results of Bayesian phylogenetic analyses of morphological data.
O’Reilly, Joseph E; Donoghue, Philip C J
2018-01-01
Abstract Consensus trees are required to summarize trees obtained through MCMC sampling of a posterior distribution, providing an overview of the distribution of estimated parameters such as topology, branch lengths, and divergence times. Numerous consensus tree construction methods are available, each presenting a different interpretation of the tree sample. The rise of morphological clock and sampled-ancestor methods of divergence time estimation, in which times and topology are coestimated, has increased the popularity of the maximum clade credibility (MCC) consensus tree method. The MCC method assumes that the sampled, fully resolved topology with the highest clade credibility is an adequate summary of the most probable clades, with parameter estimates from compatible sampled trees used to obtain the marginal distributions of parameters such as clade ages and branch lengths. Using both simulated and empirical data, we demonstrate that MCC trees, and trees constructed using the similar maximum a posteriori (MAP) method, often include poorly supported and incorrect clades when summarizing diffuse posterior samples of trees. We demonstrate that the paucity of information in morphological data sets contributes to the inability of MCC and MAP trees to accurately summarize the posterior distribution. Conversely, majority-rule consensus (MRC) trees represent a lower proportion of incorrect nodes when summarizing the same posterior samples of trees. Thus, we advocate the use of MRC trees, in place of MCC or MAP trees, in attempts to summarize the results of Bayesian phylogenetic analyses of morphological data. PMID:29106675
Neural dynamics of reward probability coding: a Magnetoencephalographic study in humans
Thomas, Julie; Vanni-Mercier, Giovanna; Dreher, Jean-Claude
2013-01-01
Prediction of future rewards and discrepancy between actual and expected outcomes (prediction error) are crucial signals for adaptive behavior. In humans, a number of fMRI studies demonstrated that reward probability modulates these two signals in a large brain network. Yet, the spatio-temporal dynamics underlying the neural coding of reward probability remains unknown. Here, using magnetoencephalography, we investigated the neural dynamics of prediction and reward prediction error computations while subjects learned to associate cues of slot machines with monetary rewards with different probabilities. We showed that event-related magnetic fields (ERFs) arising from the visual cortex coded the expected reward value 155 ms after the cue, demonstrating that reward value signals emerge early in the visual stream. Moreover, a prediction error was reflected in an ERF peaking 300 ms after the rewarded outcome and showing decreasing amplitude with higher reward probability. This prediction error signal was generated in a network including the anterior and posterior cingulate cortex. These findings pinpoint the spatio-temporal characteristics underlying reward probability coding. Together, our results provide insights into the neural dynamics underlying the ability to learn probabilistic stimulus-reward contingencies. PMID:24302894
Forecasts of non-Gaussian parameter spaces using Box-Cox transformations
NASA Astrophysics Data System (ADS)
Joachimi, B.; Taylor, A. N.
2011-09-01
Forecasts of statistical constraints on model parameters using the Fisher matrix abound in many fields of astrophysics. The Fisher matrix formalism involves the assumption of Gaussianity in parameter space and hence fails to predict complex features of posterior probability distributions. Combining the standard Fisher matrix with Box-Cox transformations, we propose a novel method that accurately predicts arbitrary posterior shapes. The Box-Cox transformations are applied to parameter space to render it approximately multivariate Gaussian, performing the Fisher matrix calculation on the transformed parameters. We demonstrate that, after the Box-Cox parameters have been determined from an initial likelihood evaluation, the method correctly predicts changes in the posterior when varying various parameters of the experimental setup and the data analysis, with marginally higher computational cost than a standard Fisher matrix calculation. We apply the Box-Cox-Fisher formalism to forecast cosmological parameter constraints by future weak gravitational lensing surveys. The characteristic non-linear degeneracy between matter density parameter and normalization of matter density fluctuations is reproduced for several cases, and the capability of breaking this degeneracy with weak-lensing three-point statistics is investigated. Possible applications of Box-Cox transformations of posterior distributions are discussed, including the prospects for performing statistical data analysis steps in the transformed Gaussianized parameter space.
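The core of the Box-Cox-Fisher idea, reduced to a single parameter, can be sketched as follows: fit a Box-Cox transformation to samples from an initial likelihood evaluation, form the Gaussian (Fisher-matrix-style) summary in the transformed space, and map predicted intervals back through the inverse transform. This is a minimal illustration, not the authors' pipeline; the gamma "posterior" and all scales are invented for the example.

```python
import numpy as np
from scipy import special, stats

rng = np.random.default_rng(0)
# Skewed stand-in for samples from an initial likelihood evaluation.
samples = rng.gamma(shape=2.0, scale=1.5, size=20_000)

# Gaussianize: scipy fits the Box-Cox lambda by maximum likelihood.
z, lam = stats.boxcox(samples)

# In the transformed space a Gaussian summary is a good approximation.
mu, sigma = z.mean(), z.std(ddof=1)

# Map a central 68% interval back through the inverse transform to
# recover the (skewed) interval in the original parameter space.
lo = special.inv_boxcox(mu - sigma, lam)
hi = special.inv_boxcox(mu + sigma, lam)
```

Changes to the experimental setup can then be propagated in the Gaussianized space at the cost of a standard Fisher calculation.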
Mollet, Pierre; Kery, Marc; Gardner, Beth; Pasinelli, Gilberto; Royle, Andy
2015-01-01
We conducted a survey of an endangered and cryptic forest grouse, the capercaillie Tetrao urogallus, based on droppings collected on two sampling occasions in eight forest fragments in central Switzerland in early spring 2009. We used genetic analyses to sex and individually identify birds. We estimated sex-dependent detection probabilities and population size using a modern spatial capture-recapture (SCR) model for the data from pooled surveys. A total of 127 capercaillie genotypes were identified (77 males, 46 females, and 4 of unknown sex). The SCR model yielded a total population size estimate (posterior mean) of 137.3 capercaillies (posterior sd 4.2, 95% CRI 130–147). The observed sex ratio was skewed towards males (0.63). The posterior mean of the sex ratio under the SCR model was 0.58 (posterior sd 0.02, 95% CRI 0.54–0.61), suggesting a male-biased sex ratio in our study area. A subsampling simulation study indicated that a reduced sampling effort representing 75% of the actual detections would still yield practically acceptable estimates of total size and sex ratio in our population. Hence, field work and financial effort could be reduced without compromising accuracy when the SCR model is used to estimate key population parameters of cryptic species.
Dimension-independent likelihood-informed MCMC
Cui, Tiangang; Law, Kody J. H.; Marzouk, Youssef M.
2015-10-08
Many Bayesian inference problems require exploring the posterior distribution of high-dimensional parameters that represent the discretization of an underlying function. Our work introduces a family of Markov chain Monte Carlo (MCMC) samplers that can adapt to the particular structure of a posterior distribution over functions. There are two distinct lines of research that intersect in the methods we develop here. First, we introduce a general class of operator-weighted proposal distributions that are well defined on function space, such that the performance of the resulting MCMC samplers is independent of the discretization of the function. Second, by exploiting local Hessian information and any associated low-dimensional structure in the change from prior to posterior distributions, we develop an inhomogeneous discretization scheme for the Langevin stochastic differential equation that yields operator-weighted proposals adapted to the non-Gaussian structure of the posterior. The resulting dimension-independent and likelihood-informed (DILI) MCMC samplers may be useful for a large class of high-dimensional problems where the target probability measure has a density with respect to a Gaussian reference measure. Finally, we use two nonlinear inverse problems in order to demonstrate the efficiency of these DILI samplers: an elliptic PDE coefficient inverse problem and path reconstruction in a conditioned diffusion.
Bayesian Travel Time Inversion adopting Gaussian Process Regression
NASA Astrophysics Data System (ADS)
Mauerberger, S.; Holschneider, M.
2017-12-01
A major application in seismology is the determination of seismic velocity models. Travel time measurements place an integral constraint on the velocity between source and receiver. We provide insight into travel time inversion from a correlation-based Bayesian point of view, adopting the concept of Gaussian process regression to estimate a velocity model. The non-linear travel time integral is approximated by a first-order Taylor expansion. A heuristic covariance describes correlations among observations and the a priori model. This approach enables us to assess a proxy of the Bayesian posterior distribution at ordinary computational cost; no multi-dimensional numerical integration or excessive sampling is necessary. Instead of stacking the data, we suggest progressively building the posterior distribution: incorporating only a single piece of evidence at a time accounts for the deficit of linearization. As a result, the most probable model is given by the posterior mean, whereas uncertainties are described by the posterior covariance. As a proof of concept, a purely synthetic 1-D model is addressed, with a single source accompanied by multiple receivers on top of a model comprising a discontinuity. We consider travel times of both phases, the direct and the reflected wave, corrupted by noise. The regions left and right of the interface are assumed independent, with the squared-exponential kernel serving as covariance.
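Once the travel time integral is linearized, the Gaussian process update is available in closed form. A toy 1-D sketch follows; the grid, kernel scales, receiver depths, and noise level are illustrative assumptions, not the authors' setup:

```python
import numpy as np

# Grid of depths at which the slowness (1/velocity) model is defined.
x = np.linspace(0.0, 10.0, 101)
dx = x[1] - x[0]

# Squared-exponential prior covariance on slowness (assumed scales).
ell, sig = 2.0, 0.05
C = sig**2 * np.exp(-0.5 * (x[:, None] - x[None, :])**2 / ell**2)
m = np.full_like(x, 0.25)          # prior mean slowness (s/km)

# Linearized forward operator: each travel time integrates slowness
# from the surface down to a receiver depth (one row per receiver).
depths = np.array([3.0, 6.0, 9.0])
A = np.array([(x <= d) * dx for d in depths])

# Noisy synthetic travel times from a "true" model.
true = 0.25 + 0.03 * np.sin(x)
rng = np.random.default_rng(1)
d_obs = A @ true + rng.normal(0, 0.01, size=len(depths))
R = 0.01**2 * np.eye(len(depths))  # observation-noise covariance

# Analytic Gaussian posterior (the GP regression update).
K = C @ A.T @ np.linalg.inv(A @ C @ A.T + R)
post_mean = m + K @ (d_obs - A @ m)
post_cov = C - K @ A @ C
```

Incorporating the observations one at a time, as the abstract suggests, amounts to repeating this update with a single row of `A` per step.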
Bobb, Jennifer F; Dominici, Francesca; Peng, Roger D
2011-12-01
Estimating the risks heat waves pose to human health is a critical part of assessing the future impact of climate change. In this article, we propose a flexible class of time series models to estimate the relative risk of mortality associated with heat waves and conduct Bayesian model averaging (BMA) to account for the multiplicity of potential models. Applying these methods to data from 105 U.S. cities for the period 1987-2005, we identify those cities having a high posterior probability of increased mortality risk during heat waves, examine the heterogeneity of the posterior distributions of mortality risk across cities, assess sensitivity of the results to the selection of prior distributions, and compare our BMA results to a model selection approach. Our results show that no single model best predicts risk across the majority of cities, and that for some cities heat-wave risk estimation is sensitive to model choice. Although model averaging leads to posterior distributions with increased variance as compared to statistical inference conditional on a model obtained through model selection, we find that the posterior mean of heat wave mortality risk is robust to accounting for model uncertainty over a broad class of models. © 2011, The International Biometric Society.
Characteristics of Chinese-English bilingual dyslexia in right occipito-temporal lesion.
Ting, Simon Kang Seng; Chia, Pei Shi; Chan, Yiong Huak; Kwek, Kevin Jun Hong; Tan, Wilnard; Hameed, Shahul; Tan, Eng-King
2017-11-01
Current literature suggests that right hemisphere lesions produce predominant spatial-related dyslexic error in English speakers. However, little is known regarding such lesions in Chinese speakers. In this paper, we describe the dyslexic characteristics of a Chinese-English bilingual patient with a right posterior cortical lesion. He was found to have profound spatial-related errors during his English word reading, in both real and non-words. During Chinese word reading, there was significantly less error compared to English, probably due to the ideographic nature of the Chinese language. He was also found to commit phonological-like visual errors in English, characterized by error responses that were visually similar to the actual word. There was no significant difference in visual errors during English word reading compared with Chinese. In general, our patient's performance in both languages appears to be consistent with the current literature on right posterior hemisphere lesions. Additionally, his performance also likely suggests that the right posterior cortical region participates in the visual analysis of orthographical word representation, both in ideographical and alphabetic languages, at least from a bilingual perspective. Future studies should further examine the role of the right posterior region in initial visual analysis of both languages. Copyright © 2017 Elsevier Ltd. All rights reserved.
Silicone intraocular lens surface calcification in a patient with asteroid hyalosis.
Matsumura, Kazuhiro; Takano, Masahiko; Shimizu, Kimiya; Nemoto, Noriko
2012-07-01
To confirm a substance presence on the posterior intraocular lens (IOL) surface in a patient with asteroid hyalosis. An 80-year-old man had IOLs for approximately 12 years. Opacities and neodymium-doped yttrium aluminum garnet pits were observed on the posterior surface of the right IOL. Asteroid hyalosis and an epiretinal membrane were observed OD. An IOL exchange was performed on 24 March 2008, and the explanted IOL was analyzed using a light microscope and a transmission electron microscope with a scanning electron micrograph and an energy-dispersive X-ray spectrometer for elemental analysis. To confirm asteroid hyalosis, asteroid bodies were examined with the ionic liquid (EtMeIm+ BF4-) method using a field emission scanning electron microscope (FE-SEM) with digital beam control RGB mapping. X-ray spectrometry of the deposits revealed high calcium and phosphorus peaks. Spectrometry revealed that the posterior IOL surface opacity was due to a calcium-phosphorus compound. Examination of the asteroid bodies using FE-SEM with digital beam control RGB mapping confirmed calcium and phosphorus as the main components. Calcium hydrogen phosphate dihydrate deposits were probably responsible for the posterior IOL surface opacity. Furthermore, analysis of the asteroid bodies demonstrated that calcium and phosphorus were its main components.
Failure analysis of various monolithic posterior aesthetic dental crowns using finite element method
NASA Astrophysics Data System (ADS)
Porojan, Liliana; Topală, Florin
2017-08-01
The aim of the study was to assess the effect of material stiffness and load on the biomechanical performance of monolithic full-coverage posterior aesthetic dental crowns using finite element analysis. Three restorative materials for monolithic dental crowns were selected for the study: zirconia; lithium disilicate glass-ceramic; and resin-based composite. Stresses were calculated in the crowns for all materials and in the tooth structures, under different load values. The experiments show that crowns made from all of these new aesthetic materials, processed by CAD/CAM technologies, would be indicated as monolithic dental crowns for posterior areas.
Preformed posterior stainless steel crowns: an update.
Croll, T P
1999-02-01
For almost 50 years, dentists have used stainless steel crowns for primary and permanent posterior teeth. No other type of restoration offers the convenience, low cost, durability, and reliability of such crowns when interim full-coronal coverage is required. Preformed stainless steel crowns have improved over the years. Better luting cements have been developed and different methods of crown manipulation have evolved. This article reviews stainless steel crown procedures for primary and permanent posterior teeth. Step-by-step placement of a primary molar stainless steel crown is documented and permanent molar stainless steel crown restoration is described. A method for repairing a worn-through crown also is reviewed.
A bayesian analysis for identifying DNA copy number variations using a compound poisson process.
Chen, Jie; Yiğiter, Ayten; Wang, Yu-Ping; Deng, Hong-Wen
2010-01-01
To study chromosomal aberrations that may lead to cancer formation or genetic diseases, the array-based Comparative Genomic Hybridization (aCGH) technique is often used for detecting DNA copy number variants (CNVs). Various methods have been developed for obtaining CNV information from aCGH data. However, most of these methods make use of the log-intensity ratios in aCGH data without taking advantage of other information such as the DNA probe (e.g., biomarker) positions/distances contained in the data. Motivated by the specific features of aCGH data, we developed a novel method that estimates the change point, or locus of a CNV, in aCGH data together with its associated biomarker position on the chromosome using a compound Poisson process. We used a Bayesian approach to derive the posterior probability for the estimation of the CNV locus. To detect the loci of multiple CNVs in the data, we proposed a sliding-window process combined with the derived Bayesian posterior probability. To evaluate the performance of the method in the estimation of the CNV locus, we first performed simulation studies. Finally, we applied our approach to real data from aCGH experiments, demonstrating its applicability.
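A stripped-down version of the change-point posterior (a single CNV, known noise, a flat prior over loci, and without the compound-Poisson position model of the paper) can be computed directly over the discrete set of candidate loci. All simulation parameters below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
# Simulated aCGH log2 ratios: a copy-number gain starting at probe 60.
y = np.concatenate([rng.normal(0.0, 0.2, 60), rng.normal(0.6, 0.2, 40)])
n = len(y)
sigma = 0.2  # assumed known noise standard deviation

# Flat prior over the change-point index k; Gaussian likelihood with
# segment means set to the segment averages (an empirical shortcut).
logpost = np.full(n, -np.inf)
for k in range(2, n - 2):
    left, right = y[:k], y[k:]
    ss = ((left - left.mean())**2).sum() + ((right - right.mean())**2).sum()
    logpost[k] = -ss / (2 * sigma**2)

# Normalize to a discrete posterior and take the MAP locus.
logpost -= logpost.max()
post = np.exp(logpost)
post /= post.sum()
k_hat = int(post.argmax())
```

Sliding this computation along the chromosome in windows recovers the multi-CNV detection scheme the abstract describes.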
Bayesian methods for outliers detection in GNSS time series
NASA Astrophysics Data System (ADS)
Qianqian, Zhang; Qingming, Gui
2013-07-01
This article is concerned with the problem of detecting outliers in GNSS time series based on Bayesian statistical theory. Firstly, a new model is proposed to simultaneously detect different types of outliers, based on the idea of introducing a distinct classification variable for each type of outlier; the problem of outlier detection is converted into the computation of the corresponding posterior probabilities, and an algorithm for computing these posterior probabilities based on a standard Gibbs sampler is designed. Secondly, we analyze in detail the causes of masking and swamping in the detection of patches of additive outliers, and propose an unmasking Bayesian method for detecting additive outlier patches based on an adaptive Gibbs sampler. Thirdly, the correctness of the theories and methods proposed above is illustrated with simulated data and then by analyzing real GNSS observations, such as cycle slip detection in carrier phase data. Examples illustrate that the Bayesian methods for outlier detection in GNSS time series proposed in this paper are capable of detecting not only isolated outliers but also additive outlier patches. Furthermore, they can be successfully used to process cycle slips in phase data, which solves the problem of small cycle slips.
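The classification-variable idea can be illustrated without a full Gibbs sampler: under a simple two-component variance-inflation model, the full conditional posterior probability that each epoch carries an additive outlier, which is the distribution a Gibbs sampler would draw the indicator from, is available in closed form. The prior outlier rate, inflation factor, and noise scale below are assumed values, not those of the paper:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
y = rng.normal(0.0, 1.0, 200)
y[50] += 8.0   # plant an isolated additive outlier

mu, sigma = np.median(y), 1.0   # robust location, assumed known scale
p, k = 0.05, 10.0               # prior outlier rate, variance inflation

# Posterior probability of the outlier indicator for each epoch:
# P(outlier | y_i) under the two-component Gaussian mixture.
num = p * stats.norm.pdf(y, mu, k * sigma)
den = num + (1 - p) * stats.norm.pdf(y, mu, sigma)
post = num / den
flagged = np.flatnonzero(post > 0.5)
```

Patches of adjacent outliers mask one another under such independent indicators, which is what motivates the adaptive blocked sampler of the paper.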
Probabilistic graphs as a conceptual and computational tool in hydrology and water management
NASA Astrophysics Data System (ADS)
Schoups, Gerrit
2014-05-01
Originally developed in the fields of machine learning and artificial intelligence, probabilistic graphs constitute a general framework for modeling complex systems in the presence of uncertainty. The framework consists of three components: 1. Representation of the model as a graph (or network), with nodes depicting random variables in the model (e.g. parameters, states, etc), which are joined together by factors. Factors are local probabilistic or deterministic relations between subsets of variables, which, when multiplied together, yield the joint distribution over all variables. 2. Consistent use of probability theory for quantifying uncertainty, relying on basic rules of probability for assimilating data into the model and expressing unknown variables as a function of observations (via the posterior distribution). 3. Efficient, distributed approximation of the posterior distribution using general-purpose algorithms that exploit model structure encoded in the graph. These attributes make probabilistic graphs potentially useful as a conceptual and computational tool in hydrology and water management (and beyond). Conceptually, they can provide a common framework for existing and new probabilistic modeling approaches (e.g. by drawing inspiration from other fields of application), while computationally they can make probabilistic inference feasible in larger hydrological models. The presentation explores, via examples, some of these benefits.
Nonlinear detection for a high rate extended binary phase shift keying system.
Chen, Xian-Qing; Wu, Le-Nan
2013-03-28
The algorithm and the results of a nonlinear detector using a machine learning technique called the support vector machine (SVM) on an efficient modulation system with high data rate and low energy consumption are presented in this paper. Simulation results showed that the performance achieved by the SVM detector is comparable to that of a conventional threshold decision (TD) detector. The two detectors detect the received signals together with the special impacting filter (SIF) that can improve the energy utilization efficiency. However, unlike the TD detector, the SVM detector concentrates not only on reducing the BER of the detector, but also on providing accurate posterior probability estimates (PPEs), which can be used as soft inputs of the LDPC decoder. The complexity of this detector is considered in this paper by using four features and simplifying the decision function. In addition, a bandwidth-efficient transmission is analyzed with both the SVM and TD detectors. The SVM detector is more robust to sampling rate than the TD detector. We find that the SVM is suitable for extended binary phase shift keying (EBPSK) signal detection and can provide accurate posterior probabilities for LDPC decoding.
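As a rough illustration of posterior probability estimates from an SVM, the sketch below uses scikit-learn's Platt-scaling option on a toy binary symbol stream; the symbol model and noise level are invented and this is not the authors' EBPSK detector or feature set:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(4)
# Toy received samples: two symbol classes in additive Gaussian noise,
# a stand-in for EBPSK features after the special impacting filter.
n = 1000
bits = rng.integers(0, 2, n)
X = (2.0 * bits - 1.0)[:, None] + rng.normal(0, 0.6, (n, 1))

# probability=True enables Platt scaling, which calibrates the SVM
# margin into posterior probability estimates; these are the kind of
# soft values an LDPC decoder consumes.
clf = SVC(kernel="rbf", probability=True, random_state=0).fit(X, bits)

x_new = np.array([[0.9], [-1.1], [0.05]])
probs = clf.predict_proba(x_new)[:, 1]   # P(bit = 1 | sample)
```

A hard-threshold detector would output only 0/1 decisions for `x_new`; the probabilistic output is what distinguishes the SVM detector in the paper's soft-decoding setting.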
Nonlinear Detection for a High Rate Extended Binary Phase Shift Keying System
Chen, Xian-Qing; Wu, Le-Nan
2013-01-01
The algorithm and the results of a nonlinear detector using a machine learning technique called the support vector machine (SVM) on an efficient modulation system with high data rate and low energy consumption are presented in this paper. Simulation results showed that the performance achieved by the SVM detector is comparable to that of a conventional threshold decision (TD) detector. The two detectors detect the received signals together with the special impacting filter (SIF) that can improve the energy utilization efficiency. However, unlike the TD detector, the SVM detector concentrates not only on reducing the BER of the detector, but also on providing accurate posterior probability estimates (PPEs), which can be used as soft inputs of the LDPC decoder. The complexity of this detector is considered in this paper by using four features and simplifying the decision function. In addition, a bandwidth-efficient transmission is analyzed with both the SVM and TD detectors. The SVM detector is more robust to sampling rate than the TD detector. We find that the SVM is suitable for extended binary phase shift keying (EBPSK) signal detection and can provide accurate posterior probabilities for LDPC decoding. PMID:23539034
Relevance Vector Machine Learning for Neonate Pain Intensity Assessment Using Digital Imaging
Gholami, Behnood; Tannenbaum, Allen R.
2011-01-01
Pain assessment in patients who are unable to verbally communicate is a challenging problem. The fundamental limitations in pain assessment in neonates stem from subjective assessment criteria, rather than quantifiable and measurable data. This often results in poor quality and inconsistent treatment of patient pain management. Recent advancements in pattern recognition techniques using relevance vector machine (RVM) learning techniques can assist medical staff in assessing pain by constantly monitoring the patient and providing the clinician with quantifiable data for pain management. The RVM classification technique is a Bayesian extension of the support vector machine (SVM) algorithm, which achieves comparable performance to SVM while providing posterior probabilities for class memberships and a sparser model. If classes represent “pure” facial expressions (i.e., extreme expressions that an observer can identify with a high degree of confidence), then the posterior probability of the membership of some intermediate facial expression to a class can provide an estimate of the intensity of such an expression. In this paper, we use the RVM classification technique to distinguish pain from nonpain in neonates as well as assess their pain intensity levels. We also correlate our results with the pain intensity assessed by expert and nonexpert human examiners. PMID:20172803
NASA Astrophysics Data System (ADS)
Pipień, M.
2008-09-01
We present the results of an application of Bayesian inference in testing the relation between risk and return on financial instruments. On the basis of the Intertemporal Capital Asset Pricing Model proposed by Merton, we build a general sampling distribution suitable for analysing this relationship. The most important feature of our assumptions is that the skewness of the conditional distribution of returns is used as an alternative source of the relation between risk and return. This general specification relates to the Skewed Generalized Autoregressive Conditionally Heteroscedastic-in-Mean model. In order to make the conditional distribution of financial returns skewed, we considered a unified approach based on the inverse probability integral transformation. In particular, we applied the hidden truncation mechanism, inverse scale factors, the order statistics concept, Beta and Bernstein distribution transformations, and also a constructive method. Based on the daily excess returns on the Warsaw Stock Exchange Index, we checked the empirical importance of the conditional skewness assumption for the relation between risk and return on the Warsaw Stock Market. We present posterior probabilities of all competing specifications as well as the posterior analysis of the positive sign of the tested relationship.
The hippocampal longitudinal axis-relevance for underlying tau and TDP-43 pathology.
Lladó, Albert; Tort-Merino, Adrià; Sánchez-Valle, Raquel; Falgàs, Neus; Balasa, Mircea; Bosch, Beatriz; Castellví, Magda; Olives, Jaume; Antonell, Anna; Hornberger, Michael
2018-06-01
Recent studies suggest that the hippocampus has different cortical connectivity and functionality along its longitudinal axis. We sought to elucidate possible differences in the pattern of atrophy along the longitudinal axis of the hippocampus between Amyloid/Tau pathology and TDP-43-pathies. Seventy-three presenile subjects were included: an Amyloid/Tau group (33 Alzheimer's disease with confirmed cerebrospinal fluid [CSF] biomarkers), a probable TDP-43 group (7 semantic variant primary progressive aphasia, 5 GRN and 2 C9orf72 mutation carriers), and 26 healthy controls. We conducted a region-of-interest voxel-based morphometry analysis on the hippocampal longitudinal axis, contrasting the groups, covarying with CSF biomarkers (Aβ42, total tau, p-tau) and with episodic memory scores. Amyloid/Tau pathology affected mainly the posterior hippocampus, while the anterior left hippocampus was more atrophied in probable TDP-43-pathies. We also observed a significant correlation of posterior hippocampal atrophy with Alzheimer's disease CSF biomarkers and visual memory scores. Taken together, these data suggest that there is a potential differentiation along the hippocampal longitudinal axis based on the underlying pathology, which could be used as a potential biomarker to identify the underlying pathology in different neurodegenerative diseases. Copyright © 2018 Elsevier Inc. All rights reserved.
Bayesian soft X-ray tomography using non-stationary Gaussian Processes
NASA Astrophysics Data System (ADS)
Li, Dong; Svensson, J.; Thomsen, H.; Medina, F.; Werner, A.; Wolf, R.
2013-08-01
In this study, a Bayesian-based non-stationary Gaussian Process (GP) method for the inference of the soft X-ray emissivity distribution, along with its associated uncertainties, has been developed. For the investigation of equilibrium conditions and fast magnetohydrodynamic behaviors in nuclear fusion plasmas, it is important to infer, especially in the plasma center, spatially resolved soft X-ray profiles from a limited number of noisy line-integral measurements. For this ill-posed inversion problem, Bayesian probability theory can provide a posterior probability distribution over all possible solutions under given model assumptions. Specifically, the use of a non-stationary GP to model the emission allows the model to adapt to the varying length scales of the underlying diffusion process. In contrast to conventional methods, the prior regularization is realized in probabilistic form, which enhances the capability of uncertainty analysis; in consequence, scientists concerned about the reliability of their results will benefit from it. Under the assumption of normally distributed noise, the posterior distribution evaluated at a discrete number of points becomes a multivariate normal distribution whose mean and covariance are analytically available, making inversions and the calculation of uncertainty fast. Additionally, the hyper-parameters embedded in the model assumptions can be optimized through a Bayesian Occam's Razor formalism and thereby automatically adjust the model complexity. This method is shown to produce convincing reconstructions and good agreement with independently calculated results from the Maximum Entropy and Equilibrium-Based Iterative Tomography Algorithm methods.
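The two ingredients, a non-stationary covariance and an analytic Gaussian posterior under line-integral data, can be sketched as follows. The Gibbs form of the non-stationary squared-exponential kernel and all scales below are illustrative assumptions, not the experimental configuration of the paper:

```python
import numpy as np

# Non-stationary (Gibbs) squared-exponential kernel: the length scale
# l(x) varies over the domain, letting the GP adapt to short scales in
# the plasma core and longer scales at the edge (values are assumed).
def gibbs_kernel(x, lengths, sig=1.0):
    lx, ly = np.meshgrid(lengths, lengths, indexing="ij")
    dx = x[:, None] - x[None, :]
    s = lx**2 + ly**2
    return sig**2 * np.sqrt(2 * lx * ly / s) * np.exp(-dx**2 / s)

x = np.linspace(-1.0, 1.0, 80)
lengths = 0.1 + 0.4 * x**2        # shortest scales near the center
C = gibbs_kernel(x, lengths)

# One line-integral observation (the row sums emissivity along a
# chord) with Gaussian noise yields an analytic normal posterior.
A = np.ones((1, len(x))) * (x[1] - x[0])
d = np.array([0.5])
R = np.array([[1e-4]])
K = C @ A.T @ np.linalg.inv(A @ C @ A.T + R)
post_mean = K @ d
post_cov = C - K @ A @ C
```

With many chords, `A` gains one row per detector line of sight, and the same closed-form update gives both the reconstruction and its pointwise uncertainty.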
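The analytic Gaussian posterior the abstract above refers to can be sketched in a few lines. This is a 1D toy analogue (not the authors' code): an emissivity profile is inferred from a handful of noisy line-integral "chord" measurements under a GP prior. A stationary squared-exponential kernel stands in for the paper's non-stationary kernel; only the construction of K would change, the posterior formulas are identical. All variable names and numbers are invented for illustration.

```python
import numpy as np

# Grid and a stationary squared-exponential prior kernel (stand-in for the
# non-stationary kernel of the paper)
n = 50
x = np.linspace(0.0, 1.0, n)
ell, amp = 0.15, 1.0
K = amp * np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / ell**2)

# Each row of R integrates f over a short sub-interval (a "chord")
R = np.zeros((5, n))
for i in range(5):
    R[i, i * 8 : i * 8 + 12] = 1.0 / 12

rng = np.random.default_rng(0)
f_true = np.exp(-0.5 * (x - 0.5) ** 2 / 0.1**2)   # peaked "emissivity"
sigma = 0.05
y = R @ f_true + sigma * rng.normal(size=5)        # noisy line integrals

# Gaussian likelihood + GP prior => Gaussian posterior, in closed form:
A = R @ K @ R.T + sigma**2 * np.eye(5)
mean = K @ R.T @ np.linalg.solve(A, y)
cov = K - K @ R.T @ np.linalg.solve(A, R @ K)

# The data shrink the pointwise uncertainty relative to the prior
assert np.all(np.diag(cov) <= np.diag(K) + 1e-9)
```

Because mean and covariance are available analytically, no sampling is needed, which is exactly why the paper's inversions and uncertainty estimates are fast.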
Exoplanet Biosignatures: A Framework for Their Assessment.
Catling, David C; Krissansen-Totton, Joshua; Kiang, Nancy Y; Crisp, David; Robinson, Tyler D; DasSarma, Shiladitya; Rushby, Andrew J; Del Genio, Anthony; Bains, William; Domagal-Goldman, Shawn
2018-04-20
Finding life on exoplanets from telescopic observations is an ultimate goal of exoplanet science. Life produces gases and other substances, such as pigments, which can have distinct spectral or photometric signatures. Whether or not life is found with future data must be expressed with probabilities, requiring a framework of biosignature assessment. We present a framework in which we advocate using biogeochemical "Exo-Earth System" models to simulate potential biosignatures in spectra or photometry. Given actual observations, simulations are used to find the Bayesian likelihoods of those data occurring for scenarios with and without life. The latter includes "false positives" wherein abiotic sources mimic biosignatures. Prior knowledge of factors influencing planetary inhabitation, including previous observations, is combined with the likelihoods to give the Bayesian posterior probability of life existing on a given exoplanet. Four components of observation and analysis are necessary. (1) Characterization of stellar (e.g., age and spectrum) and exoplanetary system properties, including "external" exoplanet parameters (e.g., mass and radius), to determine an exoplanet's suitability for life. (2) Characterization of "internal" exoplanet parameters (e.g., climate) to evaluate habitability. (3) Assessment of potential biosignatures within the environmental context (components 1-2), including corroborating evidence. (4) Exclusion of false positives. We propose that resulting posterior Bayesian probabilities of life's existence map to five confidence levels, ranging from "very likely" (90-100%) to "very unlikely" (<10%) inhabited. Key Words: Bayesian statistics-Biosignatures-Drake equation-Exoplanets-Habitability-Planetary science. Astrobiology 18, xxx-xxx.
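The core Bayesian update in the framework above, combining a prior on inhabitation with likelihoods for the data with and without life (the latter covering abiotic "false positives"), is just Bayes' rule. A minimal sketch, with all numbers invented for illustration:

```python
def posterior_life(prior_life, like_with_life, like_abiotic):
    """Bayes' rule: P(life | data) from the prior P(life) and the two
    likelihoods P(data | life) and P(data | no life); the abiotic
    likelihood encodes the false-positive scenarios."""
    num = like_with_life * prior_life
    return num / (num + like_abiotic * (1.0 - prior_life))

# A feature 8x more likely under biology than abiotic chemistry, from an
# agnostic 50% prior, lands in the "likely inhabited" band:
p = posterior_life(0.5, 0.8, 0.1)
assert abs(p - 0.8 / 0.9) < 1e-12     # ~0.89

# The same data with an equally plausible abiotic mimic is inconclusive:
assert posterior_life(0.5, 0.8, 0.8) == 0.5
```

The resulting posterior is what the authors propose mapping onto the five confidence levels, from "very likely" (90-100%) to "very unlikely" (<10%) inhabited.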
Castien, René F; van der Windt, Daniëlle A W M; Blankenstein, Annette H; Heymans, Martijn W; Dekker, Joost
2012-04-01
The aims of this study were to describe the course of chronic tension-type headache (CTTH) in participants receiving manual therapy (MT), and to develop a prognostic model for predicting recovery in participants receiving MT. Outcomes were evaluated in 145 adults with CTTH who received MT as participants in a previously published randomised clinical trial (n=41) or in a prospective cohort study (n=104). Assessments were made at baseline and at 8 and 26 weeks of follow-up. Recovery was defined as a 50% reduction in headache days combined with a score of 'much improved' or 'very much improved' for global perceived improvement. Potential prognostic factors were analyzed by univariable and multivariable regression analysis. After 8 weeks, 78% of the participants reported recovery after MT; after 26 weeks, 73% were recovered. Prognostic factors related to recovery were co-existing migraine, absence of multiple-site pain, greater cervical range of motion and higher headache intensity. In participants classified as likely to recover, the posterior probability of recovery at 8 weeks was 92%, whereas for those classified as having a low probability of recovery it was 61%. It is concluded that the course of CTTH is favourable in primary care patients receiving MT. The prognostic models provide additional information to improve prediction of outcome. Copyright © 2012 International Association for the Study of Pain. Published by Elsevier B.V. All rights reserved.
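The mechanics behind posterior probabilities like the 92% vs 61% above can be illustrated with the standard pre-test/post-test odds calculation. This is a generic sketch of the Bayes step, not the authors' regression model; the likelihood ratio value below is invented:

```python
def posttest_probability(pretest_p, likelihood_ratio):
    """Convert a pre-test probability into a post-test (posterior)
    probability via odds: posterior odds = prior odds * likelihood ratio."""
    odds = pretest_p / (1.0 - pretest_p) * likelihood_ratio
    return odds / (1.0 + odds)

# With a 73% baseline recovery rate, a favourable prognostic profile with a
# hypothetical likelihood ratio of 4 pushes the posterior above 90%:
p = posttest_probability(0.73, 4.0)
assert 0.91 < p < 0.92

# An uninformative profile (LR = 1) leaves the probability unchanged:
assert abs(posttest_probability(0.73, 1.0) - 0.73) < 1e-12
```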
State-space modeling to support management of brucellosis in the Yellowstone bison population
Hobbs, N. Thompson; Geremia, Chris; Treanor, John; Wallen, Rick; White, P.J.; Hooten, Mevin B.; Rhyan, Jack C.
2015-01-01
The bison (Bison bison) of the Yellowstone ecosystem, USA, exemplify the difficulty of conserving large mammals that migrate across the boundaries of conservation areas. Bison are infected with brucellosis (Brucella abortus) and their seasonal movements can expose livestock to infection. Yellowstone National Park has embarked on a program of adaptive management of bison, which requires a model that assimilates data to support management decisions. We constructed a Bayesian state-space model to reveal the influence of brucellosis on the Yellowstone bison population. A frequency-dependent model of brucellosis transmission was superior to a density-dependent model in predicting out-of-sample observations of horizontal transmission probability. A mixture model including both transmission mechanisms converged on frequency dependence. Conditional on the frequency-dependent model, brucellosis median transmission rate was 1.87 yr−1. The median of the posterior distribution of the basic reproductive ratio (R0) was 1.75. Seroprevalence of adult females varied around 60% over two decades, but only 9.6 of 100 adult females were infectious. Brucellosis depressed recruitment; estimated population growth rate λ averaged 1.07 for an infected population and 1.11 for a healthy population. We used five-year forecasting to evaluate the ability of different actions to meet management goals relative to no action. Annually removing 200 seropositive female bison increased by 30-fold the probability of reducing seroprevalence below 40% and increased by a factor of 120 the probability of achieving a 50% reduction in transmission probability relative to no action. Annually vaccinating 200 seronegative animals increased the likelihood of a 50% reduction in transmission probability by fivefold over no action. 
However, including uncertainty in the ability to implement management by representing stochastic variation in the number of accessible bison dramatically reduced the probability of achieving goals using interventions relative to no action. Because the width of the posterior predictive distributions of future population states expands rapidly with increases in the forecast horizon, managers must accept high levels of uncertainty. These findings emphasize the necessity of iterative, adaptive management with relatively short-term commitment to action and frequent reevaluation in response to new data and model forecasts. We believe our approach has broad applications.
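The frequency-dependent vs density-dependent transmission comparison in the study above can be made concrete with a toy discrete-time S-I recursion. This is an illustrative sketch, not the authors' state-space model; parameters are invented. Its point is the qualitative distinction: under frequency dependence the force of infection is β·I/N, so prevalence dynamics are insensitive to herd size, whereas under density dependence it is β·I and a larger herd ignites faster.

```python
def simulate(beta, N, I0, steps, frequency_dependent=True):
    """Discrete-time S-I sketch returning final prevalence I/N."""
    S, I = float(N - I0), float(I0)
    for _ in range(steps):
        foi = beta * (I / N if frequency_dependent else I)
        new_inf = min(S, foi * S)          # cap infections at remaining S
        S, I = S - new_inf, I + new_inf
    return I / N

# Frequency dependence: same initial prevalence (1%) => same trajectory,
# regardless of herd size.
p_small = simulate(0.5, 100, 1, 10)
p_large = simulate(0.5, 1000, 10, 10)
assert abs(p_small - p_large) < 1e-9

# Density dependence: the larger herd takes off much faster.
d_small = simulate(0.005, 100, 1, 10, frequency_dependent=False)
d_large = simulate(0.005, 1000, 10, 10, frequency_dependent=False)
assert d_large > d_small
```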
A search for hep solar neutrinos at the Sudbury Neutrino Observatory
NASA Astrophysics Data System (ADS)
Winchester, Timothy J.
Solar neutrinos from the hep fusion reaction (helium-3 fusing with a proton to form helium-4, releasing a positron and a neutrino) have previously remained undetected because their flux is about one one-thousandth that of boron-8 neutrinos. These neutrinos are interesting theoretically because they are less dependent on solar composition than other solar neutrinos, and therefore provide a somewhat independent test of the Standard Solar Model. In this analysis, we develop a new event fitter for existing data from the Sudbury Neutrino Observatory. We also use the fitter to remove backgrounds that previously limited the fiducial volume, which we increase by 30%. We use a modified Wald-Wolfowitz test to increase the amount of live time by 200 days (18%) and show that this data is consistent with the previously-used data. Finally, we develop a Bayesian analysis technique to make full use of the posterior distributions of energy returned by the event fitter. In the first significant detection of hep neutrinos, we find that the most-probable rate of hep events is 3.5 × 10^4 /cm^2/s, which is significantly higher than the theoretical prediction. We find that the 95% credible region extends from 1.0 to 7.2 × 10^4 /cm^2/s, and that we can therefore exclude a rate of 0 hep events at greater than 95% probability.
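Summaries like the 95% credible region and the exclusion of a zero rate fall out directly from posterior samples. A minimal sketch, using a gamma draw as a stand-in posterior (the shape and scale are invented so the mode sits near 3.5, in units of 10^4 /cm^2/s; this is not the analysis's actual posterior):

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-in posterior over the hep rate; mode = (shape - 1) * scale = 3.5
samples = rng.gamma(shape=5.0, scale=0.875, size=100_000)

# Equal-tailed 95% credible interval from sample percentiles
lo, hi = np.percentile(samples, [2.5, 97.5])

# Posterior probability that the rate is strictly positive
p_gt_zero = np.mean(samples > 0.0)

assert 0.0 < lo < hi
assert p_gt_zero == 1.0   # gamma support excludes 0, so every draw is positive
```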
Liu, Jie; Zhuang, Xiahai; Wu, Lianming; An, Dongaolei; Xu, Jianrong; Peters, Terry; Gu, Lixu
2017-11-01
Objective: In this paper, we propose a fully automatic framework for myocardium segmentation of delayed-enhancement (DE) MRI images without relying on prior patient-specific information. Methods: We employ a multicomponent Gaussian mixture model to deal with the intensity heterogeneity of myocardium caused by the infarcts. To differentiate the myocardium from other tissues with similar intensities, while at the same time maintaining spatial continuity, we introduce a coupled level set (CLS) to regularize the posterior probability. The CLS, as a spatial regularization, can be adapted to the image characteristics dynamically. We also introduce an image intensity gradient based term into the CLS, adding an extra force to the posterior probability based framework, to improve the accuracy of myocardium boundary delineation. The prebuilt atlases are propagated to the target image to initialize the framework. Results: The proposed method was tested on datasets of 22 clinical cases, and achieved Dice similarity coefficients of 87.43 ± 5.62% (endocardium), 90.53 ± 3.20% (epicardium) and 73.58 ± 5.58% (myocardium), which outperformed three variants of classic segmentation methods. Conclusion: The results can provide a benchmark for myocardial segmentation in the literature. Significance: DE MRI provides an important tool to assess the viability of myocardium. The accurate segmentation of myocardium, which is a prerequisite for further quantitative analysis of the myocardial infarction (MI) region, can provide important support for diagnosis and treatment management for MI patients.
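The per-pixel quantity the CLS regularizes is the posterior probability (responsibility) under the Gaussian mixture. A minimal 1D sketch of that computation, with two components standing in for "healthy myocardium" and "infarct"; the function name and all parameters are invented for illustration:

```python
import numpy as np

def gmm_posteriors(x, weights, means, stds):
    """Posterior probability of each mixture component for each intensity x,
    computed in log space for numerical stability."""
    x = np.asarray(x, dtype=float)[:, None]
    log_comp = (np.log(weights)
                - np.log(stds) - 0.5 * np.log(2 * np.pi)
                - 0.5 * ((x - means) / stds) ** 2)
    log_comp -= log_comp.max(axis=1, keepdims=True)   # stabilize exp
    p = np.exp(log_comp)
    return p / p.sum(axis=1, keepdims=True)

post = gmm_posteriors([0.2, 0.8], weights=[0.6, 0.4],
                      means=np.array([0.3, 0.9]), stds=np.array([0.1, 0.1]))
assert np.allclose(post.sum(axis=1), 1.0)   # each row is a distribution
assert post[0, 0] > 0.5 and post[1, 1] > 0.5
```

In the paper these per-pixel posteriors are then smoothed by the level-set term rather than used raw, which is what preserves spatial continuity.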
Kuragano, Masahiro; Murakami, Yota; Takahashi, Masayuki
2018-03-25
Nonmuscle myosin II (NMII) plays an essential role in directional cell migration. In this study, we investigated the roles of NMII isoforms (NMIIA and NMIIB) in the migration of human embryonic lung fibroblasts, which exhibit directionally persistent migration in an intrinsic manner. NMIIA-knockdown (KD) cells migrated unsteadily, but their direction of migration was approximately maintained. By contrast, NMIIB-KD cells occasionally reversed their direction of migration. Lamellipodium-like protrusions formed in the posterior region of NMIIB-KD cells prior to reversal of the migration direction. Moreover, NMIIB KD led to elongation of the posterior region in migrating cells, probably due to the lack of load-bearing stress fibers in this area. These results suggest that NMIIA plays a role in steering migration by maintaining stable protrusions in the anterior region, whereas NMIIB plays a role in maintenance of front-rear polarity by preventing aberrant protrusion formation in the posterior region. These distinct functions of NMIIA and NMIIB might promote intrinsic and directed migration of normal human fibroblasts. Copyright © 2018 Elsevier Inc. All rights reserved.
RadVel: The Radial Velocity Modeling Toolkit
NASA Astrophysics Data System (ADS)
Fulton, Benjamin J.; Petigura, Erik A.; Blunt, Sarah; Sinukoff, Evan
2018-04-01
RadVel is an open-source Python package for modeling Keplerian orbits in radial velocity (RV) timeseries. RadVel provides a convenient framework to fit RVs using maximum a posteriori optimization and to compute robust confidence intervals by sampling the posterior probability density via Markov Chain Monte Carlo (MCMC). RadVel allows users to float or fix parameters, impose priors, and perform Bayesian model comparison. We have implemented real-time MCMC convergence tests to ensure adequate sampling of the posterior. RadVel can output a number of publication-quality plots and tables. Users may interface with RadVel through a convenient command-line interface or directly from Python. The code is object-oriented and thus naturally extensible. We encourage contributions from the community. Documentation is available at http://radvel.readthedocs.io.
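The fit-then-sample workflow RadVel automates can be illustrated with a generic sketch (this is not RadVel's API): a circular-orbit RV model v(t) = K·sin(2πt/P + φ) with known period, flat priors, and a plain random-walk Metropolis sampler exploring the posterior. All parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
P, K_true, phi_true, sigma = 3.0, 10.0, 0.5, 1.0
t = np.linspace(0.0, 30.0, 60)
v_obs = K_true * np.sin(2 * np.pi * t / P + phi_true) + sigma * rng.normal(size=60)

def log_post(theta):
    K, phi = theta
    if K <= 0:
        return -np.inf                       # flat prior truncated to K > 0
    model = K * np.sin(2 * np.pi * t / P + phi)
    return -0.5 * np.sum((v_obs - model) ** 2) / sigma**2

theta = np.array([8.0, 0.3])                 # deliberately offset start
lp = log_post(theta)
chain = []
for step in range(6000):
    prop = theta + rng.normal(0, [0.3, 0.05])
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp: # Metropolis accept/reject
        theta, lp = prop, lp_prop
    if step >= 1000:                         # discard burn-in
        chain.append(theta.copy())
chain = np.array(chain)

K_mean = chain[:, 0].mean()
assert abs(K_mean - K_true) < 1.5            # posterior concentrates near truth
```

RadVel layers onto this pattern the full Keplerian parameterization, priors, convergence diagnostics, and plotting; the credible intervals it reports are percentiles of chains like the one above.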
Pippi — Painless parsing, post-processing and plotting of posterior and likelihood samples
NASA Astrophysics Data System (ADS)
Scott, Pat
2012-11-01
Interpreting samples from likelihood or posterior probability density functions is rarely as straightforward as it seems it should be. Producing publication-quality graphics of these distributions is often similarly painful. In this short note I describe pippi, a simple, publicly available package for parsing and post-processing such samples, as well as generating high-quality PDF graphics of the results. Pippi is easily and extensively configurable and customisable, both in its options for parsing and post-processing samples, and in the visual aspects of the figures it produces. I illustrate some of these using an existing supersymmetric global fit, performed in the context of a gamma-ray search for dark matter. Pippi can be downloaded and followed at http://github.com/patscott/pippi.
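A minimal numpy stand-in for the kind of post-processing pippi automates: turning a raw chain of posterior samples into a normalized 1D marginal density and locating its mode. The synthetic two-parameter chain below is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
chain = rng.normal(loc=[1.0, -2.0], scale=[0.3, 0.5], size=(50_000, 2))

# Marginalize onto the first parameter by histogramming; density=True
# normalizes so the histogram integrates to 1 over its bins.
density, edges = np.histogram(chain[:, 0], bins=60, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
mode = centers[np.argmax(density)]

widths = np.diff(edges)
assert abs(np.sum(density * widths) - 1.0) < 1e-9
assert abs(mode - 1.0) < 0.2    # mode near the true location of the samples
```

pippi's value-add over this sketch is the configurable parsing of heterogeneous chain formats and the publication-quality PDF figures it renders from such marginals.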
McCracken, D Jay; Lovasik, Brendan P; McCracken, Courtney E; Caplan, Justin M; Turan, Nefize; Nogueira, Raul G; Cawley, C Michael; Dion, Jacques E; Tamargo, Rafael J; Barrow, Daniel L; Pradilla, Gustavo
2015-12-01
Previous studies have attempted to determine the best treatment for oculomotor nerve palsy (ONP) secondary to posterior communicating artery (PCoA) aneurysms, but have been limited by small sample sizes and limited treatment comparisons. To analyze the treatment of ONP secondary to PCoA aneurysms with both coiling and clipping in ruptured and unruptured aneurysms. Data from 2 large academic centers were retrospectively collected over 22 years, yielding a total of 93 patients with ONP secondary to PCoA aneurysms. These patients were combined with 321 patients from the literature review for large data analyses. Onset symptoms, recovery, and time to resolution were evaluated with respect to treatment and aneurysm rupture status. For all patients presenting with ONP (n = 414), 56.6% of those treated with microsurgical clipping made a full recovery vs 41.5% of those treated with endovascular coil embolization (P = .02). Of patients with a complete ONP (n = 229), full recovery occurred in 47.3% of those treated with clipping but in only 20% of those undergoing coiling (P = .01). For patients presenting with ruptured aneurysms (n = 130), full recovery occurred in 70.9% of clipped patients compared with 49.3% of coiled patients (P = .01). Additionally, although patients with full ONP recovery had a median time to treatment of 4 days, those without full ONP recovery had a median time to treatment of 7 days (P = .01). Patients with ONP secondary to PCoA aneurysms treated with clipping showed higher rates of full ONP resolution than patients treated with coil embolization. Larger prospective studies are needed to determine the true potential of recovery associated with each treatment. Abbreviations: EUH, Emory University Hospital; IQR, interquartile range; JHU, Johns Hopkins University; mRS, modified Rankin Scale; ONP, oculomotor nerve palsy; PCoA, posterior communicating artery; SAH, subarachnoid hemorrhage.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jordan, D.R.; Tse, D.T.; Anderson, R.L.
1990-01-01
Reconstruction of full-thickness eyelid defects requires correction of both the posterior lamella (tarsus, conjunctiva) and the anterior lamella (skin, muscle). Tarsal substitutes including banked sclera, nasal cartilage, ear cartilage, and periosteum can be beneficial for posterior lamellar repair, while anterior lamellar replacement, including skin grafts, pedicle flaps, advancement flaps, etc., is important to cover the reconstructed posterior portion. At times, due to extensive tissue loss, eyelid reconstruction can be particularly challenging. We have found an alternative posterior lamellar reconstructive technique utilizing irradiated homologous tarsal plate that can be particularly useful in selected cases of severe tissue loss. The experimental surgical procedure in monkeys and the histological fate of the implanted tarsus are described in Part I, followed in Part II by our experience with this tissue in six human patients.
In vivo determination of total knee arthroplasty kinematics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Komistek, Richard D; Mahfouz, Mohamed R; Bertin, Kim
2008-01-01
The objective of this study was to determine whether consistent posterior femoral rollback of an asymmetrical posterior cruciate retaining (PCR) total knee arthroplasty was mostly influenced by the implant design, surgical technique, or presence of a well-functioning posterior cruciate ligament (PCL). Three-dimensional femorotibial kinematics was determined for 80 subjects implanted by 3 surgeons, and each subject was evaluated under fluoroscopic surveillance during a deep knee bend. All subjects in the present study having an intact PCL had a well-functioning PCR knee and experienced normal kinematic patterns, although lesser in magnitude than the normal knee. In addition, a surprising finding was that, on average, subjects without a PCL still achieved posterior femoral rollback from full extension to maximum knee flexion. The findings of this study revealed that implant design did contribute to the normal kinematics demonstrated by subjects having this asymmetrical PCR total knee arthroplasty.
Lohrer, Heinz; Arentz, Sabine
2004-04-01
A case history of a 25-year-old field hockey player, a member of the German National Field Hockey Team, is presented. The patient could not remember any specific ankle injury, but since the World Indoor Championship in February 2003, he experienced significant but diffuse pain around the posterior ankle, especially while loading the forefoot in hockey training and competition. For 2 months, the patient was unable to run. Conservative treatment failed, and surgery was performed. Posterior ankle arthroscopy revealed a frayed posterior intermalleolar ligament and meniscoid-like scar tissue at the posterolateral ankle, indicating a posterolateral soft tissue ankle impingement syndrome. A concomitant inflammation of the posterolateral ankle and subtalar synovium was present. After arthroscopic resection and early functional aftertreatment, the patient returned to full high-level sports ability within 2 months.
Yamazaki, M; Akazawa, T; Okawa, A; Koda, M
2007-03-01
Case report. To report a case with giant cell tumor (GCT) of C6 vertebra, in which three-dimensional (3-D) full-scale modeling of the cervical spine was useful for preoperative planning and intraoperative navigation. A university hospital in Japan. A 27-year-old man with a GCT involving the C6 vertebra presented with severe neck pain. The C6 vertebra was collapsed and the tumor had infiltrated around both vertebral arteries (VAs). A single-stage operation combining anterior and posterior surgical procedures was scheduled to resect the tumor and stabilize the spine. To evaluate the anatomic structures within the surgical fields, we produced a 3-D full-scale model from the computed tomography angiography data. The 3-D full-scale model clearly showed the relationships between the destroyed C6 vertebra and the deviations in the courses of both VAs. Using the model, we were able to identify the anatomic landmarks around the VAs during anterior surgery and to successfully resect the tumor. During the posterior surgery, we were able to determine accurate starting points for the pedicle screws. Anterior iliac bone graft from C5 to C7 and posterior fixation with a rod and screw system from C4 to T2 were performed without any complications. Postoperatively, the patient experienced relief of his neck pain. The 3-D full-scale model was useful for simultaneously evaluating the destruction of the vertebral bony structures and the deviations in the courses of the VAs during surgery for GCT involving the cervical spine.
1983-09-01
Ciencia y Tecnología (Mexico), by ONR under Contract No. N00014-77-C-0675, and by ARO under Contract No. DAAG29-80-K-0042. … Department of Statistics. For financial support I thank the Consejo Nacional de Ciencia y Tecnología (Mexico), and the Department of Statistics of the
A Variational Bayes Genomic-Enabled Prediction Model with Genotype × Environment Interaction
Montesinos-López, Osval A.; Montesinos-López, Abelardo; Crossa, José; Montesinos-López, José Cricelio; Luna-Vázquez, Francisco Javier; Salinas-Ruiz, Josafhat; Herrera-Morales, José R.; Buenrostro-Mariscal, Raymundo
2017-01-01
There are Bayesian and non-Bayesian genomic models that take into account G×E interactions. However, the computational cost of implementing Bayesian models is high, and becomes almost prohibitive when the number of genotypes, environments, and traits is very large, while, in non-Bayesian models, there are often important and unsolved convergence problems. The variational Bayes method is popular in machine learning, and, by approximating the probability distributions through optimization, it tends to be faster than Markov Chain Monte Carlo methods. For this reason, in this paper, we propose a new variational Bayes version of the Bayesian genomic model with G×E, using half-t priors on each standard deviation (SD) term to guarantee highly noninformative priors and posterior inferences that are not sensitive to the choice of hyper-parameters. We show the complete theoretical derivation of the full conditional and variational posterior distributions, and their implementations. We used eight experimental genomic maize and wheat data sets to illustrate the new proposed variational Bayes approximation, and compared its predictions and implementation time with a standard Bayesian genomic model with G×E. Results indicated that prediction accuracies are slightly higher in the standard Bayesian model with G×E than in its variational counterpart, but, in terms of computation time, the variational Bayes genomic model with G×E is, in general, 10 times faster than the conventional Bayesian genomic model with G×E. For this reason, the proposed model may be a useful tool for researchers who need to predict and select genotypes in several environments. PMID:28391241
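The "optimization instead of sampling" idea behind variational Bayes can be seen in the textbook toy problem: mean-field coordinate-ascent inference for the mean (mu) and precision (tau) of a Gaussian, with q(mu, tau) = q(mu)·q(tau). Each update is closed-form, so convergence takes a handful of iterations rather than thousands of MCMC draws. This is a generic sketch, not the authors' G×E genomic model; hyperprior values are invented.

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(2.0, 0.5, size=200)          # synthetic data
n, xbar = x.size, x.mean()

mu0, lam0, a0, b0 = 0.0, 1e-3, 1e-3, 1e-3   # weak Normal-Gamma hyperpriors
E_tau = 1.0                                  # initial guess for E[tau]
for _ in range(30):
    # q(mu) = Normal(mu_n, 1/lam_n)
    mu_n = (lam0 * mu0 + n * xbar) / (lam0 + n)
    lam_n = (lam0 + n) * E_tau
    E_mu, E_mu2 = mu_n, mu_n**2 + 1.0 / lam_n
    # q(tau) = Gamma(a_n, b_n)
    a_n = a0 + 0.5 * (n + 1)
    b_n = b0 + 0.5 * (np.sum(x**2) - 2 * E_mu * np.sum(x) + n * E_mu2
                      + lam0 * (E_mu2 - 2 * mu0 * E_mu + mu0**2))
    E_tau = a_n / b_n

assert abs(mu_n - xbar) < 1e-3   # posterior mean ~ sample mean (weak prior)
assert 2.0 < E_tau < 8.0         # precision near the true 1/0.25 = 4
```

The genomic model in the paper applies the same coordinate-ascent pattern to a far larger set of variational factors, which is where the reported ~10x speedup over MCMC comes from.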
Finite element model updating using the shadow hybrid Monte Carlo technique
NASA Astrophysics Data System (ADS)
Boulkaibet, I.; Mthembu, L.; Marwala, T.; Friswell, M. I.; Adhikari, S.
2015-02-01
Recent research in the field of finite element model (FEM) updating advocates the adoption of Bayesian analysis techniques for dealing with the uncertainties associated with these models. However, Bayesian formulations require the evaluation of the posterior distribution function, which may not be available in analytical form. This is the case in FEM updating. In such cases, sampling methods can provide good approximations of the posterior distribution when implemented in the Bayesian context. Markov Chain Monte Carlo (MCMC) algorithms are the most popular sampling tools used to sample probability distributions. However, the efficiency of these algorithms is affected by the complexity of the systems (the size of the parameter space). The Hybrid Monte Carlo (HMC) method offers an important MCMC approach for dealing with higher-dimensional complex problems. The HMC uses molecular dynamics (MD) steps as the global Monte Carlo (MC) moves to reach areas of high probability, where the gradient of the log-density of the posterior acts as a guide during the search process. However, the acceptance rate of HMC is sensitive to the system size as well as to the time step used to evaluate the MD trajectory. To overcome this limitation we propose the use of the Shadow Hybrid Monte Carlo (SHMC) algorithm. The SHMC algorithm is a modified version of HMC designed to improve sampling for large system sizes and time steps, by sampling from a modified (shadow) Hamiltonian function instead of the normal Hamiltonian function. In this paper, the efficiency and accuracy of the SHMC method are tested on the updating of two real structures, an unsymmetrical H-shaped beam structure and a GARTEUR SM-AG19 structure, and compared with the application of the HMC algorithm to the same structures.
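The HMC mechanics described above (leapfrog "MD" trajectories proposed as global moves, corrected by a Metropolis test on the Hamiltonian) can be sketched for a 2D standard-Gaussian target. SHMC would replace H in the accept step with a shadow Hamiltonian; the skeleton below is plain HMC, with step size and trajectory length invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

def grad_U(q):
    return q                         # U(q) = 0.5*q.q for a standard Gaussian

def hmc_step(q, eps=0.2, L=10):
    p = rng.normal(size=q.shape)     # resample momenta
    q_new, p_new = q.copy(), p.copy()
    p_new -= 0.5 * eps * grad_U(q_new)        # leapfrog: half kick
    for _ in range(L - 1):
        q_new += eps * p_new                  # drift
        p_new -= eps * grad_U(q_new)          # full kick
    q_new += eps * p_new
    p_new -= 0.5 * eps * grad_U(q_new)        # final half kick
    H_old = 0.5 * q @ q + 0.5 * p @ p
    H_new = 0.5 * q_new @ q_new + 0.5 * p_new @ p_new
    if np.log(rng.uniform()) < H_old - H_new: # Metropolis correction
        return q_new
    return q

q = np.zeros(2)
out = []
for _ in range(3000):
    q = hmc_step(q)
    out.append(q)
samples = np.array(out)

assert abs(samples.mean()) < 0.15            # target mean is 0
assert 0.7 < samples.var() < 1.3             # target variance is 1
```

The acceptance test on H_old - H_new is exactly where the time-step sensitivity enters: larger eps inflates the discretization error in H and drives rejections, which is the limitation SHMC's shadow Hamiltonian is designed to relax.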
Sihota, Ramanjit; Goyal, Amita; Kaur, Jasbir; Gupta, Viney; Nag, Tapas C
2012-01-01
To study ultrastructural changes of the trabecular meshwork in acute and chronic primary angle closure glaucoma (PACG) and primary open angle glaucoma (POAG) eyes by scanning electron microscopy. Twenty-one trabecular meshwork surgical specimens from consecutive glaucomatous eyes after trabeculectomy and five postmortem corneoscleral specimens were fixed immediately in Karnovsky solution. The tissues were washed in 0.1 M phosphate-buffered saline, post-fixed in 1% osmium tetroxide, dehydrated in an acetone series (30-100%), dried and mounted. Normal trabecular tissue showed well-defined, thin, cylindrical uveal trabecular beams with many large spaces, overlying flatter corneoscleral beams and numerous smaller spaces. In acute PACG eyes, the trabecular meshwork showed grossly swollen, irregular trabecular endothelial cells with intercellular and occasional basal separation and few spaces. Numerous activated macrophages, leucocytes and amorphous debris were present. In chronic PACG eyes, only a few thickened posterior uveal trabecular beams were visible. A homogenous deposit covered the anterior uveal trabeculae and spaces. A converging, fan-shaped trabecular beam configuration corresponded to gonioscopic areas of peripheral anterior synechiae. In POAG eyes, anterior uveal trabecular beams were thin and strap-like, while those posteriorly were wide, with a homogenous deposit covering and bridging intertrabecular spaces, especially posteriorly. Underlying corneoscleral trabecular layers and spaces were visualized in some areas. In acute PACG, marked edema of the endothelium probably contributes to the acute and marked intraocular pressure (IOP) elevation. Chronically raised IOP in chronic PACG and POAG probably results, at least in part, from decreased aqueous outflow secondary to widening and fusion of adjacent trabecular beams, together with the homogenous deposit enmeshing trabecular beams and spaces.
Structural Information from Single-molecule FRET Experiments Using the Fast Nano-positioning System
Dörfler, Thilo; Eilert, Tobias; Röcker, Carlheinz; Nagy, Julia; Michaelis, Jens
2017-01-01
Single-molecule Förster Resonance Energy Transfer (smFRET) can be used to obtain structural information on biomolecular complexes in real-time. Thereby, multiple smFRET measurements are used to localize an unknown dye position inside a protein complex by means of trilateration. In order to obtain quantitative information, the Nano-Positioning System (NPS) uses probabilistic data analysis to combine structural information from X-ray crystallography with single-molecule fluorescence data to calculate not only the most probable position but the complete three-dimensional probability distribution, termed posterior, which indicates the experimental uncertainty. The concept was generalized for the analysis of smFRET networks containing numerous dye molecules. The latest version of NPS, Fast-NPS, features a new algorithm using Bayesian parameter estimation based on Markov Chain Monte Carlo sampling and parallel tempering that allows for the analysis of large smFRET networks in a comparatively short time. Moreover, Fast-NPS allows the calculation of the posterior by choosing one of five different models for each dye, that account for the different spatial and orientational behavior exhibited by the dye molecules due to their local environment. Here we present a detailed protocol for obtaining smFRET data and applying the Fast-NPS. We provide detailed instructions for the acquisition of the three input parameters of Fast-NPS: the smFRET values, as well as the quantum yield and anisotropy of the dye molecules. Recently, the NPS has been used to elucidate the architecture of an archaeal open promoter complex. This data is used to demonstrate the influence of the five different dye models on the posterior distribution. PMID:28287526
Approximate Bayesian estimation of extinction rate in the Finnish Daphnia magna metapopulation.
Robinson, John D; Hall, David W; Wares, John P
2013-05-01
Approximate Bayesian computation (ABC) is useful for parameterizing complex models in population genetics. In this study, ABC was applied to simultaneously estimate parameter values for a model of metapopulation coalescence and test two alternatives to a strict metapopulation model in the well-studied network of Daphnia magna populations in Finland. The models shared four free parameters: the subpopulation genetic diversity (θS), the rate of gene flow among patches (4Nm), the founding population size (N0) and the metapopulation extinction rate (e) but differed in the distribution of extinction rates across habitat patches in the system. The three models had either a constant extinction rate in all populations (strict metapopulation), one population that was protected from local extinction (i.e. a persistent source), or habitat-specific extinction rates drawn from a distribution with specified mean and variance. Our model selection analysis favoured the model including a persistent source population over the two alternative models. Of the closest 750,000 data sets in Euclidean space, 78% were simulated under the persistent source model (estimated posterior probability = 0.769). This fraction increased to more than 85% when only the closest 150,000 data sets were considered (estimated posterior probability = 0.774). Approximate Bayesian computation was then used to estimate parameter values that might produce the observed set of summary statistics. Our analysis provided posterior distributions for e that included the point estimate obtained from previous data from the Finnish D. magna metapopulation. Our results support the use of ABC and population genetic data for testing the strict metapopulation model and parameterizing complex models of demography. © 2013 Blackwell Publishing Ltd.
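The model-selection logic described above (keep the simulations closest to the observed summary statistics, then read the posterior model probability off the accepted draws) can be sketched with a deliberately tiny example. The two competing models here (Poisson vs. geometric counts) and all priors are invented stand-ins, not the Daphnia metapopulation models.

```python
import numpy as np

rng = np.random.default_rng(1)

# "Observed" summary statistic (mean of a toy count data set)
observed = rng.poisson(4.0, size=100)
s_obs = observed.mean()

def simulate(model, rng):
    """Draw a parameter from its prior, simulate data, return the summary."""
    if model == 0:                        # model 0: Poisson counts
        lam = rng.uniform(0.1, 10.0)
        return rng.poisson(lam, size=100).mean()
    else:                                 # model 1: geometric counts (0-based)
        p = rng.uniform(0.05, 0.95)
        return (rng.geometric(p, size=100) - 1).mean()

n_sims, keep = 20000, 1000
models = rng.integers(0, 2, size=n_sims)        # uniform prior over models
summaries = np.array([simulate(m, rng) for m in models])
dist = np.abs(summaries - s_obs)                # distance in summary space
accepted = models[np.argsort(dist)[:keep]]      # keep the closest simulations

# Posterior model probability = relative frequency among accepted draws,
# exactly the quantity reported for the persistent-source model above
p_model0 = (accepted == 0).mean()
```

As in the study, the estimate can be checked for stability by shrinking `keep` and seeing whether the accepted fraction for the favoured model changes.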
Fanshawe, T. R.
2015-01-01
There are many examples from the scientific literature of visual search tasks in which the length, scope and success rate of the search have been shown to vary according to the searcher's expectations of whether the search target is likely to be present. This phenomenon has major practical implications, for instance in cancer screening, when the prevalence of the condition is low and the consequences of a missed disease diagnosis are severe. We consider this problem from an empirical Bayesian perspective to explain how the effect of a low prior probability, subjectively assessed by the searcher, might impact on the extent of the search. We show how the searcher's posterior probability that the target is present depends on the prior probability and the proportion of possible target locations already searched, and also consider the implications of imperfect search, when the probability of false-positive and false-negative decisions is non-zero. The theoretical results are applied to two studies of radiologists' visual assessment of pulmonary lesions on chest radiographs. Further application areas in diagnostic medicine and airport security are also discussed. PMID:26587267
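The central update described above has a compact closed form. The sketch below assumes the target, if present, is uniformly distributed over the possible locations and that there are no false positives; `sensitivity` below is a hypothetical per-location detection probability used to model imperfect search.

```python
def posterior_present(prior, frac_searched, sensitivity=1.0):
    """Posterior probability the target is present, given that a fraction of
    the possible locations has been searched with no detection. A sensitivity
    below 1 models imperfect search (false negatives); false positives are
    assumed away in this sketch."""
    p_no_find_given_present = 1.0 - frac_searched * sensitivity
    p_no_find_given_absent = 1.0
    num = prior * p_no_find_given_present
    return num / (num + (1.0 - prior) * p_no_find_given_absent)

# With a low prior (e.g. 1% prevalence), searching half the locations
# without a finding drives the posterior well below the prior:
p = posterior_present(0.01, 0.5)   # ~0.00503
```

This makes the screening dilemma explicit: when the prior is already small, a modest amount of fruitless search pushes the searcher's posterior toward zero, rationalising early termination even though the target may sit in the unsearched remainder.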
Multiple Neural Mechanisms of Decision Making and Their Competition under Changing Risk Pressure
Kolling, Nils; Wittmann, Marco; Rushworth, Matthew F.S.
2014-01-01
Sometimes when a choice is made, the outcome is not guaranteed and there is only a probability of its occurrence. Each individual’s attitude to probability, sometimes called risk proneness or aversion, has been assumed to be static. Behavioral ecological studies, however, suggest such attitudes are dynamically modulated by the context an organism finds itself in; in some cases, it may be optimal to pursue actions with a low probability of success but which are associated with potentially large gains. We show that human subjects rapidly adapt their use of probability as a function of current resources, goals, and opportunities for further foraging. We demonstrate that dorsal anterior cingulate cortex (dACC) carries signals indexing the pressure to pursue unlikely choices and signals related to the taking of such choices. We show that dACC exerts this control over behavior when it, rather than ventromedial prefrontal cortex, interacts with posterior cingulate cortex. PMID:24607236
DOE Office of Scientific and Technical Information (OSTI.GOV)
Korah, Mariam P., E-mail: mariam.philip@gmail.com; Deyrup, Andrea T.; Monson, David K.
2012-02-01
Purpose: To examine the influence of anatomic location in the upper extremity (UE) vs. lower extremity (LE) on the presentation and outcomes of adult soft tissue sarcomas (STS). Methods and Materials: From 2001 to 2008, 118 patients underwent limb-sparing surgery (LSS) and external beam radiotherapy (RT) with curative intent for nonrecurrent extremity STS. RT was delivered preoperatively in 96 and postoperatively in 22 patients. Lesions arose in the UE in 28 and in the LE in 90 patients. Patients with UE lesions had smaller tumors (4.5 vs. 9.0 cm, p < 0.01), were more likely to undergo a prior excision (43 vs. 22%, p = 0.03), to have close or positive margins after resection (71 vs. 49%, p = 0.04), and to undergo postoperative RT (32 vs. 14%, p = 0.04). Results: Five-year actuarial local recurrence-free and distant metastasis-free survival rates for the entire group were 85 and 74%, with no difference observed between the UE and LE cohorts. Five-year actuarial wound reoperation rates were 4% vs. 29% (p < 0.01) in the UE and LE, respectively. Thigh lesions accounted for 84% of the required wound reoperations. The distribution of tumors within the anterior, medial, and posterior thigh compartments was 51%, 26%, and 23%. Subset analysis by compartment showed no difference in the probability of wound reoperation between the anterior and medial/posterior compartments (29 vs. 30%, p = 0.68). Neurolysis was performed during resection in 15%, 5%, and 67% of tumors in the anterior, medial, and posterior compartments, respectively (p < 0.01). Conclusions: Tumors in the UE and LE differ significantly with respect to size and management details. The anatomy of the UE poses technical impediments to an R0 resection. Thigh tumors are associated with higher wound reoperation rates. Tumor resection in the posterior thigh compartment is more likely to result in nerve injury.
A better understanding of the inherent differences between tumors in various extremity sites will assist in individualizing treatment.
Sun, Chuan-bin; You, Yong-sheng; Liu, Zhe; Zheng, Lin-yan; Chen, Pei-qing; Yao, Ke; Xue, An-quan
2016-01-01
To investigate the morphological characteristics of myopic macular retinoschisis (MRS) in teenagers with high myopia, six male (9 eyes) and 3 female (4 eyes) teenagers with typical MRS identified from chart review were evaluated. All cases underwent complete ophthalmic examinations including best corrected visual acuity (BCVA), indirect ophthalmoscopy, colour fundus photography, B-type ultrasonography, axial length measurement, and spectral-domain optical coherence tomography (SD-OCT). The average age was 17.8 ± 1.5 years, average refractive error was −17.04 ± 3.04D, average BCVA was 0.43 ± 0.61, and average axial length was 30.42 ± 1.71 mm. Myopic macular degenerative changes (MDC) by colour fundus photographs revealed Ohno-Matsui Category 1 in 4 eyes, and Category 2 in 9 eyes. Posterior staphyloma was found in 9 eyes. SD-OCT showed outer MRS in all 13 eyes, internal limiting membrane detachment in 7 eyes, vascular microfolds in 2 eyes, and inner MRS in 1 eye. No premacular structures such as macular epiretinal membrane or partially detached posterior hyaloids were found. Our results showed that MRS rarely occurred in highly myopic teenagers, and was not accompanied by premacular structures, severe MDC, or even obvious posterior staphyloma. This finding indicates that posterior scleral expansion is probably the main cause of MRS. PMID:27294332
NASA Technical Reports Server (NTRS)
Cheeseman, Peter; Stutz, John
2005-01-01
A long standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty, and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
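The "classic MaxEnt" point-probability calculation that this paper generalizes can be sketched for Jaynes' standard dice example: a six-sided die constrained to have a given mean. The MaxEnt solution is an exponential family p_i ∝ exp(λ·i), with the Lagrange multiplier λ chosen to satisfy the constraint; the generalized approach would place a density on `target_mean` and push it through this same map.

```python
import numpy as np

# Classic MaxEnt for a six-sided die constrained to have mean 4.5:
# maximize entropy subject to sum(p) = 1 and sum(p_i * i) = 4.5.
vals = np.arange(1, 7)
target_mean = 4.5

def mean_for(lam):
    """Mean of the exponential-family distribution p_i ∝ exp(lam * i)."""
    w = np.exp(lam * vals)
    return (w * vals).sum() / w.sum()

# mean_for is monotone in lam, so bisect for the Lagrange multiplier
lo, hi = -5.0, 5.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if mean_for(mid) < target_mean:
        lo = mid
    else:
        hi = mid
lam = 0.5 * (lo + hi)
p = np.exp(lam * vals)
p /= p.sum()
```

Treating `target_mean` as exact, as here, yields a single probability vector `p`; the paper's point is that when the constraint value is itself an estimate, one should instead obtain a density over such vectors.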
NASA Astrophysics Data System (ADS)
Pan, J.; Durand, M. T.; Vanderjagt, B. J.
2015-12-01
The Markov Chain Monte Carlo (MCMC) method is a retrieval algorithm based on Bayes' rule, which starts from an initial state of snow/soil parameters and updates it to a series of new states by comparing the posterior probability of simulated snow microwave signals before and after each random-walk step. It is a realization of Bayes' rule that approximates the probability of the snow/soil parameters conditioned on the measured microwave TB signals at different bands. Although this method can solve for all snow parameters, including depth, density, snow grain size and temperature, at the same time, it still needs prior information on these parameters for the posterior probability calculation. How the priors influence the SWE (snow water equivalent) retrieval is a major concern. Therefore, in this paper a sensitivity test is first carried out to study how accurate the snow emission models, and how explicit the snow priors, need to be to keep the SWE error within a given amount. Synthetic TB simulated from the measured snow properties, plus a 2-K observation error, is used for this purpose. The test aims to provide guidance on the application of MCMC under different circumstances. The method is then applied to snowpits at different sites, including Sodankylä, Finland; Churchill, Canada; and Colorado, USA, using measured TB from ground-based radiometers at different bands. Building on the previous work, the error in these practical cases is studied, and the error sources are separated and quantified.
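The random-walk update described above can be sketched in one dimension. The linear forward model, prior, and noise level below are all invented stand-ins (a real retrieval would use a multi-band snow emission model); the sketch only shows how prior information and a 2-K observation error combine into a posterior over SWE.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy forward model standing in for a snow emission model: brightness
# temperature (K) as a hypothetical decreasing function of SWE (mm)
def forward(swe):
    return 260.0 - 0.3 * swe

swe_true = 150.0
tb_obs = forward(swe_true) + rng.normal(0.0, 2.0)   # 2-K observation error

prior_mean, prior_sd, obs_sd = 120.0, 50.0, 2.0     # assumed Gaussian prior

def log_post(swe):
    """Log posterior = log likelihood + log prior (up to a constant)."""
    return (-0.5 * ((tb_obs - forward(swe)) / obs_sd) ** 2
            - 0.5 * ((swe - prior_mean) / prior_sd) ** 2)

# Metropolis random walk over SWE
chain, cur = [], 100.0
for _ in range(20000):
    prop = cur + rng.normal(0.0, 10.0)              # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(cur):
        cur = prop
    chain.append(cur)

samples = np.array(chain[5000:])                    # discard burn-in
posterior_mean = samples.mean()
```

Tightening `prior_sd` pulls `posterior_mean` toward `prior_mean`, which is exactly the prior-influence question the sensitivity test above is designed to quantify.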
Reddy, Vivek Y; Sievert, Horst; Halperin, Jonathan; Doshi, Shephal K; Buchbinder, Maurice; Neuzil, Petr; Huber, Kenneth; Whisenant, Brian; Kar, Saibal; Swarup, Vijay; Gordon, Nicole; Holmes, David
2014-11-19
While effective in preventing stroke in patients with atrial fibrillation (AF), warfarin is limited by a narrow therapeutic profile, a need for lifelong coagulation monitoring, and multiple drug and diet interactions. To determine whether a local strategy of mechanical left atrial appendage (LAA) closure was noninferior to warfarin. PROTECT AF was a multicenter, randomized (2:1), unblinded, Bayesian-designed study conducted at 59 hospitals of 707 patients with nonvalvular AF and at least 1 additional stroke risk factor (CHADS2 score ≥1). Enrollment occurred between February 2005 and June 2008 and included 4-year follow-up through October 2012. Noninferiority required a posterior probability greater than 97.5% and superiority a probability of 95% or greater; the noninferiority margin was a rate ratio of 2.0 comparing event rates between treatment groups. Left atrial appendage closure with the device (n = 463) or warfarin (n = 244; target international normalized ratio, 2-3). A composite efficacy end point including stroke, systemic embolism, and cardiovascular/unexplained death, analyzed by intention-to-treat. At a mean (SD) follow-up of 3.8 (1.7) years (2621 patient-years), there were 39 events among 463 patients (8.4%) in the device group for a primary event rate of 2.3 events per 100 patient-years, compared with 34 events among 244 patients (13.9%) for a primary event rate of 3.8 events per 100 patient-years with warfarin (rate ratio, 0.60; 95% credible interval, 0.41-1.05), meeting prespecified criteria for both noninferiority (posterior probability, >99.9%) and superiority (posterior probability, 96.0%). 
Patients in the device group demonstrated lower rates of both cardiovascular mortality (1.0 events per 100 patient-years for the device group [17/463 patients, 3.7%] vs 2.4 events per 100 patient-years with warfarin [22/244 patients, 9.0%]; hazard ratio [HR], 0.40; 95% CI, 0.21-0.75; P = .005) and all-cause mortality (3.2 events per 100 patient-years for the device group [57/466 patients, 12.3%] vs 4.8 events per 100 patient-years with warfarin [44/244 patients, 18.0%]; HR, 0.66; 95% CI, 0.45-0.98; P = .04). After 3.8 years of follow-up among patients with nonvalvular AF at elevated risk for stroke, percutaneous LAA closure met criteria for both noninferiority and superiority, compared with warfarin, for preventing the combined outcome of stroke, systemic embolism, and cardiovascular death, as well as superiority for cardiovascular and all-cause mortality. clinicaltrials.gov Identifier: NCT00129545.
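The trial's noninferiority and superiority criteria are statements about the posterior distribution of the rate ratio. A much simpler conjugate Gamma-Poisson sketch (not the trial's actual Bayesian design) reproduces their flavour; the patient-year exposures below are back-calculated approximately from the reported event rates, so treat all numbers as illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Reported events and approximate exposures back-calculated from the
# published rates (2.3 and 3.8 events per 100 patient-years)
events_dev, py_dev = 39, 39 / 0.023
events_war, py_war = 34, 34 / 0.038

# With a vague Gamma(eps, eps) prior, the posterior for a Poisson event
# rate is Gamma(events + eps, patient_years + eps); sample both arms
eps = 1e-3
lam_dev = rng.gamma(events_dev + eps, 1.0 / (py_dev + eps), size=200_000)
lam_war = rng.gamma(events_war + eps, 1.0 / (py_war + eps), size=200_000)
ratio = lam_dev / lam_war

p_noninferior = (ratio < 2.0).mean()   # posterior prob. ratio < margin 2.0
p_superior = (ratio < 1.0).mean()      # posterior prob. device rate lower
```

Both Monte Carlo probabilities land comfortably above the trial's prespecified thresholds (97.5% and 95%), consistent with the reported >99.9% and 96.0% posterior probabilities from the trial's own model.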
Skinner, Sarah
2012-11-01
Magnetic resonance imaging (MRI) is the gold standard in noninvasive investigation of knee pain. It has a very high negative predictive value and may assist in avoiding unnecessary knee arthroscopy; its accuracy in the diagnosis of meniscal and anterior cruciate ligament (ACL) tears is greater than 89%; it has a greater than 90% sensitivity for the detection of medial meniscal tears; and it is probably better at assessing the posterior horn than arthroscopy.
Probabilistic Damage Characterization Using the Computationally-Efficient Bayesian Approach
NASA Technical Reports Server (NTRS)
Warner, James E.; Hochhalter, Jacob D.
2016-01-01
This work presents a computationally-efficient approach for damage determination that quantifies uncertainty in the provided diagnosis. Given strain sensor data that are polluted with measurement errors, Bayesian inference is used to estimate the location, size, and orientation of damage. This approach uses Bayes' Theorem to combine any prior knowledge an analyst may have about the nature of the damage with information provided implicitly by the strain sensor data to form a posterior probability distribution over possible damage states. The unknown damage parameters are then estimated based on samples drawn numerically from this distribution using a Markov Chain Monte Carlo (MCMC) sampling algorithm. Several modifications are made to the traditional Bayesian inference approach to provide significant computational speedup. First, an efficient surrogate model is constructed using sparse grid interpolation to replace a costly finite element model that must otherwise be evaluated for each sample drawn with MCMC. Next, the standard Bayesian posterior distribution is modified using a weighted likelihood formulation, which is shown to improve the convergence of the sampling process. Finally, a robust MCMC algorithm, Delayed Rejection Adaptive Metropolis (DRAM), is adopted to sample the probability distribution more efficiently. Numerical examples demonstrate that the proposed framework effectively provides damage estimates with uncertainty quantification and can yield orders of magnitude speedup over standard Bayesian approaches.
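The surrogate-model idea (replace a costly forward model with a cheap interpolant before running MCMC) can be sketched with a Chebyshev polynomial fit standing in for the paper's sparse-grid interpolant; the "expensive" model below is an invented one-dimensional function, not a finite element solve.

```python
import numpy as np

# "Expensive" forward model standing in for a costly finite element solve
def expensive_model(theta):
    return np.sin(3.0 * theta) + 0.5 * theta ** 2

# Build a cheap polynomial surrogate from a handful of training evaluations
# (a one-dimensional stand-in for sparse grid interpolation)
train_x = np.linspace(-2.0, 2.0, 25)
coeffs = np.polynomial.chebyshev.chebfit(
    train_x, expensive_model(train_x), deg=14)
surrogate = lambda theta: np.polynomial.chebyshev.chebval(theta, coeffs)

# Surrogate error over the prior support stays small, so every MCMC
# likelihood evaluation can use the surrogate instead of the full model
test_x = np.linspace(-2.0, 2.0, 401)
max_err = np.max(np.abs(surrogate(test_x) - expensive_model(test_x)))
```

Once built from 25 model runs, the surrogate replaces the expensive model inside the sampler, so tens of thousands of MCMC likelihood evaluations cost only polynomial evaluations; this is the source of the orders-of-magnitude speedup claimed above.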
A novel Bayesian framework for discriminative feature extraction in Brain-Computer Interfaces.
Suk, Heung-Il; Lee, Seong-Whan
2013-02-01
As there has been a paradigm shift in the learning load from a human subject to a computer, machine learning has been considered as a useful tool for Brain-Computer Interfaces (BCIs). In this paper, we propose a novel Bayesian framework for discriminative feature extraction for motor imagery classification in an EEG-based BCI in which the class-discriminative frequency bands and the corresponding spatial filters are optimized by means of the probabilistic and information-theoretic approaches. In our framework, the problem of simultaneous spatiospectral filter optimization is formulated as the estimation of an unknown posterior probability density function (pdf) that represents the probability that a single-trial EEG of predefined mental tasks can be discriminated in a state. In order to estimate the posterior pdf, we propose a particle-based approximation method by extending a factored-sampling technique with a diffusion process. An information-theoretic observation model is also devised to measure discriminative power of features between classes. From the viewpoint of classifier design, the proposed method naturally allows us to construct a spectrally weighted label decision rule by linearly combining the outputs from multiple classifiers. We demonstrate the feasibility and effectiveness of the proposed method by analyzing the results and its success on three public databases.
Mashimo, Yuta; Fukui, Makiko; Machida, Ryuichiro
2016-11-01
The egg structure of Paterdecolyus yanbarensis was examined using light, scanning electron and transmission electron microscopy. The egg surface shows a distinct honeycomb pattern formed by exochorionic ridges. Several micropyles are clustered on the ventral side of the egg. The egg membrane is composed of an exochorion penetrated with numerous aeropyles, an endochorion, and an extremely thin vitelline membrane. The endochorion is thickened at the posterior egg pole, probably associated with water absorption. A comparison of egg structure among Orthoptera revealed that the micropylar distribution pattern is conserved in Ensifera and Caelifera and might be regarded as a groundplan feature for each group; in Ensifera, multiple micropyles are clustered on the ventral side of the egg, whereas in Caelifera, micropyles are arranged circularly around the posterior pole of the egg. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Granade, Christopher; Wiebe, Nathan
2017-08-01
A major challenge facing existing sequential Monte Carlo methods for parameter estimation in physics stems from the inability of existing approaches to robustly deal with experiments that have different mechanisms that yield the results with equivalent probability. We address this problem here by proposing a form of particle filtering that clusters the particles that comprise the sequential Monte Carlo approximation to the posterior before applying a resampler. Through a new graphical approach to thinking about such models, we are able to devise an artificial-intelligence based strategy that automatically learns the shape and number of the clusters in the support of the posterior. We demonstrate the power of our approach by applying it to randomized gap estimation and a form of low circuit-depth phase estimation where existing methods from the physics literature either exhibit much worse performance or even fail completely.
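The cluster-before-resampling idea can be sketched on a bimodal posterior, where a naive Gaussian resampler would collapse or bridge the two modes. The tiny k-means below stands in for the paper's learned clustering, and the within-cluster Gaussian redraw is a simplified Liu-West-style resampler; both are illustrative choices, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)

# Particles approximating a bimodal posterior: two mechanisms produce the
# data with equal probability, as in randomized gap estimation
particles = np.concatenate([rng.normal(-2.0, 0.3, 500),
                            rng.normal(2.0, 0.3, 500)])
weights = np.full(particles.size, 1.0 / particles.size)

def kmeans_1d(x, k=2, iters=25):
    """Tiny 1-D k-means, standing in for the learned clustering step."""
    centers = np.linspace(x.min(), x.max(), k)   # deterministic spread init
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            mask = labels == j
            if mask.any():
                centers[j] = x[mask].mean()
    return labels, centers

labels, centers = kmeans_1d(particles)

# Resample separately inside each cluster so the Gaussian redraw neither
# merges the well-separated modes nor drops one of them
resampled = np.empty_like(particles)
start = 0
for j in np.unique(labels):
    members = particles[labels == j]
    n_j = int(round(weights[labels == j].sum() * particles.size))
    resampled[start:start + n_j] = rng.normal(
        members.mean(), members.std(), size=n_j)
    start += n_j
```

Resampling from a single Gaussian fitted to all particles would instead centre the new particles near zero, between the modes; clustering first preserves the multimodal support of the posterior.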
The right parietal cortex and time perception: back to Critchley and the Zeitraffer phenomenon.
Alexander, Iona; Cowey, Alan; Walsh, Vincent
2005-05-01
We investigated the involvement of the posterior parietal cortex in time perception by temporarily disrupting normal functioning in this region, in subjects making prospective judgements of time or pitch. Disruption of the right posterior parietal cortex significantly slowed reaction times when making time, but not pitch, judgements. Similar interference with the left parietal cortex and control stimulation over the vertex did not significantly change performance on either pitch or time tasks. The results show that the information processing necessary for temporal judgements involves the parietal cortex, probably to optimise spatiotemporal accuracy in voluntary action. The results are in agreement with a recent neuroimaging study and are discussed with regard to a psychological model of temporal processing and a recent proposal that time is part of a parietal cortex system for encoding magnitude information relevant for action.
Boessenecker, Robert W; Fordyce, R Ewan
2015-01-01
The Eocene history of cetacean evolution is now represented by the expansive fossil record of archaeocetes elucidating major morphofunctional shifts relating to the land to sea transition, but the change from archaeocetes to modern cetaceans is poorly established. New fossil material of the recently recognized family Eomysticetidae from the upper Oligocene Otekaike Limestone includes a new genus and species, Waharoa ruwhenua, represented by skulls and partial skeletons of an adult, juvenile, and a smaller juvenile. Ontogenetic status is confirmed by osteohistology of ribs. Waharoa ruwhenua is characterized by an elongate and narrow rostrum which retains vestigial alveoli and alveolar grooves. Palatal foramina and sulci are present only on the posterior half of the palate. The nasals are elongate, and the bony nares are positioned far anteriorly. Enormous temporal fossae are present adjacent to an elongate and narrow intertemporal region with a sharp sagittal crest. The earbones are characterized by retaining inner and outer posterior pedicles, lacking fused posterior processes, and retaining a separate accessory ossicle. Phylogenetic analysis supports inclusion of Waharoa ruwhenua within a monophyletic Eomysticetidae as the earliest diverging clade of toothless mysticetes. This eomysticetid clade also included Eomysticetus whitmorei, Micromysticetus rothauseni, Tohoraata raekohao, Tokarahia kauaeroa, Tokarahia lophocephalus, and Yamatocetus canaliculatus. Detailed study of ontogenetic change demonstrates postnatal elaboration of the sagittal and nuchal crests, elongation of the intertemporal region, inflation of the zygomatic processes, and an extreme proportional increase in rostral length. Tympanic bullae are nearly full sized during early postnatal ontogeny indicating precocial development of auditory structures, but do increase slightly in size. 
Positive allometry of the rostrum suggests an ontogenetic change in feeding ecology, from neonatal suckling to a more specialized adult feeding behaviour. Possible absence of baleen anteriorly, a delicate temporomandibular joint with probable synovial capsule, non-laterally deflected coronoid process, and anteroposteriorly expanded palate suggests skim feeding as likely mode of adult feeding for zooplankton. Isotopic data in concert with preservation of young juveniles suggests the continental shelf of Zealandia was an important calving ground for latitudinally migrating Oligocene baleen whales. PMID:26380800
Othenin-Girard, V; Boulvain, M; Guittier, M-J
2018-02-01
To describe the maternal and foetal outcomes of an occiput posterior foetal position at delivery, and to evaluate predictive factors of anterior rotation during labour. Descriptive retrospective analysis of a cohort of 439 women with foetuses in occiput posterior position during labour. Logistic regression analysis was used to quantify the effect of factors that may favour anterior rotation. Most foetuses (64%) rotated anteriorly during labour and 13% during the expulsive phase. The consequences of a persistent foetal occiput posterior position at delivery are a significantly increased average duration of the second stage of labour compared with other positions (65.19 minutes vs. 43.29, P=0.001); a higher percentage of caesarean sections (72.0% versus 4.7%, P<0.001) and instrumental deliveries (among vaginal deliveries, 60.7% versus 25.2%, P<0.001); more frequent third-degree perineal tears (14.3% vs. 0.6%, P<0.001); and more abundant blood loss (560 mL versus 344 mL, P<0.001). In a multivariable model including nulliparity, station of the presenting part and degree of flexion of the foetal head at complete dilatation, the only independent predictive factor of rotation at delivery was good flexion of the foetal head at complete dilatation, which multiplies the probability of anterior rotation by six. Good flexion of the foetal head is significantly associated with anterior rotation. Other studies exploring ways to increase anterior rotation during labour are needed to reduce the very high risk of caesarean section and instrumental delivery associated with the foetal occiput posterior position. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
Fayed, Nicolás; Modrego, Pedro J; García-Martí, Gracián; Sanz-Requena, Roberto; Marti-Bonmatí, Luis
2017-05-01
To assess the accuracy of magnetic resonance spectroscopy (1H-MRS) and brain volumetry in mild cognitive impairment (MCI) for predicting conversion to probable Alzheimer's disease (AD). Forty-eight patients fulfilling the criteria of amnestic MCI underwent conventional magnetic resonance imaging (MRI) followed by MRS and a T1-3D sequence on a 1.5 Tesla MR unit. At baseline the patients underwent neuropsychological examination. 1H-MRS of the brain was carried out by exploring the left medial occipital lobe and ventral posterior cingulate cortex (vPCC) using the LCModel software. A high-resolution T1-3D sequence was acquired to carry out the volumetric measurement. A cortical and subcortical parcellation strategy was used to obtain the volumes of each area within the brain. The patients were followed up to detect conversion to probable AD. After a 3-year follow-up, 15 (31.2%) patients converted to AD. The myo-inositol in the occipital cortex and glutamate+glutamine (Glx) in the posterior cingulate cortex predicted conversion to probable AD with 46.1% sensitivity and 90.6% specificity. The positive predictive value was 66.7%, and the negative predictive value was 80.6%, with an overall cross-validated classification accuracy of 77.8%. The volume of the third ventricle, the total white matter and the entorhinal cortex predicted conversion to probable AD with 46.7% sensitivity and 90.9% specificity. The positive predictive value was 70%, and the negative predictive value was 78.9%, with an overall cross-validated classification accuracy of 77.1%. Combining the volumetric and MRS measures, the prediction of probable AD has 38.5% sensitivity and 87.5% specificity, with a positive predictive value of 55.6%, a negative predictive value of 77.8% and an overall accuracy of 73.3%.
MRS and brain volumetric measures are each markers of cognitive decline and may serve as noninvasive tools to monitor cognitive changes and progression to dementia in patients with amnestic MCI, but the results do not support their routine use in clinical settings. Copyright © 2016 Elsevier Inc. All rights reserved.
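The predictive values reported above follow from Bayes' rule applied to the study's sensitivity, specificity, and conversion rate (15 of 48 patients). The sketch below recomputes them for the MRS marker; small discrepancies from the published 66.7%/80.6% reflect rounding of the underlying patient counts.

```python
def predictive_values(sens, spec, prevalence):
    """Bayes' rule for the post-test probabilities of a binary marker."""
    ppv = (sens * prevalence /
           (sens * prevalence + (1 - spec) * (1 - prevalence)))
    npv = (spec * (1 - prevalence) /
           (spec * (1 - prevalence) + (1 - sens) * prevalence))
    return ppv, npv

# MRS marker: sensitivity 46.1%, specificity 90.6%, 15/48 converters
ppv, npv = predictive_values(0.461, 0.906, 15 / 48)
```

The same function makes explicit why these markers transfer poorly to the clinic: in a screening population with a much lower conversion rate than 31%, the same sensitivity and specificity yield a far lower positive predictive value.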
Inference of reaction rate parameters based on summary statistics from experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khalil, Mohammad; Chowdhary, Kamaljit Singh; Safta, Cosmin
2016-10-15
Here, we present the results of an application of Bayesian inference and maximum entropy methods for the estimation of the joint probability density for the Arrhenius rate para meters of the rate coefficient of the H 2/O 2-mechanism chain branching reaction H + O 2 → OH + O. Available published data is in the form of summary statistics in terms of nominal values and error bars of the rate coefficient of this reaction at a number of temperature values obtained from shock-tube experiments. Our approach relies on generating data, in this case OH concentration profiles, consistent with the givenmore » summary statistics, using Approximate Bayesian Computation methods and a Markov Chain Monte Carlo procedure. The approach permits the forward propagation of parametric uncertainty through the computational model in a manner that is consistent with the published statistics. A consensus joint posterior on the parameters is obtained by pooling the posterior parameter densities given each consistent data set. To expedite this process, we construct efficient surrogates for the OH concentration using a combination of Pad'e and polynomial approximants. These surrogate models adequately represent forward model observables and their dependence on input parameters and are computationally efficient to allow their use in the Bayesian inference procedure. We also utilize Gauss-Hermite quadrature with Gaussian proposal probability density functions for moment computation resulting in orders of magnitude speedup in data likelihood evaluation. Despite the strong non-linearity in the model, the consistent data sets all res ult in nearly Gaussian conditional parameter probability density functions. The technique also accounts for nuisance parameters in the form of Arrhenius parameters of other rate coefficients with prescribed uncertainty. 
The resulting pooled parameter probability density function is propagated through stoichiometric hydrogen-air auto-ignition computations to illustrate the need to account for correlation among the Arrhenius rate parameters of one reaction and across rate parameters of different reactions.
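The Gauss-Hermite moment computation mentioned above can be sketched in a few lines. The change of variables below is the standard one for expectations under a Gaussian proposal; the test function and parameter values are illustrative, not taken from the paper.

```python
import numpy as np

def gh_expectation(f, mu, sigma, n=20):
    """Approximate E[f(X)] for X ~ N(mu, sigma^2) by Gauss-Hermite quadrature.

    With the substitution x = mu + sqrt(2)*sigma*t, the Gaussian expectation
    becomes (1/sqrt(pi)) * sum_i w_i * f(mu + sqrt(2)*sigma*t_i).
    """
    nodes, weights = np.polynomial.hermite.hermgauss(n)
    x = mu + np.sqrt(2.0) * sigma * nodes
    return weights @ f(x) / np.sqrt(np.pi)

# Second moment of N(1, 0.5^2): exact value is mu^2 + sigma^2 = 1.25
approx = gh_expectation(lambda x: x**2, 1.0, 0.5)
```

Because the rule is exact for polynomials of degree below 2n, low-order moments are recovered to machine precision, which is what makes the quadrature so much cheaper than Monte Carlo for the likelihood evaluations described above.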
Multi-Target Tracking Using an Improved Gaussian Mixture CPHD Filter.
Si, Weijian; Wang, Liwei; Qu, Zhiyu
2016-11-23
The cardinalized probability hypothesis density (CPHD) filter is an alternative approximation to the full multi-target Bayesian filter for tracking multiple targets. Although the joint propagation of the posterior intensity and cardinality distribution in its recursion yields more reliable estimates of the target number than the PHD filter, the CPHD filter suffers from the "spooky effect", in which PHD mass shifts arbitrarily between targets in the presence of missed detections. To address this issue in the Gaussian mixture (GM) implementation of the CPHD filter, this paper presents an improved GM-CPHD filter, which incorporates a weight redistribution scheme into the filtering process to modify the updated weights of the Gaussian components when missed detections occur. In addition, an efficient gating strategy that can adaptively adjust the gate sizes according to the number of missed detections of each Gaussian component is also presented to further improve the computational efficiency of the proposed filter. Simulation results demonstrate that the proposed method offers favorable performance in terms of both estimation accuracy and robustness to clutter and detection uncertainty over the existing methods.
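One plausible form of weight redistribution (an illustration of the idea, not the authors' exact scheme) is to make each group of Gaussian components, detected and missed, retain its predicted total mass after the update, so that mass cannot "spookily" flow away from components whose measurements were missed:

```python
import numpy as np

def redistribute_weights(predicted_w, updated_w, detected):
    """Rescale updated GM weights so that the detected and missed groups
    each keep their predicted total mass (illustrative scheme only)."""
    predicted_w = np.asarray(predicted_w, float)
    w = np.asarray(updated_w, float).copy()
    detected = np.asarray(detected, bool)
    for mask in (detected, ~detected):
        group_sum = w[mask].sum()
        if mask.any() and group_sum > 0:
            # Scale the group so its total matches the predicted total
            w[mask] *= predicted_w[mask].sum() / group_sum
    return w
```

For two equally weighted components where only the first is detected, a naive update would shift most of the mass onto the detected component; the rescaling above restores each component's share.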
Cavagnaro, Daniel R; Myung, Jay I; Pitt, Mark A; Kujala, Janne V
2010-04-01
Discriminating among competing statistical models is a pressing issue for many experimentalists in the field of cognitive science. Resolving this issue begins with designing maximally informative experiments. To this end, the problem to be solved in adaptive design optimization is identifying experimental designs under which one can infer the underlying model in the fewest possible steps. When the models under consideration are nonlinear, as is often the case in cognitive science, this problem can be impossible to solve analytically without simplifying assumptions. However, as we show in this letter, a full solution can be found numerically with the help of a Bayesian computational trick derived from the statistics literature, which recasts the problem as a probability density simulation in which the optimal design is the mode of the density. We use a utility function based on mutual information and give three intuitive interpretations of the utility function in terms of Bayesian posterior estimates. As a proof of concept, we offer a simple example application to an experiment on memory retention.
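The mutual-information utility for model discrimination can be made concrete with a toy memory-retention example in the spirit of the paper's application: choose the retention lag t that is most informative about which of two decay models (power law vs. exponential) generated the data. The parameter values and discrete design grid are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.stats import binom

def mutual_information(design_t, n_trials=10):
    """Expected information gain about the model indicator M from a single
    retention test of n_trials items at lag design_t, under equal priors.
    MI(M; Y | d) = H(Y) - E_M[H(Y | M)]."""
    p_models = [0.9 * (design_t + 1) ** -0.4,   # power-law retention curve
                0.9 * np.exp(-0.1 * design_t)]  # exponential retention curve
    y = np.arange(n_trials + 1)
    lik = np.array([binom.pmf(y, n_trials, p) for p in p_models])
    marg = lik.mean(axis=0)  # predictive distribution under equal priors
    H = lambda q: -np.sum(q[q > 0] * np.log(q[q > 0]))
    return H(marg) - np.mean([H(row) for row in lik])

# The "optimal design" is the lag maximizing the utility
best_t = max(range(1, 50), key=mutual_information)
```

The utility is bounded by the prior entropy of the model indicator (ln 2 for two equally likely models), so a value near that bound means a single test at that lag nearly settles the model question.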
NASA Astrophysics Data System (ADS)
Nawaz, Muhammad Atif; Curtis, Andrew
2018-04-01
We introduce a new Bayesian inversion method that estimates the spatial distribution of geological facies from attributes of seismic data, by showing how the usual probabilistic inverse problem can be solved within an optimization framework while still providing fully probabilistic results. Our mathematical model consists of seismic attributes as observed data, which are assumed to have been generated by the geological facies. The method infers the post-inversion (posterior) probability density of the facies, plus some other unknown model parameters, from the seismic attributes and geological prior information. Most previous research in this domain is based on the localized-likelihoods assumption, whereby the seismic attributes at a location are assumed to depend only on the facies at that location. Such an assumption is unrealistic because of imperfect seismic data acquisition and processing, and fundamental limitations of seismic imaging methods. In this paper, we relax this assumption: we allow probabilistic dependence between seismic attributes at a location and the facies in any neighbourhood of that location through a spatial filter. We term such likelihoods quasi-localized.
Variational Gaussian approximation for Poisson data
NASA Astrophysics Data System (ADS)
Arridge, Simon R.; Ito, Kazufumi; Jin, Bangti; Zhang, Chen
2018-02-01
The Poisson model is frequently employed to describe count data, but in a Bayesian context it leads to an analytically intractable posterior probability distribution. In this work, we analyze a variational Gaussian approximation to the posterior distribution arising from the Poisson model with a Gaussian prior. This is achieved by seeking an optimal Gaussian distribution minimizing the Kullback-Leibler divergence from the posterior distribution to the approximation, or equivalently maximizing the lower bound for the model evidence. We derive an explicit expression for the lower bound, and show the existence and uniqueness of the optimal Gaussian approximation. The lower bound functional can be viewed as a variant of classical Tikhonov regularization that penalizes also the covariance. Then we develop an efficient alternating direction maximization algorithm for solving the optimization problem, and analyze its convergence. We discuss strategies for reducing the computational complexity via low rank structure of the forward operator and the sparsity of the covariance. Further, as an application of the lower bound, we discuss hierarchical Bayesian modeling for selecting the hyperparameter in the prior distribution, and propose a monotonically convergent algorithm for determining the hyperparameter. We present extensive numerical experiments to illustrate the Gaussian approximation and the algorithms.
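The lower bound described above has a simple closed form in one dimension, which makes the idea easy to see. Below is a minimal sketch for y ~ Poisson(exp(x)) with prior x ~ N(m0, s0²) and variational posterior q(x) = N(m, s²); the paper treats the general case with a forward operator and covariance structure, which is omitted here.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def elbo(params, y, m0=0.0, s0=1.0):
    """Evidence lower bound for y ~ Poisson(exp(x)), x ~ N(m0, s0^2),
    with q(x) = N(m, s^2). Every term is closed form because
    E_q[exp(x)] = exp(m + s^2/2)."""
    m, log_s = params          # optimize log(s) to keep s positive
    s2 = np.exp(2 * log_s)
    loglik = y * m - np.exp(m + s2 / 2) - gammaln(y + 1)
    logprior = -0.5 * ((m - m0) ** 2 + s2) / s0**2 - 0.5 * np.log(2 * np.pi * s0**2)
    entropy = 0.5 * np.log(2 * np.pi * np.e * s2)
    return loglik + logprior + entropy

# Maximize the bound (equivalently, minimize the KL divergence to the posterior)
y = 4
res = minimize(lambda p: -elbo(p, y), x0=[0.0, 0.0])
m_opt, s_opt = res.x[0], np.exp(res.x[1])
```

For y = 4 the true posterior mode solves 4 - exp(x) - x = 0 (about x ≈ 1.07), and the optimal Gaussian lands close to it with a shrunken variance, illustrating the existence and uniqueness result the abstract states.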
Liesenjohann, Thilo; Neuhaus, Birger; Schmidt-Rhaesa, Andreas
2006-08-01
The anterior and posterior head sensory organs of Dactylopodola baltica (Macrodasyida, Gastrotricha) were investigated by transmission electron microscopy (TEM). In addition, whole individuals were labeled with phalloidin to mark F-actin and with anti-alpha-tubulin antibodies to mark microtubules and studied with confocal laser scanning microscopy. Immunocytochemistry reveals that the large number of ciliary processes in the anterior head sensory organ contain F-actin; no signal could be detected for alpha-tubulin. Labeling with anti-alpha-tubulin antibodies revealed that the anterior and posterior head sensory organs are innervated by a common stem of nerves from the lateral nerve cords just anterior of the dorsal brain commissure. TEM studies showed that the anterior head sensory organ is composed of one sheath cell and one sensory cell with a single branching cilium that possesses a basal inflated part and regularly arranged ciliary processes. Each ciliary process contains one central microtubule. The posterior head sensory organ consists of at least one pigmented sheath cell and several probably monociliary sensory cells. Each cilium branches into irregularly arranged ciliary processes. These characters are assumed to belong to the ground pattern of the Gastrotricha. Copyright 2006 Wiley-Liss, Inc.
Park, C; Choi, J B; Lee, Y-S; Chang, H-S; Shin, C S; Kim, S; Han, D W
2015-04-01
Posterior neck pain following thyroidectomy is common because full neck extension is required during the procedure. We evaluated the effect of intra-operative transcutaneous electrical nerve stimulation on postoperative neck pain in patients undergoing total thyroidectomy under general anaesthesia. One hundred patients were randomly assigned to one of two groups; 50 patients received transcutaneous electrical nerve stimulation applied to the trapezius muscle and 50 patients acted as controls. Postoperative posterior neck pain and anterior wound pain were evaluated using an 11-point numerical rating scale at 30 min, 6 h, 24 h and 48 h following surgery. The numerical rating scale for posterior neck pain was significantly lower in the transcutaneous electrical nerve stimulation group compared with the control group at all time points (p < 0.05). There were no significant differences in the numerical rating scale for anterior wound pain at any time point. No adverse effects related to transcutaneous electrical nerve stimulation were observed. We conclude that intra-operative transcutaneous electrical nerve stimulation applied to the trapezius muscle reduced posterior neck pain following thyroidectomy. © 2014 The Association of Anaesthetists of Great Britain and Ireland.
Wu, Qiaofeng; Yeh, Alvin T
2008-02-01
To characterize the microstructural response of the rabbit cornea to changes in intraocular pressure (IOP) by using nonlinear optical microscopy (NLOM). Isolated rabbit corneas were mounted on an artificial anterior chamber in series with a manometer and were hydrostatically pressurized by a reservoir. The chamber was mounted on an upright microscope stage of a custom-built NLOM system for corneal imaging without using exogenous stains or dyes. Second harmonic generation in collagen was used to image through the full thickness of the central corneal stroma at IOPs between 5 and 20 mm Hg. Microstructural morphology changes as a function of IOP were used to characterize the depth-dependent response of the central cornea. Regional collagen lamellae architecture through the full thickness of the stroma was specifically imaged as a function of IOP. Hypotensive corneas showed gaps between lamellar structures that decreased in size with increasing IOP. These morphologic features appear to result from interwoven lamellae oriented along the anterior-posterior axis and parallel to the cornea surface. They appear throughout the full thickness and disappear with tension in the anterior but persist in the posterior central cornea, even at hypertensive IOP. NLOM reveals interwoven collagen lamellae sheets through the full thickness of the rabbit central cornea oriented along the anterior-posterior axis and parallel to the surface. The nondestructive nature of NLOM allows 3-dimensional imaging of stromal architecture as a function of IOP in situ. Collagen morphologic features were used as an indirect measure of depth-dependent mechanical response to changes in IOP.
RadVel: General toolkit for modeling Radial Velocities
NASA Astrophysics Data System (ADS)
Fulton, Benjamin J.; Petigura, Erik A.; Blunt, Sarah; Sinukoff, Evan
2018-01-01
RadVel models Keplerian orbits in radial velocity (RV) time series. The code is written in Python with a fast Kepler's equation solver written in C. It provides a framework for fitting RVs using maximum a posteriori optimization and computing robust confidence intervals by sampling the posterior probability density via Markov Chain Monte Carlo (MCMC). RadVel can perform Bayesian model comparison and produces publication-quality plots and LaTeX tables.
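The core workflow RadVel automates, fitting a Keplerian RV model by maximum a posteriori optimization, can be sketched with plain scipy. This is not RadVel's actual API: the circular-orbit model, synthetic data, and flat-prior bounds below are simplifying assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def rv_model(t, P, K, t0):
    """Circular-orbit radial-velocity model (eccentricity omitted for brevity):
    period P, semi-amplitude K, reference epoch t0."""
    return K * np.sin(2 * np.pi * (t - t0) / P)

# Synthetic observations from a known orbit, with Gaussian noise
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 100, 60))
sigma = 2.0
rv_obs = rv_model(t, P=20.0, K=10.0, t0=3.0) + rng.normal(0, sigma, t.size)

def neg_log_post(theta):
    P, K, t0 = theta
    # Flat priors inside broad bounds; Gaussian likelihood
    if not (1 < P < 100 and 0 < K < 100):
        return np.inf
    resid = rv_obs - rv_model(t, P, K, t0)
    return 0.5 * np.sum((resid / sigma) ** 2)

fit = minimize(neg_log_post, x0=[19.0, 8.0, 2.0], method="Nelder-Mead")
P_hat, K_hat, t0_hat = fit.x
```

In RadVel proper, the MAP solution found this way seeds an MCMC sampler that maps out the posterior and yields the robust confidence intervals the abstract mentions.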
Computational statistics using the Bayesian Inference Engine
NASA Astrophysics Data System (ADS)
Weinberg, Martin D.
2013-09-01
This paper introduces the Bayesian Inference Engine (BIE), a general parallel, optimized software package for parameter inference and model selection. This package is motivated by the analysis needs of modern astronomical surveys and the need to organize and reuse expensive derived data. The BIE is the first platform for computational statistics designed explicitly to enable Bayesian update and model comparison for astronomical problems. Bayesian update is based on the representation of high-dimensional posterior distributions using metric-ball-tree based kernel density estimation. Among its algorithmic offerings, the BIE emphasizes hybrid tempered Markov chain Monte Carlo schemes that robustly sample multimodal posterior distributions in high-dimensional parameter spaces. Moreover, the BIE implements a full persistence or serialization system that stores the full byte-level image of the running inference and previously characterized posterior distributions for later use. Two new algorithms to compute the marginal likelihood from the posterior distribution, developed for and implemented in the BIE, enable model comparison for complex models and data sets. Finally, the BIE was designed to be a collaborative platform for applying Bayesian methodology to astronomy. It includes an extensible, object-oriented framework that implements every aspect of the Bayesian inference. By providing a variety of statistical algorithms for all phases of the inference problem, a scientist may explore a variety of approaches with a single model and data implementation. Additional technical and download details are available from http://www.astro.umass.edu/bie. The BIE is distributed under the GNU General Public License.
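The idea of representing a sampled posterior by a kernel density estimate, so it can be stored, evaluated, and reused later, can be illustrated with scipy's `gaussian_kde` standing in for the BIE's metric-ball-tree estimator. The correlated-Gaussian sample cloud below is synthetic.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Samples standing in for an MCMC posterior over two parameters
rng = np.random.default_rng(1)
cov = [[1.0, 0.6], [0.6, 1.0]]
samples = rng.multivariate_normal([0, 0], cov, size=5000).T  # shape (2, n)

kde = gaussian_kde(samples)           # smooth density built from the sample cloud
density_at_mode = kde([[0], [0]])[0]  # evaluate the estimated posterior anywhere
```

The true density at the mode of this Gaussian is 1/(2π√det Σ) ≈ 0.199, and the KDE recovers it closely; the same object can then serve as a reusable, serializable stand-in for the full sample set, which is the role kernel density estimation plays in the BIE.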
Carbocyanine dye labeling reveals a new motor nucleus in octopus brain.
Robertson, J D; Schwartz, O M; Lee, P
1993-02-22
This work aims at a better understanding of the organization of the brain of Octopus vulgaris, emphasizing the touch and visual learning centers. We injected the carbocyanine dye, DiI, into the cerebrobrachial connectives and, separately, into the brachial nerves of living octopuses. In both experiments, retrogradely transported granules of DiI appeared in motor neurons in the superior buccal, posterior buccal and subvertical lobes and in a hitherto unsuspected motor nucleus of several hundred neurons in the posterior dorsal basal and median basal lobes. In addition we labeled afferent fibers by injecting DiI into the caudal (sensory) division of the cerebrobrachial connective on one side; the label spread throughout the superior buccal, posterior buccal and the lateral and median inferior frontal lobes mainly on the injected side. It extended through the cerebral tract into the subvertical lobe, into the superior frontal lobe through the interfrontal tract, through the posterior buccal commissure into the opposite posterior buccal lobe and into the median inferior frontal lobe. The work suggests a new function for the posterior dorsal and median basal lobes, which are shown for the first time to project through the inferior frontal lobe system into the brachial nerves. In addition it represents the first full report of the successful use of the carbocyanine dyes DiI and DiO for labeling nerve tissue in a live invertebrate animal.
Optimal Post-Operative Immobilisation for Supracondylar Humeral Fractures.
Azzolin, Lucas; Angelliaume, Audrey; Harper, Luke; Lalioui, Abdelfettah; Delgove, Anaïs; Lefèvre, Yan
2018-05-25
Supracondylar humeral fractures (SCHFs) are very common in paediatric patients. In France, percutaneous fixation with two lateral-entry pins is widely used after successful closed reduction. Post-operative immobilisation is typically with a long arm cast combined with a tubular-bandage sling that immobilises the shoulder and holds the arm in adduction and internal rotation to prevent external rotation of the shoulder, which might cause secondary displacement. The objective of this study was to compare this standard immobilisation technique to a posterior plaster splint with a simple sling. Secondary displacement is not more common with a posterior plaster splint and sling than with a long arm cast. 100 patients with extension Gartland type III SCHFs managed by closed reduction and percutaneous fixation with two lateral-entry pins between December 2011 and December 2015 were assessed retrospectively. Post-operative immobilisation was with a posterior plaster splint and a simple sling worn for 4 weeks. Radiographs were obtained on days 1, 45, and 90. Secondary displacement occurred in 8% of patients. No patient required revision surgery. The secondary displacement rate was comparable to earlier reports. Of the 8 secondary displacements, 5 were ascribable to technical errors. The remaining 3 were not caused by rotation of the arm and would probably not have been prevented by using the tubular-bandage sling. A posterior plaster splint combined with a simple sling is a simple and effective immobilisation method for SCHFs provided internal fixation is technically optimal. IV, retrospective observational study. Copyright © 2018. Published by Elsevier Masson SAS.
Al-Shayyab, Mohammad H
2017-01-01
The aim of this study was to evaluate the efficacy of, and patients' subjective responses to, periodontal ligament (PDL) anesthetic injection compared to traditional local-anesthetic infiltration injection for the nonsurgical extraction of one posterior maxillary permanent tooth. All patients scheduled for nonsurgical symmetrical maxillary posterior permanent tooth extraction in the Department of Oral and Maxillofacial Surgery at the University of Jordan Hospital, Amman, Jordan over a 7-month period were invited to participate in this prospective randomized double-blinded split-mouth study. Every patient received the recommended volume of 2% lidocaine with 1:100,000 epinephrine for PDL injection on the experimental side and for local infiltration on the control side. A visual analog scale (VAS) and verbal rating scale (VRS) were used to describe pain felt during injection and extraction, respectively. Statistical significance was based on probability values <0.05 and measured using χ² and Student's t-tests and nonparametric Mann-Whitney and Kruskal-Wallis tests. Of the 73 patients eligible for this study, 55 met the inclusion criteria: 32 males and 23 females, with a mean age of 34.87±14.93 years. Differences in VAS scores and VRS data between the two techniques were statistically significant (P < 0.001) and in favor of the infiltration injection. The PDL injection may not be the alternative anesthetic technique of choice to routine local infiltration for the nonsurgical extraction of one posterior maxillary permanent tooth.
Sung, Ming-Hua; Deng, Ming-Chung; Chung, Yi-Hsuan; Huang, Yu-Liang; Chang, Chia-Yi; Lan, Yu-Ching; Chou, Hsin-Lin; Chao, Day-Yu
2015-12-01
Since 2010, a new variant of porcine epidemic diarrhea virus (PEDV) belonging to Genogroup 2 has been transmitting in China and further spreading to the United States and other Asian countries including Taiwan. In order to characterize in detail the temporal and geographic relationships among PEDV strains, the present study systematically evaluated the evolutionary patterns and phylogenetic resolution in each gene of the whole PEDV genome in order to determine which regions provided the maximal interpretative power. The result was further applied to identify the origin of PEDV that caused the 2014 epidemic in Taiwan. Thirty-four full genome sequences were downloaded from GenBank and divided into three non-mutually exclusive groups, namely, worldwide, Genogroup 2 and China, to cover different ranges of secular and spatial trends. Each dataset was then divided into different alignments by different genes for likelihood mapping and phylogenetic analysis. Our study suggested that both nsp3 and S genes contained the highest phylogenetic signal with substitution rate and phylogenetic topology similar to those obtained from the complete genome. Furthermore, the proportion of nodes with high posterior support (posterior probability >0.8) was similar between nsp3 and S genes. The nsp3 gene sequences from three clinical samples of swine with PEDV infections were aligned with other strains available from GenBank and the results suggested that the virus responsible for the 2014 PEDV outbreak in Taiwan clustered together with Clade I from the US within Genogroup 2. In conclusion, the current study identified the nsp3 gene as an alternative marker for a rapid and unequivocal classification of the circulating PEDV strains which provides complementary information to the S gene in identifying the emergence of epidemic strain resulting from recombination. Copyright © 2015 Elsevier B.V. All rights reserved.
Bayesian anomaly detection in monitoring data applying relevance vector machine
NASA Astrophysics Data System (ADS)
Saito, Tomoo
2011-04-01
A method is developed for automatically classifying monitoring data into two categories, normal and anomalous, in order to remove anomalous records from the enormous amount of monitoring data. The relevance vector machine (RVM) is applied to a probabilistic discriminative model with basis functions and weight parameters whose posterior PDF (probability density function), conditional on the learning data set, is given by Bayes' theorem. The proposed framework is applied to actual monitoring data sets containing some anomalous data collected at two buildings in Tokyo, Japan. The trained models discriminate anomalous data from normal data very clearly, assigning high probabilities of being normal to normal data and low probabilities of being normal to anomalous data.
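The discriminative model underlying this approach, p(normal | x) = σ(Σₖ wₖ φₖ(x)) with basis functions φₖ and a Gaussian prior on the weights, can be sketched as below. For brevity this sketch uses a single shared weight precision fit by MAP; a full RVM instead learns one precision per weight, which prunes basis functions and yields sparsity. The 1-D toy data are synthetic.

```python
import numpy as np
from scipy.optimize import minimize

def rbf_design(x, centers, width=1.0):
    """Radial basis functions phi_k(x), the linear model's features."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width**2))

# Toy monitoring feature: label 1 = normal, 0 = anomalous
rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(0, 1, 80), rng.normal(5, 1, 20)])
y = np.concatenate([np.ones(80), np.zeros(20)])
centers = np.linspace(-3, 8, 12)
Phi = rbf_design(x, centers)

def neg_log_post(w, alpha=1.0):
    # Bernoulli likelihood plus Gaussian prior on weights (shared
    # precision alpha here; an RVM learns a separate alpha_k per weight)
    z = Phi @ w
    loglik = np.sum(y * z - np.logaddexp(0, z))
    return -loglik + 0.5 * alpha * w @ w

w_map = minimize(neg_log_post, np.zeros(12), method="BFGS").x
# Probability of being "normal" at a typical point and at an outlier
p_normal = 1 / (1 + np.exp(-rbf_design(np.array([0.0, 5.0]), centers) @ w_map))
```

As in the abstract, the fitted model assigns a high probability of being normal near the bulk of the data and a low one in the anomalous region.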
Oh, Won Seok; Lee, Yong Seuk; Kim, Byung Kak; Sim, Jae Ang; Lee, Beom Koo
2016-06-01
To analyze the contact mechanics of the femoral component and polyethylene of the Low Contact Stress rotating platform (LCS-RP) in non-weight-bearing and weight-bearing conditions using full-flexion lateral radiographs. From May 2009 to December 2013, 58 knees in 41 patients diagnosed with osteoarthritis and treated with total knee arthroplasty (TKA) were included in this study. TKA was performed using an LCS-RP knee prosthesis. Full-flexion lateral radiographs in both weight-bearing and non-weight-bearing conditions were taken at least one month postoperatively (average, 28.8 months). Translation of the femoral component was determined by the contact point between the femoral component and the polyethylene. Maximum flexion was measured as the angle between lines drawn at the midpoints of the femur and tibia. Under the weight-bearing condition, the contact point between the femoral component and the polyethylene shifted posteriorly, the joint was more congruent, and maximum flexion was deeper than under the non-weight-bearing condition.
Bayesian parameter estimation for chiral effective field theory
NASA Astrophysics Data System (ADS)
Wesolowski, Sarah; Furnstahl, Richard; Phillips, Daniel; Klco, Natalie
2016-09-01
The low-energy constants (LECs) of a chiral effective field theory (EFT) interaction in the two-body sector are fit to observable data using a Bayesian parameter estimation framework. By using Bayesian prior probability distributions (pdfs), we quantify relevant physical expectations such as LEC naturalness and include them in the parameter estimation procedure. The final result is a posterior pdf for the LECs, which can be used to propagate uncertainty resulting from the fit to data to the final observable predictions. The posterior pdf also allows an empirical test of operator redundancy and other features of the potential. We compare results of our framework with other fitting procedures, interpreting the underlying assumptions in Bayesian probabilistic language. We also compare results from fitting all partial waves of the interaction simultaneously to cross section data compared to fitting to extracted phase shifts, appropriately accounting for correlations in the data. Supported in part by the NSF and DOE.
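For a linear(ized) model, combining a Gaussian "naturalness" prior on the coefficients with a Gaussian likelihood gives the posterior pdf in closed form, which illustrates how prior expectations enter the fit. The quadratic toy observable, coefficient values, and naturalness scale below are illustrative assumptions, not the chiral EFT interaction itself.

```python
import numpy as np

# Toy "EFT" fit: observable g(x) = c0 + c1*x + c2*x^2, with a naturalness
# prior c_i ~ N(0, abar^2) encoding the expectation that coefficients are O(1).
rng = np.random.default_rng(3)
x = np.linspace(0, 0.5, 10)
X = np.vander(x, 3, increasing=True)          # columns [1, x, x^2]
c_true = np.array([0.8, -1.2, 0.5])
sigma = 0.05                                  # data noise level
y = X @ c_true + rng.normal(0, sigma, x.size)

abar = 2.0  # naturalness scale
# Conjugate Gaussian posterior for c | y: precision = X'X/sigma^2 + I/abar^2
cov_post = np.linalg.inv(X.T @ X / sigma**2 + np.eye(3) / abar**2)
mean_post = cov_post @ (X.T @ y) / sigma**2
```

The posterior covariance `cov_post` is exactly what gets propagated to observable predictions, and comparing fits with and without the prior term shows how naturalness regularizes poorly constrained higher-order coefficients.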
Multiclass feature selection for improved pediatric brain tumor segmentation
NASA Astrophysics Data System (ADS)
Ahmed, Shaheen; Iftekharuddin, Khan M.
2012-03-01
In our previous work, we showed that fractal-based texture features are effective in detection, segmentation and classification of posterior-fossa (PF) pediatric brain tumor in multimodality MRI. We exploited an information-theoretic approach, Kullback-Leibler divergence (KLD), for feature selection and ranking of different texture features. We further combined the feature selection technique with a segmentation method, Expectation Maximization (EM), for segmentation of tumor (T) and non-tumor (NT) tissues. In this work, we extend the two-class KLD technique to multiclass for effectively selecting the best features for brain tumor (T), cyst (C) and non-tumor (NT). We further obtain segmentation robustness for each tissue type by computing Bayes' posterior probabilities and the corresponding number of pixels for each tissue segment in MRI patient images. We evaluate improved tumor segmentation robustness using different similarity metrics for 5 patients in T1, T2 and FLAIR modalities.
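One plausible multiclass extension of a two-class KLD feature-ranking criterion is to sum the symmetric KL divergences over all pairs of class-conditional feature histograms; the binning and smoothing details below are illustrative assumptions, not necessarily the authors' exact formulation.

```python
import numpy as np

def multiclass_kld_score(feature, labels, bins=16, eps=1e-10):
    """Rank a feature by the summed pairwise symmetric KL divergence
    between its class-conditional histograms. Higher = better separation
    among classes (e.g., tumor, cyst, non-tumor)."""
    edges = np.histogram_bin_edges(feature, bins=bins)
    classes = np.unique(labels)
    hists = []
    for c in classes:
        h, _ = np.histogram(feature[labels == c], bins=edges)
        hists.append((h + eps) / (h + eps).sum())  # smoothed, normalized
    score = 0.0
    for i in range(len(classes)):
        for j in range(i + 1, len(classes)):
            p, q = hists[i], hists[j]
            score += np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p))
    return score
```

A feature whose distribution differs across the three tissue classes scores high and would be retained; a feature with identical class-conditional distributions scores near zero and would be discarded before segmentation.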
Jedidi, H; Daury, N; Capa, R; Bahri, M A; Collette, F; Feyers, D; Bastin, C; Maquet, P; Salmon, E
2015-11-01
Capgras delusion is characterized by the misidentification of people and by the delusional belief that the misidentified persons have been replaced by impostors, generally perceived as persecutors. Since little is known regarding the neural correlates of Capgras syndrome, the cerebral metabolic pattern of a patient with probable Alzheimer's disease (AD) and Capgras syndrome was compared with those of 24 healthy elderly participants and 26 patients with AD without delusional syndrome. Compared with both the healthy group and the AD group, the patient had significant hypometabolism in frontal and posterior midline structures. In the light of current neural models of face perception, our patient's Capgras syndrome may be related to impaired recognition of a familiar face, subserved by the posterior cingulate/precuneus cortex, and impaired reflection about personally relevant knowledge related to a face, subserved by the dorsomedial prefrontal cortex. © The Author(s) 2013.
Using Latent Class Analysis to Model Temperament Types.
Loken, Eric
2004-10-01
Mixture models are appropriate for data that arise from a set of qualitatively different subpopulations. In this study, latent class analysis was applied to observational data from a laboratory assessment of infant temperament at four months of age. The EM algorithm was used to fit the models, and the Bayesian method of posterior predictive checks was used for model selection. Results show at least three types of infant temperament, with patterns consistent with those identified by previous researchers who classified the infants using a theoretically based system. Multiple imputation of group memberships is proposed as an alternative to assigning subjects to the latent class with maximum posterior probability in order to reflect variance due to uncertainty in the parameter estimation. Latent class membership at four months of age predicted longitudinal outcomes at four years of age. The example illustrates issues relevant to all mixture models, including estimation, multi-modality, model selection, and comparisons based on the latent group indicators.
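The EM fit and the multiple-imputation idea described above can be sketched for a small latent class model on binary indicators. Assuming conditional independence of items within a class, the E-step computes posterior class probabilities (responsibilities) and the M-step re-estimates class sizes and item probabilities; memberships are then drawn from the responsibilities rather than assigned by argmax. The two-class structure and item probabilities below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(5)
# Toy binary indicators (e.g., coded behaviors) generated from two latent classes
true_p = np.array([[0.9, 0.8, 0.2], [0.2, 0.3, 0.9]])
z = rng.integers(0, 2, 300)
X = (rng.random((300, 3)) < true_p[z]).astype(float)

# EM for a 2-class latent class model (conditional independence within class)
pi = np.array([0.5, 0.5])
p = np.array([[0.6, 0.6, 0.4], [0.4, 0.4, 0.6]])
for _ in range(200):
    # E-step: log p(z = k | x_i), up to a constant, then normalize
    log_post = X @ np.log(p.T) + (1 - X) @ np.log(1 - p.T) + np.log(pi)
    log_post -= log_post.max(axis=1, keepdims=True)
    resp = np.exp(log_post)
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: class proportions and item probabilities
    pi = resp.mean(axis=0)
    p = np.clip((resp.T @ X) / resp.sum(axis=0)[:, None], 1e-6, 1 - 1e-6)

# Multiple imputation: draw memberships from the posterior instead of argmax,
# so downstream analyses reflect classification uncertainty
imputations = [(rng.random(300) < resp[:, 1]).astype(int) for _ in range(5)]
```

Running a downstream analysis once per imputed membership vector and pooling the results propagates the uncertainty in class assignment, which a single argmax classification would hide.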
Modeling Dynamic Contrast-Enhanced MRI Data with a Constrained Local AIF.
Duan, Chong; Kallehauge, Jesper F; Pérez-Torres, Carlos J; Bretthorst, G Larry; Beeman, Scott C; Tanderup, Kari; Ackerman, Joseph J H; Garbow, Joel R
2018-02-01
This study aims to develop a constrained local arterial input function (cL-AIF) to improve quantitative analysis of dynamic contrast-enhanced (DCE)-magnetic resonance imaging (MRI) data by accounting for the contrast-agent bolus amplitude error in the voxel-specific AIF. Bayesian probability theory-based parameter estimation and model selection were used to compare tracer kinetic modeling employing either the measured remote-AIF (R-AIF, i.e., the traditional approach) or an inferred cL-AIF against both in silico DCE-MRI data and clinical, cervical cancer DCE-MRI data. When the data model included the cL-AIF, tracer kinetic parameters were correctly estimated from in silico data under contrast-to-noise conditions typical of clinical DCE-MRI experiments. Considering the clinical cervical cancer data, Bayesian model selection was performed for all tumor voxels of the 16 patients (35,602 voxels in total). Among those voxels, a tracer kinetic model that employed the voxel-specific cL-AIF was preferred (i.e., had a higher posterior probability) in 80 % of the voxels compared to the direct use of a single R-AIF. Maps of spatial variation in voxel-specific AIF bolus amplitude and arrival time for heterogeneous tissues, such as cervical cancer, are accessible with the cL-AIF approach. The cL-AIF method, which estimates unique local-AIF amplitude and arrival time for each voxel within the tissue of interest, provides better modeling of DCE-MRI data than the use of a single, measured R-AIF. The Bayesian-based data analysis described herein affords estimates of uncertainties for each model parameter, via posterior probability density functions, and voxel-wise comparison across methods/models, via model selection in data modeling.
Distribution of Marburg virus in Africa: An evolutionary approach.
Zehender, Gianguglielmo; Sorrentino, Chiara; Veo, Carla; Fiaschi, Lisa; Gioffrè, Sonia; Ebranati, Erika; Tanzi, Elisabetta; Ciccozzi, Massimo; Lai, Alessia; Galli, Massimo
2016-10-01
The aim of this study was to investigate the origin and geographical dispersion of Marburg virus, the first member of the Filoviridae family to be discovered. Seventy-three complete genome sequences of Marburg virus isolated from animals and humans were retrieved from public databases and analysed using a Bayesian phylogeographical framework. The phylogenetic tree of the Marburg virus data set showed two significant evolutionary lineages: Ravn virus (RAVV) and Marburg virus (MARV). MARV divided into two main clades; clade A included isolates from Uganda (five from the European epidemic in 1967), Kenya (1980) and Angola (from the epidemic of 2004-2005); clade B included most of the isolates obtained during the 1999-2000 epidemic in the Democratic Republic of the Congo (DRC) and a group of Ugandan isolates obtained in 2007-2009. The estimated mean evolutionary rate of the whole genome was 3.3×10⁻⁴ substitutions/site/year (credibility interval 2.0-4.8). The MARV strain had a mean root time of the most recent common ancestor of 177.9 years ago (YA) (95% highest posterior density 87-284), thus indicating that it probably originated in the mid-19th century, whereas the RAVV strain had a later origin dating back to a mean 33.8 YA. The most probable location of the MARV ancestor was Uganda (state posterior probability, spp=0.41), whereas that of the RAVV ancestor was Kenya (spp=0.71). There were significant migration rates from Uganda to the DRC (Bayes Factor, BF=42.0) and in the opposite direction (BF=5.7). Our data suggest that Uganda may have been the cradle of Marburg virus in Africa. Copyright © 2016 Elsevier B.V. All rights reserved.
The evolutionary history of vertebrate cranial placodes II. Evolution of ectodermal patterning.
Schlosser, Gerhard; Patthey, Cedric; Shimeld, Sebastian M
2014-05-01
Cranial placodes are evolutionary innovations of vertebrates. However, they most likely evolved by redeployment, rewiring and diversification of preexisting cell types and patterning mechanisms. In the second part of this review we compare vertebrates with other animal groups to elucidate the evolutionary history of ectodermal patterning. We show that several transcription factors have ancient bilaterian roles in dorsoventral and anteroposterior regionalisation of the ectoderm. Evidence from amphioxus suggests that ancestral chordates then concentrated neurosecretory cells in the anteriormost non-neural ectoderm. This anterior proto-placodal domain subsequently gave rise to the oral siphon primordia in tunicates (with neurosecretory cells being lost) and anterior (adenohypophyseal, olfactory, and lens) placodes of vertebrates. Likewise, tunicate atrial siphon primordia and posterior (otic, lateral line, and epibranchial) placodes of vertebrates probably evolved from a posterior proto-placodal region in the tunicate-vertebrate ancestor. Since both siphon primordia in tunicates give rise to sparse populations of sensory cells, both proto-placodal domains probably also gave rise to some sensory receptors in the tunicate-vertebrate ancestor. However, proper cranial placodes, which give rise to high density arrays of specialised sensory receptors and neurons, evolved from these domains only in the vertebrate lineage. We propose that this may have involved rewiring of the regulatory network upstream and downstream of Six1/2 and Six4/5 transcription factors and their Eya family cofactors. These proteins, which play ancient roles in neuronal differentiation, were first recruited to the dorsal non-neural ectoderm in the tunicate-vertebrate ancestor but subsequently probably acquired new target genes in the vertebrate lineage, allowing them to adopt new functions in regulating proliferation and patterning of neuronal progenitors. Copyright © 2014 Elsevier Inc. All rights reserved.
Zhang, Yi-Jun; Xu, Jian; Wang, Yue; Lin, Xiang-Jin; Ma, Xin
2015-02-01
The aim of this study was to explore the correlation between the kinematics of the hindfoot joint and the medial arch angle change in stage II posterior tibial tendon dysfunction flatfoot three-dimensionally under loading. Computed tomography (CT) scans of 12 healthy feet and 12 feet with stage II posterior tibial tendon dysfunction flatfoot were taken both in non- and full-body-weight-bearing condition. The CT images of the hindfoot bones were reconstructed into three-dimensional models with Mimics and Geomagic reverse engineering software. The three-dimensional changes of the hindfoot joint were calculated to determine their correlation to the medial longitudinal arch angle. The medial arch angle change was larger in stage II posterior tibial tendon dysfunction flatfoot compared to that in the healthy foot under loading. The rotation and translation of the talocalcaneal joint, the talonavicular joint and the calcaneocuboid joint had little influence on the change of the medial arch angle in the healthy foot. However, the eversion of the talocalcaneal joint, the proximal translation of the calcaneus relative to the talus and the dorsiflexion of the talonavicular joint could increase the medial arch angle in stage II posterior tibial tendon dysfunction flatfoot under loading. Joint instability occurred in patients with stage II posterior tibial tendon dysfunction flatfoot under loading. Limiting excessive movement of the talocalcaneal joint and the talonavicular joint may help correct the medial longitudinal arch in stage II posterior tibial tendon dysfunction flatfoot. Copyright © 2014 Elsevier Ltd. All rights reserved.
Soon, En Loong; Razak, Hamid Rahmatullah Bin Abd; Tan, Andrew Hwee Chye
2017-01-01
Introduction: Massive rotator cuff tears (RCTs) in the context of shoulder dislocations are relatively uncommon in the young adult (<40 years) and, if reported, are more commonly described in association with acute traumatic anterior glenohumeral dislocations. They have rarely been described with posterior dislocations, regardless of patient age. This is the first case reported in the context of posterior dislocations in which a triad of biceps tendon rupture, posterior dislocation, and RCTs was observed during surgery. It provides an important reminder to readers about certain injuries commonly overlooked during the assessment of an acute traumatic shoulder. Case Report: We report an atypical case of a massive RCT involving a 34-year-old Asian male who landed on his outstretched hand after falling off a bicycle. A tear involving the supraspinatus and subscapularis was visualized during surgery, along with long head of biceps (LHB) tendon rupture. This was after an initial failure to achieve closed reduction of the posteriorly dislocated left shoulder. Conclusion: It is easy to miss the posterior instability, the associated RCTs or the biceps tendon injuries. Biceps tendon rupture should be a consideration when one is unable to reduce a posteriorly dislocated shoulder. The interposed torn LHB tendon trapped within the glenohumeral joint was the likely physical block in the initial failure to achieve closed reduction. With timely diagnosis, prudent physical examination, early imaging, and surgery, excellent results can potentially be achieved to return a young patient to full functionality. PMID:28819610
Ghosh, Sujit K
2010-01-01
Bayesian methods are rapidly becoming popular tools for making statistical inference in various fields of science including biology, engineering, finance, and genetics. One of the key aspects of the Bayesian inferential method is its logical foundation, which provides a coherent framework to utilize not only empirical but also scientific information available to a researcher. Prior knowledge arising from scientific background, expert judgment, or previously collected data is used to build a prior distribution which is then combined with current data via the likelihood function to characterize the current state of knowledge using the so-called posterior distribution. Bayesian methods allow the use of models of complex physical phenomena that were previously too difficult to estimate (e.g., using asymptotic approximations). Bayesian methods offer a means of more fully understanding issues that are central to many practical problems by allowing researchers to build integrated models based on hierarchical conditional distributions that can be estimated even with limited amounts of data. Furthermore, advances in numerical integration methods, particularly those based on Monte Carlo methods, have made it possible to compute the optimal Bayes estimators. However, there is a reasonably wide gap between the background of the empirically trained scientists and the full weight of Bayesian statistical inference. Hence, one of the goals of this chapter is to bridge the gap by offering elementary to advanced concepts that emphasize linkages between standard approaches and full probability modeling via Bayesian methods.
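The prior-times-likelihood updating cycle described in this chapter abstract can be sketched with a minimal conjugate Beta-Binomial example (the numbers here are purely illustrative, not from the chapter):

```python
# Minimal sketch of Bayesian updating: a Beta(a, b) prior on a success
# probability combined with s successes in n Bernoulli trials yields a
# Beta(a + s, b + n - s) posterior (conjugacy makes the update closed-form).

def beta_binomial_posterior(a, b, s, n):
    """Return the Beta posterior parameters after observing
    s successes in n trials under a Beta(a, b) prior."""
    return a + s, b + (n - s)

def beta_mean(a, b):
    """Mean of a Beta(a, b) distribution."""
    return a / (a + b)

# Prior knowledge encoded as a weak Beta(2, 2) prior centred on 0.5.
a0, b0 = 2, 2
# Current data: 7 successes in 10 trials.
a1, b1 = beta_binomial_posterior(a0, b0, s=7, n=10)

print(beta_mean(a0, b0))  # prior mean: 0.5
print(beta_mean(a1, b1))  # posterior mean: 9/14 ≈ 0.643
```

The posterior mean sits between the prior mean (0.5) and the sample proportion (0.7), illustrating how prior and empirical information are blended.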
Probabilistic parameter estimation of activated sludge processes using Markov Chain Monte Carlo.
Sharifi, Soroosh; Murthy, Sudhir; Takács, Imre; Massoudieh, Arash
2014-03-01
One of the most important challenges in making activated sludge models (ASMs) applicable to design problems is identifying the values of its many stoichiometric and kinetic parameters. When wastewater characteristics data from full-scale biological treatment systems are used for parameter estimation, several sources of uncertainty, including uncertainty in measured data, external forcing (e.g. influent characteristics), and model structural errors influence the value of the estimated parameters. This paper presents a Bayesian hierarchical modeling framework for the probabilistic estimation of activated sludge process parameters. The method provides the joint probability density functions (JPDFs) of stoichiometric and kinetic parameters by updating prior information regarding the parameters obtained from expert knowledge and literature. The method also provides the posterior correlations between the parameters, as well as a measure of sensitivity of the different constituents with respect to the parameters. This information can be used to design experiments to provide higher information content regarding certain parameters. The method is illustrated using the ASM1 model to describe synthetically generated data from a hypothetical biological treatment system. The results indicate that data from full-scale systems can narrow down the ranges of some parameters substantially whereas the amount of information they provide regarding other parameters is small, due to either large correlations between some of the parameters or a lack of sensitivity with respect to the parameters. Copyright © 2013 Elsevier Ltd. All rights reserved.
Hamiltonian Monte Carlo Inversion of Seismic Sources in Complex Media
NASA Astrophysics Data System (ADS)
Fichtner, A.; Simutė, S.
2017-12-01
We present a probabilistic seismic source inversion method that properly accounts for 3D heterogeneous Earth structure and provides full uncertainty information on the timing, location and mechanism of the event. Our method rests on two essential elements: (1) reciprocity and spectral-element simulations in complex media, and (2) Hamiltonian Monte Carlo sampling that requires only a small amount of test models. Using spectral-element simulations of 3D, visco-elastic, anisotropic wave propagation, we precompute a data base of the strain tensor in time and space by placing sources at the positions of receivers. Exploiting reciprocity, this receiver-side strain data base can be used to promptly compute synthetic seismograms at the receiver locations for any hypothetical source within the volume of interest. The rapid solution of the forward problem enables a Bayesian solution of the inverse problem. For this, we developed a variant of Hamiltonian Monte Carlo (HMC) sampling. Taking advantage of easily computable derivatives, HMC converges to the posterior probability density with orders of magnitude less samples than derivative-free Monte Carlo methods. (Exact numbers depend on observational errors and the quality of the prior). We apply our method to the Japanese Islands region where we previously constrained 3D structure of the crust and upper mantle using full-waveform inversion with a minimum period of around 15 s.
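The derivative-exploiting sampler named in this abstract can be illustrated with a generic, self-contained HMC sketch; a correlated 2D Gaussian stands in for the source-parameter posterior (this is a textbook leapfrog HMC, not the authors' variant):

```python
import numpy as np

# Generic Hamiltonian Monte Carlo sketch on a 2D Gaussian target.
# grad_logp supplies the "easily computable derivatives" that let HMC
# take long, informed trajectories instead of random-walk steps.

rng = np.random.default_rng(0)
cov_inv = np.linalg.inv(np.array([[1.0, 0.6], [0.6, 1.0]]))

def logp(q):
    return -0.5 * q @ cov_inv @ q

def grad_logp(q):
    return -cov_inv @ q

def hmc_step(q, eps=0.15, n_leap=20):
    p = rng.standard_normal(q.shape)              # resample momentum
    q_new, p_new = q.copy(), p.copy()
    p_new = p_new + 0.5 * eps * grad_logp(q_new)  # half momentum step
    for _ in range(n_leap - 1):
        q_new = q_new + eps * p_new               # full position step
        p_new = p_new + eps * grad_logp(q_new)    # full momentum step
    q_new = q_new + eps * p_new
    p_new = p_new + 0.5 * eps * grad_logp(q_new)  # final half step
    # Metropolis accept/reject on the change in the Hamiltonian
    h_old = -logp(q) + 0.5 * p @ p
    h_new = -logp(q_new) + 0.5 * p_new @ p_new
    return q_new if np.log(rng.uniform()) < h_old - h_new else q

q = np.zeros(2)
samples = []
for _ in range(5000):
    q = hmc_step(q)
    samples.append(q)
samples = np.array(samples)  # draws approximating the target posterior
```

With well-chosen step size and trajectory length, nearly all proposals are accepted, which is the source of HMC's sample-efficiency advantage over derivative-free samplers.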
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sigeti, David E.; Pelak, Robert A.
We present a Bayesian statistical methodology for identifying improvement in predictive simulations, including an analysis of the number of (presumably expensive) simulations that will need to be made in order to establish with a given level of confidence that an improvement has been observed. Our analysis assumes the ability to predict (or postdict) the same experiments with legacy and new simulation codes and uses a simple binomial model for the probability, θ, that, in an experiment chosen at random, the new code will provide a better prediction than the old. This model makes it possible to do statistical analysis with an absolute minimum of assumptions about the statistics of the quantities involved, at the price of discarding some potentially important information in the data. In particular, the analysis depends only on whether or not the new code predicts better than the old in any given experiment, and not on the magnitude of the improvement. We show how the posterior distribution for θ may be used, in a kind of Bayesian hypothesis testing, both to decide if an improvement has been observed and to quantify our confidence in that decision. We quantify the predictive probability that should be assigned, prior to taking any data, to the possibility of achieving a given level of confidence, as a function of sample size. We show how this predictive probability depends on the true value of θ and, in particular, how there will always be a region around θ = 1/2 where it is highly improbable that we will be able to identify an improvement in predictive capability, although the width of this region will shrink to zero as the sample size goes to infinity. We show how the posterior standard deviation may be used, as a kind of 'plan B metric' in the case that the analysis shows that θ is close to 1/2 and argue that such a plan B should generally be part of hypothesis testing. All the analysis presented in the paper is done with a general beta-function prior for θ, enabling sequential analysis in which a small number of new simulations may be done and the resulting posterior for θ used as a prior to inform the next stage of power analysis.
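The binomial analysis this abstract describes admits a very short sketch; the counts below are hypothetical stand-ins, and the uniform Beta(1, 1) prior is one special case of the paper's general beta-function prior:

```python
import numpy as np

# Sketch of beta-binomial Bayesian hypothesis testing for code comparison:
# theta = probability the new code beats the legacy code in a random
# experiment. A Beta(a, b) prior plus s "new code better" outcomes in n
# experiments gives a Beta(a + s, b + n - s) posterior.

rng = np.random.default_rng(2)

a, b = 1, 1    # uniform prior (illustrative choice of beta-function prior)
s, n = 14, 20  # hypothetical data: new code better in 14 of 20 experiments

theta = rng.beta(a + s, b + (n - s), 500_000)  # draws from Beta(15, 7)

conf_improvement = (theta > 0.5).mean()  # P(theta > 1/2): confidence that
                                         # an improvement has been observed
plan_b_metric = theta.std()              # posterior sd, the 'plan B metric'

# Sequential analysis: the posterior parameters (a + s, b + n - s) simply
# become the prior for the next batch of simulations.
```

For these illustrative counts the posterior probability of improvement is about 0.96, while the posterior standard deviation (roughly 0.10) quantifies how sharply θ has been pinned down.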
Lovelock, Paul K; Spurdle, Amanda B; Mok, Myth T S; Farrugia, Daniel J; Lakhani, Sunil R; Healey, Sue; Arnold, Stephen; Buchanan, Daniel; Couch, Fergus J; Henderson, Beric R; Goldgar, David E; Tavtigian, Sean V; Chenevix-Trench, Georgia; Brown, Melissa A
2007-01-01
Many of the DNA sequence variants identified in the breast cancer susceptibility gene BRCA1 remain unclassified in terms of their potential pathogenicity. Both multifactorial likelihood analysis and functional approaches have been proposed as a means to elucidate likely clinical significance of such variants, but analysis of the comparative value of these methods for classifying all sequence variants has been limited. We have compared the results from multifactorial likelihood analysis with those from several functional analyses for the four BRCA1 sequence variants A1708E, G1738R, R1699Q, and A1708V. Our results show that multifactorial likelihood analysis, which incorporates sequence conservation, co-inheritance, segregation, and tumour immunohistochemical analysis, may improve classification of variants. For A1708E, previously shown to be functionally compromised, analysis of oestrogen receptor, cytokeratin 5/6, and cytokeratin 14 tumour expression data significantly strengthened the prediction of pathogenicity, giving a posterior probability of pathogenicity of 99%. For G1738R, shown to be functionally defective in this study, immunohistochemistry analysis confirmed previous findings of inconsistent 'BRCA1-like' phenotypes for the two tumours studied, and the posterior probability for this variant was 96%. The posterior probabilities of R1699Q and A1708V were 54% and 69%, respectively, only moderately suggestive of increased risk. Interestingly, results from functional analyses suggest that both of these variants have only partial functional activity. R1699Q was defective in foci formation in response to DNA damage and displayed intermediate transcriptional transactivation activity but showed no evidence for centrosome amplification. In contrast, A1708V displayed an intermediate transcriptional transactivation activity and a normal foci formation response in response to DNA damage but induced centrosome amplification. 
These data highlight the need for a range of functional studies to be performed in order to identify variants with partially compromised function. The results also raise the possibility that A1708V and R1699Q may be associated with a low or moderate risk of cancer. While data pooling strategies may provide more information for multifactorial analysis to improve the interpretation of the clinical significance of these variants, it is likely that the development of current multifactorial likelihood approaches and the consideration of alternative statistical approaches will be needed to determine whether these individually rare variants do confer a low or moderate risk of breast cancer.
Shah, P L; Slebos, D-J; Cardoso, P F G; Cetti, E; Voelker, K; Levine, B; Russell, M E; Goldin, J; Brown, M; Cooper, J D; Sybrecht, G W
2011-09-10
Airway bypass is a bronchoscopic lung-volume reduction procedure for emphysema whereby transbronchial passages into the lung are created to release trapped air, supported with paclitaxel-coated stents to ease the mechanics of breathing. The aim of the EASE (Exhale airway stents for emphysema) trial was to evaluate safety and efficacy of airway bypass in people with severe homogeneous emphysema. We undertook a randomised, double-blind, sham-controlled study in 38 specialist respiratory centres worldwide. We recruited 315 patients who had severe hyperinflation (ratio of residual volume [RV] to total lung capacity of ≥0·65). By computer using a random number generator, we randomly allocated participants (in a 2:1 ratio) to either airway bypass (n=208) or sham control (107). We divided investigators into team A (masked), who completed pre-procedure and post-procedure assessments, and team B (unmasked), who only did bronchoscopies without further interaction with patients. Participants were followed up for 12 months. The 6-month co-primary efficacy endpoint required 12% or greater improvement in forced vital capacity (FVC) and 1 point or greater decrease in the modified Medical Research Council dyspnoea score from baseline. The composite primary safety endpoint incorporated five severe adverse events. We did Bayesian analysis to show the posterior probability that airway bypass was superior to sham control (success threshold, 0·965). Analysis was by intention to treat. This study is registered with ClinicalTrials.gov, number NCT00391612. All recruited patients were included in the analysis. At 6 months, no difference between treatment arms was noted with respect to the co-primary efficacy endpoint (30 of 208 for airway bypass vs 12 of 107 for sham control; posterior probability 0·749, below the Bayesian success threshold of 0·965). 
The 6-month composite primary safety endpoint was 14·4% (30 of 208) for airway bypass versus 11·2% (12 of 107) for sham control (judged non-inferior, with a posterior probability of 1·00 [Bayesian success threshold >0·95]). Although our findings showed safety and transient improvements, no sustainable benefit was recorded with airway bypass in patients with severe homogeneous emphysema. Broncus Technologies. Copyright © 2011 Elsevier Ltd. All rights reserved.
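The superiority probability quoted for the efficacy endpoint can be roughly reproduced from the published counts; the uniform Beta(1, 1) priors below are an assumption, since the trial's exact prior specification is not given in the abstract:

```python
import numpy as np

# Monte Carlo estimate of the posterior probability that airway bypass beats
# sham on the co-primary efficacy endpoint, from the published counts
# (30/208 vs 12/107) under assumed independent uniform Beta(1, 1) priors.

rng = np.random.default_rng(1)
n_draws = 200_000

p_bypass = rng.beta(1 + 30, 1 + 208 - 30, n_draws)  # posterior, bypass arm
p_sham = rng.beta(1 + 12, 1 + 107 - 12, n_draws)    # posterior, sham arm

prob_superior = (p_bypass > p_sham).mean()
# This lands in the same neighbourhood as the reported posterior probability
# of 0.749 -- well below the pre-specified success threshold of 0.965.
```

The trial's actual model may differ, so agreement is approximate; the point is that the Bayesian "success threshold" is simply a cutoff on this posterior probability.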
Value of Information spreadsheet
Trainor-Guitton, Whitney
2014-05-12
This spreadsheet represents the information posteriors derived from synthetic data of magnetotellurics (MT). These were used to calculate value of information of MT for geothermal exploration. Information posteriors describe how well MT was able to locate the "throat" of clay caps, which are indicative of hidden geothermal resources. These data are fully explained in the peer-reviewed publication: Trainor-Guitton, W., Hoversten, G. M., Ramirez, A., Roberts, J., Júlíusson, E., Key, K., Mellors, R. (Sept-Oct. 2014) The value of spatial information for determining well placement: a geothermal example, Geophysics.
Objective estimates based on experimental data and initial and final knowledge
NASA Technical Reports Server (NTRS)
Rosenbaum, B. M.
1972-01-01
An extension of the method of Jaynes, whereby least biased probability estimates are obtained, permits such estimates to be made which account for experimental data on hand as well as prior and posterior knowledge. These estimates can be made for both discrete and continuous sample spaces. The method allows a simple interpretation of Laplace's two rules: the principle of insufficient reason and the rule of succession. Several examples are analyzed by way of illustration.
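The two Laplace rules mentioned in this abstract have a well-known closed form that is easy to state in code (a standard textbook result, not the paper's extension):

```python
from fractions import Fraction

# Laplace's rule of succession: after s successes in n Bernoulli trials
# under a uniform prior, the probability that the next trial succeeds is
# (s + 1) / (n + 2).

def rule_of_succession(s, n):
    """Posterior predictive probability of success on the next trial."""
    return Fraction(s + 1, n + 2)

# The principle of insufficient reason is the no-data case: with n = 0,
# each outcome of a two-way trial gets probability 1/2.
print(rule_of_succession(0, 0))   # 1/2
print(rule_of_succession(9, 10))  # 5/6
```

Both rules drop out of the same Bayesian calculation, which is why they admit the simple interpretation the abstract refers to.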
[Convergence nystagmus and vertical gaze palsy of vascular origin].
Jouvent, E; Benisty, S; Fenelon, G; Créange, A; Pierrot-Deseilligny, C
2005-05-01
A case of convergence-retraction nystagmus with upward vertical gaze paralysis and skew deviation (right hypotropia), without any other neurological signs, is reported. The lesion, probably vascular in origin, was located at the mesodiencephalic junction, lying between the right border of the posterior commissure, the right interstitial nucleus of Cajal and the periaqueductal grey matter, accounting for the three ocular motor signs. The particular interest of this case is due to the relative smallness of the lesion.
Unified Description of Scattering and Propagation FY15 Annual Report
2015-09-30
the Texas coast. For both cases a conditional posterior probability distribution (PPD) is formed for a parameter space that includes both geoacoustic... for this second application of ME. For each application of ME it is important to note that a new likelihood function, and thus PPD, is computed. One... the 50-700 Hz band. These data offered a means by which the results of using the ship radiated noise could be partially validated. The conditional PPD
Levy, Jonathan I.; Diez, David; Dou, Yiping; Barr, Christopher D.; Dominici, Francesca
2012-01-01
Health risk assessments of particulate matter less than 2.5 μm in diameter (PM2.5) often assume that all constituents of PM2.5 are equally toxic. While investigators in previous epidemiologic studies have evaluated health risks from various PM2.5 constituents, few have conducted the analyses needed to directly inform risk assessments. In this study, the authors performed a literature review and conducted a multisite time-series analysis of hospital admissions and exposure to PM2.5 constituents (elemental carbon, organic carbon matter, sulfate, and nitrate) in a population of 12 million US Medicare enrollees for the period 2000–2008. The literature review illustrated a general lack of multiconstituent models or insight about probabilities of differential impacts per unit of concentration change. Consistent with previous results, the multisite time-series analysis found statistically significant associations between short-term changes in elemental carbon and cardiovascular hospital admissions. Posterior probabilities from multiconstituent models provided evidence that some individual constituents were more toxic than others, and posterior parameter estimates coupled with correlations among these estimates provided necessary information for risk assessment. Ratios of constituent toxicities, commonly used in risk assessment to describe differential toxicity, were extremely uncertain for all comparisons. These analyses emphasize the subtlety of the statistical techniques and epidemiologic studies necessary to inform risk assessments of particle constituents. PMID:22510275
Objectively combining AR5 instrumental period and paleoclimate climate sensitivity evidence
NASA Astrophysics Data System (ADS)
Lewis, Nicholas; Grünwald, Peter
2018-03-01
Combining instrumental period evidence regarding equilibrium climate sensitivity with largely independent paleoclimate proxy evidence should enable a more constrained sensitivity estimate to be obtained. Previous, subjective Bayesian approaches involved selection of a prior probability distribution reflecting the investigators' beliefs about climate sensitivity. Here a recently developed approach employing two different statistical methods—objective Bayesian and frequentist likelihood-ratio—is used to combine instrumental period and paleoclimate evidence based on data presented and assessments made in the IPCC Fifth Assessment Report. Probabilistic estimates from each source of evidence are represented by posterior probability density functions (PDFs) of physically-appropriate form that can be uniquely factored into a likelihood function and a noninformative prior distribution. The three-parameter form is shown accurately to fit a wide range of estimated climate sensitivity PDFs. The likelihood functions relating to the probabilistic estimates from the two sources are multiplicatively combined and a prior is derived that is noninformative for inference from the combined evidence. A posterior PDF that incorporates the evidence from both sources is produced using a single-step approach, which avoids the order-dependency that would arise if Bayesian updating were used. Results are compared with an alternative approach using the frequentist signed root likelihood ratio method. Results from these two methods are effectively identical, and provide a 5-95% range for climate sensitivity of 1.1-4.05 K (median 1.87 K).
NASA Astrophysics Data System (ADS)
Hanish Nithin, Anu; Omenzetter, Piotr
2017-04-01
Optimization of the life-cycle costs and reliability of offshore wind turbines (OWTs) is an area of immense interest due to the widespread increase in wind power generation across the world. Most of the existing studies have used structural reliability and the Bayesian pre-posterior analysis for optimization. This paper proposes an extension to the previous approaches in a framework for probabilistic optimization of the total life-cycle costs and reliability of OWTs by combining the elements of structural reliability/risk analysis (SRA), the Bayesian pre-posterior analysis with optimization through a genetic algorithm (GA). The SRA techniques are adopted to compute the probabilities of damage occurrence and failure associated with the deterioration model. The probabilities are used in the decision tree and are updated using the Bayesian analysis. The output of this framework would determine the optimal structural health monitoring and maintenance schedules to be implemented during the life span of OWTs while maintaining a trade-off between the life-cycle costs and risk of the structural failure. Numerical illustrations with a generic deterioration model for one monitoring exercise in the life cycle of a system are demonstrated. Two case scenarios, namely whether to build an initially expensive but robust structure or a cheaper but more quickly deteriorating one, and whether to adopt an expensive monitoring system, are presented to aid in the decision-making process.
Efficiency of nuclear and mitochondrial markers recovering and supporting known amniote groups.
Lambret-Frotté, Julia; Perini, Fernando Araújo; de Moraes Russo, Claudia Augusta
2012-01-01
We have analysed the efficiency of all mitochondrial protein coding genes and six nuclear markers (Adora3, Adrb2, Bdnf, Irbp, Rag2 and Vwf) in reconstructing and statistically supporting known amniote groups (murines, rodents, primates, eutherians, metatherians, therians). The efficiencies of maximum likelihood, Bayesian inference, maximum parsimony, neighbor-joining and UPGMA were also evaluated by assessing the number of correctly and incorrectly recovered groupings. In addition, we have compared support values using the conservative bootstrap test and the Bayesian posterior probabilities. First, no correlation was observed between gene size and marker efficiency in recovering or supporting correct nodes. As expected, tree-building methods performed similarly, even UPGMA that, in some cases, outperformed other most extensively used methods. Bayesian posterior probabilities tend to show much higher support values than the conservative bootstrap test, for correct and incorrect nodes. Our results also suggest that nuclear markers do not necessarily show a better performance than mitochondrial genes. The so-called dependency among mitochondrial markers was not observed when comparing genome performances. Finally, the amniote groups with lowest recovery rates were therians and rodents, despite the morphological support for their monophyletic status. We suggest that, regardless of the tree-building method, a few carefully selected genes are able to unfold a detailed and robust scenario of phylogenetic hypotheses, particularly if taxon sampling is increased.
Ocriplasmin: who is the best candidate?
Prospero Ponce, Claudia M; Stevenson, William; Gelman, Rachel; Agarwal, Daniel R; Christoforidis, John B
2016-01-01
Enzymatic vitreolysis is currently the focus of attention around the world for treating vitreomacular traction and full-thickness macular hole. Induction of posterior vitreous detachment is an active area of developmental clinical and basic research. Despite exerting an incompletely elucidated physiological effect, ocriplasmin (also known as microplasmin) has been recognized to serve as a well-tolerated intravitreal injection for the treatment of vitreomacular traction and full-thickness macular hole. There are several unexplored areas of intervention where enzymatic vitreolysis could potentially be used (ie, diabetic macular edema). Recent promising studies have included combinations of enzymatic approaches and new synthetic molecules that induce complete posterior vitreous detachment as well as antiangiogenesis. Although no guidelines have been proposed for the use of ocriplasmin, this review attempts to aid physicians in answering the most important question, “Who is the best candidate?” PMID:27051270
Sugar, Elizabeth A; Holbrook, Janet T; Kempen, John H; Burke, Alyce E; Drye, Lea T; Thorne, Jennifer E; Louis, Thomas A; Jabs, Douglas A; Altaweel, Michael M; Frick, Kevin D
2014-10-01
To evaluate the 3-year incremental cost-effectiveness of fluocinolone acetonide implant versus systemic therapy for the treatment of noninfectious intermediate, posterior, and panuveitis. Randomized, controlled, clinical trial. Patients with active or recently active intermediate, posterior, or panuveitis enrolled in the Multicenter Uveitis Steroid Treatment Trial. Data on cost and health utility during 3 years after randomization were evaluated at 6-month intervals. Analyses were stratified by disease laterality at randomization (31 unilateral vs 224 bilateral) because of the large upfront cost of the implant. The primary outcome was the incremental cost-effectiveness ratio (ICER) over 3 years: the ratio of the difference in cost (in United States dollars) to the difference in quality-adjusted life-years (QALYs). Costs of medications, surgeries, hospitalizations, and regular procedures (e.g., laboratory monitoring for systemic therapy) were included. We computed QALYs as a weighted average of EQ-5D scores over 3 years of follow-up. The ICER at 3 years was $297,800/QALY for bilateral disease, driven by the high cost of implant therapy (difference implant - systemic [Δ]: $16,900; P < 0.001) and the modest gains in QALYs (Δ = 0.057; P = 0.22). The probability of the ICER being cost-effective at thresholds of $50,000/QALY and $100,000/QALY was 0.003 and 0.04, respectively. The ICER for unilateral disease was more favorable, namely, $41,200/QALY at 3 years, because of a smaller difference in cost between the 2 therapies (Δ = $5300; P = 0.44) and a larger benefit in QALYs with the implant (Δ = 0.130; P = 0.12). The probability of the ICER being cost-effective at thresholds of $50,000/QALY and $100,000/QALY was 0.53 and 0.74, respectively. Fluocinolone acetonide implant therapy was reasonably cost-effective compared with systemic therapy for individuals with unilateral intermediate, posterior, or panuveitis but not for those with bilateral disease. 
These results do not apply to the use of implant therapy when systemic therapy has failed or is contraindicated. Should the duration of implant effect prove to be substantially >3 years or should large changes in therapy pricing occur, the cost-effectiveness of implant versus systemic therapy would need to be reevaluated. Copyright © 2014 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
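The ICER arithmetic in this abstract can be checked directly from the rounded deltas it reports (the small gap from the published $297,800/QALY and $41,200/QALY figures reflects rounding of the underlying trial values):

```python
# Incremental cost-effectiveness ratio: difference in cost divided by
# difference in quality-adjusted life-years (QALYs), in dollars per QALY.

def icer(delta_cost, delta_qaly):
    """ICER = Δcost / ΔQALY."""
    return delta_cost / delta_qaly

# Rounded deltas as reported in the abstract (implant minus systemic):
bilateral = icer(16_900, 0.057)   # roughly $296,000/QALY for bilateral disease
unilateral = icer(5_300, 0.130)   # roughly $41,000/QALY for unilateral disease
```

Both values sit on the expected side of the $50,000-$100,000/QALY thresholds discussed in the abstract: far above for bilateral disease, below or near for unilateral disease.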
Gwak, Heui-Chul; Kim, Chang-Wan; Kim, Jung-Han; Choo, Hye-Jeung; Sagong, Seung-Yeob; Shin, John
2015-05-01
The purpose of this study was to evaluate the extension of delamination and the cuff integrity after arthroscopic repair of delaminated rotator cuff tears. Sixty-five patients with delaminated rotator cuff tears were retrospectively reviewed. The delaminated tears were divided into full-thickness delaminated tears and partial-thickness delaminated tears. To evaluate the medial extension, we calculated the coronal size of the delaminated portion. To evaluate the posterior extension, we checked the tendon involved. Cuff integrity was evaluated by computed tomography arthrography. The mean medial extension in the full-thickness and partial-thickness delaminated tears was 18.1 ± 6.0 mm and 22.7 ± 6.3 mm, respectively (P = .0084). The posterior extension into the supraspinatus and the infraspinatus was 36.9% and 32.3%, respectively, in the full-thickness delaminated tears, and it was 27.7% and 3.1%, respectively, in the partial-thickness delaminated tears (P = .0043). With regard to cuff integrity, 35 cases of anatomic healing, 10 cases of partial healing defects, and 17 cases of retear were detected. Among the patients with retear and partial healing of the defect, all the partially healed defects showed delamination. Three retear patients showed delamination, and 14 retear patients did not show delamination; the difference was statistically significant (P = .0001). The full-thickness delaminated tears showed less medial extension and more posterior extension than the partial-thickness delaminated tears. Delamination did not develop in retear patients, but delamination was common in the patients with partially healed defects. Copyright © 2015 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.
Palmer, Ty B; Hawkey, Matt J; Smith, Doug B; Thompson, Brennan J
2014-05-01
The purpose of this study was to examine the effectiveness of maximal and rapid isometric torque characteristics of the posterior muscles of the hip and thigh and lower-body power to discriminate between professional status in full-time and part-time professional soccer referees. Seven full-time (mean ± SE: age = 36 ± 2 years; mass = 82 ± 4 kg; and height = 179 ± 3 cm) and 9 part-time (age = 34 ± 2 years; mass = 84 ± 2 kg; and height = 181 ± 2 cm) professional soccer referees performed 2 isometric maximal voluntary contractions (MVCs) of the posterior muscles of the hip and thigh. Peak torque (PT) and absolute and relative rate of torque development (RTD) were calculated from a torque-time curve that was recorded during each MVC. Lower-body power output was assessed through a vertical jump test. Results indicated that the rapid torque characteristics were greater in the full-time compared with the part-time referees for absolute RTD (p = 0.011) and relative RTD at 1/2 (p = 0.022) and 2/3 (p = 0.033) of the normalized torque-time curve. However, no differences were observed for PT (p = 0.660) or peak power (Pmax, p = 0.149) between groups. These findings suggest that rapid torque characteristics of the posterior muscles of the hip and thigh may be sensitive and effective measures for discriminating between full-time and part-time professional soccer referees. Strength and conditioning coaches may use these findings to help identify professional soccer referees with high explosive strength-related capacities and possibly overall refereeing ability.
Variations on Bayesian Prediction and Inference
2016-05-09
There are a number of statistical inference problems that are not generally formulated via a full probability model. For the problem of inference about an unknown parameter, the Bayesian approach requires a full probability model/likelihood, which can be an obstacle.
Toward accurate and precise estimates of lion density.
Elliot, Nicholas B; Gopalaswamy, Arjun M
2017-08-01
Reliable estimates of animal density are fundamental to understanding ecological processes and population dynamics. Furthermore, their accuracy is vital to conservation because wildlife authorities rely on estimates to make decisions. However, it is notoriously difficult to accurately estimate density for wide-ranging carnivores that occur at low densities. In recent years, significant progress has been made in density estimation of Asian carnivores, but the methods have not been widely adapted to African carnivores, such as lions (Panthera leo). Although abundance indices for lions may produce poor inferences, they continue to be used to estimate density and inform management and policy. We used sighting data from a 3-month survey and adapted a Bayesian spatially explicit capture-recapture (SECR) model to estimate spatial lion density in the Maasai Mara National Reserve and surrounding conservancies in Kenya. Our unstructured spatial capture-recapture sampling design incorporated search effort to explicitly estimate detection probability and density on a fine spatial scale, making our approach robust in the context of varying detection probabilities. Overall posterior mean lion density was estimated to be 17.08 (posterior SD 1.310) lions >1 year old/100 km², and the sex ratio was estimated at 2.2 females to 1 male. Our modeling framework and narrow posterior SD demonstrate that SECR methods can produce statistically rigorous and precise estimates of population parameters, and we argue that they should be favored over less reliable abundance indices. Furthermore, our approach is flexible enough to incorporate different data types, which enables robust population estimates over relatively short survey periods in a variety of systems. Trend analyses are essential to guide conservation decisions but are frequently based on surveys of differing reliability.
We therefore call for a unified framework to assess lion numbers in key populations to improve management and policy decisions. © 2016 Society for Conservation Biology.
Selecting Summary Statistics in Approximate Bayesian Computation for Calibrating Stochastic Models
Burr, Tom; Skurikhin, Alexei
2013-01-01
Approximate Bayesian computation (ABC) is an approach for using measurement data to calibrate stochastic computer models, which are common in biology applications. ABC is becoming the “go-to” option when the data and/or parameter dimension is large because it relies on user-chosen summary statistics rather than the full data and is therefore computationally feasible. One technical challenge with ABC is that the quality of the approximation to the posterior distribution of model parameters depends on the user-chosen summary statistics. In this paper, the user requirement to choose effective summary statistics in order to accurately estimate the posterior distribution of model parameters is investigated and illustrated by example, using a model and corresponding real data of mitochondrial DNA population dynamics. We show that for some choices of summary statistics, the posterior distribution of model parameters is closely approximated and for other choices of summary statistics, the posterior distribution is not closely approximated. A strategy to choose effective summary statistics is suggested in cases where the stochastic computer model can be run at many trial parameter settings, as in the example. PMID:24288668
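The rejection flavour of ABC described in the abstract above can be sketched in a few lines. Everything here is a toy illustration rather than the authors' model: the exponential forward model, the flat prior, the tolerance, and the candidate summary statistic (the sample mean) are all assumptions for demonstration.

```python
import random
import statistics

random.seed(1)

def simulate(theta, n=100):
    # Forward stochastic model: n exponential lifetimes with rate theta.
    return [random.expovariate(theta) for _ in range(n)]

observed = simulate(2.0)  # pretend data generated at rate 2.0

def abc_rejection(summary, eps, n_draws=20000):
    """Rejection ABC: keep the prior draws whose simulated summary
    statistic lands within eps of the observed one."""
    s_obs = summary(observed)
    accepted = []
    for _ in range(n_draws):
        theta = random.uniform(0.1, 10.0)       # flat prior on the rate
        if abs(summary(simulate(theta)) - s_obs) < eps:
            accepted.append(theta)
    return accepted

# With a (nearly) sufficient statistic such as the sample mean, the
# accepted draws concentrate near the data-generating rate; a poorly
# chosen statistic would leave the approximate posterior diffuse.
samples = abc_rejection(statistics.mean, eps=0.02)
print(len(samples), statistics.mean(samples))  # posterior sample, mean near 2
```

Swapping a less informative statistic into `abc_rejection` is the quickest way to see the sensitivity to summary-statistic choice that the paper investigates.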
Shetty, Gautam M; Mullaji, Arun; Bhayde, Sagar
2012-10-01
This prospective study aimed to radiographically evaluate the change in joint line and femoral condylar offset with the optimized gap balancing technique in computer-assisted, primary, cruciate-substituting total knee arthroplasties (TKAs). One hundred and twenty-nine consecutive computer-assisted TKAs were evaluated radiographically using pre- and postoperative full-length standing hip-to-ankle, antero-posterior and lateral radiographs to assess change in knee deformity, joint line height and posterior condylar offset. In 49% of knees, there was a net decrease (mean 2.2 mm, range 0.2-8.4 mm) in joint line height postoperatively whereas 46.5% of knees had a net increase in joint line height (mean 2.5 mm, range 0.2-11.2 mm). In 93% of the knees, joint line was restored to within ±5 mm of preoperative values. In 53% of knees, there was a net increase (mean 2.9 mm, range 0.2-12 mm) in posterior offset postoperatively whereas 40% of knees had a net decrease in posterior offset (mean 4.2 mm, range 0.6-20 mm). In 82% of knees, the posterior offset was restored to within ±5 mm of preoperative values. Based on radiographic evaluation in extension and at 30° flexion, the current study clearly demonstrates that joint line and posterior femoral condylar offset can be restored in the majority of computer-assisted, cruciate-substituting TKAs to within 5 mm of their preoperative value. The optimized gap balancing feature of the computer software allows the surgeon to simulate the effect of simultaneously adjusting femoral component size, position and distal femoral resection level on joint line and posterior femoral offset. Copyright © 2011 Elsevier B.V. All rights reserved.
Kondo, Etsuko
2007-01-01
The treatment of an adult patient with a skeletal Class II Division 1 malocclusion, retrognathic mandible with downward and backward rotation, anterior open bite, and temporomandibular disorders is presented. Treatment objectives included establishing a stable occlusion with normal respiration, eliminating temporomandibular disorder symptoms, and improving facial esthetics through nonextraction and nonsurgical treatment. The patient was a Japanese adult female, who had previously been advised to have orthognathic surgery. An expansion plate was used to reshape the maxillary dentoalveolar arch. Distalization of the maxillary arch and forward movement of the mandible were achieved by reducing the excessive posterior occlusal vertical dimension, uprighting and intruding the mandibular posterior teeth, and rotating the mandible slightly upward and forward. The functional occlusal plane was reconstructed by uprighting and intruding the mandibular posterior teeth with a full-bracket appliance, combined with a maxillary expansion plate and short Class II elastics. Myofunctional therapy and masticatory and cervical muscle training involved chewing gum exercises and neck-muscle massage. The excessive posterior occlusal vertical dimension was significantly reduced, creating a small clearance between the posterior maxilla and mandible. The occlusal interferences in the posterior area were eliminated by the expansion of the maxillary dentoalveolar arch. As a result, the mandible moved forward, creating a more favorable jaw relationship. Distal movement of the maxillary arch was also achieved. The functional occlusal plane was reconstructed and a normal overjet and overbite were created. Adequate tongue space for normal respiration was established during the early stage of treatment.
A stable occlusion with adequate posterior support and anterior guidance was established in a treatment time of 25 months, without orthognathic surgery, extraction, or headgear; this result was maintained at more than 1 year 8 months posttreatment.
Mean Field Variational Bayesian Data Assimilation
NASA Astrophysics Data System (ADS)
Vrettas, M.; Cornford, D.; Opper, M.
2012-04-01
Current data assimilation schemes propose a range of approximate solutions to the classical data assimilation problem, particularly state estimation. Broadly there are three main active research areas: ensemble Kalman filter methods, which rely on statistical linearization of the model evolution equations; particle filters, which provide a discrete point representation of the posterior filtering or smoothing distribution; and 4DVAR methods, which seek the most likely posterior smoothing solution. In this paper we present a recent extension to our variational Bayesian algorithm which seeks the most probable posterior distribution over the states, within the family of non-stationary Gaussian processes. Our original work on variational Bayesian approaches to data assimilation sought the best approximating time-varying Gaussian process to the posterior smoothing distribution for stochastic dynamical systems. This approach was based on minimising the Kullback-Leibler divergence between the true posterior over paths and our Gaussian process approximation. So long as the observation density was sufficiently high to bring the posterior smoothing density close to Gaussian, the algorithm proved very effective on lower-dimensional systems. However, for higher-dimensional systems the algorithm was computationally very demanding. We have been developing a mean field version of the algorithm which treats the state variables at a given time as being independent in the posterior approximation, but still accounts for the relationships between them in the mean solution arising from the original dynamical system. In this work we present the new mean field variational Bayesian approach, illustrating its performance on a range of classical data assimilation problems. We discuss the potential and limitations of the new approach.
We emphasise that the variational Bayesian approach we adopt, in contrast to other variational approaches, provides a bound on the marginal likelihood of the observations given the model parameters, which also allows inference of parameters such as observation errors and parameters in the model and model error representation, particularly if the latter is written in a deterministic form with small additive noise. We stress that our approach can address very long time window and weak constraint settings. Like traditional variational approaches, our Bayesian variational method has the benefit of being posed as an optimisation problem. We finish with a sketch of the future directions for our approach.
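The core variational idea in the abstract above, choosing the Gaussian q that minimises the Kullback-Leibler divergence KL(q ‖ p) to an intractable posterior, can be illustrated on a one-dimensional toy target. The target density, quadrature grid, and grid search below are illustrative assumptions, not the authors' data assimilation algorithm:

```python
import math

def log_p_tilde(x):
    # Unnormalised log target: a deliberately non-Gaussian "posterior".
    return -0.25 * x**4 - 0.5 * (x - 1.0)**2

# Fixed quadrature grid on [-6, 6] for expectations under q.
DX = 0.02
XS = [-6.0 + i * DX for i in range(601)]
LP = [log_p_tilde(x) for x in XS]

def neg_elbo(m, s):
    """KL(q || p) up to p's unknown log-normaliser, for q = N(m, s^2):
    -entropy(q) - E_q[log p_tilde], the latter by quadrature."""
    c = 1.0 / (s * math.sqrt(2.0 * math.pi))
    e_logp = sum(c * math.exp(-0.5 * ((x - m) / s) ** 2) * lp * DX
                 for x, lp in zip(XS, LP))
    entropy = 0.5 * math.log(2.0 * math.pi * math.e * s * s)
    return -entropy - e_logp

# Crude minimisation by grid search over the two variational parameters.
candidates = [(0.05 * i, 0.05 * j) for i in range(-20, 31) for j in range(4, 25)]
best = min(candidates, key=lambda ms: neg_elbo(*ms))
print(best)  # the best-fitting Gaussian (mean, sd)
```

In the paper's setting the same KL objective is minimised over time-varying Gaussian processes rather than a single Gaussian, and by gradient methods rather than grid search; this sketch only shows the objective being optimised.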
Inference of emission rates from multiple sources using Bayesian probability theory.
Yee, Eugene; Flesch, Thomas K
2010-03-01
The determination of atmospheric emission rates from multiple sources using inversion (regularized least-squares or best-fit technique) is known to be very susceptible to measurement and model errors in the problem, rendering the solution unusable. In this paper, a new perspective is offered for this problem: namely, it is argued that the problem should be addressed as one of inference rather than inversion. Towards this objective, Bayesian probability theory is used to estimate the emission rates from multiple sources. The posterior probability distribution for the emission rates is derived, accounting fully for the measurement errors in the concentration data and the model errors in the dispersion model used to interpret the data. The Bayesian inferential methodology for emission rate recovery is validated against real dispersion data, obtained from a field experiment involving various source-sensor geometries (scenarios) consisting of four synthetic area sources and eight concentration sensors. The recovery of discrete emission rates from three different scenarios obtained using Bayesian inference and singular value decomposition inversion are compared and contrasted.
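For a linear dispersion model with Gaussian measurement errors and a flat prior, the posterior over emission rates described above is Gaussian and available in closed form. The dispersion matrix, noise level, and two-source/three-sensor geometry below are invented for illustration, not the field-experiment configuration:

```python
import math
import random

random.seed(0)

# Hypothetical dispersion matrix: A[i][j] = influence of source j's
# emission rate on the concentration at sensor i (values assumed).
A = [[0.8, 0.1],
     [0.3, 0.5],
     [0.1, 0.9]]
q_true = [2.0, 4.0]            # true emission rates (for the synthetic data)
sigma = 0.05                   # measurement noise standard deviation
c = [sum(a * q for a, q in zip(row, q_true)) + random.gauss(0.0, sigma)
     for row in A]

# With Gaussian noise and a flat prior, the posterior over the two rates
# is Gaussian with precision matrix A^T A / sigma^2; its mean solves the
# 2x2 normal equations, done by hand below.
s11 = sum(row[0] * row[0] for row in A) / sigma**2
s12 = sum(row[0] * row[1] for row in A) / sigma**2
s22 = sum(row[1] * row[1] for row in A) / sigma**2
b1 = sum(row[0] * ci for row, ci in zip(A, c)) / sigma**2
b2 = sum(row[1] * ci for row, ci in zip(A, c)) / sigma**2
det = s11 * s22 - s12 * s12
q_mean = [(s22 * b1 - s12 * b2) / det, (s11 * b2 - s12 * b1) / det]
q_sd = [math.sqrt(s22 / det), math.sqrt(s11 / det)]   # posterior sds
print(q_mean, q_sd)  # means near (2, 4), with the uncertainty attached
```

The point of the inferential (rather than inversion) framing is visible here: the output is a distribution with quantified uncertainty, not just a best-fit vector.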
NASA Astrophysics Data System (ADS)
Iskandar, I.
2018-03-01
The exponential distribution is the most widely used distribution in reliability analysis. It is very suitable for representing the lengths of life in many cases and is available in a simple statistical form. The characteristic of this distribution is a constant hazard rate, and it is the special case of the Weibull distribution with shape parameter equal to one. In this paper we introduce the basic notions that constitute an exponential competing risks model in reliability analysis using a Bayesian approach and present the corresponding analytic methods. The cases are limited to models with independent causes of failure. A non-informative prior distribution is used in our analysis. We describe the likelihood function, followed by the posterior distribution and the point, interval, hazard-function, and reliability estimates. The net probability of failure if only one specific risk is present, the crude probability of failure due to a specific risk in the presence of other causes, and partial crude probabilities are also included.
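A minimal sketch of the exponential competing risks setup just described, assuming (as a stand-in for the paper's derivation) two independent exponential causes and a non-informative prior proportional to 1/λ, under which each cause-specific rate has a Gamma(d_k, T) posterior, with d_k the failures from cause k and T the total time on test:

```python
import random

random.seed(3)

# Simulate items that fail from whichever of two independent
# exponential risks strikes first (true rates assumed for illustration).
lam_true = [0.5, 1.5]
times, causes = [], []
for _ in range(500):
    draws = [random.expovariate(lam) for lam in lam_true]
    times.append(min(draws))
    causes.append(draws.index(min(draws)))

T = sum(times)                           # total time on test
d = [causes.count(k) for k in range(2)]  # failures per cause

# Under a non-informative prior p(lam_k) proportional to 1/lam_k, each
# cause-specific rate has a Gamma(d_k, T) posterior; posterior means:
lam_post = [dk / T for dk in d]
# Crude probability that cause k strikes first in the presence of the
# other cause: lam_k / (lam_1 + lam_2).
crude = [lam / sum(lam_post) for lam in lam_post]
print(lam_post, crude)  # rates near (0.5, 1.5); crude probs near (0.25, 0.75)
```

Interval estimates follow from the same Gamma posteriors (e.g. Gamma quantiles at d_k and T), which is the part the sketch leaves out.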
Effective Online Bayesian Phylogenetics via Sequential Monte Carlo with Guided Proposals
Fourment, Mathieu; Claywell, Brian C; Dinh, Vu; McCoy, Connor; Matsen IV, Frederick A; Darling, Aaron E
2018-01-01
Modern infectious disease outbreak surveillance produces continuous streams of sequence data which require phylogenetic analysis as data arrives. Current software packages for Bayesian phylogenetic inference are unable to quickly incorporate new sequences as they become available, making them less useful for dynamically unfolding evolutionary stories. This limitation can be addressed by applying a class of Bayesian statistical inference algorithms called sequential Monte Carlo (SMC) to conduct online inference, wherein new data can be continuously incorporated to update the estimate of the posterior probability distribution. In this article, we describe and evaluate several different online phylogenetic sequential Monte Carlo (OPSMC) algorithms. We show that proposing new phylogenies with a density similar to the Bayesian prior suffers from poor performance, and we develop “guided” proposals that better match the proposal density to the posterior. Furthermore, we show that the simplest guided proposals can exhibit pathological behavior in some situations, leading to poor results, and that the situation can be resolved by heating the proposal density. The results demonstrate that relative to the widely used MCMC-based algorithm implemented in MrBayes, the total time required to compute a series of phylogenetic posteriors as sequences arrive can be significantly reduced by the use of OPSMC, without incurring a significant loss in accuracy. PMID:29186587
Strauß, Johannes; Riesterer, Anja S; Lakes-Harlan, Reinhard
2016-01-01
The subgenual organ and associated scolopidial organs are well studied in Orthoptera and related taxa. In some insects, a small accessory organ or Nebenorgan is described posterior to the subgenual organ. In Tettigoniidae (Ensifera), the accessory organ has only been noted in one species, though tibial sensory organs are well studied for neuroanatomy and physiology. Here, we use axonal tracing to analyse the posterior subgenual organ innervated by the main motor nerve. Investigating seven species from different groups of Tettigoniidae, we describe a small group of scolopidial sensilla (5-9 sensory neurons) which has features characteristic of the accessory organ: posterior tibial position, innervation by the main leg nerve rather than by the tympanal nerve, orientation of dendrites in the proximal or ventro-proximal direction in the leg, and a common association with a single campaniform sensillum. The neuroanatomy is highly similar between leg pairs. We show differences in the innervation in two species of the genus Poecilimon as compared to the other species. In Poecilimon, the sensilla of the accessory organ are innervated by one nerve branch together with the subgenual organ. The results suggest that the accessory organ is part of the sensory bauplan in the leg of Tettigoniidae and probably Ensifera. Copyright © 2015 Elsevier Ltd. All rights reserved.
Ferrari, Ulisse
2016-08-01
Maximum entropy models provide the least constrained probability distributions that reproduce statistical properties of experimental datasets. In this work we characterize the learning dynamics that maximizes the log-likelihood in the case of large but finite datasets. We first show how the steepest descent dynamics is not optimal as it is slowed down by the inhomogeneous curvature of the model parameters' space. We then provide a way for rectifying this space which relies only on dataset properties and does not require large computational efforts. We conclude by solving the long-time limit of the parameters' dynamics including the randomness generated by the systematic use of Gibbs sampling. In this stochastic framework, rather than converging to a fixed point, the dynamics reaches a stationary distribution, which for the rectified dynamics reproduces the posterior distribution of the parameters. We sum up all these insights in a "rectified" data-driven algorithm that is fast and by sampling from the parameters' posterior avoids both under- and overfitting along all the directions of the parameters' space. Through the learning of pairwise Ising models from the recording of a large population of retina neurons, we show how our algorithm outperforms the steepest descent method.
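The moment-matching gradient that the abstract's learning dynamics is built on can be shown on the smallest possible case: a two-unit pairwise (Ising-type) model fitted by plain steepest ascent, with exact enumeration standing in for the paper's Gibbs sampling and without the rectification step. The data and learning rate are assumptions for illustration:

```python
import itertools
import math

# Toy binary "spike" data for two correlated units (assumed data):
# the units are on together or off together 80% of the time.
data = [(1, 1)] * 40 + [(1, 0)] * 10 + [(0, 1)] * 10 + [(0, 0)] * 40

def model_moments(h1, h2, J):
    """Exact moments of the two-unit pairwise maximum entropy model
    p(s1, s2) ~ exp(h1*s1 + h2*s2 + J*s1*s2), by enumeration."""
    states = list(itertools.product((0, 1), repeat=2))
    w = [math.exp(h1 * s1 + h2 * s2 + J * s1 * s2) for s1, s2 in states]
    z = sum(w)
    m1 = sum(wi * s1 for wi, (s1, _) in zip(w, states)) / z
    m2 = sum(wi * s2 for wi, (_, s2) in zip(w, states)) / z
    m12 = sum(wi * s1 * s2 for wi, (s1, s2) in zip(w, states)) / z
    return m1, m2, m12

# Empirical moments the fitted model must reproduce.
n = len(data)
d1 = sum(s1 for s1, _ in data) / n
d2 = sum(s2 for _, s2 in data) / n
d12 = sum(s1 * s2 for s1, s2 in data) / n

# Steepest ascent on the log-likelihood: each gradient component is
# simply (data moment) - (model moment).
h1 = h2 = J = 0.0
for _ in range(2000):
    m1, m2, m12 = model_moments(h1, h2, J)
    h1 += 0.5 * (d1 - m1)
    h2 += 0.5 * (d2 - m2)
    J += 0.5 * (d12 - m12)

print(round(J, 2))  # ~2.77 = ln 16: a strong positive coupling
```

The slow modes visible even in this tiny example (the coupling direction converges much more slowly than the fields) are exactly the inhomogeneous-curvature problem the paper's rectified dynamics addresses.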
Potential fields on the ventricular surface of the exposed dog heart during normal excitation.
Arisi, G; Macchi, E; Baruffi, S; Spaggiari, S; Taccardi, B
1983-06-01
We studied the normal spread of excitation on the anterior and posterior ventricular surface of open-chest dogs by recording unipolar electrograms from an array of 1124 electrodes spaced 2 mm apart. The array had the shape of the ventricular surface of the heart. The electrograms were processed by a computer and displayed as epicardial equipotential maps at 1-msec intervals. Isochrone maps also were drawn. Several new features of epicardial potential fields were identified: (1) a high number of breakthrough points; (2) the topography, apparent widths, velocities of the wavefronts and the related potential drop; (3) the topography of positive potential peaks in relation to the wavefronts. Fifteen to 24 breakthrough points were located on the anterior, and 10 to 13 on the posterior ventricular surface. Some were in previously described locations and many others in new locations. Specifically, 3 to 5 breakthrough points appeared close to the atrioventricular groove on the anterior right ventricle and 2 to 4 on the posterior heart aspect; these basal breakthrough points appeared when a large portion of ventricular surface was still unexcited. Due to the presence of numerous breakthrough points on the anterior and posterior aspect of the heart which had not previously been described, the spread of excitation on the ventricular surface was "mosaic-like," with activation wavefronts spreading in all directions, rather than radially from the two breakthrough points, as traditionally described. The positive potential peaks which lay ahead of the expanding wavefronts moved along preferential directions which were probably related to the myocardial fiber direction.
Posterior dental size reduction in hominids: the Atapuerca evidence.
Bermúdez de Castro, J M; Nicolas, M E
1995-04-01
In order to reassess previous hypotheses concerning dental size reduction of the posterior teeth during Pleistocene human evolution, current fossil dental evidence is examined. This evidence includes the large sample of hominid teeth found in recent excavations (1984-1993) in the Sima de los Huesos Middle Pleistocene cave site of the Sierra de Atapuerca (Burgos, Spain). The lower fourth premolars and molars of the Atapuerca hominids, probably older than 300 Kyr, have dimensions similar to those of modern humans. Further, these hominids share the derived state of other features of the posterior teeth with modern humans, such as a similar relative molar size and frequent absence of the hypoconulid, thus suggesting a possible case of parallelism. We believe that dietary changes allowed size reduction of the posterior teeth during the Middle Pleistocene, and the present evidence suggests that the selective pressures that operated on the size variability of these teeth were less restrictive than what is assumed by previous models of dental reduction. Thus, the causal relationship between tooth size decrease and changes in food-preparation techniques during the Pleistocene should be reconsidered. Moreover, the present evidence indicates that the differential reduction of the molars cannot be explained in terms of restriction of available growth space. The molar crown area measurements of a modern human sample were also investigated. The results of this study, as well as previous similar analyses, suggest that a decrease of the rate of cell proliferation, which affected the later-forming crown regions to a greater extent, may be the biological process responsible for the general and differential dental size reduction that occurred during human evolution.
The Cramér-Rao Bounds and Sensor Selection for Nonlinear Systems with Uncertain Observations.
Wang, Zhiguo; Shen, Xiaojing; Wang, Ping; Zhu, Yunmin
2018-04-05
This paper considers the problems of the posterior Cramér-Rao bound and sensor selection for multi-sensor nonlinear systems with uncertain observations. In order to effectively overcome the difficulties caused by uncertainty, we investigate two methods to derive the posterior Cramér-Rao bound. The first method is based on the recursive formula of the Cramér-Rao bound and the Gaussian mixture model. Nevertheless, it needs to compute a complex integral based on the joint probability density function of the sensor measurements and the target state. The computational burden of this method is relatively high, especially in large sensor networks. Inspired by the idea of the expectation maximization algorithm, the second method is to introduce some 0-1 latent variables to deal with the Gaussian mixture model. Since the regularity condition of the posterior Cramér-Rao bound is not satisfied for the discrete uncertain system, we use some continuous variables to approximate the discrete latent variables. Then, a new Cramér-Rao bound can be achieved by a limiting process of the Cramér-Rao bound of the continuous system. This avoids the complex integral and reduces the computational burden. Based on the new posterior Cramér-Rao bound, the optimal solution of the sensor selection problem can be derived analytically. Thus, it can be used to deal with sensor selection in large-scale sensor networks. Two typical numerical examples verify the effectiveness of the proposed methods.
Al-Shayyab, Mohammad H
2017-01-01
Aim The aim of this study was to evaluate the efficacy of, and patients’ subjective responses to, periodontal ligament (PDL) anesthetic injection compared to traditional local-anesthetic infiltration injection for the nonsurgical extraction of one posterior maxillary permanent tooth. Materials and methods All patients scheduled for nonsurgical symmetrical maxillary posterior permanent tooth extraction in the Department of Oral and Maxillofacial Surgery at the University of Jordan Hospital, Amman, Jordan over a 7-month period were invited to participate in this prospective randomized double-blinded split-mouth study. Every patient received the recommended volume of 2% lidocaine with 1:100,000 epinephrine for PDL injection on the experimental side and for local infiltration on the control side. A visual analog scale (VAS) and verbal rating scale (VRS) were used to describe pain felt during injection and extraction, respectively. Statistical significance was based on probability values <0.05 and measured using χ2 and Student t-tests and nonparametric Mann–Whitney and Kruskal–Wallis tests. Results Of the 73 patients eligible for this study, 55 met the inclusion criteria: 32 males and 23 females, with a mean age of 34.87±14.93 years. Differences in VAS scores and VRS data between the two techniques were statistically significant (P<0.001) and in favor of the infiltration injection. Conclusion The PDL injection may not be the alternative anesthetic technique of choice to routine local infiltration for the nonsurgical extraction of one posterior maxillary permanent tooth. PMID:29070950
Joint search and sensor management for geosynchronous satellites
NASA Astrophysics Data System (ADS)
Zatezalo, A.; El-Fallah, A.; Mahler, R.; Mehra, R. K.; Pham, K.
2008-04-01
Joint search and sensor management for space situational awareness presents daunting scientific and practical challenges as it requires a simultaneous search for new, and the catalog update of the current space objects. We demonstrate a new approach to joint search and sensor management by utilizing the Posterior Expected Number of Targets (PENT) as the objective function, an observation model for a space-based EO/IR sensor, and a Probability Hypothesis Density Particle Filter (PHD-PF) tracker. Simulation and results using actual Geosynchronous Satellites are presented.
NASA Technical Reports Server (NTRS)
Freilich, M. H.; Pawka, S. S.
1987-01-01
The statistics of Sxy estimates derived from orthogonal-component measurements are examined. Based on results of Goodman (1957), the probability density function (pdf) for Sxy(f) estimates is derived, and a closed-form solution for arbitrary moments of the distribution is obtained. Characteristic functions are used to derive the exact pdf of Sxy(tot). In practice, a simple Gaussian approximation is found to be highly accurate even for relatively few degrees of freedom. Implications for experiment design are discussed, and a maximum-likelihood estimator for a posteriori estimation is outlined.
A clinical sign to detect root avulsions of the posterior horn of the medial meniscus.
Seil, Romain; Dück, Klaus; Pape, Dietrich
2011-12-01
The goal of the present report was to describe a new clinical sign to make a clinical diagnosis of meniscal extrusion related to medial meniscal root avulsion. Description of an easy clinical sign to detect extrusion of the medial meniscus at the anteromedial joint line. A varus stress test was applied in full extension before and after transosseous repair of an isolated traumatic avulsion of the posterior root of the medial meniscus in a 21-year-old patient. The clinical sign was verified by sectioning of the meniscotibial ligament during knee arthroplasty surgery in 3 patients. With a deficient posterior root, the clinical sign was positive, showing anteromedial extrusion under varus stress. After repair and at clinical follow-up, extrusion was normalized. Making the clinical diagnosis of medial meniscus extrusion after knee injury by applying a simple varus stress test to the knee and palpating the anteromedial meniscal extrusion might help physicians to suspect a medial meniscus root tear in the early stages after the injury as well as to evaluate its reduction after repair. A varus stress test in full extension should be performed systematically in patients where a root tear of the medial meniscus is suspected as well as after surgery to evaluate the success of the repair.
NASA Astrophysics Data System (ADS)
Zhang, G.; Lu, D.; Ye, M.; Gunzburger, M.
2011-12-01
Markov Chain Monte Carlo (MCMC) methods have been widely used in many fields of uncertainty analysis to estimate the posterior distributions of parameters and credible intervals of predictions in the Bayesian framework. However, in practice, MCMC may be computationally unaffordable due to slow convergence and the excessive number of forward model executions required, especially when the forward model is expensive to compute. Both disadvantages arise from the curse of dimensionality, i.e., the posterior distribution is usually a multivariate function of parameters. Recently, the sparse grid method has been demonstrated to be an effective technique for coping with high-dimensional interpolation or integration problems. Thus, in order to accelerate the forward model and avoid the slow convergence of MCMC, we propose a new method for uncertainty analysis based on sparse grid interpolation and quasi-Monte Carlo sampling. First, we construct a polynomial approximation of the forward model in the parameter space by using sparse grid interpolation. This approximation then defines an accurate surrogate posterior distribution that can be evaluated repeatedly at minimal computational cost. Second, instead of using MCMC, a quasi-Monte Carlo method is applied to draw samples in the parameter space. Then, the desired probability density function of each prediction is approximated by accumulating the posterior density values of all the samples according to the prediction values. Our method has the following advantages: (1) the polynomial approximation of the forward model on the sparse grid provides a very efficient evaluation of the surrogate posterior distribution; (2) the quasi-Monte Carlo method retains the same accuracy in approximating the PDF of predictions but avoids all disadvantages of MCMC. The proposed method is applied to a controlled numerical experiment of groundwater flow modeling.
The results show that our method attains the same accuracy much more efficiently than traditional MCMC.
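The surrogate-plus-quasi-Monte-Carlo idea above can be sketched in a few lines. This is a minimal one-dimensional illustration with an invented forward model, and a plain Chebyshev interpolant standing in for the sparse-grid interpolant (which only becomes necessary in higher dimensions); none of the names or numbers below come from the paper.

```python
import numpy as np
from scipy.stats import qmc

# Hypothetical expensive forward model (illustration only).
def forward(theta):
    return np.sin(3 * theta) + theta ** 2

# 1) Cheap polynomial surrogate on Chebyshev nodes, standing in for
#    the sparse-grid interpolant of the paper (1-D case).
nodes = np.cos(np.pi * (np.arange(9) + 0.5) / 9)          # on [-1, 1]
coeffs = np.polynomial.chebyshev.chebfit(nodes, forward(nodes), 8)
def surrogate(theta):
    return np.polynomial.chebyshev.chebval(theta, coeffs)

# 2) Surrogate posterior: Gaussian likelihood around an observation,
#    with a flat prior on [-1, 1].
d_obs, sigma = 0.5, 0.1
def post(theta):
    return np.exp(-0.5 * ((surrogate(theta) - d_obs) / sigma) ** 2)

# 3) Quasi-Monte Carlo (Sobol) samples replace the MCMC chain.
sampler = qmc.Sobol(d=1, scramble=True, seed=0)
theta = qmc.scale(sampler.random_base2(m=12), -1.0, 1.0).ravel()
w = post(theta)                       # posterior density at each sample

# 4) Predictions are summarized by accumulating posterior density over
#    prediction values; here, a posterior-weighted mean.
pred_mean = float(np.sum(surrogate(theta) * w) / np.sum(w))
```

Because the quasi-Monte Carlo points are deterministic and space-filling, there is no chain to converge and no burn-in to discard; every surrogate evaluation is a cheap polynomial evaluation.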
NASA Astrophysics Data System (ADS)
Al-Mudhafar, W. J.
2013-12-01
Precise prediction of rock facies leads to adequate reservoir characterization by improving the porosity-permeability relationships used to estimate properties in non-cored intervals. It also helps to identify the spatial facies distribution needed to build an accurate reservoir model for optimal prediction of future reservoir performance. In this paper, facies estimation was carried out through multinomial logistic regression (MLR) using well log and core data from a well in the upper sandstone formation of the South Rumaila oil field. The independent variables are gamma ray, formation density, water saturation, shale volume, log porosity, core porosity, and core permeability. First, a robust sequential imputation algorithm was used to impute the missing data. This algorithm starts from a complete subset of the dataset and sequentially estimates the missing values in an incomplete observation by minimizing the determinant of the covariance of the augmented data matrix. The observation is then added to the complete data matrix, and the algorithm continues with the next observation with missing values. MLR was chosen to maximize the likelihood and minimize the standard error of the nonlinear relationships between facies and the core and log data. MLR predicts the probabilities of the different possible facies given the independent variables by constructing a linear predictor function whose weights are combined with the independent variables through a dot product. A beta distribution of facies was adopted as prior knowledge, and the predicted (posterior) probability was estimated from the MLR based on Bayes' theorem, which relates the posterior probability to the conditional probability and the prior knowledge. 
To assess the statistical accuracy of the model, the bootstrap was carried out to estimate the extra-sample prediction error by randomly drawing datasets with replacement from the training data. Each sample has the same size as the original training set, and the procedure can be repeated N times to produce N bootstrap datasets with which to re-fit the model, decreasing the squared difference between the estimated and observed categorical variables (facies) and thereby reducing the degree of uncertainty.
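As a concrete sketch of the MLR-plus-bootstrap workflow, the snippet below fits a multinomial logistic regression on synthetic stand-ins for the log and core predictors (the South Rumaila data are not reproduced here, and the label-generating rule is invented) and estimates the extra-sample error by refitting on bootstrap resamples:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-ins for the log/core predictors and facies labels
# (the real data are not public here; columns mimic GR, RHOB, Sw, ...).
n, p = 300, 5
X = rng.normal(size=(n, p))
y = np.digitize(X[:, 0] + 0.5 * X[:, 1], [-0.5, 0.5])  # 3 facies classes

model = LogisticRegression(max_iter=1000).fit(X, y)    # multinomial fit

# Bootstrap estimate of the extra-sample prediction error: resample
# the training set with replacement, refit, score on out-of-bag rows.
B, errors = 50, []
for _ in range(B):
    idx = rng.integers(0, n, size=n)
    oob = np.setdiff1d(np.arange(n), idx)
    m = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
    errors.append(np.mean(m.predict(X[oob]) != y[oob]))
boot_err = float(np.mean(errors))
```

Scoring only the out-of-bag rows keeps each bootstrap error estimate honest about generalization, rather than rewarding the refit for memorizing the resampled rows.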
Yeung, Carol K.L.; Tsai, Pi-Wen; Chesser, R. Terry; Lin, Rong-Chien; Yao, Cheng-Te; Tian, Xiu-Hua; Li, Shou-Hsien
2011-01-01
Although founder effect speciation has been a popular theoretical model for the speciation of geographically isolated taxa, its empirical importance has remained difficult to evaluate due to the intractability of past demography, which in a founder effect speciation scenario would involve a speciational bottleneck in the emergent species and the complete cessation of gene flow following divergence. Using regression-weighted approximate Bayesian computation, we tested the validity of these two fundamental conditions of founder effect speciation in a pair of sister species with disjunct distributions: the royal spoonbill Platalea regia in Australasia and the black-faced spoonbill Pl. minor in eastern Asia. When compared with genetic polymorphism observed at 20 nuclear loci in the two species, simulations showed that the founder effect speciation model had an extremely low posterior probability (1.55 × 10⁻⁸) of producing the extant genetic pattern. In contrast, speciation models that allowed for postdivergence gene flow were much more probable (posterior probabilities were 0.37 and 0.50 for the bottleneck with gene flow and the gene flow models, respectively) and postdivergence gene flow persisted for a considerable period of time (more than 80% of the divergence history in both models) following initial divergence (median = 197,000 generations, 95% credible interval [CI]: 50,000-478,000, for the bottleneck with gene flow model; and 186,000 generations, 95% CI: 45,000-477,000, for the gene flow model). Furthermore, the estimated population size reduction in Pl. regia to 7,000 individuals (median, 95% CI: 487-12,000, according to the bottleneck with gene flow model) was unlikely to have been severe enough to be considered a bottleneck. Therefore, these results do not support founder effect speciation in Pl. regia but indicate instead that the divergence between Pl. regia and Pl. minor was probably driven by selection despite continuous gene flow. 
In this light, we discuss the potential importance of evolutionarily labile traits with significant fitness consequences, such as migratory behavior and habitat preference, in facilitating divergence of the spoonbills.
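The model-choice logic behind such posterior probabilities can be illustrated with a toy rejection-ABC sketch. The paper uses the more efficient regression-weighted variant and a coalescent simulator; the one-line simulators and all numbers below are invented purely to show the mechanics.

```python
import numpy as np

rng = np.random.default_rng(1)
s_obs = 0.8                 # observed summary statistic (invented)
N = 20_000                  # total simulations, model drawn from prior

def simulate(model):
    # Stand-in simulators: model 0 ("no gene flow") and model 1
    # ("gene flow") map a uniform parameter to a noisy summary stat.
    theta = rng.uniform(0.0, 1.0)
    shift = 0.0 if model == 0 else 0.5
    return rng.normal(theta + shift, 0.2)

models = rng.integers(0, 2, size=N)              # uniform model prior
sims = np.array([simulate(m) for m in models])
accepted = models[np.abs(sims - s_obs) < 0.05]   # rejection step

# Posterior model probabilities are the acceptance frequencies.
p_gene_flow = float(np.mean(accepted == 1))
```

The acceptance frequency of each model among simulations that reproduce the observed statistic approximates its posterior probability; the regression-weighting step of the paper corrects for the finite tolerance of the rejection window.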
Multilevel cervical laminectomy and fusion with posterior cervical cages
Bou Monsef, Jad N; Siemionow, Krzysztof B
2017-01-01
Context: Cervical spondylotic myelopathy (CSM) is a progressive disease that can result in significant disability. Single-level stenosis can be effectively decompressed through either anterior or posterior techniques. However, multilevel pathology can be challenging, especially in the presence of significant spinal stenosis. Three-level anterior decompression and fusion are associated with higher nonunion rates and prolonged dysphagia. Posterior multilevel laminectomies with foraminotomies jeopardize the bone stock required for stable fixation with lateral mass screws (LMSs). Aims: This is the first case series of multilevel laminectomy and fusion for CSM instrumented with posterior cervical cages. Settings and Design: Three patients presented with a history of worsening neck pain, numbness in bilateral upper extremities and gait disturbance, and examination findings consistent with myeloradiculopathy. Cervical magnetic resonance imaging demonstrated multilevel spondylosis resulting in moderate to severe bilateral foraminal stenosis at three cervical levels. Materials and Methods: The patients underwent a multilevel posterior cervical laminectomy and instrumented fusion with intervertebral cages placed between bilateral facet joints over three levels. Oswestry disability index and visual analog scores were collected preoperatively and at each follow-up. Pre- and post-operative images were analyzed for changes in cervical alignment and presence of arthrodesis. Results: Postoperatively, all patients showed marked improvement in neurological symptoms and neck pain. They had full resolution of radicular symptoms by 6 weeks postoperatively. At 12-month follow-up, they demonstrated solid arthrodesis on X-rays and computed tomography scan. Conclusions: Posterior cervical cages may be an alternative option to LMSs in multilevel cervical laminectomy and fusion for cervical spondylotic myeloradiculopathy. PMID:29403242
Heare, Austin; Kramer, Nicholas; Salib, Christopher; Mauffrey, Cyril
2017-07-01
Despite overall improved outcomes with open reduction and internal fixation of acetabular fractures, posterior wall fractures show disproportionately poor results. The effect of weight bearing on outcomes of fracture management has been investigated in many lower extremity fractures, but evidence-based recommendations in posterior wall acetabular fractures are lacking. The authors systematically reviewed the current literature to determine if a difference in outcome exists between early and late postoperative weight-bearing protocols for surgically managed posterior wall acetabular fractures. PubMed and MEDLINE were searched for posterior wall acetabular fracture studies that included weight-bearing protocols and Merle d'Aubigné functional scores. Twelve studies were identified. Each study was classified as either early or late weight bearing. Early weight bearing was defined as full, unrestricted weight bearing at or before 12 weeks postoperatively. Late weight bearing was defined as restricted weight bearing for greater than 12 weeks postoperatively. The 2 categories were then compared by functional score using a 2-tailed t test and by complication rate using chi-square analysis. Six studies (152 fractures) were placed in the early weight-bearing category. Six studies (302 fractures) were placed in the late weight-bearing category. No significant difference in Merle d'Aubigné functional scores was found between the 2 groups. No difference was found regarding heterotopic ossification, avascular necrosis, superficial infections, total infections, or osteoarthritis. This systematic review found no difference in functional outcome scores or complication rates between early and late weight-bearing protocols for surgically treated posterior wall fractures. [Orthopedics. 2017: 40(4):e652-e657.]. Copyright 2017, SLACK Incorporated.
ERIC Educational Resources Information Center
Henderson, John M.; Larson, Christine L.; Zhu, David C.
2008-01-01
We used fMRI to directly compare activation in two cortical regions previously identified as relevant to real-world scene processing: retrosplenial cortex and a region of posterior parahippocampal cortex functionally defined as the parahippocampal place area (PPA). We compared activation in these regions to full views of scenes from a global…
Suzuki, Teppei; Tani, Yuji; Ogasawara, Katsuhiko
2016-07-25
Consistent with the "attention, interest, desire, memory, action" (AIDMA) model of consumer behavior, patients collect information about available medical institutions using the Internet to select information for their particular needs. Studies of consumer behavior may be found in areas other than medical institution websites. Such research uses Web access logs for visitor search behavior. At this time, research applying the patient searching behavior model to medical institution website visitors is lacking. We have developed a hospital website search behavior model using a Bayesian approach to clarify the behavior of medical institution website visitors and determine the probability of their visits, classified by search keyword. We used the website data access log of a clinic of internal medicine and gastroenterology in the Sapporo suburbs, collecting data from January 1 through June 31, 2011. The contents of the 6 website pages included the following: home, news, content introduction for medical examinations, mammography screening, holiday person-on-duty information, and other. The search keywords we identified as best expressing website visitor needs were listed as the top 4 headings from the access log: clinic name, clinic name + regional name, clinic name + medical examination, and mammography screening. Using the search keywords as the explaining variable, we built a binomial probit model that allows inspection of the contents of each purpose variable. Using this model, we determined a beta value and generated a posterior distribution. We performed the simulation using Markov Chain Monte Carlo methods with a noninformation prior distribution for this model and determined the visit probability classified by keyword for each category. In the case of the keyword "clinic name," the visit probability to the website, repeated visit to the website, and contents page for medical examination was positive. 
In the case of the keyword "clinic name and regional name," the probability for a repeated visit to the website and the mammography screening page was negative. In the case of the keyword "clinic name + medical examination," the visit probability to the website was positive, and the visit probability to the information page was negative. When visitors referred to the keywords "mammography screening," the visit probability to the mammography screening page was positive (95% highest posterior density interval = 3.38-26.66). Further analysis for not only the clinic website but also various other medical institution websites is necessary to build a general inspection model for medical institution websites; we want to consider this in future research. Additionally, we hope to use the results obtained in this study as a prior distribution for future work to conduct higher-precision analysis.
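A minimal version of such a binomial probit analysis can be sketched with the classic Albert-Chib Gibbs sampler. The data below are fabricated stand-ins (a single keyword dummy predicting a page visit), not the clinic's access log, and the flat prior mirrors the noninformative prior mentioned above.

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(0)

# Fabricated stand-in data: did a visitor searching with a given
# keyword (0/1 dummy) reach a given page?
n = 200
keyword = rng.integers(0, 2, size=n)
X = np.column_stack([np.ones(n), keyword])
true_beta = np.array([-0.5, 1.0])
y = (X @ true_beta + rng.normal(size=n) > 0).astype(int)

# Albert-Chib Gibbs sampler for the binomial probit model with a
# flat (noninformative) prior on beta.
XtX_inv = np.linalg.inv(X.T @ X)
beta = np.zeros(2)
draws = []
for it in range(2000):
    mu = X @ beta
    lo = np.where(y == 1, -mu, -np.inf)    # truncate latent z at 0
    hi = np.where(y == 1, np.inf, -mu)
    z = mu + truncnorm.rvs(lo, hi, size=n, random_state=rng)
    b_hat = XtX_inv @ (X.T @ z)
    beta = rng.multivariate_normal(b_hat, XtX_inv)
    if it >= 500:                           # discard burn-in
        draws.append(beta)
draws = np.array(draws)

# Posterior probability that the keyword raises the visit probability,
# i.e. that its beta is "positive" in the sense used above.
p_positive = float(np.mean(draws[:, 1] > 0))
```

Calling a coefficient "positive" or "negative", as the abstract does, then corresponds to the posterior mass of its beta lying above or below zero (or its highest posterior density interval excluding zero).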
Syamal, Mausumi N; Bryson, Paul C
2017-02-01
Stress velopharyngeal insufficiency (SVPI) is an uncommon but often career-threatening condition affecting professional brass and woodwind musicians. To review the evaluation of and treatment for SVPI in professional musicians with lipoinjection to the posterior pharyngeal wall. A retrospective medical record and literature review. Two professional musicians with SVPI treated with autologous lipoinjection to the posterior pharyngeal wall were included. Nasopharyngoscopy was performed while patients played their instrument both before and after injection. To assess the effectiveness of autologous fat injection to the posterior pharyngeal wall to treat stress velopharyngeal insufficiency in 2 professional instrumentalists. Successful treatment was the absence of VPI during playing as visualized by flexible nasopharyngoscopy. After autologous lipoinjection of the posterior pharyngeal wall, 1 patient resumed full play with complete resolution, now 3 years after lipoinjection pharyngoplasty. The other patient received temporary resolution. Both had no surgical complications. Stress VPI is often a career-threatening condition for professional brass and woodwind musicians, with a cited incidence of 34%. Various treatment options in the literature include observation, speech and language pathology referral for pharyngeal strengthening, lipoinjection of the soft palate, and more invasive options, such as sphincter pharyngoplasty, pharyngeal flaps and V-Y pushback. Autologous fat injection pharyngoplasty of the posterior pharyngeal wall may be a less invasive treatment option for musicians with SVPI.
The internal consistency of the standard gamble: tests after adjusting for prospect theory.
Oliver, Adam
2003-07-01
This article reports a study that tests whether the internal consistency of the standard gamble can be improved upon by incorporating loss weighting and probability transformation parameters in the standard gamble valuation procedure. Five alternatives to the standard EU formulation are considered: (1) probability transformation within an EU framework; and, within a prospect theory framework, (2) loss weighting and full probability transformation, (3) no loss weighting and full probability transformation, (4) loss weighting and no probability transformation, and (5) loss weighting and partial probability transformation. Of the five alternatives, only the prospect theory formulation with loss weighting and no probability transformation offers an improvement in internal consistency over the standard EU valuation procedure.
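To make the alternatives concrete, here is a sketch of how loss weighting and probability transformation adjust the standard-gamble utility. The weighting function and the parameter values are the conventional Tversky-Kahneman ones, not the values estimated in the study, and the exact functional forms tested there may differ.

```python
def w(p, gamma=0.61):
    """Tversky-Kahneman inverse-S probability weighting function."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def sg_utility_eu(p):
    """Standard EU: indifference at P(best outcome) = p gives u = p."""
    return p

def sg_utility_pt(p, lam=2.25, gamma=0.61):
    """Loss weighting plus probability transformation: with the certain
    health state as reference point, indifference requires
    w(p) * (1 - u) = lam * w(1 - p) * u, so u solves as below."""
    return w(p) / (w(p) + lam * w(1 - p))
```

With these illustrative parameters the adjusted utilities sit below the raw EU values at high p, which is the direction of adjustment the internal-consistency tests probe.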
The DNA database search controversy revisited: bridging the Bayesian-frequentist gap.
Storvik, Geir; Egeland, Thore
2007-09-01
Two different quantities have been suggested for quantification of evidence in cases where a suspect is found by a search through a database of DNA profiles. The likelihood ratio, typically motivated from a Bayesian setting, is preferred by most experts in the field. The so-called np rule, motivated by frequentist arguments, has been advocated by the American National Research Council and Stockmarr (1999, Biometrics 55, 671-677). The two quantities differ substantially and have given rise to the DNA database search controversy. Although several authors have criticized the different approaches, a full explanation of why these differences appear is still lacking. In this article we show that a P-value in a frequentist hypothesis setting is approximately equal to the result of the np rule. We argue, however, that a more reasonable procedure in this case is to use conditional testing, in which case a P-value directly related to posterior probabilities and the likelihood ratio is obtained. This way of viewing the problem bridges the gap between the Bayesian and frequentist approaches. At the same time it indicates that the np rule should not be used to quantify evidence.
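The size of the disagreement between the two measures is easy to see numerically (the numbers below are illustrative, not taken from any case):

```python
p = 1e-9            # random-match probability of the DNA profile
n = 1_000_000       # size of the searched database

np_rule = n * p             # frequentist "np rule" evidence measure
likelihood_ratio = 1 / p    # Bayesian weight of evidence for a match

# The two measures differ by exactly the database size n,
# which is the heart of the controversy.
ratio = likelihood_ratio * np_rule
```

Here the np rule reports weak evidence (a P-value of about one in a thousand) while the likelihood ratio reports overwhelming evidence; the factor separating them is precisely n.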
LENSED: a code for the forward reconstruction of lenses and sources from strong lensing observations
NASA Astrophysics Data System (ADS)
Tessore, Nicolas; Bellagamba, Fabio; Metcalf, R. Benton
2016-12-01
Robust modelling of strong lensing systems is fundamental to exploit the information they contain about the distribution of matter in galaxies and clusters. In this work, we present LENSED, a new code which performs forward parametric modelling of strong lenses. LENSED takes advantage of a massively parallel ray-tracing kernel to perform the necessary calculations on a modern graphics processing unit (GPU). This makes the precise rendering of the background lensed sources much faster, and allows the simultaneous optimization of tens of parameters for the selected model. With a single run, the code is able to obtain the full posterior probability distribution for the lens light, the mass distribution and the background source at the same time. LENSED is first tested on mock images which reproduce realistic space-based observations of lensing systems. In this way, we show that it is able to recover unbiased estimates of the lens parameters, even when the sources do not follow exactly the assumed model. Then, we apply it to a subsample of the Sloan Lens ACS Survey lenses, in order to demonstrate its use on real data. The results generally agree with the literature, and highlight the flexibility and robustness of the algorithm.
Phylogeny and evolution of Newcastle disease virus genotypes isolated in Asia during 2008-2011.
Ebrahimi, Mohammad Majid; Shahsavandi, Shahla; Moazenijula, Gholamreza; Shamsara, Mahdi
2012-08-01
The full-length fusion (F) genes of 51 Newcastle disease (ND) strains isolated from chickens in Asia during the period 2008-2011 were genetically analyzed. Phylogenetic analysis showed that genotype VII of NDV is still predominant in the domestic poultry of Asia. Sub-genotype VIIb circulated in Iran and the Indian subcontinent, whereas sub-genotype VIId existed in Far East countries. The ratio of non-synonymous to synonymous substitutions was calculated as 0.27 for the VIId sub-genotype and 0.51 for the VIIb sub-genotype, indicating purifying/stabilizing selection, which resulted in a low evolution rate in the F gene of the VIIb sub-genotype. There is evidence of localized positive selection when comparing the protein sequences of these sub-genotypes. Five codons in the F gene of ND viruses had a posterior probability >90% using the Bayesian method, indicating that these sites were under positive selection. To identify sites under positive selection, amino acid substitutions were classified according to their radicalism and neutrality. The results indicate that although most positions were under purifying selection and can be eliminated, a few positions located in sub-genotype-specific regions were subject to positive selection.
NASA Technical Reports Server (NTRS)
Yu, Jr-Kai; Holland, Linda Z.; Holland, Nicholas D.
2002-01-01
The full-length sequence and zygotic expression of an amphioxus nodal gene are described. Expression is first detected in the early gastrula just within the dorsal lip of the blastopore in a region of hypoblast that is probably comparable with the vertebrate Spemann's organizer. In the late gastrula and early neurula, expression remains bilaterally symmetrical, limited to paraxial mesoderm and immediately overlying regions of the neural plate. Later in the neurula stage, all neural expression disappears, and mesodermal expression disappears from the right side. All along the left side of the neurula, mesodermal expression spreads into the left side of the gut endoderm. Soon thereafter, all expression is down-regulated except near the anterior and posterior ends of the animal, where transcripts are still found in the mesoderm and endoderm on the left side. At this time, expression also begins in the ectoderm on the left side of the head, in the region where the mouth later forms. These results suggest that amphioxus and vertebrate nodal genes play evolutionarily conserved roles in establishing Spemann's organizer, patterning the mesoderm rostrocaudally and setting up the asymmetrical left-right axis of the body.
Bonomi, Massimiliano; Pellarin, Riccardo; Kim, Seung Joong; Russel, Daniel; Sundin, Bryan A.; Riffle, Michael; Jaschob, Daniel; Ramsden, Richard; Davis, Trisha N.; Muller, Eric G. D.; Sali, Andrej
2014-01-01
The use of in vivo Förster resonance energy transfer (FRET) data to determine the molecular architecture of a protein complex in living cells is challenging due to data sparseness, sample heterogeneity, signal contributions from multiple donors and acceptors, unequal fluorophore brightness, photobleaching, flexibility of the linker connecting the fluorophore to the tagged protein, and spectral cross-talk. We addressed these challenges by using a Bayesian approach that produces the posterior probability of a model, given the input data. The posterior probability is defined as a function of the dependence of our FRET metric FRETR on a structure (forward model), a model of noise in the data, as well as prior information about the structure, relative populations of distinct states in the sample, forward model parameters, and data noise. The forward model was validated against kinetic Monte Carlo simulations and in vivo experimental data collected on nine systems of known structure. In addition, our Bayesian approach was validated by a benchmark of 16 protein complexes of known structure. Given the structures of each subunit of the complexes, models were computed from synthetic FRETR data with a distance root-mean-squared deviation error of 14 to 17 Å. The approach is implemented in the open-source Integrative Modeling Platform, allowing us to determine macromolecular structures through a combination of in vivo FRETR data and data from other sources, such as electron microscopy and chemical cross-linking. PMID:25139910
Bayesian randomized clinical trials: From fixed to adaptive design.
Yin, Guosheng; Lam, Chi Kin; Shi, Haolun
2017-08-01
Randomized controlled studies are the gold standard for phase III clinical trials. Using α-spending functions to control the overall type I error rate, group sequential methods are well established and have been dominating phase III studies. Bayesian randomized design, on the other hand, can be viewed as a complement to, rather than a competitor of, the frequentist methods. For the fixed Bayesian design, hypothesis testing can be cast in the posterior probability or Bayes factor framework, which has a direct link to the frequentist type I error rate. Bayesian group sequential design relies upon Bayesian decision-theoretic approaches based on backward induction, which is often computationally intensive. Compared with the frequentist approaches, Bayesian methods have several advantages. The posterior predictive probability serves as a useful and convenient tool for trial monitoring, and can be updated at any time as the data accrue during the trial. The Bayesian decision-theoretic framework possesses a direct link to decision making in the practical setting, and can be modeled more realistically to reflect the actual cost-benefit analysis during the drug development process. Other merits include the possibility of hierarchical modeling and the use of informative priors, which would lead to a more comprehensive utilization of information from both historical and longitudinal data. Moving from fixed to adaptive design, we focus on Bayesian randomized controlled clinical trials and make extensive comparisons with frequentist counterparts through numerical studies. Copyright © 2017 Elsevier Inc. All rights reserved.
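As an example of the monitoring tool mentioned above, the posterior predictive probability of eventual trial success has a closed form in the simple beta-binomial case. All numbers below are hypothetical:

```python
from scipy.stats import betabinom

# Hypothetical interim look: 25 responses among 40 patients, 60 still
# to enrol, and the trial "succeeds" with >= 55 responses out of 100;
# Beta(1, 1) prior on the response rate.
x, n_cur, n_rem, needed = 25, 40, 60, 55
a, b = 1 + x, 1 + (n_cur - x)           # posterior is Beta(a, b)

# Posterior predictive P(total successes >= needed | interim data):
# future responses follow a beta-binomial(n_rem, a, b) distribution.
pp = float(1 - betabinom.cdf(needed - x - 1, n_rem, a, b))
```

Recomputing this probability as each patient's outcome arrives is what makes it a convenient continuous-monitoring statistic; low values can trigger futility stopping, high values early success.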
A Bayesian approach to the modelling of α Cen A
NASA Astrophysics Data System (ADS)
Bazot, M.; Bourguignon, S.; Christensen-Dalsgaard, J.
2012-12-01
Determining the physical characteristics of a star is an inverse problem consisting of estimating the parameters of models for the stellar structure and evolution, given certain observable quantities. We use a Bayesian approach to solve this problem for α Cen A, which allows us to incorporate prior information on the parameters to be estimated, in order to better constrain the problem. Our strategy is based on the use of a Markov chain Monte Carlo (MCMC) algorithm to estimate the posterior probability densities of the stellar parameters: mass, age, initial chemical composition, etc. We use the stellar evolutionary code ASTEC to model the star. To constrain this model both seismic and non-seismic observations were considered. Several different strategies were tested to fit these values, using either two or five free parameters in ASTEC. We are thus able to show evidence that MCMC methods become efficient with respect to more classical grid-based strategies when the number of parameters increases. The results of our MCMC algorithm allow us to derive estimates of the stellar parameters and robust uncertainties thanks to the statistical analysis of the posterior probability densities. We are also able to compute odds for the presence of a convective core in α Cen A. When using core-sensitive seismic observational constraints, these can rise above ~40 per cent. The comparison of results to previous studies also indicates that these seismic constraints are of critical importance for our knowledge of the structure of this star.
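The MCMC machinery described above reduces, in its simplest form, to a random-walk Metropolis sampler. Below is a toy two-parameter sketch with an invented linear "forward model" in place of ASTEC, just to show the structure of the posterior sampling:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_post(theta):
    mass, age = theta
    if not (0.5 < mass < 2.0 and 1.0 < age < 10.0):  # flat prior box
        return -np.inf
    obs_pred = 2.0 * mass + 0.1 * age                # toy forward model
    return -0.5 * ((obs_pred - 2.8) / 0.05) ** 2     # Gaussian likelihood

theta = np.array([1.0, 5.0])
lp = log_post(theta)
chain = []
for _ in range(20_000):
    prop = theta + rng.normal(scale=[0.05, 0.3])     # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:         # Metropolis accept
        theta, lp = prop, lp_prop
    chain.append(theta)
chain = np.array(chain[5_000:])                      # discard burn-in
est = chain.mean(axis=0)
ci = np.percentile(chain, [2.5, 97.5], axis=0)       # 95% credible bands
```

The posterior means and percentile-based credible intervals computed from the chain are the kind of "estimates and robust uncertainties" the abstract refers to; in the real application each `log_post` evaluation requires a full stellar-evolution run.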
NASA Astrophysics Data System (ADS)
Egozcue, J. J.; Pawlowsky-Glahn, V.; Ortego, M. I.
2005-03-01
Standard practice of wave-height hazard analysis often pays little attention to the uncertainty of assessed return periods and occurrence probabilities. This fact favors the opinion that, when large events happen, the hazard assessment should change accordingly. However, uncertainty of the hazard estimates is normally able to hide the effect of those large events. This is illustrated using data from the Mediterranean coast of Spain, where the last years have been extremely disastrous. Thus, it is possible to compare the hazard assessment based on data previous to those years with the analysis including them. With our approach, no significant change is detected when the statistical uncertainty is taken into account. The hazard analysis is carried out with a standard model. The occurrence of events in time is assumed to be Poisson distributed. The wave-height of each event is modelled as a random variable whose upper tail follows a Generalized Pareto Distribution (GPD). Moreover, wave-heights are assumed independent from event to event and also independent of their occurrence in time. A threshold for excesses is assessed empirically. The other three parameters (Poisson rate, shape and scale parameters of the GPD) are jointly estimated using Bayes' theorem. The prior distribution accounts for physical features of ocean waves in the Mediterranean sea and experience with these phenomena. The posterior distribution of the parameters allows one to obtain posterior distributions of other derived parameters such as occurrence probabilities and return periods. Predictive distributions are also available. Computations are carried out using the program BGPE v2.0.
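Given draws (or point estimates) of the three parameters, derived quantities such as return periods follow directly from the Poisson-GPD structure. A sketch with invented parameter values, not the paper's posterior estimates:

```python
# Illustrative parameter values (not the paper's posterior estimates).
rate = 4.0              # Poisson rate of threshold exceedances per year
u = 3.0                 # empirically assessed threshold (m)
xi, sigma = 0.1, 0.5    # GPD shape and scale for excesses over u

def exceed_prob(h):
    """P(event wave height > h), for h >= u, under the GPD tail."""
    return (1.0 + xi * (h - u) / sigma) ** (-1.0 / xi)

def return_period(h):
    """Mean years between events exceeding h (Poisson thinning)."""
    return 1.0 / (rate * exceed_prob(h))
```

Applying these two functions to every posterior draw of (rate, xi, sigma), instead of to a single point estimate, yields the posterior distributions of occurrence probabilities and return periods that the abstract describes.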
Leaché, Adam D; Banbury, Barbara L; Linkem, Charles W; de Oca, Adrián Nieto-Montes
2016-03-22
Resolving the short phylogenetic branches that result from rapid evolutionary diversification often requires large numbers of loci. We collected targeted sequence capture data from 585 nuclear loci (541 ultraconserved elements and 44 protein-coding genes) to estimate the phylogenetic relationships among iguanian lizards in the North American genus Sceloporus. We tested for diversification rate shifts to determine if rapid radiation in the genus is correlated with chromosomal evolution. The phylogenomic trees that we obtained for Sceloporus using concatenation and coalescent-based species tree inference provide strong support for the monophyly and interrelationships among nearly all major groups. The diversification analysis supported one rate shift on the Sceloporus phylogeny approximately 20-25 million years ago that is associated with the doubling of the speciation rate from 0.06 species/million years (Ma) to 0.15 species/Ma. The posterior probability for this rate shift occurring on the branch leading to the Sceloporus species groups exhibiting increased chromosomal diversity is high (posterior probability = 0.997). Despite high levels of gene tree discordance, we were able to estimate a phylogenomic tree for Sceloporus that solves some of the taxonomic problems caused by previous analyses of fewer loci. The taxonomic changes that we propose using this new phylogenomic tree help clarify the number and composition of the major species groups in the genus. Our study provides new evidence for a putative link between chromosomal evolution and the rapid divergence and radiation of Sceloporus across North America.
Grassini, Simone; Holm, Suvi K; Railo, Henry; Koivisto, Mika
2016-12-01
Snakes were probably one of the earliest predators of primates, and snake images produce specific behavioral and electrophysiological reactions in humans. Pictures of snakes evoke enhanced activity over the occipital cortex, indexed by the "early posterior negativity" (EPN), as compared with pictures of other dangerous or non-dangerous animals. The present study investigated the possibility that the response to snake images is independent from visual awareness. The observers watched images of threatening and non-threatening animals presented in random order during rapid serial visual presentation. Four different masking conditions were used to manipulate awareness of the images. Electrophysiological results showed that the EPN was larger for snake images than for the other images employed in the unmasked condition. However, the difference disappeared when awareness of the stimuli decreased. Behavioral results on the effects of awareness did not show any advantage for snake images. Copyright © 2016 Elsevier B.V. All rights reserved.
Model selection and Bayesian inference for high-resolution seabed reflection inversion.
Dettmer, Jan; Dosso, Stan E; Holland, Charles W
2009-02-01
This paper applies Bayesian inference, including model selection and posterior parameter inference, to inversion of seabed reflection data to resolve sediment structure at a spatial scale below the pulse length of the acoustic source. A practical approach to model selection is used, employing the Bayesian information criterion to decide on the number of sediment layers needed to sufficiently fit the data while satisfying parsimony to avoid overparametrization. Posterior parameter inference is carried out using an efficient Metropolis-Hastings algorithm for high-dimensional models, and results are presented as marginal-probability depth distributions for sound velocity, density, and attenuation. The approach is applied to plane-wave reflection-coefficient inversion of single-bounce data collected on the Malta Plateau, Mediterranean Sea, which indicate complex fine structure close to the water-sediment interface. This fine structure is resolved in the geoacoustic inversion results in terms of four layers within the upper meter of sediments. The inversion results are in good agreement with parameter estimates from a gravity core taken at the experiment site.
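The Metropolis-Hastings sampler at the core of such posterior inference can be sketched generically; this is a plain random-walk version with a hypothetical standard-normal target, not the authors' tuned high-dimensional geoacoustic implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_post(theta):
    # Hypothetical target: a standard normal posterior, for illustration only.
    return -0.5 * np.sum(theta ** 2)

def metropolis_hastings(theta0, n_steps, step=0.5):
    """Random-walk Metropolis: symmetric Gaussian proposals, accept with
    probability min(1, p(proposal)/p(current))."""
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    chain = []
    for _ in range(n_steps):
        prop = theta + step * rng.normal(size=theta.shape)  # symmetric proposal
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:            # accept/reject
            theta, lp = prop, lp_prop
        chain.append(theta.copy())
    return np.array(chain)

chain = metropolis_hastings([3.0, -3.0], 5000)
```

Marginal histograms of the post-burn-in chain play the role of the marginal-probability depth distributions reported in the paper.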
INFERRING THE ECCENTRICITY DISTRIBUTION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hogg, David W.; Bovy, Jo; Myers, Adam D., E-mail: david.hogg@nyu.ed
2010-12-20
Standard maximum-likelihood estimators for binary-star and exoplanet eccentricities are biased high, in the sense that the estimated eccentricity tends to be larger than the true eccentricity. As with most non-trivial observables, a simple histogram of estimated eccentricities is not a good estimate of the true eccentricity distribution. Here, we develop and test a hierarchical probabilistic method for performing the relevant meta-analysis, that is, inferring the true eccentricity distribution, taking as input the likelihood functions for the individual star eccentricities, or samplings of the posterior probability distributions for the eccentricities (under a given, uninformative prior). The method is a simple implementation of a hierarchical Bayesian model; it can also be seen as a kind of heteroscedastic deconvolution. It can be applied to any quantity measured with finite precision (other orbital parameters, or indeed any astronomical measurements of any kind, including magnitudes, distances, or photometric redshifts) so long as the measurements have been communicated as a likelihood function or a posterior sampling.
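The hierarchical likelihood described above can be approximated by averaging the population density over each star's posterior samples, divided by the interim prior under which those samples were drawn. A sketch assuming a Beta population model, a flat interim prior, and synthetic per-star samplings (all hypothetical):

```python
import math
import numpy as np

def beta_pdf(x, a, b):
    """Beta(a, b) density, via log-gamma for numerical stability."""
    log_norm = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    return np.exp((a - 1) * np.log(x) + (b - 1) * np.log(1 - x) - log_norm)

def log_population_likelihood(a, b, per_star_samples):
    """ln L(a, b) = sum_n ln[(1/K) sum_k f(e_nk; a, b) / pi(e_nk)],
    assuming a flat interim prior pi(e) = 1 on (0, 1)."""
    return sum(float(np.log(np.mean(beta_pdf(e, a, b)))) for e in per_star_samples)

rng = np.random.default_rng(1)
# Hypothetical per-star posterior samplings: K = 200 draws for each of 50 stars.
stars = [rng.beta(1.0, 3.0, size=200) for _ in range(50)]
# Score two candidate population models; the generating model should win.
scores = {(a, b): log_population_likelihood(a, b, stars)
          for (a, b) in [(1.0, 3.0), (3.0, 1.0)]}
```

Maximizing (or sampling) this likelihood over the population parameters performs the deconvolution the abstract describes, rather than histogramming point estimates.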
Sequential bearings-only-tracking initiation with particle filtering method.
Liu, Bin; Hao, Chengpeng
2013-01-01
The tracking initiation problem is examined in the context of autonomous bearings-only-tracking (BOT) of a single appearing/disappearing target in the presence of clutter measurements. In general, this problem suffers from a combinatorial explosion in the number of potential tracks resulting from the uncertainty in the linkage between the target and the measurement (a.k.a. the data association problem). In addition, the nonlinear measurements lead to a non-Gaussian posterior probability density function (pdf) in the optimal Bayesian sequential estimation framework. The consequence of this nonlinear/non-Gaussian context is the absence of a closed-form solution. This paper models the linkage uncertainty and the nonlinear/non-Gaussian estimation problem jointly with solid Bayesian formalism. A particle filtering (PF) algorithm is derived for estimating the model's parameters in a sequential manner. Numerical results show that the proposed solution provides a significant benefit over the most commonly used methods, IPDA and IMMPDA. The posterior Cramér-Rao bounds are also provided for performance evaluation.
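A generic bootstrap (SIR) particle filter for a bearings-only measurement model can be sketched as below; this is a minimal single-target filter without the existence/data-association modeling of the paper, and all parameters are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)

def particle_filter_step(particles, weights, bearing, obs_pos, dt=1.0,
                         q=0.05, sigma_b=0.02):
    """One SIR step: propagate, weight by the bearing likelihood, resample.
    State: [x, y, vx, vy]; measurement: bearing from obs_pos to the target."""
    # Propagate with near-constant-velocity dynamics plus process noise.
    particles[:, 0] += dt * particles[:, 2]
    particles[:, 1] += dt * particles[:, 3]
    particles[:, 2:] += q * rng.normal(size=(len(particles), 2))
    # Weight by the Gaussian bearing likelihood (the nonlinear measurement).
    pred = np.arctan2(particles[:, 1] - obs_pos[1], particles[:, 0] - obs_pos[0])
    err = np.angle(np.exp(1j * (bearing - pred)))      # wrap to [-pi, pi]
    weights = weights * np.exp(-0.5 * (err / sigma_b) ** 2)
    weights /= weights.sum()
    # Multinomial resampling to combat weight degeneracy.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx].copy(), np.full(len(particles), 1.0 / len(particles))

# Hypothetical run: stationary observer at the origin, target moving east.
n = 2000
particles = np.column_stack([rng.normal(10, 2, n), rng.normal(5, 2, n),
                             rng.normal(1, 0.5, n), rng.normal(0, 0.5, n)])
weights = np.full(n, 1.0 / n)
true_pos = np.array([10.0, 5.0])
for t in range(10):
    true_pos += [1.0, 0.0]
    z = np.arctan2(true_pos[1], true_pos[0]) + 0.02 * rng.normal()
    particles, weights = particle_filter_step(particles, weights, z, (0.0, 0.0))
estimate = particles[:, :2].mean(axis=0)
```

Note the classic BOT limitation visible here: with a stationary observer, the bearing is well constrained but the range is only as good as the prior.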
Aguero-Valverde, Jonathan
2013-01-01
In recent years, complex statistical modeling approaches have been proposed to handle the unobserved heterogeneity and the excess of zeros frequently found in crash data, including random effects and zero inflated models. This research compares random effects, zero inflated, and zero inflated random effects models using a full Bayes hierarchical approach. The models are compared not just in terms of goodness-of-fit measures but also in terms of precision of posterior crash frequency estimates, since the precision of these estimates is vital for ranking sites for engineering improvement. Fixed-over-time random effects models are also compared to independent-over-time random effects models. For the crash dataset analyzed, it was found that once the random effects are included in the zero inflated models, the probability of being in the zero state is drastically reduced, and the zero inflated models degenerate to their non-zero-inflated counterparts. Also, by fixing the random effects over time, the fit of the models and the precision of the crash frequency estimates are significantly increased. It was found that the rankings of the fixed-over-time random effects models are highly consistent with one another. In addition, the results show that by fixing the random effects over time, the standard errors of the crash frequency estimates are significantly reduced for the majority of the segments at the top of the ranking. Copyright © 2012 Elsevier Ltd. All rights reserved.
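The zero-inflated models compared above mix a structural-zero state with a count distribution; for a fixed-parameter (non-hierarchical) sketch, the zero-inflated Poisson log-likelihood is:

```python
import math

def zip_log_likelihood(counts, lam, pi0):
    """Zero-inflated Poisson log-likelihood: with probability pi0 a site is in
    the structural-zero state; otherwise counts are Poisson(lam)."""
    ll = 0.0
    for y in counts:
        if y == 0:
            # A zero arises either from the zero state or from a Poisson zero.
            ll += math.log(pi0 + (1.0 - pi0) * math.exp(-lam))
        else:
            ll += math.log(1.0 - pi0) - lam + y * math.log(lam) - math.lgamma(y + 1)
    return ll

crashes = [0, 0, 1, 0, 3, 0, 2, 0, 0, 1]      # hypothetical segment counts
ll_zip = zip_log_likelihood(crashes, lam=1.2, pi0=0.3)
```

As the abstract notes, when the zero-state probability collapses (pi0 → 0) the model degenerates to a plain Poisson likelihood.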
Bogousslavsky, J; Miklossy, J; Deruaz, J P; Assal, G; Regli, F
1987-01-01
A macular-sparing superior altitudinal hemianopia with no visuo-psychic disturbance, except impaired visual learning, was associated with bilateral ischaemic necrosis of the lingual gyrus and only partial involvement of the fusiform gyrus on the left side. It is suggested that bilateral destruction of the lingual gyrus alone is not sufficient to affect complex visual processing. The fusiform gyrus probably has a critical role in colour integration, visuo-spatial processing, facial recognition and corresponding visual imagery. Involvement of the occipitotemporal projection system deep to the lingual gyri probably explained visual memory dysfunction, by a visuo-limbic disconnection. Impaired verbal memory may have been due to posterior involvement of the parahippocampal gyrus and underlying white matter, which may have disconnected the intact speech areas from the left medial temporal structures. PMID:3585386
Hidden Markov models for fault detection in dynamic systems
NASA Technical Reports Server (NTRS)
Smyth, Padhraic J. (Inventor)
1995-01-01
The invention is a system failure monitoring method and apparatus which learns the symptom-fault mapping directly from training data. The invention first estimates the state of the system at discrete intervals in time. A feature vector x of dimension k is estimated from sets of successive windows of sensor data. A pattern recognition component then models the instantaneous estimate of the posterior class probability given the features, p(w_i | x), 1 ≤ i ≤ m. Finally, a hidden Markov model is used to take advantage of temporal context and estimate class probabilities conditioned on recent past history. In this hierarchical pattern of information flow, the time series data is transformed and mapped into a categorical representation (the fault classes) and integrated over time to enable robust decision-making.
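The temporal-integration step described above amounts to a standard HMM forward recursion: the instantaneous class posteriors are propagated through a fault-class transition matrix and reweighted by each new observation's likelihood. A sketch with a hypothetical two-class (normal/fault) model:

```python
import numpy as np

def forward_step(prior, transition, likelihood):
    """One HMM filtering step: p(w_t | y_1..t) ∝ likelihood * (A^T @ p_{t-1})."""
    predicted = transition.T @ prior          # propagate through fault dynamics
    posterior = likelihood * predicted        # weight by instantaneous evidence
    return posterior / posterior.sum()

# Hypothetical 2-class problem: w0 = normal, w1 = fault (faults persist).
A = np.array([[0.99, 0.01],                   # normal -> normal / fault
              [0.02, 0.98]])                  # a fault rarely self-repairs
belief = np.array([0.95, 0.05])
for lik in ([0.8, 0.2], [0.3, 0.7], [0.2, 0.8], [0.1, 0.9]):
    belief = forward_step(belief, A, np.array(lik))
```

The persistence encoded in the transition matrix is what smooths noisy instantaneous classifications into a robust decision.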
Hidden Markov models for fault detection in dynamic systems
NASA Technical Reports Server (NTRS)
Smyth, Padhraic J. (Inventor)
1993-01-01
The invention is a system failure monitoring method and apparatus which learns the symptom-fault mapping directly from training data. The invention first estimates the state of the system at discrete intervals in time. A feature vector x of dimension k is estimated from sets of successive windows of sensor data. A pattern recognition component then models the instantaneous estimate of the posterior class probability given the features, p(w_i | x), 1 ≤ i ≤ m. Finally, a hidden Markov model is used to take advantage of temporal context and estimate class probabilities conditioned on recent past history. In this hierarchical pattern of information flow, the time series data is transformed and mapped into a categorical representation (the fault classes) and integrated over time to enable robust decision-making.
Lovelock, Paul K; Spurdle, Amanda B; Mok, Myth TS; Farrugia, Daniel J; Lakhani, Sunil R; Healey, Sue; Arnold, Stephen; Buchanan, Daniel; Investigators, kConFab; Couch, Fergus J; Henderson, Beric R; Goldgar, David E; Tavtigian, Sean V; Chenevix-Trench, Georgia; Brown, Melissa A
2007-01-01
Introduction Many of the DNA sequence variants identified in the breast cancer susceptibility gene BRCA1 remain unclassified in terms of their potential pathogenicity. Both multifactorial likelihood analysis and functional approaches have been proposed as a means to elucidate likely clinical significance of such variants, but analysis of the comparative value of these methods for classifying all sequence variants has been limited. Methods We have compared the results from multifactorial likelihood analysis with those from several functional analyses for the four BRCA1 sequence variants A1708E, G1738R, R1699Q, and A1708V. Results Our results show that multifactorial likelihood analysis, which incorporates sequence conservation, co-inheritance, segregation, and tumour immunohistochemical analysis, may improve classification of variants. For A1708E, previously shown to be functionally compromised, analysis of oestrogen receptor, cytokeratin 5/6, and cytokeratin 14 tumour expression data significantly strengthened the prediction of pathogenicity, giving a posterior probability of pathogenicity of 99%. For G1738R, shown to be functionally defective in this study, immunohistochemistry analysis confirmed previous findings of inconsistent 'BRCA1-like' phenotypes for the two tumours studied, and the posterior probability for this variant was 96%. The posterior probabilities of R1699Q and A1708V were 54% and 69%, respectively, only moderately suggestive of increased risk. Interestingly, results from functional analyses suggest that both of these variants have only partial functional activity. R1699Q was defective in foci formation in response to DNA damage and displayed intermediate transcriptional transactivation activity but showed no evidence for centrosome amplification. In contrast, A1708V displayed an intermediate transcriptional transactivation activity and a normal foci formation response in response to DNA damage but induced centrosome amplification. 
Conclusion These data highlight the need for a range of functional studies to be performed in order to identify variants with partially compromised function. The results also raise the possibility that A1708V and R1699Q may be associated with a low or moderate risk of cancer. While data pooling strategies may provide more information for multifactorial analysis to improve the interpretation of the clinical significance of these variants, it is likely that the development of current multifactorial likelihood approaches and the consideration of alternative statistical approaches will be needed to determine whether these individually rare variants do confer a low or moderate risk of breast cancer. PMID:18036263
Study of axonal dystrophy. II Dystrophy and atrophy of the presynaptic boutons: a dual pathology.
Fujisawa, K; Shiraki, H
1980-01-01
Following the previous quantitative work, a qualitative study was carried out on the nature of a dual pathology affecting presynaptic boutons in the posterior tract nuclei of ageing rats. Based on the morphology of dystrophic boutons at an early stage, it is concluded that the initial, and therefore essential, characteristic of the dystrophic process is an abnormal increase of normal axonal components within the presynaptic boutons, and that the various abnormal substructures of spheroids hitherto reported in the literature are probably the results of their secondary metamorphosis. The dystrophic process within the posterior tract nuclei is a selective one, involving only the presynaptic boutons and preterminal axons of the posterior tract fibres. Comparison of the frequency of early dystrophic boutons with that of fully grown spheroids indicates that a small percentage of boutons deriving from posterior tract fibres become dystrophic, and of these dystrophic boutons only a small percentage again continue to develop into large spheroids over the lifespan of the animals. On the other hand, in search of a morphological counterpart for the age-related decrease in the volume ratio of presynaptic boutons to the neuropil, some equivocal atrophic changes were also found in presynaptic boutons, which could easily have been missed had the material been studied qualitatively alone. Accordingly, boutons no less numerous than the dystrophic ones are supposed to atrophy 'independently' and to disappear 'silently' during the same period. The dystrophic and the atrophic changes involve different boutons (of different or the same terminal axons) within the same gray matter. This dual pathology of boutons requires further elucidation of its neurocytopathological as well as neurobiological background.
Verdugo, Cristobal; Valdes, Maria Francisca; Salgado, Miguel
2018-06-01
This study aimed to estimate the distributions of the within-herd true prevalence (TP) and the annual clinical incidence proportion (CIp) of Mycobacterium avium subsp. paratuberculosis (MAP) infection in dairy cattle herds in Chile. Forty-two commercial herds with antecedents of MAP infection were randomly selected to participate in the study. In small herds (≤30 cows), serum samples were collected from all animals present, whereas in larger herds, milk or serum samples were collected from all milking cows with 2 or more parities. Samples were analysed using the Pourquier® ELISA PARATUBERCULOSIS (Institut Pourquier, France) test. Moreover, a questionnaire gathering information on management practices and the frequency of clinical cases compatible with paratuberculosis (in the previous 12 months) was applied on the sampling date. A Bayesian latent class analysis was used to obtain TP and clinical incidence posterior distributions. The model adjusts for uncertainty in test sensitivity (serum or milk) and specificity, and for prior TP and CIp estimates. A total of 4963 animals were tested, with an average contribution of 124 samples per herd. A mean apparent prevalence of 6.3% (95% confidence interval: 4.0-8.0%) was observed. Model outputs indicated an overall TP posterior distribution, across herds, with a median of 13.1% (95% posterior probability interval (PPI): 3.2-38.1%). A high TP variability was observed between herds. CIp presented a posterior median of 1.1% (95% PPI: 0.2-4.6%). Model results complement information missing from previously conducted epidemiological studies in the sector, and they could be used for further assessment of the disease impact and planning of control programs. Copyright © 2018 Elsevier B.V. All rights reserved.
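The latent class adjustment rests on the standard relation between apparent and true prevalence, AP = TP·Se + (1 − TP)·(1 − Sp). A non-Bayesian sketch (the Rogan-Gladen point estimate) shows the correction that the full model performs with uncertainty propagation; the sensitivity and specificity values are hypothetical:

```python
def apparent_prevalence(tp, se, sp):
    """Expected test-positive fraction given true prevalence and test accuracy."""
    return tp * se + (1.0 - tp) * (1.0 - sp)

def rogan_gladen(ap, se, sp):
    """Invert the relation above for a true-prevalence point estimate."""
    tp = (ap + sp - 1.0) / (se + sp - 1.0)
    return min(max(tp, 0.0), 1.0)             # clamp to [0, 1]

# Hypothetical ELISA accuracy; AP = 6.3% as reported across herds.
tp_hat = rogan_gladen(0.063, se=0.45, sp=0.99)
```

The Bayesian latent class model generalizes this point correction by placing priors on Se, Sp and TP and reporting the full posterior rather than a single inverted value.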
NASA Astrophysics Data System (ADS)
Sun, Y.; Hou, Z.; Huang, M.; Tian, F.; Leung, L. Ruby
2013-12-01
This study demonstrates the possibility of inverting hydrologic parameters using surface flux and runoff observations in version 4 of the Community Land Model (CLM4). Previous studies showed that surface flux and runoff calculations are sensitive to major hydrologic parameters in CLM4 over different watersheds, and illustrated the necessity and possibility of parameter calibration. Both deterministic least-square fitting and stochastic Markov-chain Monte Carlo (MCMC)-Bayesian inversion approaches are evaluated by applying them to CLM4 at selected sites with different climate and soil conditions. The unknowns to be estimated include surface and subsurface runoff generation parameters and vadose zone soil water parameters. We find that using model parameters calibrated by the sampling-based stochastic inversion approaches provides significant improvements in the model simulations compared to using default CLM4 parameter values, and that as more information comes in, the predictive intervals (ranges of posterior distributions) of the calibrated parameters become narrower. In general, parameters that are identified to be significant through sensitivity analyses and statistical tests are better calibrated than those with weak or nonlinear impacts on flux or runoff observations. Temporal resolution of observations has larger impacts on the results of inverse modeling using heat flux data than runoff data. Soil and vegetation cover have important impacts on parameter sensitivities, leading to different patterns of posterior distributions of parameters at different sites. Overall, the MCMC-Bayesian inversion approach effectively and reliably improves the simulation of CLM under different climates and environmental conditions. Bayesian model averaging of the posterior estimates with different reference acceptance probabilities can smooth the posterior distribution and provide more reliable parameter estimates, but at the expense of wider uncertainty bounds.
[The case of completed pregnancy of the patient with Dandy-Walker malformation].
Beliaeva, E V; Lapshina, L V; Shaposhnikova, E V; Molgachev, A A
2018-01-01
Dandy-Walker malformation is a rare congenital malformation of the central nervous system (a malformation of the posterior cranial fossa). The key features of this syndrome are an enlargement of the fourth ventricle; complete absence of the cerebellar vermis, the posterior midline area of the cerebellar cortex responsible for coordination of the axial musculature; and cyst formation near the internal base of the skull. Pregnant patients with Dandy-Walker malformation are at high risk and are managed by multidisciplinary teams including neurologists and obstetricians. We present a case report of full-term pregnancy and uncomplicated delivery in a woman with Dandy-Walker malformation.
Arteriovenous Fistula Development After Posterior Compartment Fasciotomy to Treat Shin Splints.
Marotta, J J; Richmond, J C
1988-12-01
In brief: This case report presents an unusual complication in a distance runner who was treated for the shin splint syndrome. Following release of the deep posterior fascial compartment, he had mild erythema and swelling in the region of his incision. He could not bear full weight on his left leg because of pain. An arteriogram obtained approximately six months later showed an arteriovenous fistula, which was subsequently treated with resection and neurolysis of the saphenous nerve. The patient improved but did not reach his previous level of athletic performance. Recommendations for preventing this complication are outlined, and the use of the term shin splints is discussed.
Ziegler, Andreas; Neues, Frank; Janáček, Jiří; Beckmann, Felix; Epple, Matthias
2017-01-01
Terrestrial isopods moult first the posterior and then the anterior half of the body, allowing for storage and recycling of CaCO3. We used synchrotron-radiation microtomography to estimate mineral content within skeletal segments in sequential moulting stages of Porcellio scaber. The results suggest that all examined cuticular segments contribute to storage and recycling, albeit to varying extents. The mineral within the hepatopancreas after moult suggests an uptake of mineral from the ingested exuviae. The total maximum loss of mineral was 46% for the anterior and 43% for the posterior cuticle. The time course of resorption of mineral and mineralisation of the new cuticle suggests storage and recycling of mineral in the posterior and anterior cuticle. The mineral in the anterior pereiopods decreases by 25% only. P. scaber has long legs and can run fast; therefore, a less mineralised and thus lightweight cuticle in the pereiopods likely serves to lower energy consumption during escape behaviour. Differential demineralisation occurs in the head cuticle, in which the cornea of the complex eyes remains completely mineralised. The partes incisivae of the mandibles are mineralised before the old cuticle is demineralised and shed. Probably, this enables the animal to ingest the old exuviae after each half moult. Copyright © 2016 Elsevier Ltd. All rights reserved.
Infundibular dilatation of the posterior communicating artery in a defined population.
Vlajković, Slobodan; Vasović, Ljiljana; Trandafilović, Milena; Jovanović, Ivan; Ugrenović, Slađana; Dorđević, Gordana
2015-01-01
Unusual widening of the posterior communicating artery (PCoA) at its origin from the cerebral portion of the internal carotid artery (ICA) has been described as infundibular dilatation (ID). The possibility of ID rupture or progression to an aneurysm motivated an investigation of its frequency and morphologic features in specimens from the Serbian population. Cerebral arteries on the brain base of 267 adult cadavers of both genders and varying age and causes of death were dissected. Images of the PCoA in 190 fetuses were also reviewed. ID of the PCoA was defined as a funnel-shaped origin of varying width from the ICA, with the PCoA continuing from the ID apex to the posterior cerebral artery. There were no cases of ID in fetuses. ID and aneurysms of the PCoA were found in 6/267 (2.2%) and 3/267 (1.12%) of adults, respectively. Unilaterally, they occurred on the left side and most frequently in males aged 70 years and older who had died of non-cerebral causes. Bilaterally, ID was found in 2/6 cases. There was only one case of both ID and aneurysm of the PCoA, with the aneurysm arising from the ID. We are of the opinion that ID of the PCoA develops only postnatally and is probably due to the influence of hemodynamic factors or hypertension. Copyright © 2014 Elsevier GmbH. All rights reserved.
On parametrized cold dense matter equation-of-state inference
NASA Astrophysics Data System (ADS)
Riley, Thomas E.; Raaijmakers, Geert; Watts, Anna L.
2018-07-01
Constraining the equation of state of cold dense matter in compact stars is a major science goal for observing programmes being conducted using X-ray, radio, and gravitational wave telescopes. We discuss Bayesian hierarchical inference of parametrized dense matter equations of state. In particular, we generalize and examine two inference paradigms from the literature: (i) direct posterior equation-of-state parameter estimation, conditioned on observations of a set of rotating compact stars; and (ii) indirect parameter estimation, via transformation of an intermediary joint posterior distribution of exterior spacetime parameters (such as gravitational masses and coordinate equatorial radii). We conclude that the former paradigm is not only tractable for large-scale analyses, but is principled and flexible from a Bayesian perspective while the latter paradigm is not. The thematic problem of Bayesian prior definition emerges as the crux of the difference between these paradigms. The second paradigm should in general only be considered as an ill-defined approach to the problem of utilizing archival posterior constraints on exterior spacetime parameters; we advocate for an alternative approach whereby such information is repurposed as an approximative likelihood function. We also discuss why conditioning on a piecewise-polytropic equation-of-state model - currently standard in the field of dense matter study - can easily violate conditions required for transformation of a probability density distribution between spaces of exterior (spacetime) and interior (source matter) parameters.
On parametrised cold dense matter equation of state inference
NASA Astrophysics Data System (ADS)
Riley, Thomas E.; Raaijmakers, Geert; Watts, Anna L.
2018-04-01
Constraining the equation of state of cold dense matter in compact stars is a major science goal for observing programmes being conducted using X-ray, radio, and gravitational wave telescopes. We discuss Bayesian hierarchical inference of parametrised dense matter equations of state. In particular we generalise and examine two inference paradigms from the literature: (i) direct posterior equation of state parameter estimation, conditioned on observations of a set of rotating compact stars; and (ii) indirect parameter estimation, via transformation of an intermediary joint posterior distribution of exterior spacetime parameters (such as gravitational masses and coordinate equatorial radii). We conclude that the former paradigm is not only tractable for large-scale analyses, but is principled and flexible from a Bayesian perspective whilst the latter paradigm is not. The thematic problem of Bayesian prior definition emerges as the crux of the difference between these paradigms. The second paradigm should in general only be considered as an ill-defined approach to the problem of utilising archival posterior constraints on exterior spacetime parameters; we advocate for an alternative approach whereby such information is repurposed as an approximative likelihood function. We also discuss why conditioning on a piecewise-polytropic equation of state model - currently standard in the field of dense matter study - can easily violate conditions required for transformation of a probability density distribution between spaces of exterior (spacetime) and interior (source matter) parameters.
Krajcovicova, Lenka; Mikl, Michal; Marecek, Radek; Rektorova, Irena
2014-01-01
Changes in connectivity of the posterior node of the default mode network (DMN) were studied when switching from baseline to a cognitive task using functional magnetic resonance imaging. In all, 15 patients with mild to moderate Alzheimer's disease (AD) and 18 age-, gender-, and education-matched healthy controls (HC) participated in the study. Psychophysiological interactions analysis was used to assess the specific alterations in the DMN connectivity (deactivation-based) due to psychological effects from the complex visual scene encoding task. In HC, we observed task-induced connectivity decreases between the posterior cingulate and middle temporal and occipital visual cortices. These findings imply successful involvement of the ventral visual pathway during the visual processing in our HC cohort. In AD, involvement of the areas engaged in the ventral visual pathway was observed only in a small volume of the right middle temporal gyrus. Additional connectivity changes (decreases) in AD were present between the posterior cingulate and superior temporal gyrus when switching from baseline to task condition. These changes are probably related to both disturbed visual processing and the DMN connectivity in AD and reflect deficits and compensatory mechanisms within the large scale brain networks in this patient population. Studying the DMN connectivity using psychophysiological interactions analysis may provide a sensitive tool for exploring early changes in AD and their dynamics during the disease progression.
NASA Astrophysics Data System (ADS)
Ferrari, Ulisse
A maximal entropy model provides the least constrained probability distribution that reproduces experimental averages of an observables set. In this work we characterize the learning dynamics that maximizes the log-likelihood in the case of large but finite datasets. We first show how the steepest descent dynamics is not optimal as it is slowed down by the inhomogeneous curvature of the model parameters space. We then provide a way for rectifying this space which relies only on dataset properties and does not require large computational efforts. We conclude by solving the long-time limit of the parameters dynamics including the randomness generated by the systematic use of Gibbs sampling. In this stochastic framework, rather than converging to a fixed point, the dynamics reaches a stationary distribution, which for the rectified dynamics reproduces the posterior distribution of the parameters. We sum up all these insights in a ``rectified'' Data-Driven algorithm that is fast and by sampling from the parameters posterior avoids both under- and over-fitting along all the directions of the parameters space. Through the learning of pairwise Ising models from the recording of a large population of retina neurons, we show how our algorithm outperforms the steepest descent method. This research was supported by a Grant from the Human Brain Project (HBP CLAP).
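For a pairwise Ising model the log-likelihood gradient is simply the gap between data and model moments. A sketch using exact enumeration for a small system (in place of Gibbs sampling) and plain gradient ascent (the steepest descent baseline, not the rectified dynamics):

```python
import itertools
import numpy as np

def model_moments(J, h):
    """Exact <s_i> and <s_i s_j> for a small Ising model by enumeration."""
    n = len(h)
    states = np.array(list(itertools.product([-1.0, 1.0], repeat=n)))
    energy = states @ h + 0.5 * np.einsum('ki,ij,kj->k', states, J, states)
    p = np.exp(energy - energy.max())
    p /= p.sum()
    return p @ states, np.einsum('k,ki,kj->ij', p, states, states)

def fit_ising(data, n_steps=300, eta=0.1):
    """Moment matching by gradient ascent: dh ∝ <s>_data - <s>_model,
    dJ ∝ <ss>_data - <ss>_model."""
    n = data.shape[1]
    m_data, C_data = data.mean(axis=0), (data.T @ data) / len(data)
    J, h = np.zeros((n, n)), np.zeros(n)
    for _ in range(n_steps):
        m, C = model_moments(J, h)
        h += eta * (m_data - m)
        dJ = eta * (C_data - C)
        np.fill_diagonal(dJ, 0.0)             # s_i^2 = 1 carries no information
        J += dJ
    return J, h

rng = np.random.default_rng(3)
data = rng.choice([-1.0, 1.0], size=(500, 3), p=[0.3, 0.7])  # synthetic spins
J, h = fit_ising(data)
```

The fixed point of this dynamics matches the empirical moments exactly; the rectification the abstract describes changes the path to it, not the destination.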
Neitlich, Peter N; Ver Hoef, Jay M; Berryman, Shanti D; Mines, Anaka; Geiser, Linda H; Hasselbach, Linda M; Shiel, Alyssa E
2017-01-01
Spatial patterns of Zn, Pb and Cd deposition in Cape Krusenstern National Monument (CAKR), Alaska, adjacent to the Red Dog Mine haul road, were characterized in 2001 and 2006 using Hylocomium moss tissue as a biomonitor. Elevated concentrations of Cd, Pb, and Zn in moss tissue decreased logarithmically away from the haul road and the marine port. The metals concentrations in the two years were compared using Bayesian posterior predictions on a new sampling grid to which both data sets were fit. Posterior predictions were simulated 200 times both on a coarse grid of 2,357 points and by distance-based strata including subsets of these points. Compared to 2001, Zn and Pb concentrations in 2006 were 31 to 54% lower in the 3 sampling strata closest to the haul road (0-100, 100-2000 and 2000-4000 m). Pb decreased by 40% in the stratum 4,000-5,000 m from the haul road. Cd decreased significantly by 38% immediately adjacent to the road (0-100m), had an 89% probability of a small decrease 100-2000 m from the road, and showed moderate probabilities (56-71%) for increase at greater distances. There was no significant change over time (with probabilities all ≤ 85%) for any of the 3 elements in more distant reference areas (40-60 km). As in 2001, elemental concentrations in 2006 were higher on the north side of the road. Reductions in deposition have followed a large investment in infrastructure to control fugitive dust escapement at the mine and port sites, operational controls, and road dust mitigation. Fugitive dust escapement, while much reduced, is still resulting in elevated concentrations of Zn, Pb and Cd out to 5,000 m from the haul road. Zn and Pb levels were slightly above arctic baseline values in southern CAKR reference areas.
Ver Hoef, Jay M.; Berryman, Shanti D.; Mines, Anaka; Geiser, Linda H.; Hasselbach, Linda M.; Shiel, Alyssa E.
2017-01-01
Spatial patterns of Zn, Pb and Cd deposition in Cape Krusenstern National Monument (CAKR), Alaska, adjacent to the Red Dog Mine haul road, were characterized in 2001 and 2006 using Hylocomium moss tissue as a biomonitor. Elevated concentrations of Cd, Pb, and Zn in moss tissue decreased logarithmically away from the haul road and the marine port. The metals concentrations in the two years were compared using Bayesian posterior predictions on a new sampling grid to which both data sets were fit. Posterior predictions were simulated 200 times both on a coarse grid of 2,357 points and by distance-based strata including subsets of these points. Compared to 2001, Zn and Pb concentrations in 2006 were 31 to 54% lower in the 3 sampling strata closest to the haul road (0–100, 100–2000 and 2000–4000 m). Pb decreased by 40% in the stratum 4,000–5,000 m from the haul road. Cd decreased significantly by 38% immediately adjacent to the road (0–100m), had an 89% probability of a small decrease 100–2000 m from the road, and showed moderate probabilities (56–71%) for increase at greater distances. There was no significant change over time (with probabilities all ≤ 85%) for any of the 3 elements in more distant reference areas (40–60 km). As in 2001, elemental concentrations in 2006 were higher on the north side of the road. Reductions in deposition have followed a large investment in infrastructure to control fugitive dust escapement at the mine and port sites, operational controls, and road dust mitigation. Fugitive dust escapement, while much reduced, is still resulting in elevated concentrations of Zn, Pb and Cd out to 5,000 m from the haul road. Zn and Pb levels were slightly above arctic baseline values in southern CAKR reference areas. PMID:28542369
Sun, Hao; Zhou, Chi; Huang, Xiaoqin; Lin, Keqin; Shi, Lei; Yu, Liang; Liu, Shuyuan; Chu, Jiayou; Yang, Zhaoqing
2013-01-01
Tai people are widely distributed in Thailand, Laos and southwestern China and are a large population of Southeast Asia. Although most anthropologists and historians agree that modern Tai people are from southwestern China and northern Thailand, the place from which they historically migrated remains controversial. Three popular hypotheses have been proposed: a northern origin, a southern origin or an indigenous origin. We compared the genetic relationships between the Tai in China and their "siblings" to test these hypotheses by analyzing 10 autosomal microsatellites. The genetic data of 916 samples from 19 populations were analyzed in this survey. The autosomal STR data from 15 of the 19 populations came from our previous study (Lin et al., 2010). 194 samples from four additional populations were genotyped in this study: Han (Yunnan), Dai (Dehong), Dai (Yuxi) and Mongolian. The results of genetic distance comparisons, genetic structure analyses and admixture analyses all indicate that the populations implied by the northern origin hypothesis have large genetic distances from, and are clearly differentiated from, the Tai. The simulation-based approximate Bayesian computation (ABC) analysis also supports this: the posterior probability of the northern origin hypothesis is just 0.04 [95%CI: (0.01-0.06)]. Conversely, genetic relationships were very close between the Tai and the populations implied by the southern origin and indigenous origin hypotheses. Simulation-based ABC analyses were also used to distinguish the southern origin hypothesis from the indigenous origin hypothesis. The results indicate that the posterior probability of the southern origin hypothesis [0.640, 95%CI: (0.524-0.757)] is greater than that of the indigenous origin hypothesis [0.324, 95%CI: (0.211-0.438)]. Therefore, we propose that the genetic evidence does not support the hypothesis of northern origin.
Our genetic data indicate that the southern origin hypothesis has higher probability than the other two hypotheses statistically, suggesting that the Tai people most likely originated from southern China.
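The model-choice computation described above can be illustrated with a minimal rejection-ABC sketch. The simulators and the observed summary statistic below are hypothetical stand-ins (not the study's actual genetic model): with a uniform prior over models, each model's posterior probability is approximated by its share of accepted simulations.

```python
import random

def abc_model_probabilities(observed_stat, simulators, n_sims=20000, tol=0.05, seed=1):
    """Rejection-ABC model choice with a uniform prior over models: each
    model's posterior probability is approximated by its share of the
    accepted simulations."""
    rng = random.Random(seed)
    accepted = {name: 0 for name in simulators}
    names = list(simulators)
    for _ in range(n_sims):
        name = rng.choice(names)              # draw a model from the uniform prior
        stat = simulators[name](rng)          # simulate a summary statistic under it
        if abs(stat - observed_stat) < tol:   # accept simulations close to the data
            accepted[name] += 1
    total = sum(accepted.values())
    return {name: n / total for name, n in accepted.items()} if total else accepted

# Toy example: two "origin" models implying different mean genetic distances.
models = {
    "southern": lambda rng: rng.gauss(0.30, 0.05),
    "northern": lambda rng: rng.gauss(0.60, 0.05),
}
probs = abc_model_probabilities(observed_stat=0.32, simulators=models)
```

Because the observed statistic sits near the "southern" model's predictions, almost all accepted simulations come from that model, mirroring how posterior model probabilities separate in the study.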
Bayesian ionospheric multi-instrument 3D tomography
NASA Astrophysics Data System (ADS)
Norberg, Johannes; Vierinen, Juha; Roininen, Lassi
2017-04-01
The tomographic reconstruction of ionospheric electron densities is an inverse problem that cannot be solved without relatively strong regularising additional information. Especially the vertical electron density profile is determined predominantly by the regularisation. Despite its crucial role, the regularisation is often hidden in the algorithm as a numerical procedure without physical understanding. The Bayesian methodology provides an interpretable approach to the problem, as the regularisation can be given as a physically meaningful and quantifiable prior probability distribution. The prior distribution can be based on ionospheric physics, other available ionospheric measurements and their statistics. Updating the prior with measurements results in the posterior distribution, which carries all the available information combined. From the posterior distribution, the most probable state of the ionosphere can then be solved together with the corresponding probability intervals. Altogether, the Bayesian methodology provides an understanding of how strong the given regularisation is, what information is gained from the measurements and how reliable the final result is. In addition, the combination of different measurements and temporal development can be taken into account in a very intuitive way. However, a direct implementation of the Bayesian approach requires inversion of large covariance matrices, resulting in computational infeasibility. In the presented method, Gaussian Markov random fields are used to form sparse matrix approximations of the covariances. The approach makes the problem computationally feasible while retaining the probabilistic and physical interpretation. Here, the Bayesian method with Gaussian Markov random fields is applied to ionospheric 3D tomography over Northern Europe. Multi-instrument measurements are utilised from the TomoScand receiver network for Low Earth orbit beacon satellite signals, from GNSS receiver networks, and from EISCAT ionosondes and incoherent scatter radars.
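A minimal sketch of the computational idea, assuming a 1D field, a first-order random-walk prior and direct point observations (all drastic simplifications of the 3D tomography setting): because a GMRF prior is specified through a sparse precision matrix, the posterior mean is obtained by solving a sparse linear system instead of inverting any large covariance matrix.

```python
import numpy as np

def gmrf_posterior_mean(y, obs_idx, n, noise_var=0.1, smooth=100.0):
    """Posterior mean of a 1D field under a first-order random-walk GMRF prior.
    The prior precision Q = smooth * D^T D is tridiagonal (sparse); it is held
    densely here only for brevity. Only the posterior precision is assembled,
    never a dense covariance matrix."""
    D = np.eye(n - 1, n, 1) - np.eye(n - 1, n)       # first-difference operator
    Q = smooth * D.T @ D                             # sparse prior precision
    A = np.zeros((len(obs_idx), n))                  # point-observation operator
    A[np.arange(len(obs_idx)), obs_idx] = 1.0
    Q_post = Q + A.T @ A / noise_var + 1e-8 * np.eye(n)  # posterior precision
    return np.linalg.solve(Q_post, A.T @ y / noise_var)

# Reconstruct a smooth profile from three noisy point measurements.
x_hat = gmrf_posterior_mean(y=np.array([1.0, 2.0, 1.5]),
                            obs_idx=[10, 50, 90], n=100)
```

In practice a sparse solver (e.g. Cholesky on the banded posterior precision) makes the same computation feasible for very large 3D grids, which is the point of the GMRF approximation.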
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, S; Tianjin University, Tianjin; Hara, W
Purpose: MRI has a number of advantages over CT as a primary modality for radiation treatment planning (RTP). However, one key bottleneck problem still remains, which is the lack of electron density information in MRI. In this work, a reliable method to map electron density is developed by leveraging the differential contrast of multi-parametric MRI. Methods: We propose a probabilistic Bayesian approach for electron density mapping based on T1- and T2-weighted MRI, using multiple patients as atlases. For each voxel, we compute two conditional probabilities: (1) electron density given its image intensity on T1- and T2-weighted MR images, and (2) electron density given its geometric location in a reference anatomy. The two sources of information (image intensity and spatial location) are combined into a unifying posterior probability density function using the Bayesian formalism. The mean value of the posterior probability density function provides the estimated electron density. Results: We evaluated the method on 10 head and neck patients and performed leave-one-out cross validation (9 patients as atlases and the remaining 1 as test). The proposed method significantly reduced the errors in electron density estimation, with a mean absolute HU error of 138, compared with 193 for the T1-weighted intensity approach and 261 without density correction. For bone detection (HU > 200), the proposed method had an accuracy of 84% and a sensitivity of 73% at a specificity of 90% (AUC = 87%). In comparison, the AUC for bone detection is 73% and 50% using the intensity approach and without density correction, respectively. Conclusion: The proposed unifying method provides accurate electron density estimation and bone detection based on multi-parametric MRI of the head with highly heterogeneous anatomy. This could allow for accurate dose calculation and reference image generation for patient setup in MRI-based radiation treatment planning.
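The per-voxel fusion step can be illustrated under a Gaussian assumption (an assumption of this sketch; the abstract does not state the form of its conditional densities): the product of two Gaussian conditionals is again Gaussian, and its mean is the precision-weighted average of the two estimates.

```python
def fuse_gaussian_estimates(mu_int, var_int, mu_loc, var_loc):
    """Combine two Gaussian conditional estimates of a voxel's electron density
    (one from MR image intensity, one from atlas location) by precision
    weighting; the product density's mean is returned."""
    w_int, w_loc = 1.0 / var_int, 1.0 / var_loc
    return (w_int * mu_int + w_loc * mu_loc) / (w_int + w_loc)

# A confident intensity-based estimate dominates a vague location-based one
# (all numbers hypothetical, in HU-like units).
hu = fuse_gaussian_estimates(mu_int=300.0, var_int=25.0,
                             mu_loc=100.0, var_loc=400.0)
```

The fused value lands close to the low-variance intensity estimate, which is the intended behaviour of combining the two information sources.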
Menegaux, Aurore; Meng, Chun; Neitzel, Julia; Bäuml, Josef G; Müller, Hermann J; Bartmann, Peter; Wolke, Dieter; Wohlschläger, Afra M; Finke, Kathrin; Sorg, Christian
2017-04-15
Preterm birth is associated with an increased risk for lasting changes in both the cortico-thalamic system and attention; however, the link between cortico-thalamic and attention changes is as yet little understood. In preterm newborns, cortico-cortical and cortico-thalamic structural connectivity are distinctively altered, with increased local clustering for cortico-cortical and decreased integrity for cortico-thalamic connectivity. In preterm-born adults, among the various attention functions, visual short-term memory (vSTM) capacity is selectively impaired. We hypothesized distinct associations between vSTM capacity and the structural integrity of cortico-thalamic and cortico-cortical connections, respectively, in preterm-born adults. A whole-report paradigm of briefly presented letter arrays based on the computationally formalized Theory of Visual Attention (TVA) was used to quantify parameter vSTM capacity in 26 preterm- and 21 full-term-born adults. Fractional anisotropy (FA) of posterior thalamic radiations and the splenium of the corpus callosum obtained by diffusion tensor imaging were analyzed by tract-based spatial statistics and used as proxies for cortico-thalamic and cortico-cortical structural connectivity. The relationship between vSTM capacity and cortico-thalamic and cortico-cortical connectivity, respectively, was significantly modified by prematurity. In full-term-born adults, the higher FA in the right posterior thalamic radiation the higher vSTM capacity; in preterm-born adults this FA-vSTM-relationship was inversed. In the splenium, higher FA was correlated with higher vSTM capacity in preterm-born adults, whereas no significant relationship was evident in full-term-born adults. These results indicate distinct associations between cortico-thalamic and cortico-cortical integrity and vSTM capacity in preterm-and full-term-born adults. Data suggest compensatory cortico-cortical fiber re-organization for attention deficits after preterm delivery. 
Copyright © 2017 Elsevier Inc. All rights reserved.
Effect of fetal position on second-stage duration and labor outcome.
Senécal, Julie; Xiong, Xu; Fraser, William D
2005-04-01
To evaluate the effect of fetal position on 1) second-stage labor duration and 2) indicators of maternal and neonatal morbidity. A retrospective cohort study was conducted using a database from a previously reported randomized clinical trial. The data set includes 210 women with the fetus in a posterior position, 200 women with the fetus in a transverse position, and 1,198 women with the fetus in an anterior position. Mean durations of the second stage of labor for different fetal positions were compared using the Tukey studentized test. A multivariate logistic regression analysis was performed to examine the determinants of prolonged second-stage duration (≥ 3 hours). Kaplan-Meier survival curves were used to graph and compare the duration of the second stage of labor for spontaneous delivery according to the fetal position at full dilatation and study group. Fetal malposition at full dilatation was associated with a significantly increased risk of instrumental vaginal delivery, cesarean delivery, oxytocin administration before full cervical dilatation, episiotomy, severe perineal laceration, and maternal blood loss of more than 500 mL (all P values < .01). Compared with the occiput anterior positions, there were significant differences in the duration of the second stage of labor, with a mean of 3.1 hours (95% confidence interval [CI] 3.0-3.2) for occiput anterior positions, 3.6 hours (95% CI 3.3-3.9) for occiput transverse positions (P < .05), and 3.8 hours (95% CI 3.5-4.1) for occiput posterior positions (P < .05) in the delayed pushing group. For the early pushing group, means were 2.2 hours (95% CI 2.1-2.3) for occiput anterior positions, 2.5 hours (95% CI 2.3-2.8) for occiput transverse positions (P < .05), and 3.0 hours (95% CI 2.7-3.3) for occiput posterior positions (P < .05). Fetal malposition at full dilatation results in a higher risk of prolonged second stage of labor and increases maternal morbidity indicators. II-2.
NASA Astrophysics Data System (ADS)
Caticha, Ariel
2011-03-01
In this tutorial we review the essential arguments behind entropic inference. We focus on the epistemological notion of information and its relation to the Bayesian beliefs of rational agents. The problem of updating from a prior to a posterior probability distribution is tackled through an eliminative induction process that singles out the logarithmic relative entropy as the unique tool for inference. The resulting method of Maximum relative Entropy (ME) includes as special cases both MaxEnt and Bayes' rule, and therefore unifies the two themes of these workshops—the Maximum Entropy and the Bayesian methods—into a single general inference scheme.
A computer program for estimation from incomplete multinomial data
NASA Technical Reports Server (NTRS)
Credeur, K. R.
1978-01-01
Coding is given for maximum likelihood and Bayesian estimation of the vector p of multinomial cell probabilities from incomplete data. Also included is coding to calculate and approximate elements of the posterior mean and covariance matrices. The program is written in FORTRAN 4 language for the Control Data CYBER 170 series digital computer system with network operating system (NOS) 1.1. The program requires approximately 44000 octal locations of core storage. A typical case requires from 72 seconds to 92 seconds on CYBER 175 depending on the value of the prior parameter.
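For the complete-data case, the Bayesian estimate such a program computes reduces to the Dirichlet-multinomial posterior mean; a minimal sketch (the incomplete-data iteration of the actual FORTRAN program is not reproduced here):

```python
from fractions import Fraction

def dirichlet_posterior_mean(counts, prior):
    """Bayesian estimate of multinomial cell probabilities: with a
    Dirichlet(prior) prior and fully observed cell counts, the posterior is
    Dirichlet(prior + counts), whose mean is computed cell-wise below."""
    total = sum(counts) + sum(prior)
    return [Fraction(c + a, total) for c, a in zip(counts, prior)]

# Uniform Dirichlet(1, 1, 1) prior with observed counts 3, 2, 5:
p_hat = dirichlet_posterior_mean([3, 2, 5], [1, 1, 1])
```

With incomplete (partially classified) counts, the same posterior mean is typically obtained by iterating between allocating the incomplete observations and re-estimating, which is the harder case the program addresses.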
Facilitating normative judgments of conditional probability: frequency or nested sets?
Yamagishi, Kimihiko
2003-01-01
Recent probability judgment research contrasts two opposing views. Some theorists have emphasized the role of frequency representations in facilitating probabilistic correctness; opponents have noted that visualizing the probabilistic structure of the task sufficiently facilitates normative reasoning. In the current experiment, the following conditional probability task, an isomorph of the "Problem of Three Prisoners," was tested. "A factory manufactures artificial gemstones. Each gemstone has a 1/3 chance of being blurred, a 1/3 chance of being cracked, and a 1/3 chance of being clear. An inspection machine removes all cracked gemstones, and retains all clear gemstones. However, the machine removes 1/2 of the blurred gemstones. What is the chance that a gemstone is blurred after the inspection?" A 2 x 2 design was administered. The first variable was the use of frequency instruction. The second manipulation was the use of a roulette-wheel diagram that illustrated a "nested-sets" relationship between the prior and the posterior probabilities. Results from two experiments showed that frequency alone had modest effects, while the nested-sets instruction achieved a superior facilitation of normative reasoning. The third experiment compared the roulette-wheel diagram to tree diagrams that also showed the nested-sets relationship. The roulette-wheel diagram outperformed the tree diagrams in facilitation of probabilistic reasoning. Implications for understanding the nature of intuitive probability judgments are discussed.
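The normatively correct answer to the gemstone task follows directly from Bayes' rule, as the short calculation below shows:

```python
from fractions import Fraction

# Prior over gemstone condition, and the probability each kind is retained.
prior    = {"blurred": Fraction(1, 3), "cracked": Fraction(1, 3), "clear": Fraction(1, 3)}
retained = {"blurred": Fraction(1, 2), "cracked": Fraction(0), "clear": Fraction(1)}

# Bayes' rule: P(blurred | retained) = P(retained | blurred) P(blurred) / P(retained)
evidence = sum(prior[s] * retained[s] for s in prior)
posterior_blurred = prior["blurred"] * retained["blurred"] / evidence
```

The posterior is 1/3: half of the blurred stones survive (1/6 of all stones) out of the 1/2 of all stones retained, which is exactly the nested-sets relationship the roulette-wheel diagram visualizes.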
Lai, Tso-Ting; Yang, Chung-May
2017-05-18
To report findings and surgical outcomes of lamellar macular hole (LMH) or full-thickness macular hole (FTMH) accompanied by lamellar hole-associated epiretinal proliferation (LHEP) in eyes with high myopia (HM). Consecutive cases of HM with LMH or FTMH containing LHEP were retrospectively reviewed (study group, 43 cases). Cases of HM without LHEP (22) and those of non-HM with LHEP (30) served as Control A and B. The study group showed larger (928.7 ± 381.9 μm) and deeper (remained base thickness: 79.7 ± 23.7 μm) LMH retinal defect than that in Control A (466.2 ± 179.1 and 99.9 ± 24.9) and B (647.1 ± 346.7 and 99.1 ± 38.1). Lamellar hole-associated epiretinal proliferation in the study group had a higher rate of wide extension (42.3%) and growing along the posterior hyaloid (PH, 53.8%). Patients with LMH who underwent surgery in the study group and Control A showed limited best corrected visual acuity (BCVA) improvement (0-1 and 1-2 ETDRS lines, respectively), while Control B had significant improvement (4-5 lines). For full-thickness macular holes, the study group was the youngest (50.0 ± 11.4) and LHEP was more likely to grow on the posterior hyaloid (23.5%); the postoperative best corrected visual acuity, however, was similar to that in Control A (20/63-20/80). Lamellar hole-associated epiretinal proliferation in HM tended to be more widespread and adherent to the posterior hyaloid than in eyes without HM. Visual outcomes after LMH repair in eyes with LHEP and HM are less favorable than eyes with LHEP and without HM, but similar to eyes with HM and without LHEP.
Eriksrud, Ola; Federolf, Peter; Anderson, Patrick; Cabri, Jan
2018-01-01
Tests of dynamic postural control eliciting full-body three-dimensional joint movements in a systematic manner are scarce. The well-established star excursion balance test (SEBT) elicits primarily three-dimensional lower extremity joint movements with minimal trunk and no upper extremity joint movements. In response to these shortcomings we created the hand reach star excursion balance test (HSEBT) based on the SEBT reach directions. The aims of the current study were to 1) compare HSEBT and SEBT measurements, 2) compare joint movements elicited by the HSEBT to both SEBT joint movements and normative range of motion values published in the literature. Ten SEBT and HSEBT reaches for each foot were obtained while capturing full-body kinematics in twenty recreationally active healthy male subjects. HSEBT and SEBT areas and composite scores (sum of reaches) for total, anterior and posterior subsections and individual reaches were correlated. Total reach score comparisons showed fair to moderate correlations (r = .393 to .606), while anterior and posterior subsections comparisons had fair to good correlations (r = .269 to .823). Individual reach comparisons had no to good correlations (r = -.182 to .822) where lateral and posterior reaches demonstrated the lowest correlations (r = -.182 to .510). The HSEBT elicited more and significantly greater joint movements than the SEBT, except for hip external rotation, knee extension and plantarflexion. Comparisons to normative range of motion values showed that 3 of 18 for the SEBT and 8 of 22 joint movements for the HSEBT were within normative values. The findings suggest that the HSEBT can be used for the assessment of dynamic postural control and is particularly suitable for examining full-body functional mobility.
Häusler, Alexander Niklas; Oroz Artigas, Sergio; Trautner, Peter; Weber, Bernd
2016-01-01
People differ in the way they approach and handle choices with unsure outcomes. In this study, we demonstrate that individual differences in the neural processing of gains and losses relates to attentional differences in the way individuals search for information in gambles. Fifty subjects participated in two independent experiments. Participants first completed an fMRI experiment involving financial gains and losses. Subsequently, they performed an eye-tracking experiment on binary choices between risky gambles, each displaying monetary outcomes and their respective probabilities. We find that individual differences in gain and loss processing relate to attention distribution. Individuals with a stronger reaction to gains in the ventromedial prefrontal cortex paid more attention to monetary amounts, while a stronger reaction in the ventral striatum to losses was correlated with an increased attention to probabilities. Reaction in the posterior cingulate cortex to losses was also found to correlate with an increased attention to probabilities. Our data show that individual differences in brain activity and differences in information search processes are closely linked.
Development and neurophysiology of mentalizing.
Frith, Uta; Frith, Christopher D
2003-01-01
The mentalizing (theory of mind) system of the brain is probably in operation from ca. 18 months of age, allowing implicit attribution of intentions and other mental states. Between the ages of 4 and 6 years explicit mentalizing becomes possible, and from this age children are able to explain the misleading reasons that have given rise to a false belief. Neuroimaging studies of mentalizing have so far only been carried out in adults. They reveal a system with three components consistently activated during both implicit and explicit mentalizing tasks: medial prefrontal cortex (MPFC), temporal poles and posterior superior temporal sulcus (STS). The functions of these components can be elucidated, to some extent, from their role in other tasks used in neuroimaging studies. Thus, the MPFC region is probably the basis of the decoupling mechanism that distinguishes mental state representations from physical state representations; the STS region is probably the basis of the detection of agency, and the temporal poles might be involved in access to social knowledge in the form of scripts. The activation of these components in concert appears to be critical to mentalizing. PMID:12689373
Resolution analysis of marine seismic full waveform data by Bayesian inversion
NASA Astrophysics Data System (ADS)
Ray, A.; Sekar, A.; Hoversten, G. M.; Albertin, U.
2015-12-01
The Bayesian posterior density function (PDF) of earth models that fit full waveform seismic data conveys information on the uncertainty with which the elastic model parameters are resolved. In this work, we apply the trans-dimensional reversible jump Markov chain Monte Carlo method (RJ-MCMC) for the 1D inversion of noisy synthetic full-waveform seismic data in the frequency-wavenumber domain. While seismic full waveform inversion (FWI) is a powerful method for characterizing subsurface elastic parameters, the uncertainty in the inverted models has remained poorly known, if known at all, and is highly dependent on the initial model. The Bayesian method we use is trans-dimensional in that the number of model layers is not fixed, and flexible in that the layer boundaries are free to move around. The resulting parameterization does not require regularization to stabilize the inversion. Depth resolution is traded off with the number of layers, providing an estimate of uncertainty in elastic parameters (compressional and shear velocities Vp and Vs, as well as density) with depth. We find that in the absence of additional constraints, Bayesian inversion can result in a wide range of posterior PDFs on Vp, Vs and density. These PDFs range from being clustered around the true model to those that contain little resolution of any particular features other than those in the near surface, depending on the particular data and target geometry. We present results for a suite of different frequencies and offset ranges, examining the differences in the posterior model densities thus derived. Though these results are for a 1D earth, they are applicable to areas with simple, layered geology and provide valuable insight into the resolving capabilities of FWI, as well as highlight the challenges in solving a highly non-linear problem.
The RJ-MCMC method also presents a tantalizing possibility for extension to 2D and 3D Bayesian inversion of full waveform seismic data in the future, as it objectively tackles the problem of model selection (i.e., the number of layers or cells for parameterization), which could ease the computational burden of evaluating forward models with many parameters.
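The fixed-dimension Metropolis update at the core of such a sampler can be sketched as follows; the trans-dimensional birth/death moves that make RJ-MCMC "reversible jump", and the seismic forward model, are omitted, and the toy posterior on a single velocity parameter is hypothetical:

```python
import math, random

def metropolis(log_post, x0, step, n, seed=0):
    """Fixed-dimension Metropolis sampler: the core move underlying RJ-MCMC
    (trans-dimensional moves that add or delete layers are omitted here)."""
    rng = random.Random(seed)
    x, samples = x0, []
    lp = log_post(x)
    for _ in range(n):
        xp = x + rng.gauss(0.0, step)          # symmetric Gaussian proposal
        lpp = log_post(xp)
        if math.log(rng.random()) < lpp - lp:  # accept with prob min(1, ratio)
            x, lp = xp, lpp
        samples.append(x)
    return samples

# Toy posterior for one shear-velocity parameter: Gaussian around 2.5 km/s.
log_post = lambda v: -0.5 * ((v - 2.5) / 0.1) ** 2
chain = metropolis(log_post, x0=1.0, step=0.05, n=20000)
```

The histogram of the chain (after burn-in) approximates the posterior PDF, which is how the uncertainty estimates described in the abstract are read off.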
DOE Office of Scientific and Technical Information (OSTI.GOV)
Conn, A. R.; Parker, Q. A.; Zucker, D. B.
In 'A Bayesian Approach to Locating the Red Giant Branch Tip Magnitude (Part I)', a new technique was introduced for obtaining distances using the tip of the red giant branch (TRGB) standard candle. Here we describe a useful complement to the technique with the potential to further reduce the uncertainty in our distance measurements by incorporating a matched-filter weighting scheme into the model likelihood calculations. In this scheme, stars are weighted according to their probability of being true object members. We then re-test our modified algorithm using random-realization artificial data to verify the validity of the generated posterior probability distributions (PPDs) and proceed to apply the algorithm to the satellite system of M31, culminating in a three-dimensional view of the system. Further to the distributions thus obtained, we apply a satellite-specific prior on the satellite distances to weight the resulting distance posterior distributions, based on the halo density profile. Thus in a single publication, using a single method, a comprehensive coverage of the distances to the companion galaxies of M31 is presented, encompassing the dwarf spheroidals Andromedas I-III, V, IX-XXVII, and XXX along with NGC 147, NGC 185, M33, and M31 itself. Of these, the distances to Andromedas XXIV-XXVII and Andromeda XXX have never before been derived using the TRGB. Object distances are determined from high-resolution tip magnitude posterior distributions generated using the Markov Chain Monte Carlo technique and associated sampling of these distributions to take into account uncertainties in foreground extinction and the absolute magnitude of the TRGB as well as photometric errors. The distance PPDs obtained for each object both with and without the aforementioned prior are made available to the reader in tabular form. The large object coverage takes advantage of the unprecedented size and photometric depth of the Pan-Andromeda Archaeological Survey.
Finally, a preliminary investigation into the satellite density distribution within the halo is made using the obtained distance distributions. For simplicity, this investigation assumes a single power law for the density as a function of radius, with the slope of this power law examined for several subsets of the entire satellite sample.
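The matched-filter idea can be sketched generically: each star's log-likelihood contribution is scaled by its membership probability, so likely contaminants barely influence the inferred parameter. This toy uses a Gaussian term in place of the actual TRGB luminosity-function model, and all magnitudes and weights below are hypothetical:

```python
def weighted_loglike(data, weights, logpdf):
    """Matched-filter weighted likelihood: each datum's log-density is
    multiplied by its probability of being a true object member, so probable
    non-members contribute little to the model comparison."""
    return sum(w * logpdf(x) for x, w in zip(data, weights))

# Toy: score two candidate tip magnitudes against member-weighted star magnitudes.
stars   = [20.5, 20.6, 20.55, 23.0]   # last star is a likely field contaminant
weights = [0.9, 0.95, 0.9, 0.05]      # matched-filter membership probabilities
logpdf_at = lambda tip: lambda m: -0.5 * ((m - tip) / 0.2) ** 2
better = weighted_loglike(stars, weights, logpdf_at(20.55))
worse  = weighted_loglike(stars, weights, logpdf_at(21.5))
```

The candidate near the weighted cluster of member stars scores higher, while the down-weighted contaminant at magnitude 23.0 cannot drag the estimate away.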
Macular Hole Development After Vitrectomy for Floaters: A Case Report.
Appeltans, Andrea; Mura, Marco; Bamonte, Giulio
2017-12-01
The purpose of this report is to describe a case of macular hole development after vitrectomy for floaters with induction of posterior vitreous detachment. A 44-year-old otherwise healthy man presented with visually debilitating floaters in his right eye; these had been present for more than 2 years. Preoperative examination was unremarkable in both eyes, apart from some degree of vitreous degeneration in the right eye. Preoperative visual acuity was 20/20 bilaterally. A 25-gauge transconjunctival sutureless pars plana complete vitrectomy with induction of posterior vitreous detachment was performed in the right eye. Upon examination 1 month after surgery, a small full-thickness macular hole was detected in the right eye. Visual acuity was diminished to 20/80. The macular hole was closed after a second vitrectomy with internal limiting membrane peeling and gas tamponade. Macular hole development should be listed as a possible complication of vitrectomy for visually debilitating floaters when a posterior vitreous detachment is induced during surgery.
Action understanding in the superior temporal sulcus region.
Wyk, Brent C Vander; Hudac, Caitlin M; Carter, Elizabeth J; Sobel, David M; Pelphrey, Kevin A
2009-06-01
The posterior superior temporal sulcus (STS) region plays an important role in the perception of social acts, although its full role has not been completely clarified. This functional magnetic resonance imaging experiment examined activity in the STS region as participants viewed actions that were congruent or incongruent with intentions established by a previous emotional context. Participants viewed an actress express either a positive or a negative emotion toward one of two objects and then subsequently pick up one of them. If the object that was picked up had received positive regard, or if the object that was not picked up had received negative regard, the action was congruent; otherwise, the action was incongruent. Activity in the right posterior STS region was sensitive to the congruency between the action and the actress's emotional expression (i.e., STS activity was greater on incongruent than on congruent trials). These findings suggest that the posterior STS represents not only biological motion, but also how another person's motion is related to his or her intentions.
Estimating probabilities of reservoir storage for the upper Delaware River basin
Hirsch, Robert M.
1981-01-01
A technique for estimating conditional probabilities of reservoir system storage is described and applied to the upper Delaware River Basin. The results indicate that there is a 73 percent probability that the three major New York City reservoirs (Pepacton, Cannonsville, and Neversink) would be full by June 1, 1981, and only a 9 percent probability that storage would return to the ' drought warning ' sector of the operations curve sometime in the next year. In contrast, if restrictions are lifted and there is an immediate return to normal operating policies, the probability of the reservoir system being full by June 1 is 37 percent and the probability that storage would return to the ' drought warning ' sector in the next year is 30 percent. (USGS)
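Conditional storage probabilities of this kind are typically estimated by simulating many synthetic inflow traces forward from the current storage state; a toy Monte Carlo sketch with hypothetical inflow statistics and reservoir numbers (not the actual Delaware basin model or units):

```python
import random

def prob_fills(start, capacity, demand, n_months, n_traces=20000, seed=3):
    """Monte Carlo estimate of the conditional probability that the reservoir
    system refills within n_months, given current storage, by simulating
    synthetic monthly inflow traces."""
    rng = random.Random(seed)
    filled = 0
    for _ in range(n_traces):
        s = start
        for _ in range(n_months):
            inflow = max(0.0, rng.gauss(110.0, 40.0))         # hypothetical inflow model
            s = min(capacity, max(0.0, s + inflow - demand))   # mass balance with caps
            if s == capacity:
                filled += 1
                break
    return filled / n_traces

p = prob_fills(start=150.0, capacity=270.0, demand=100.0, n_months=6)
```

Re-running the simulation under a different operating policy (e.g. a lower demand while restrictions hold) is how the contrasting probabilities in the abstract would be produced.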
Seismic waveform inversion using neural networks
NASA Astrophysics Data System (ADS)
De Wit, R. W.; Trampert, J.
2012-12-01
Full waveform tomography aims to extract all available information on Earth structure and seismic sources from seismograms. The strongly non-linear nature of this inverse problem is often addressed through simplifying assumptions for the physical theory or data selection, thus potentially neglecting valuable information. Furthermore, the assessment of the quality of the inferred model is often lacking. This calls for the development of methods that fully appreciate the non-linear nature of the inverse problem, whilst providing a quantification of the uncertainties in the final model. We propose to invert seismic waveforms in a fully non-linear way by using artificial neural networks. Neural networks can be viewed as powerful and flexible non-linear filters. They are very common in speech, handwriting and pattern recognition. Mixture Density Networks (MDN) allow us to obtain marginal posterior probability density functions (pdfs) of all model parameters, conditioned on the data. An MDN can approximate an arbitrary conditional pdf as a linear combination of Gaussian kernels. Seismograms serve as input, Earth structure parameters are the so-called targets and network training aims to learn the relationship between input and targets. The network is trained on a large synthetic data set, which we construct by drawing many random Earth models from a prior model pdf and solving the forward problem for each of these models, thus generating synthetic seismograms. As a first step, we aim to construct a 1D Earth model. Training sets are constructed using the Mineos package, which computes synthetic seismograms in a spherically symmetric non-rotating Earth by summing normal modes. We train a network on the body waveforms present in these seismograms. Once the network has been trained, it can be presented with new unseen input data, in our case the body waves in real seismograms. 
We thus obtain the posterior pdf which represents our final state of knowledge given the information in the training set and the real data.
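An MDN represents each marginal posterior pdf as a weighted sum of Gaussian kernels, and evaluating that mixture is straightforward; the mixture parameters below are hypothetical stand-ins for trained network outputs:

```python
import math

def mdn_pdf(x, weights, means, sigmas):
    """Evaluate a Mixture Density Network's output distribution, a linear
    combination of Gaussian kernels, at a candidate parameter value x."""
    return sum(w * math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))
               for w, m, s in zip(weights, means, sigmas))

# Hypothetical network output for one Earth-structure parameter: a bimodal
# marginal posterior, as can arise in non-linear waveform inversion.
pdf_lo = mdn_pdf(4.0, weights=[0.7, 0.3], means=[4.0, 5.5], sigmas=[0.2, 0.4])
pdf_hi = mdn_pdf(5.5, weights=[0.7, 0.3], means=[4.0, 5.5], sigmas=[0.2, 0.4])
```

The mixture's ability to represent multimodal, non-Gaussian posteriors is what makes MDNs attractive for this strongly non-linear inverse problem.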
Lower Lateral Cartilage Cephalic Malposition: An Over-Diagnosed Entity.
Hafezi, Farhad; Naghibzadeh, Bijan; Kazemi Ashtiani, Abbas
2018-06-01
Lower lateral cartilage malposition is represented by anterior convexity of the lower lateral cartilage (LLC) dome with posterior pinch, as defined by Sheen and Constantian. This anatomic variation consists of cephalic, or upward and inward, rotation of the lateral crura, particularly in bulbous tip patients. In most cases, "bulbous pinch" LLC is positioned toward the medial canthus, not laterally, so it is referred to as cephalic displacement. Accordingly, it is recommended to caudally displace cartilage in the majority of rhinoplasty cases in which this variation is seen. The purpose of this paper is to measure the exact angle of the lateral crura with fixed reference points on the face. We drew and marked LLC contours and vertical/horizontal lines in 40 consecutive rhinoplasty cases. We then divided them into two groups: (1) bulbous pinch and (2) flat LLCs. The right- and left-sided LLC angles to the midline and horizontal lines were measured and compared to assess whether there was any significant difference between the two subgroups. There was no significant difference between the angles of LLC rotation in the bulbous and flat LLC groups, measured both vertically and horizontally. Based on our findings, although cephalic malposition of LLCs may be present in some patients, in the majority of cases the etiology of nasal lateral wall pinching is not cephalic displacement of the lateral crura but, most probably, severe convexity of the posterior and lateral crura. Accordingly, cephalic malposition is an uncommon anatomic variation of LLCs that previous reports have nonetheless described at high frequency (60-70% of rhinoplasty cases). This finding may help to correct this deformity into a normal anatomic configuration. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 .
Ide, Kazuki; Kawasaki, Yohei; Akutagawa, Maiko; Yamada, Hiroshi
2017-02-01
The aim of this study is to analyze the data obtained from a randomized trial on the prevention of influenza by gargling with green tea, which gave nonsignificant results based on frequentist approaches, by using Bayesian approaches. The posterior proportion, with 95% credible interval (CrI), of influenza in each group was calculated. The Bayesian index θ is the probability that a hypothesis is true. In this case, θ is the probability that the hypothesis that green tea gargling reduced influenza compared with water gargling is true. Univariate and multivariate logistic regression analyses were also performed by using the Markov chain Monte Carlo method. The full analysis set included 747 participants. During the study period, influenza occurred in 44 participants (5.9%). The difference between the two independent binomial proportions was -0.019 (95% CrI, -0.054 to 0.015; θ = 0.87). The partial regression coefficients in the univariate analysis were -0.35 (95% CrI, -1.00 to 0.24) with use of a uniform prior and -0.34 (95% CrI, -0.96 to 0.27) with use of a Jeffreys prior. In the multivariate analysis, the values were -0.37 (95% CrI, -0.96 to 0.30) and -0.36 (95% CrI, -1.03 to 0.21), respectively. The difference between the two independent binomial proportions was less than 0, and θ was greater than 0.85. Therefore, green tea gargling may slightly reduce influenza compared with water gargling. This analysis suggests that green tea gargling can be an additional preventive measure for use with other pharmaceutical and nonpharmaceutical measures and indicates the need for additional studies to confirm the effect of green tea gargling.
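The Bayesian comparison of two binomial proportions described above can be sketched with conjugate Beta priors and Monte Carlo sampling. The per-group counts below are hypothetical stand-ins (the trial reports only the pooled 44/747 figure here), and `posterior_diff` is an illustrative helper, not code from the study; setting `a = b = 0.5` gives the Jeffreys prior instead of the uniform one.

```python
import random
import statistics

random.seed(0)

def posterior_diff(x1, n1, x2, n2, a=1.0, b=1.0, draws=20000):
    """Monte Carlo posterior for p1 - p2 with Beta(a, b) priors on each proportion."""
    diffs = [random.betavariate(x1 + a, n1 - x1 + b)
             - random.betavariate(x2 + a, n2 - x2 + b)
             for _ in range(draws)]
    diffs.sort()
    lo = diffs[int(0.025 * draws)]          # 95% credible interval bounds
    hi = diffs[int(0.975 * draws)]
    theta = sum(d < 0 for d in diffs) / draws   # P(p1 < p2): "hypothesis is true"
    return statistics.mean(diffs), (lo, hi), theta

# hypothetical counts: 19/384 influenza cases with tea gargling, 25/363 with water
mean_d, cri, theta = posterior_diff(19, 384, 25, 363)
```

With these made-up counts the posterior mean difference lands near the -0.019 reported in the abstract, and θ is the posterior probability that tea gargling has the lower influenza proportion.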
Dhandapani, Sivashanmugam; Srinivasan, Anirudh
2016-01-01
Triple spinal dysraphism is extremely rare. There are published reports of multiple discrete neural tube defects with intervening normal segments that are explained by the multisite closure theory of primary neurulation, having an association with Chiari malformation Type II consistent with the unified theory of McLone. The authors report on a 1-year-old child with contiguous myelomeningocele and lipomyelomeningocele centered on Type I split cord malformation with Chiari malformation Type II and hydrocephalus. This composite anomaly is probably due to select abnormalities of the neurenteric canal during gastrulation, with a contiguous cascading impact on both dysjunction of the neural tube and closure of the neuropore, resulting in a small posterior fossa, probably bringing the unified theory of McLone closer to the unified theory of Pang.
NASA Astrophysics Data System (ADS)
Jiang, Runqing; Barnett, Rob B.; Chow, James C. L.; Chen, Jeff Z. Y.
2007-03-01
The aim of this study is to investigate the effects of internal organ motion on IMRT treatment planning of prostate patients using a spatial dose gradient and probability density function. Spatial dose distributions were generated from a Pinnacle3 planning system using a co-planar, five-field intensity modulated radiation therapy (IMRT) technique. Five plans were created for each patient using equally spaced beams but shifting the angular displacement of the beam by 15° increments. Dose profiles taken through the isocentre in anterior-posterior (A-P), right-left (R-L) and superior-inferior (S-I) directions for IMRT plans were analysed by exporting RTOG file data from Pinnacle. The convolution of the 'static' dose distribution D0(x, y, z) and probability density function (PDF), denoted as P(x, y, z), was used to analyse the combined effect of repositioning error and internal organ motion. Organ motion leads to an enlarged beam penumbra. The amount of percentage mean dose deviation (PMDD) depends on the dose gradient and organ motion probability density function. Organ motion dose sensitivity was defined by the rate of change in PMDD with standard deviation of motion PDF and was found to increase with the maximum dose gradient in anterior, posterior, left and right directions. Due to common inferior and superior field borders of the field segments, the sharpest dose gradient will occur in the inferior or both superior and inferior penumbrae. Thus, prostate motion in the S-I direction produces the highest dose difference. The PMDD is within 2.5% when standard deviation is less than 5 mm, but the PMDD is over 2.5% in the inferior direction when standard deviation is higher than 5 mm in the inferior direction. Verification of prostate organ motion in the inferior directions is essential. The margin of the planning target volume (PTV) significantly impacts on the confidence of tumour control probability (TCP) and level of normal tissue complication probability (NTCP). 
Smaller margins help to reduce the dose to normal tissues, but may compromise the dose coverage of the PTV. Lower rectal NTCP can be achieved by either a smaller margin or a steeper dose gradient between PTV and rectum. With the same DVH control points, the rectum has lower complication in the seven-beam technique used in this study because of the steeper dose gradient between the target volume and rectum. The relationship between dose gradient and rectal complication can be used to evaluate IMRT treatment planning. The dose gradient analysis is a powerful tool to improve IMRT treatment plans and can be used for QA checking of treatment plans for prostate patients.
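The convolution of a "static" dose distribution with a motion probability density function described in this abstract can be illustrated in one dimension. The toy profile, grid spacing, and `blur_dose` helper below are hypothetical, and the mean absolute deviation is only a crude stand-in for the paper's PMDD metric.

```python
import math

def blur_dose(dose, dx, sigma):
    """Convolve a sampled 1-D dose profile with a Gaussian motion PDF."""
    half = int(round(4 * sigma / dx))                      # truncate kernel at 4 sigma
    w = [math.exp(-0.5 * ((i * dx) / sigma) ** 2) for i in range(-half, half + 1)]
    s = sum(w)
    w = [v / s for v in w]                                 # normalize the discrete PDF
    n = len(dose)
    out = []
    for i in range(n):
        acc = 0.0
        for k, wk in enumerate(w):
            j = min(max(i + k - half, 0), n - 1)           # clamp at profile edges
            acc += wk * dose[j]
        out.append(acc)
    return out

# hypothetical static profile: 100% plateau with sharp penumbrae, 1 mm grid
dose = [0.0] * 20 + [100.0] * 40 + [0.0] * 20
blurred = blur_dose(dose, dx=1.0, sigma=5.0)               # 5 mm motion SD

# crude proxy for percentage mean dose deviation (PMDD)
mean_dev = sum(abs(b - d) for b, d in zip(blurred, dose)) / len(dose)
```

The blurred profile keeps its plateau but shows the enlarged penumbra the abstract describes; a steeper static gradient or a larger motion SD both increase the deviation, which is the sensitivity relationship the paper analyzes.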
Maximum entropy approach to statistical inference for an ocean acoustic waveguide.
Knobles, D P; Sagers, J D; Koch, R A
2012-02-01
A conditional probability distribution suitable for estimating the statistical properties of ocean seabed parameter values inferred from acoustic measurements is derived from a maximum entropy principle. The specification of the expectation value for an error function constrains the maximization of an entropy functional. This constraint determines the sensitivity factor (β) to the error function of the resulting probability distribution, which is a canonical form that provides a conservative estimate of the uncertainty of the parameter values. From the conditional distribution, marginal distributions for individual parameters can be determined from integration over the other parameters. The approach is an alternative means of obtaining the posterior probability distribution that avoids an intermediary determination of the likelihood function followed by an application of Bayes' rule. In this paper the expectation value that specifies the constraint is determined from the values of the error function for the model solutions obtained from a sparse number of data samples. The method is applied to ocean acoustic measurements taken on the New Jersey continental shelf. The marginal probability distribution for the values of the sound speed ratio at the surface of the seabed and the source levels of a towed source are examined for different geoacoustic model representations. © 2012 Acoustical Society of America
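The canonical (maximum-entropy) distribution described above, with the sensitivity factor β fixed by an expectation constraint on the error function, can be sketched on a toy parameter grid. The grid, misfit values, and target expectation below are hypothetical, not the paper's geoacoustic data.

```python
import math

# hypothetical misfit (error function) over a 1-D grid of sound-speed ratios
params = [1.00 + 0.01 * i for i in range(21)]
errors = [(p - 1.08) ** 2 for p in params]

def canonical(beta):
    """Maximum-entropy distribution: p_i proportional to exp(-beta * E_i)."""
    w = [math.exp(-beta * e) for e in errors]
    z = sum(w)
    return [v / z for v in w]

def expected_error(beta):
    return sum(p * e for p, e in zip(canonical(beta), errors))

# fix beta so that <E> matches a target expectation estimated from data samples;
# <E> decreases monotonically in beta (d<E>/dbeta = -Var(E)), so bisection suffices
target = 0.001
lo, hi = 0.0, 1e6
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if expected_error(mid) > target:
        lo = mid
    else:
        hi = mid
beta = 0.5 * (lo + hi)
posterior = canonical(beta)    # conservative distribution over parameter values
```

For a multi-parameter problem the same construction applies over a product grid, and the marginal for one parameter is obtained by summing `posterior` over the others, as the abstract describes.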
Statistical physics of medical diagnostics: Study of a probabilistic model.
Mashaghi, Alireza; Ramezanpour, Abolfazl
2018-03-01
We study a diagnostic strategy which is based on the anticipation of the diagnostic process by simulation of the dynamical process starting from the initial findings. We show that such a strategy could result in more accurate diagnoses compared to a strategy that is solely based on the direct implications of the initial observations. We demonstrate this by employing the mean-field approximation of statistical physics to compute the posterior disease probabilities for a given subset of observed signs (symptoms) in a probabilistic model of signs and diseases. A Monte Carlo optimization algorithm is then used to maximize an objective function of the sequence of observations, which favors the more decisive observations resulting in more polarized disease probabilities. We see how the observed signs change the nature of the macroscopic (Gibbs) states of the sign and disease probability distributions. The structure of these macroscopic states in the configuration space of the variables affects the quality of any approximate inference algorithm (so the diagnostic performance) which tries to estimate the sign-disease marginal probabilities. In particular, we find that the simulation (or extrapolation) of the diagnostic process is helpful when the disease landscape is not trivial and the system undergoes a phase transition to an ordered phase.
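The posterior disease probabilities discussed above can be illustrated exactly on a tiny sign-disease model with conditionally independent signs (the paper itself uses a mean-field approximation on a much larger model). The probability tables below are hypothetical.

```python
# hypothetical model: P(sign present | disease) for 2 diseases and 3 signs
p_sign = {
    'D1': [0.9, 0.7, 0.1],
    'D2': [0.2, 0.6, 0.8],
}
prior = {'D1': 0.5, 'D2': 0.5}

def posterior(observed):
    """Posterior disease probabilities; observed maps sign index -> present (bool).
    Unobserved signs marginalize out under conditional independence."""
    post = {}
    for d, probs in p_sign.items():
        like = prior[d]
        for s, present in observed.items():
            like *= probs[s] if present else 1.0 - probs[s]
        post[d] = like
    z = sum(post.values())
    return {d: v / z for d, v in post.items()}

p1 = posterior({0: True})              # initial finding only
p2 = posterior({0: True, 2: False})    # a second, decisive observation polarizes
```

Choosing the next sign to observe so as to maximize this polarization is, in miniature, the objective the paper's Monte Carlo optimization pursues.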
Daniel Goodman’s empirical approach to Bayesian statistics
Gerrodette, Tim; Ward, Eric; Taylor, Rebecca L.; Schwarz, Lisa K.; Eguchi, Tomoharu; Wade, Paul; Himes Boor, Gina
2016-01-01
Bayesian statistics, in contrast to classical statistics, uses probability to represent uncertainty about the state of knowledge. Bayesian statistics has often been associated with the idea that knowledge is subjective and that a probability distribution represents a personal degree of belief. Dr. Daniel Goodman considered this viewpoint problematic for issues of public policy. He sought to ground his Bayesian approach in data, and advocated the construction of a prior as an empirical histogram of “similar” cases. In this way, the posterior distribution that results from a Bayesian analysis combined comparable previous data with case-specific current data, using Bayes’ formula. Goodman championed such a data-based approach, but he acknowledged that it was difficult in practice. If based on a true representation of our knowledge and uncertainty, Goodman argued that risk assessment and decision-making could be an exact science, despite the uncertainties. In his view, Bayesian statistics is a critical component of this science because a Bayesian analysis produces the probabilities of future outcomes. Indeed, Goodman maintained that the Bayesian machinery, following the rules of conditional probability, offered the best legitimate inference from available data. We give an example of an informative prior in a recent study of Steller sea lion spatial use patterns in Alaska.
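Goodman's empirical-prior construction can be sketched on a grid: build a smoothed histogram of rates from "similar" past cases, then combine it with a binomial likelihood for the current case via Bayes' formula. All numbers below are hypothetical, and the kernel smoothing is one plausible reading of his histogram prior, not his exact recipe.

```python
import math

# hypothetical outcome rates from "similar" past cases -> empirical prior
past_rates = [0.62, 0.70, 0.55, 0.66, 0.71, 0.60, 0.68, 0.64]
grid = [i / 20 for i in range(21)]          # candidate rates 0.00 .. 1.00

def empirical_prior(grid, samples, bw=0.05):
    """Smoothed histogram of past cases: each case contributes a Gaussian kernel."""
    w = [sum(math.exp(-0.5 * ((g - s) / bw) ** 2) for s in samples) for g in grid]
    z = sum(w)
    return [v / z for v in w]

def binomial_like(grid, k, n):
    return [g ** k * (1.0 - g) ** (n - k) for g in grid]

prior = empirical_prior(grid, past_rates)
like = binomial_like(grid, k=12, n=20)      # hypothetical case-specific data
post = [p * l for p, l in zip(prior, like)]
z = sum(post)
post = [v / z for v in post]                # Bayes' formula on the grid
```

The posterior blends the comparable previous data with the current case's observations, which is exactly the combination the abstract attributes to Goodman's approach.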
Verhaeghe, C; Parot-Schinkel, E; Bouet, P E; Madzou, S; Biquard, F; Gillard, P; Descamps, P; Legendre, G
2018-02-14
The frequency of posterior presentations (occiput of the fetus towards the sacrum of the mother) in labor is approximately 20% and, of this, 5% remain posterior until the end of labor. These posterior presentations are associated with higher rates of cesarean section and instrumental delivery. Manual rotation of a posterior position in order to rotate the fetus to an anterior position has been proposed in order to reduce the rate of instrumental fetal delivery. No randomized study has compared the efficacy of this procedure to expectant management. We therefore propose a monocentric, interventional, randomized, prospective study to show that manual rotation of the posterior position at full dilation achieves a superior rate of vaginal delivery compared with expectant management. Ultrasound imaging of the presentation will be performed at full dilation on all the singleton pregnancies for which a clinical suspicion of a posterior position was raised at more than 37 weeks' gestation (WG). In the event of an ultrasound confirming a posterior position, the patient will be randomized into an experimental group (manual rotation) or a control group (expectant management with no rotation). For a power of 90%, under the hypothesis that vaginal deliveries will increase by 20% and allowing for 10% of patients lost to follow-up, 238 patients will need to be included in the study. The primary endpoint will be the rate of spontaneous vaginal deliveries (expected rate without rotation: 60%). The secondary endpoints will be the rate of fetal extractions (cesarean or instrumental) and the maternal and fetal morbidity and mortality rates. The intent-to-treat study will be conducted over 24 months. Recruitment started in February 2017. To achieve the primary objective, we will perform a test comparing the number of spontaneous vaginal deliveries in the two groups using Pearson's chi-squared test (provided that the conditions for using this test are satisfied in terms of numbers).
In the event that this test cannot be performed, we will use Fisher's exact test. Given that the efficacy of manual rotation has not been proven with a high level of evidence, the practice of this technique is not systematically recommended by scholarly societies and is, therefore, rarely performed by obstetric gynecologists. If our hypothesis regarding the superiority of manual rotation is confirmed, our study will help change delivery practices in cases of posterior fetal position. An increase in the rate of vaginal delivery will help decrease the short- and long-term rates of morbidity and mortality following cesarean section. Manual rotation is a simple and effective method with a success rate of almost 90%. Several preliminary studies have shown that manual rotation is associated with reduced rates of fetal extraction and maternal complications: Shaffer has shown that the cesarean section rate is lower in patients for whom manual rotation succeeds (2%), with a 9% rate of cesarean sections when manual rotation is performed versus 41% when it is not performed. Le Ray has shown that manual rotation significantly reduces the rate of delivery by fetal extraction (23.2% vs 38.7%, p < 0.01). However, manual rotation is not systematically performed due to the absence of proof of its efficacy in retrospective studies and quasi-experimental before/after studies. ClinicalTrials.gov, Identifier: NCT03009435 . Registered on 30 December 2016.
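The planned primary analysis, a Pearson chi-squared comparison of two proportions, can be sketched as follows. The outcome counts are hypothetical, since the trial has not reported results; `chi2_two_proportions` is an illustrative helper.

```python
import math

def chi2_two_proportions(a, b, c, d):
    """Pearson chi-squared statistic (1 df) for a 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    # for 1 df, the chi-squared survival function equals erfc(sqrt(chi2 / 2))
    p = math.erfc(math.sqrt(chi2 / 2.0))
    return chi2, p

# hypothetical counts: spontaneous vaginal delivery vs not, 119 patients per arm
chi2, p = chi2_two_proportions(84, 35, 71, 48)
```

If any expected cell count fell below the usual threshold of 5, Fisher's exact test would replace this statistic, as the protocol specifies.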
Fang, Chao-Hua; Chang, Chia-Ming; Lai, Yu-Shu; Chen, Wen-Chuan; Song, Da-Yong; McClean, Colin J; Kao, Hao-Yuan; Qu, Tie-Bing; Cheng, Cheng-Kung
2015-11-01
Excellent clinical and kinematical performance is commonly reported after medial pivot knee arthroplasty. However, there is conflicting evidence as to whether the posterior cruciate ligament should be retained. This study simulated how the posterior cruciate ligament, post-cam mechanism and medial tibial insert morphology may affect postoperative kinematics. After the computational intact knee model was validated against the motion of a normal knee, four TKA models were built based on a medial pivot prosthesis: PS type, modified PS type, CR type with PCL retained and CR type with PCL sacrificed. Anteroposterior translation and axial rotation of the femoral condyles on the tibia during 0°-135° knee flexion were analyzed. There was no significant difference in kinematics between the intact knee model and reported data for a normal knee. In all TKA models, normal motion was almost fully restored, except for the CR type with PCL sacrificed. Sacrificing the PCL produced paradoxical anterior femoral translation and tibial external rotation during full flexion. Either the posterior cruciate ligament or a post-cam mechanism is necessary for medial pivot prostheses to regain normal kinematics after total knee arthroplasty. The morphology of the medial tibial insert was also shown to produce a small but noticeable effect on knee kinematics. Level of Evidence: V.
Chemtob, Claude M; Pat-Horenczyk, Ruth; Madan, Anita; Pitman, Seth R; Wang, Yanping; Doppelt, Osnat; Burns, Kelly Dugan; Abramovitz, Robert; Brom, Daniel
2011-12-01
In this study, we examined the relationships among terrorism exposure, functional impairment, suicidal ideation, and probable partial or full posttraumatic stress disorder (PTSD) from exposure to terrorism in adolescents continuously exposed to this threat in Israel. A convenience sample of 2,094 students, aged 12 to 18, was drawn from 10 Israeli secondary schools. In terms of demographic factors, older age was associated with increased risk for suicidal ideation, OR = 1.33, 95% CI [1.09, 1.62], p < .01, but was protective against probable partial or full PTSD, OR = 0.72, 95% CI [0.54, 0.95], p < .05; female gender was associated with greater likelihood of probable partial or full PTSD, OR = 1.57, 95% CI [1.02, 2.40], p < .05. Exposure to trauma due to terrorism was associated with increased risk for each of the measured outcomes including probable partial or full PTSD, functional impairment, and suicidal ideation. When age, gender, level of exposure to terrorism, probable partial or full PTSD, and functional impairment were examined together, only terrorism exposure and functional impairment were associated with suicidal ideation. This study underscores the importance and feasibility of examining exposure to terrorism and functional impairment as risk factors for suicidal ideation. Copyright © 2011 International Society for Traumatic Stress Studies.
Sex estimation of the tibia in modern Turkish: A computed tomography study.
Ekizoglu, Oguzhan; Er, Ali; Bozdag, Mustafa; Akcaoglu, Mustafa; Can, Ismail Ozgur; García-Donas, Julieta G; Kranioti, Elena F
2016-11-01
The utilization of computed tomography is beneficial for the analysis of skeletal remains and it has important advantages for anthropometric studies. The present study investigated the morphometry of the left tibia using CT images of a contemporary Turkish population. Seven parameters were measured on 203 individuals (124 males and 79 females) aged 19-92 years. The first objective of this study was to provide population-specific sex estimation equations for the contemporary Turkish population based on CT images. A second objective was to test the sex estimation formulae for Southern Europeans by Kranioti and Apostol (2015). Univariate discriminant functions resulted in classification accuracy that ranged from 66 to 86%. The best single variable was found to be upper epiphyseal breadth (86%) followed by lower epiphyseal breadth (85%). Multivariate discriminant functions resulted in classification accuracy for cross-validated data ranging from 79 to 86%. Applying the multivariate sex estimation formulae for Southern Europeans (SE) by Kranioti and Apostol to our sample resulted in very high classification accuracy, ranging from 81 to 88%. In addition, 35.5-47% of the total Turkish sample is correctly classified with over 95% posterior probability, which is actually higher than that reported for the original sample (25-43%). We conclude that the tibia is a very useful bone for sex estimation in the contemporary Turkish population. Moreover, our test results support the hypothesis that the SE formulae are sufficient for the contemporary Turkish population and can be used safely for criminal investigations when posterior probabilities are over 95%. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
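Univariate discriminant classification with a posterior probability threshold, as used above, can be sketched with Gaussian class-conditional densities. The group means and standard deviations below are hypothetical, not the study's published statistics.

```python
import math

# hypothetical group statistics (mm) for upper epiphyseal breadth of the tibia
MALE = (78.0, 3.5)      # mean, standard deviation
FEMALE = (69.0, 3.0)

def gauss(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2.0 * math.pi))

def sex_posterior(x, p_male=0.5):
    """Posterior probability of 'male' from one measurement, equal priors by default."""
    lm = gauss(x, *MALE) * p_male
    lf = gauss(x, *FEMALE) * (1.0 - p_male)
    pm = lm / (lm + lf)
    label = 'male' if pm >= 0.5 else 'female'
    confident = max(pm, 1.0 - pm) >= 0.95     # report only above 95% posterior
    return label, pm, confident

label, pm, confident = sex_posterior(80.0)
```

The 95% gate mirrors the study's recommendation to rely on the formulae in casework only when the posterior probability exceeds that level; measurements near the group overlap fail the gate and stay indeterminate.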
Particle Filter with State Permutations for Solving Image Jigsaw Puzzles
Yang, Xingwei; Adluru, Nagesh; Latecki, Longin Jan
2016-01-01
We deal with an image jigsaw puzzle problem, which is defined as reconstructing an image from a set of square and non-overlapping image patches. It is known that a general instance of this problem is NP-complete, and it is also challenging for humans, since in the considered setting the original image is not given. Recently a graphical model has been proposed to solve this and related problems. The target label probability function is then maximized using loopy belief propagation. We also formulate the problem as maximizing a label probability function and use exactly the same pairwise potentials. Our main contribution is a novel inference approach in the sampling framework of Particle Filter (PF). Usually in the PF framework it is assumed that the observations arrive sequentially, e.g., the observations are naturally ordered by their time stamps in the tracking scenario. Based on this assumption, the posterior density over the corresponding hidden states is estimated. In the jigsaw puzzle problem all observations (puzzle pieces) are given at once without any particular order. Therefore, we relax the assumption of having ordered observations and extend the PF framework to estimate the posterior density by exploring different orders of observations and selecting the most informative permutations of observations. This significantly broadens the scope of applications of the PF inference. Our experimental results demonstrate that the proposed inference framework significantly outperforms the loopy belief propagation in solving the image jigsaw puzzle problem. In particular, the extended PF inference triples the accuracy of the label assignment compared to that using loopy belief propagation. PMID:27795660
GLISTR: Glioma Image Segmentation and Registration
Pohl, Kilian M.; Bilello, Michel; Cirillo, Luigi; Biros, George; Melhem, Elias R.; Davatzikos, Christos
2015-01-01
We present a generative approach for simultaneously registering a probabilistic atlas of a healthy population to brain magnetic resonance (MR) scans showing glioma and segmenting the scans into tumor as well as healthy tissue labels. The proposed method is based on the expectation maximization (EM) algorithm that incorporates a glioma growth model for atlas seeding, a process which modifies the original atlas into one with tumor and edema adapted to best match a given patient's images. The modified atlas is registered into the patient space and utilized for estimating the posterior probabilities of various tissue labels. EM iteratively refines the estimates of the posterior probabilities of tissue labels, the deformation field and the tumor growth model parameters. Hence, in addition to segmentation, the proposed method results in atlas registration and a low-dimensional description of the patient scans through estimation of tumor model parameters. We validate the method by automatically segmenting 10 MR scans and comparing the results to those produced by clinical experts and two state-of-the-art methods. The resulting segmentations of tumor and edema outperform the results of the reference methods, and achieve accuracy similar to that of a second human rater. We additionally apply the method to 122 patient scans and report the estimated tumor model parameters and their relations with segmentation and registration results. Based on the results from this patient population, we construct a statistical atlas of the glioma by inverting the estimated deformation fields to warp the tumor segmentations of patient scans into a common space. PMID:22907965
Optimal observation network design for conceptual model discrimination and uncertainty reduction
NASA Astrophysics Data System (ADS)
Pham, Hai V.; Tsai, Frank T.-C.
2016-02-01
This study expands the Box-Hill discrimination function to design an optimal observation network to discriminate conceptual models and, in turn, identify a most favored model. The Box-Hill discrimination function measures the expected decrease in Shannon entropy (for model identification) before and after the optimal design for one additional observation. This study modifies the discrimination function to account for multiple future observations that are assumed spatiotemporally independent and Gaussian-distributed. Bayesian model averaging (BMA) is used to incorporate existing observation data and quantify future observation uncertainty arising from conceptual and parametric uncertainties in the discrimination function. In addition, the BMA method is adopted to predict future observation data in a statistical sense. The design goal is to find optimal locations and the fewest additional data by maximizing the Box-Hill discrimination function value subject to a posterior model probability threshold. The optimal observation network design is illustrated using a groundwater study in Baton Rouge, Louisiana, to collect additional groundwater heads from USGS wells. The sources of uncertainty creating multiple groundwater models are geological architecture, boundary condition, and fault permeability architecture. Impacts of considering homoscedastic and heteroscedastic future observation data and the sources of uncertainties on potential observation areas are analyzed. Results show that heteroscedasticity should be considered in the design procedure to account for various sources of future observation uncertainty. After the optimal design is obtained and the corresponding data are collected for model updating, total variances of head predictions can be significantly reduced by identifying a model with a superior posterior model probability.
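The expected decrease in Shannon entropy that the Box-Hill function measures can be sketched for a single binary future observation under BMA. The model probabilities and per-model predictive probabilities below are hypothetical; the paper works with Gaussian-distributed head observations rather than this simplified binary case.

```python
import math

def entropy(p):
    return -sum(pi * math.log(pi) for pi in p if pi > 0.0)

# hypothetical: three conceptual models with current posterior probabilities,
# and each model's predictive probability that a binary observation equals 1
p_model = [0.5, 0.3, 0.2]
p_obs = [0.9, 0.5, 0.1]

def expected_entropy_after(p_model, p_obs):
    """Expected Shannon entropy of model probabilities after one observation."""
    h = 0.0
    for y in (0, 1):
        like = [q if y == 1 else 1.0 - q for q in p_obs]
        p_y = sum(w * l for w, l in zip(p_model, like))   # BMA predictive of y
        post = [w * l / p_y for w, l in zip(p_model, like)]
        h += p_y * entropy(post)
    return h

# expected entropy decrease: the quantity maximized over candidate locations
gain = entropy(p_model) - expected_entropy_after(p_model, p_obs)
```

A candidate observation whose predictions differ sharply across models (as here) yields a large gain; one that all models predict identically yields zero, so the design favors the former.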
Multiple Chronic Conditions and Hospitalizations Among Recipients of Long-Term Services and Supports
Van Cleave, Janet H.; Egleston, Brian L.; Abbott, Katherine M.; Hirschman, Karen B.; Rao, Aditi; Naylor, Mary D.
2016-01-01
Background Among older adults receiving long-term services and supports (LTSS), debilitating hospitalizations are a pervasive clinical and research problem. Multiple chronic conditions (MCC) are prevalent in LTSS recipients. However, the combination of MCC and diseases associated with hospitalizations of LTSS recipients is unclear. Objective The purpose of this analysis was to determine the association between classes of MCC in newly enrolled LTSS recipients and the number of hospitalizations over a one-year period following enrollment. Methods This report is based on a secondary analysis of extant data from a longitudinal cohort study of 470 new recipients of LTSS, ages 60 years and older, receiving services in assisted living facilities, nursing homes, or through home- and community-based services. Using baseline chronic conditions reported in medical records, latent class analysis (LCA) was used to identify classes of MCC and posterior probabilities of membership in each class. Poisson regressions were used to estimate the relative ratio between posterior probabilities of class membership and number of hospitalizations during the 3-month period prior to the start of LTSS (baseline) and then every three months forward through 12 months. Results Three latent MCC-based classes named Cardiopulmonary, Cerebrovascular/Paralysis, and All Other Conditions were identified. The Cardiopulmonary class was associated with elevated numbers of hospitalization compared to the All Other Conditions class (relative ratio [RR] = 1.88, 95% CI [1.33, 2.65], p < .001). Conclusion Older LTSS recipients with a combination of MCCs that includes cardiopulmonary conditions have increased risk for hospitalization. PMID:27801713
Altermatt, Anna; Gaetano, Laura; Magon, Stefano; Häring, Dieter A; Tomic, Davorka; Wuerfel, Jens; Radue, Ernst-Wilhelm; Kappos, Ludwig; Sprenger, Till
2018-05-29
There is a limited correlation between white matter (WM) lesion load as determined by magnetic resonance imaging and disability in multiple sclerosis (MS). The reasons for this so-called clinico-radiological paradox are diverse and may, at least partly, relate to the fact that not just the overall lesion burden, but also the exact anatomical location of lesions predict the severity and type of disability. We aimed at studying the relationship between lesion distribution and disability using a voxel-based lesion probability mapping approach in a very large dataset of MS patients. T2-weighted lesion masks of 2348 relapsing-remitting MS patients were spatially normalized to standard stereotaxic space by non-linear registration. Relations between supratentorial WM lesion locations and disability measures were assessed using a non-parametric ANCOVA (Expanded Disability Status Scale [EDSS]; Multiple Sclerosis Functional Composite, and subscores; Modified Fatigue Impact Scale) or multinomial ordinal logistic regression (EDSS functional subscores). Data from 1907 (81%) patients were included in the analysis because of successful registration. The lesion mapping showed similar areas to be associated with the different disability scales: periventricular regions in temporal, frontal, and limbic lobes were predictive, mainly affecting the posterior thalamic radiation, the anterior, posterior, and superior parts of the corona radiata. In summary, significant associations between lesion location and clinical scores were found in periventricular areas. Such lesion clusters appear to be associated with impairment of different physical and cognitive abilities, probably because they affect commissural and long projection fibers, which are relevant WM pathways supporting many different brain functions.
Bayesian Model Selection in Geophysics: The Evidence
NASA Astrophysics Data System (ADS)
Vrugt, J. A.
2016-12-01
Bayesian inference has found widespread application and use in science and engineering to reconcile Earth system models with data, including prediction in space (interpolation), prediction in time (forecasting), assimilation of observations and deterministic/stochastic model output, and inference of the model parameters. Per Bayes' theorem, the posterior probability, P(H|D), of a hypothesis, H, given the data, D, is equivalent to the product of its prior probability, P(H), and likelihood, L(H|D), divided by a normalization constant, P(D). In geophysics, the hypothesis, H, often constitutes a description (parameterization) of the subsurface for some entity of interest (e.g. porosity, moisture content). The normalization constant, P(D), is not required for inference of the subsurface structure, yet is of great value for model selection. Unfortunately, it is not particularly easy to estimate P(D) in practice. Here, I will introduce the various building blocks of a general-purpose method which provides robust and unbiased estimates of the evidence, P(D). This method uses multi-dimensional numerical integration of the posterior (parameter) distribution. I will then illustrate this new estimator by application to three competing subsurface models (hypotheses) using GPR travel time data from the South Oyster Bacterial Transport Site in Virginia, USA. The three subsurface models differ in their treatment of the porosity distribution and use (a) horizontal layering with fixed layer thicknesses, (b) vertical layering with fixed layer thicknesses and (c) a multi-Gaussian field. The results of the new estimator are compared against the brute force Monte Carlo method and the Laplace-Metropolis method.
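For a toy one-dimensional problem, every quantity in Bayes' theorem, including the normalization constant P(D), can be computed by brute-force numerical integration; a minimal sketch (not the multi-dimensional estimator the abstract describes):

```python
import numpy as np

def gaussian(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

# Grid over the hypothesis (parameter) theta.
theta = np.linspace(-10.0, 10.0, 4001)
dx = theta[1] - theta[0]

prior = gaussian(theta, 0.0, 2.0)        # P(H)
likelihood = gaussian(1.5, theta, 1.0)   # L(H|D) for a single datum y = 1.5

evidence = np.sum(likelihood * prior) * dx   # P(D), the normalization constant
posterior = likelihood * prior / evidence    # P(H|D)

print(evidence)   # analytically N(1.5; 0, sqrt(5)), ~0.1425
```

Comparing the evidence of competing models computed this way is exactly the model-selection use of P(D); the difficulty in practice is that the grid integration above scales exponentially with dimension.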
Use of ECT in the presence of acute bilateral posterior vitreous detachment.
Taye, Tesema; Dobranici, Letitia; Fisher, Mark; Cullum, Sarah
2018-04-01
We describe a case of acute bilateral posterior vitreous detachment (PVD) in a 71-year-old female, which developed during a course of electroconvulsive therapy (ECT) for treatment-resistant depression. The risks and benefits of continuing ECT were assessed and the patient completed the full course of 16 ECT treatments without further ophthalmic complications. As the incidence of PVD increases with age, and ECT is used more frequently in elderly people with depression, we recommend paying attention to ophthalmic symptoms as part of the routine clinical monitoring of ECT side effects. If ophthalmic symptoms occur, the risks and benefits of ECT need to be weighed up including consultation with an ophthalmologist.
Bilateral anterior segment dysgenesis with the presumed Peters' anomaly in a cat.
Park, Sangwan; Kim, Kiwoong; Kim, Youngbeum; Seo, Kangmoon
2018-02-20
A seven-month-old female domestic shorthaired cat was presented for buphthalmos in the right eye and corneal cloudiness in the left eye. Full ophthalmic examinations were performed on both eyes and enucleation was done for the right nonvisual eye. Congenital glaucoma caused by anterior segment dysgenesis was confirmed for the right eye. In the left eye, slit-lamp examination revealed focal corneal edema with several iris strands running from the iris collarette to the affected posterior corneal surface. A circular posterior corneal defect was suggested to be the cause of the edema. Additionally, goniodysgenesis was identified. Taken together, these findings suggested a diagnosis of Peters' anomaly, a subtype of anterior segment dysgenesis, in the left eye.
[Foster Modification of Full Tendon Transposition of Vertical Rectus Muscles for Sixth Nerve Palsy].
Heede, Santa
2018-04-11
Since 1907 a variety of muscle transposition procedures for the treatment of abducens nerve palsy has been established internationally. Full tendon transposition of the vertical rectus muscles was initially described by O'Connor in 1935 and then augmented by Foster in 1997 with the addition of posterior fixation sutures on the vertical rectus muscles. Full tendon transposition augmented by Foster belongs to the group of the most powerful surgical techniques to improve abduction. The purpose of this study was to evaluate the results of full tendon vertical rectus transposition augmented with a lateral fixation suture in patients with abducens nerve palsy. Full tendon transposition of the vertical rectus muscles augmented with a posterior fixation suture was performed in 2014 on five patients with abducens nerve palsy. Two of the patients received Botox injections in the medial rectus muscle: one of them three months after the surgery and the other during the surgery. One of the patients had undergone combined surgery of the horizontal muscles one year before. In the three patients who underwent a pure transposition procedure, the preoperative deviation at distance (mean: + 56.6 pd; range: + 40 to + 80 pd) was reduced by a mean of 39.6 pd (range 34 to 50 pd), and abduction was improved by a mean of 3 mm (range 2 to 4 mm). In the other two patients, who underwent additional surgery of the horizontal muscles besides the transposition procedure, the preoperative deviation at distance (+ 25 and + 126 pd, respectively) was reduced by 20 and 81 pd, respectively. Abduction was improved by 4 and 8 mm, respectively. After surgery two patients developed a vertical deviation with a maximum of 4 pd. None of the patients had complications or signs of anterior segment ischemia. Elevation and/or depression was only marginally affected. There was no diplopia in up- or downgaze.
Full tendon transposition of vertical rectus muscles, augmented with lateral posterior fixation suture is a safe and effective treatment method for abducens nerve palsy and in most cases recession of the medial rectus can be avoided. Upgaze and downgaze are affected very slightly. Diverse studies have shown that the risk of anterior segment ischemia is low. Georg Thieme Verlag KG Stuttgart · New York.
Relationship of Tear Size and Location to Fatty Degeneration of the Rotator Cuff
Kim, H. Mike; Dahiya, Nirvikar; Teefey, Sharlene A.; Keener, Jay D.; Galatz, Leesa M.; Yamaguchi, Ken
2010-01-01
Background: Fatty degeneration of the rotator cuff muscles may have detrimental effects on both anatomical and functional outcomes following shoulder surgery. The purpose of this study was to investigate the relationship between tear geometry and muscle fatty degeneration in shoulders with a deficient rotator cuff. Methods: Ultrasonograms of both shoulders of 262 patients were reviewed to assess the type of rotator cuff tear and fatty degeneration in the supraspinatus and infraspinatus muscles. The 251 shoulders with a full-thickness tear underwent further evaluation for tear size and location. The relationship of tear size and location to fatty degeneration of the supraspinatus and infraspinatus muscles was investigated with use of statistical comparisons and regression models. Results: Fatty degeneration was found almost exclusively in shoulders with a full-thickness rotator cuff tear. Of the 251 shoulders with a full-thickness tear, eighty-seven (34.7%) had fatty degeneration in either the supraspinatus or infraspinatus, or both. Eighty-two (32.7%) of the 251 full-thickness tears had a distance of 0 mm between the biceps tendon and anterior margin of the tear. Ninety percent of the full-thickness tears with fatty degeneration in both muscles had a distance of 0 mm posterior from the biceps, whereas only 9% of those without fatty degeneration had a distance of 0 mm. Tears with fatty degeneration had significantly greater width and length than those without fatty degeneration (p < 0.0001). Tears with fatty degeneration had a significantly shorter distance posterior from the biceps than those without fatty degeneration (p < 0.0001). The distance posterior from the biceps was found to be the most important predictor for supraspinatus fatty degeneration, whereas tear width and length were found to be the most important predictors for infraspinatus fatty degeneration. 
Conclusions: Fatty degeneration of the rotator cuff muscles is closely associated with tear size and location. The findings of this study suggest that loss of integrity of the anterior supraspinatus tendon plays an important role in the development of fatty degeneration. Patients with full-thickness tears that extend through this area may benefit from earlier surgical intervention if fatty degeneration has not already occurred. Additionally, the findings suggest the importance of secure fixation and healing of the anterior aspect of the supraspinatus with surgical repair. PMID:20360505
NASA Astrophysics Data System (ADS)
Zhang, Jiaxin; Shields, Michael D.
2018-01-01
This paper addresses the problem of uncertainty quantification and propagation when data for characterizing probability distributions are scarce. We propose a methodology wherein the full uncertainty associated with probability model form and parameter estimation is retained and efficiently propagated. This is achieved by applying the information-theoretic multimodel inference method to identify plausible candidate probability densities and the associated probabilities that each model is the best in the Kullback-Leibler sense. The joint parameter densities for each plausible model are then estimated using Bayes' rule. We then propagate this full set of probability models by estimating an optimal importance sampling density that is representative of all plausible models, propagating this density, and reweighting the samples according to each of the candidate probability models. This is in contrast with conventional methods that try to identify a single probability model that encapsulates the full uncertainty caused by lack of data and consequently underestimate uncertainty. The result is a complete probabilistic description of both aleatory and epistemic uncertainty achieved with several orders of magnitude reduction in computational cost. It is shown how the model can be updated to adaptively accommodate added data and added candidate probability models. The method is applied to uncertainty analysis of plate buckling strength, where it is demonstrated how dataset size affects the confidence (or lack thereof) we can place in statistical estimates of response when data are lacking.
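The reweighting step can be sketched numerically with two hypothetical normal candidate models and invented model weights (the real method uses information-theoretic weights and an optimized importance density):

```python
import numpy as np

rng = np.random.default_rng(0)

def npdf(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

# Two plausible probability models for the same variable, with
# hypothetical multimodel weights (e.g. from AIC differences).
mus, sds, wts = np.array([10.0, 10.5]), np.array([1.0, 1.5]), np.array([0.7, 0.3])

# Draw one sample set from the mixture of all candidates ...
comp = rng.choice(2, size=20000, p=wts)
x = rng.normal(mus[comp], sds[comp])
q = sum(w * npdf(x, m, s) for m, s, w in zip(mus, sds, wts))

# ... then reweight the same samples under each candidate model and
# combine the per-model estimates of P(X > 12).
p_fail = sum(w * np.mean(npdf(x, m, s) / q * (x > 12.0))
             for m, s, w in zip(mus, sds, wts))
print(p_fail)   # analytic value ~0.064
```

The point of the single-sample-set design is that adding data only changes the weights, not the samples, so the propagation cost is paid once.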
Gismervik, Sigmund Ø; Drogset, Jon O; Granviken, Fredrik; Rø, Magne; Leivseth, Gunnar
2017-01-25
Physical examination tests of the shoulder (PETS) are clinical examination maneuvers designed to aid the assessment of shoulder complaints. Despite more than 180 PETS described in the literature, evidence of their validity and usefulness in diagnosing the shoulder is questioned. This meta-analysis aims to use the diagnostic odds ratio (DOR) to evaluate how much PETS shift overall probability and to rank the test performance of single PETS in order to aid the clinician's choice of which tests to use. This study adheres to the principles outlined in the Cochrane guidelines and the PRISMA statement. A fixed effect model was used to assess the overall diagnostic validity of PETS by pooling DORs for different PETS with a similar biomechanical rationale when possible. Single PETS were assessed and ranked by DOR. Clinical performance was assessed by sensitivity, specificity, accuracy and likelihood ratio. Six thousand nine hundred abstracts and 202 full-text articles were assessed for eligibility; 20 articles were eligible and data from 11 articles could be included in the meta-analysis. All PETS for SLAP (superior labral anterior posterior) lesions pooled gave a DOR of 1.38 [1.13, 1.69]. The Supraspinatus test for any full-thickness rotator cuff tear obtained the highest DOR of 9.24 (sensitivity 0.74, specificity 0.77). The Compression-Rotation test obtained the highest DOR (6.36) among single PETS for SLAP lesions (sensitivity 0.43, specificity 0.89) and the Hawkins test obtained the highest DOR (2.86) for impingement syndrome (sensitivity 0.58, specificity 0.67). No single PETS showed superior clinical test performance. The clinical performance of single PETS is limited. However, when the different PETS for SLAP lesions were pooled, we found a statistically significant change in post-test probability, indicating overall statistical validity.
We suggest that clinicians choose their PETS among those with the highest pooled DOR and, to assess validity in their own specific clinical settings, review the inclusion criteria of the included primary studies. We further propose that future studies on the validity of PETS use randomized research designs, which rely less on well-established gold standard reference tests and efficient treatment options, rather than the accuracy design.
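For reference, the DOR is a one-line computation from sensitivity and specificity; note that values recomputed from the rounded sensitivity/specificity differ slightly from the pooled 2x2-table estimates quoted above:

```python
def diagnostic_odds_ratio(sens, spec):
    """Odds of a positive test in the diseased divided by the odds
    of a positive test in the non-diseased."""
    return (sens / (1.0 - sens)) / ((1.0 - spec) / spec)

# Supraspinatus test for any full-thickness rotator cuff tear:
# from the rounded sensitivity/specificity this gives ~9.53, versus
# the pooled-count estimate of 9.24 reported in the text.
print(round(diagnostic_odds_ratio(0.74, 0.77), 2))
```

A DOR of 1 (e.g. sensitivity = specificity = 0.5) means the test does not shift the odds of disease at all, which is why the pooled SLAP value of 1.38 indicates only a modest overall effect.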
Topics in Bayesian Hierarchical Modeling and its Monte Carlo Computations
NASA Astrophysics Data System (ADS)
Tak, Hyung Suk
The first chapter addresses a Beta-Binomial-Logit model that is a Beta-Binomial conjugate hierarchical model with covariate information incorporated via a logistic regression. Various researchers in the literature have unknowingly used improper posterior distributions or have given incorrect statements about posterior propriety because checking posterior propriety can be challenging due to the complicated functional form of a Beta-Binomial-Logit model. We derive data-dependent necessary and sufficient conditions for posterior propriety within a class of hyper-prior distributions that encompass those used in previous studies. Frequency coverage properties of several hyper-prior distributions are also investigated to see when and whether Bayesian interval estimates of random effects meet their nominal confidence levels. The second chapter deals with a time delay estimation problem in astrophysics. When the gravitational field of an intervening galaxy between a quasar and the Earth is strong enough to split light into two or more images, the time delay is defined as the difference between their travel times. The time delay can be used to constrain cosmological parameters and can be inferred from the time series of brightness data of each image. To estimate the time delay, we construct a Gaussian hierarchical model based on a state-space representation for irregularly observed time series generated by a latent continuous-time Ornstein-Uhlenbeck process. Our Bayesian approach jointly infers model parameters via a Gibbs sampler. We also introduce a profile likelihood of the time delay as an approximation of its marginal posterior distribution. The last chapter specifies a repelling-attracting Metropolis algorithm, a new Markov chain Monte Carlo method to explore multi-modal distributions in a simple and fast manner. 
This algorithm is essentially a Metropolis-Hastings algorithm with a proposal that consists of a downhill move in density that aims to make local modes repelling, followed by an uphill move in density that aims to make local modes attracting. The downhill move is achieved via a reciprocal Metropolis ratio so that the algorithm prefers downward movement. The uphill move does the opposite using the standard Metropolis ratio which prefers upward movement. This down-up movement in density increases the probability of a proposed move to a different mode.
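A simplified sketch of the down-up proposal on a bimodal one-dimensional target; the exact acceptance correction of the published algorithm is omitted, so this illustrates the mechanics rather than reproducing the method:

```python
import numpy as np

rng = np.random.default_rng(1)

def target(x):
    # Bimodal example density (unnormalized): two well-separated modes.
    return np.exp(-0.5 * (x + 3.0) ** 2) + np.exp(-0.5 * (x - 3.0) ** 2)

def ram_step(x, sigma=1.0, eps=1e-12):
    # Downhill move: a reciprocal Metropolis ratio prefers lower density.
    while True:
        y = x + sigma * rng.normal()
        if rng.uniform() < min(1.0, (target(x) + eps) / (target(y) + eps)):
            break
    # Uphill move: the standard Metropolis ratio prefers higher density.
    while True:
        z = y + sigma * rng.normal()
        if rng.uniform() < min(1.0, (target(z) + eps) / (target(y) + eps)):
            break
    # Final accept/reject (the exact correction term of the published
    # algorithm is omitted in this sketch).
    if rng.uniform() < min(1.0, target(z) / target(x)):
        return z
    return x

x, chain = -3.0, []
for _ in range(3000):
    x = ram_step(x)
    chain.append(x)
chain = np.array(chain)
```

The down move lets the chain step into the low-density valley between modes, from which the up move can climb into either mode, which is why the composite proposal crosses barriers that a plain random-walk Metropolis chain rarely does.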
Menon, Mani; Dalela, Deepansh; Jamil, Marcus; Diaz, Mireya; Tallman, Christopher; Abdollah, Firas; Sood, Akshay; Lehtola, Linda; Miller, David; Jeong, Wooju
2018-05-01
We report a 1-year update of functional urinary and sexual recovery, oncologic outcomes and postoperative complications in patients who completed a randomized controlled trial comparing posterior (Retzius sparing) with anterior robot-assisted radical prostatectomy. A total of 120 patients with clinically low-intermediate risk prostate cancer were randomized to undergo robot-assisted radical prostatectomy via the posterior and anterior approach in 60 each. Surgery was performed by a single surgical team at an academic institution. An independent third party ascertained urinary and sexual function outcomes preoperatively, and 3, 6 and 12 months after surgery. Oncologic outcomes consisted of positive surgical margins and biochemical recurrence-free survival. Biochemical recurrence was defined as 2 postoperative prostate specific antigen values of 0.2 ng/ml or greater. Median age of the cohort was 61 years and median followup was 12 months. At 12 months in the anterior vs posterior prostatectomy groups there were no statistically significant differences in the urinary continence rate (0 to 1 security pad per day in 93.3% vs 98.3%, p = 0.09), 24-hour pad weight (median 12 vs 7.5 gm, p = 0.3), erection sufficient for intercourse (69.2% vs 86.5%) or postoperative Sexual Health Inventory for Men score 17 or greater (44.6% vs 44.1%). In the posterior vs anterior prostatectomy groups a nonfocal positive surgical margin was found in 11.7% vs 8.3%, biochemical recurrence-free survival probability was 0.84 vs 0.93 and postoperative complications developed in 18.3% vs 11.7%. Among patients with clinically low-intermediate risk prostate cancer randomized to anterior (Menon) or posterior (Bocciardi) approach robot-assisted radical prostatectomy the differences in urinary continence seen at 3 months were muted at the 12-month followup. Sexual function recovery, postoperative complication and biochemical recurrence rates were comparable 1 year postoperatively. 
Copyright © 2018 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
Smith, Bruce W; Mitchell, Derek G V; Hardin, Michael G; Jazbec, Sandra; Fridberg, Daniel; Blair, R James R; Ernst, Monique
2009-01-15
Economic decision-making involves the weighting of magnitude and probability of potential gains/losses. While previous work has examined the neural systems involved in decision-making, there is a need to understand how the parameters associated with decision-making (e.g., magnitude of expected reward, probability of expected reward and risk) modulate activation within these neural systems. In the current fMRI study, we modified the monetary wheel of fortune (WOF) task [Ernst, M., Nelson, E.E., McClure, E.B., Monk, C.S., Munson, S., Eshel, N., et al. (2004). Choice selection and reward anticipation: an fMRI study. Neuropsychologia 42(12), 1585-1597.] to examine in 25 healthy young adults the neural responses to selections of different reward magnitudes, probabilities, or risks. Selection of high, relative to low, reward magnitude increased activity in insula, amygdala, middle and posterior cingulate cortex, and basal ganglia. Selection of low-probability, as opposed to high-probability reward, increased activity in anterior cingulate cortex, as did selection of risky, relative to safe reward. In summary, decision-making that did not involve conflict, as in the magnitude contrast, recruited structures known to support the coding of reward values, and those that integrate motivational and perceptual information for behavioral responses. In contrast, decision-making under conflict, as in the probability and risk contrasts, engaged the dorsal anterior cingulate cortex whose role in conflict monitoring is well established. However, decision-making under conflict failed to activate the structures that track reward values per se. Thus, the presence of conflict in decision-making seemed to significantly alter the pattern of neural responses to simple rewards. In addition, this paradigm further clarifies the functional specialization of the cingulate cortex in processes of decision-making.
Nongpiur, Monisha E; Haaland, Benjamin A; Perera, Shamira A; Friedman, David S; He, Mingguang; Sakata, Lisandro M; Baskaran, Mani; Aung, Tin
2014-01-01
To develop a score along with an estimated probability of disease for detecting angle closure based on anterior segment optical coherence tomography (AS OCT) imaging. Cross-sectional study. A total of 2047 subjects 50 years of age and older were recruited from a community polyclinic in Singapore. All subjects underwent standardized ocular examination including gonioscopy and imaging by AS OCT (Carl Zeiss Meditec). Customized software (Zhongshan Angle Assessment Program) was used to measure AS OCT parameters. Complete data were available for 1368 subjects. Data from the right eyes were used for analysis. A stepwise logistic regression model with the Akaike information criterion was used to generate a score that then was converted to an estimated probability of the presence of gonioscopic angle closure, defined as the inability to visualize the posterior trabecular meshwork for at least 180 degrees on nonindentation gonioscopy. Of the 1368 subjects, 295 (21.6%) had gonioscopic angle closure. The angle closure score was calculated from the shifted linear combination of the AS OCT parameters. The score can be converted to an estimated probability of having angle closure using the relationship: estimated probability = e^score/(1 + e^score), where e is the base of the natural exponential function. The score performed well in a second independent sample of 178 angle-closure subjects and 301 normal controls, with an area under the receiver operating characteristic curve of 0.94. A score derived from a single AS OCT image, coupled with an estimated probability, provides an objective platform for the detection of angle closure. Copyright © 2014 Elsevier Inc. All rights reserved.
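The quoted score-to-probability conversion is the standard logistic transform and can be transcribed directly:

```python
import math

def angle_closure_probability(score):
    """Logistic transform used by the paper: p = e^score / (1 + e^score)."""
    return math.exp(score) / (1.0 + math.exp(score))

print(angle_closure_probability(0.0))   # 0.5: a score of 0 is even odds
```

Because the transform is monotone, ranking eyes by score and ranking them by estimated probability are equivalent; the probability form is simply easier to communicate clinically.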
Venous Graft for Full-thickness Palpebral Reconstruction
Sanna, Marco Pietro Giuseppe; Maxia, Sara; Esposito, Salvatore; Di Giulio, Stefano; Sartore, Leonardo
2015-01-01
Summary: Full-thickness palpebral reconstruction is a challenge for most surgeons. The complex structures composing the eyelid must be reconstructed with care for both functional and cosmetic reasons. Different methods to reconstruct either the anterior or the posterior lamella, based on grafts or flaps, can be found in the literature. Most patients involved in this kind of surgery are elderly. It is important to use easy and fast procedures to minimize the length of the operation and its complications. In our department, our practice was to reconstruct the anterior lamella by means of a Tenzel or a Mustardé flap, whereas for the posterior lamella we previously utilized a chondromucosal graft harvested from the nasal septum. These procedures, however, required general anesthesia and a long operative time. We therefore started using a vein graft for the posterior lamella. In this article, we present a series of 9 patients who underwent complex palpebral reconstruction for oncological reasons. In 5 patients (group A), we reconstructed the tarsoconjunctival layer with a chondromucosal graft, whereas in 4 patients (group B), we used a propulsive vein graft. Follow-up ranged from 10 to 20 months. Patient satisfaction was high, and there was no recurrence in the series. In group A, we had more complications, including ectropion and septal perforations, whereas in group B, the operation was faster and we noted minor complications. In conclusion, the use of a propulsive vein to reconstruct the tarsoconjunctival layer was a reliable, safe, and fast procedure that can be considered in complex palpebral reconstructions. PMID:26034651
Zarafshan, Hadi; Khaleghi, Ali; Mohammadi, Mohammad Reza; Moeini, Mahdi; Malmir, Nastaran
2016-01-01
The aim of this study was to investigate electroencephalogram (EEG) dynamics using complexity analysis in children with attention-deficit/hyperactivity disorder (ADHD) compared with healthy control children performing a cognitive task. Thirty 7-12-year-old children meeting Diagnostic and Statistical Manual of Mental Disorders-Fifth Edition (DSM-5) criteria for ADHD and 30 healthy control children underwent an EEG evaluation during a cognitive task, and Lempel-Ziv complexity (LZC) values were computed. There were no significant differences between the ADHD and control groups in age or gender. The mean LZC of the ADHD children was significantly larger than that of the healthy children over the right anterior and right posterior regions during the cognitive performance. In the ADHD group, complexity of the right hemisphere was higher than that of the left hemisphere, whereas in the normal group the complexity of the left hemisphere was higher than that of the right hemisphere. Although fronto-striatal dysfunction is considered central to the pathophysiology of ADHD, our mental arithmetic task has provided evidence of structural and functional changes in the posterior regions and probably the cerebellum in ADHD.
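The abstract does not specify the exact LZC variant used; a common choice is LZ76 phrase counting on a binarized signal, which can be sketched as follows (an assumption, not the authors' implementation):

```python
def lempel_ziv_complexity(s):
    """Count the number of distinct phrases in the LZ76 parsing of a
    symbol string (a common scalar EEG complexity measure, applied
    after binarizing the signal around its median)."""
    i, c, n = 0, 0, len(s)
    while i < n:
        l = 1
        # Grow the current phrase while it already occurs in the prefix.
        while i + l <= n and s[i:i+l] in s[:i + l - 1]:
            l += 1
        c += 1
        i += l
    return c

print(lempel_ziv_complexity("0110100110010110"))
```

Highly regular sequences parse into very few phrases while irregular ones parse into many, which is why a larger LZC is read as more irregular (more "complex") EEG dynamics.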
Controlling quantum memory-assisted entropic uncertainty in non-Markovian environments
NASA Astrophysics Data System (ADS)
Zhang, Yanliang; Fang, Maofa; Kang, Guodong; Zhou, Qingping
2018-03-01
Quantum memory-assisted entropic uncertainty relation (QMA EUR) shows that the lower bound of Maassen and Uffink's entropic uncertainty relation (without quantum memory) can be broken. In this paper, we investigated the dynamical features of the QMA EUR in Markovian and non-Markovian dissipative environments. It is found that the dynamics of the QMA EUR oscillate in a non-Markovian environment, and that strong interaction is favorable for suppressing the amount of entropic uncertainty. Furthermore, we presented two schemes, by means of prior weak measurement and posterior weak measurement reversal, to control the amount of entropic uncertainty of Pauli observables in dissipative environments. The numerical results show that the prior weak measurement can effectively reduce the wave peak values of the QMA EUR dynamic process in a non-Markovian environment for long periods of time, but it is ineffectual on the wave minima of the dynamic process. However, the posterior weak measurement reversal has the opposite effect on the dynamic process. Moreover, the success probability entirely depends on the quantum measurement strength. We hope that our proposal could be verified experimentally and might have future applications in quantum information processing.
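The memoryless Maassen-Uffink bound referred to above can be checked numerically for the Pauli X and Z observables; the memory-assisted relation replaces these Shannon entropies with conditional von Neumann entropies, which this sketch does not cover:

```python
import numpy as np

def shannon(p):
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log2(p)))

# Measurement bases: Pauli Z (computational) and Pauli X (Hadamard).
Z = np.eye(2)
X = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)

def uncertainty_sum(psi):
    """H(X) + H(Z) for the pure state psi, to compare against the
    Maassen-Uffink lower bound log2(1/c) = 1 for these observables."""
    pz = np.abs(Z.conj().T @ psi) ** 2
    px = np.abs(X.conj().T @ psi) ** 2
    return shannon(pz) + shannon(px)

psi = np.array([1.0, 0.0])    # |0> saturates the bound: H(Z)=0, H(X)=1
print(uncertainty_sum(psi))   # ~1.0
```

Since the maximum squared overlap between the X and Z eigenbases is c = 1/2, the bound log2(1/c) equals 1 bit for every state, and |0> is an example that meets it exactly.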
Bayesian calibration of the Community Land Model using surrogates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ray, Jaideep; Hou, Zhangshuan; Huang, Maoyi
2014-02-01
We present results from the Bayesian calibration of hydrological parameters of the Community Land Model (CLM), which is often used in climate simulations and Earth system models. A statistical inverse problem is formulated for three hydrological parameters, conditional on observations of latent heat surface fluxes over 48 months. Our calibration method uses polynomial and Gaussian process surrogates of the CLM, and solves the parameter estimation problem using a Markov chain Monte Carlo sampler. Posterior probability densities for the parameters are developed for two sites with different soil and vegetation covers. Our method also allows us to examine the structural error in CLM under two error models. We find that surrogate models can be created for CLM in most cases. The posterior distributions are more predictive than the default parameter values in CLM. Climatologically averaging the observations does not modify the parameters' distributions significantly. The structural error model reveals a correlation time-scale which can be used to identify the physical process that could be contributing to it. While the calibrated CLM has a higher predictive skill, the calibration is under-dispersive.
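The calibration loop can be sketched with a toy surrogate: a linear model standing in for the polynomial/Gaussian-process surrogates of CLM, with synthetic data (all numbers invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "observations" from a toy surrogate y = a * x with
# true a = 2.0 and Gaussian observation noise (sd = 0.05).
x_obs = np.linspace(0.0, 1.0, 20)
y_obs = 2.0 * x_obs + rng.normal(0.0, 0.05, size=x_obs.size)

def log_post(a):
    # Flat prior on a; Gaussian likelihood with known noise sd.
    resid = y_obs - a * x_obs
    return -0.5 * np.sum((resid / 0.05) ** 2)

# Random-walk Metropolis sampler for the single parameter a.
a = 1.0
lp = log_post(a)
samples = []
for _ in range(5000):
    a_new = a + 0.05 * rng.normal()
    lp_new = log_post(a_new)
    if np.log(rng.uniform()) < lp_new - lp:
        a, lp = a_new, lp_new
    samples.append(a)

post = np.array(samples[1000:])   # discard burn-in
print(post.mean(), post.std())
```

The practical payoff of the surrogate is that each `log_post` call is a cheap function evaluation rather than a full climate-model run, which is what makes the thousands of MCMC iterations affordable.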
Bayesian analysis of the flutter margin method in aeroelasticity
Khalil, Mohammad; Poirel, Dominique; Sarkar, Abhijit
2016-08-27
A Bayesian statistical framework is presented for the Zimmerman and Weissenburger flutter margin method, which considers the uncertainties in aeroelastic modal parameters. The proposed methodology overcomes the limitations of the previously developed least-squares based estimation technique, which relies on the Gaussian approximation of the flutter margin probability density function (pdf). Using the measured free-decay responses at subcritical (preflutter) airspeeds, the joint non-Gaussian posterior pdf of the modal parameters is sampled using the Metropolis–Hastings (MH) Markov chain Monte Carlo (MCMC) algorithm. The posterior MCMC samples of the modal parameters are then used to obtain the flutter margin pdfs and finally the flutter speed pdf. The usefulness of the Bayesian flutter margin method is demonstrated using synthetic data generated from a two-degree-of-freedom pitch-plunge aeroelastic model. The robustness of the statistical framework is demonstrated using different sets of measurement data. In conclusion, it will be shown that the probabilistic (Bayesian) approach reduces the number of test points required to provide a flutter speed estimate for a given accuracy and precision.
Municipal mortality due to thyroid cancer in Spain
Lope, Virginia; Pollán, Marina; Pérez-Gómez, Beatriz; Aragonés, Nuria; Ramis, Rebeca; Gómez-Barroso, Diana; López-Abente, Gonzalo
2006-01-01
Background Thyroid cancer is a tumor with a low but growing incidence in Spain. This study sought to depict its spatial municipal mortality pattern, using the classic model proposed by Besag, York and Mollié. Methods It was possible to compile and ascertain the posterior distribution of relative risk on the basis of a single Bayesian spatial model covering all of Spain's 8077 municipal areas. Maps were plotted depicting standardized mortality ratios, smoothed relative risk (RR) estimates, and the posterior probability that RR > 1. Results From 1989 to 1998 a total of 2,538 thyroid cancer deaths were registered in 1,041 municipalities. The highest relative risks were mostly situated in the Canary Islands, the province of Lugo, the east of La Coruña (Corunna) and western areas of Asturias and Orense. Conclusion The observed mortality pattern coincides with areas in Spain where goiter has been declared endemic. The higher frequency in these same areas of undifferentiated, more aggressive carcinomas could be reflected in the mortality figures. Other unknown genetic or environmental factors could also play a role in the etiology of this tumor. PMID:17173668
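Given MCMC draws of a municipality's relative risk, the mapped quantity "posterior probability that RR > 1" is simply a tail fraction over the draws; synthetic draws are used here for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical posterior draws of the relative risk for one
# municipality (e.g. from an MCMC fit of a Besag-York-Mollie model).
rr_draws = rng.lognormal(mean=0.2, sigma=0.15, size=10000)

# Posterior probability that RR > 1: the fraction of draws above 1.
prob_excess = np.mean(rr_draws > 1.0)
print(prob_excess)   # ~0.91 for these synthetic draws
```

Mapping this probability alongside the smoothed RR itself distinguishes municipalities where the elevated risk is well supported by the data from those where it is mostly prior smoothing.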
Patterns of meniscal tears associated with anterior cruciate ligament lesions in athletes.
Binfield, P M; Maffulli, N; King, J B
1993-09-01
In this study, 400 clinically anterior cruciate ligament (ACL) deficient knees underwent arthroscopy and were studied prospectively between January 1986 and April 1992. An ACL tear was always confirmed, and 41 per cent of these patients did not have an associated meniscal tear. In 30.25 per cent the lateral meniscus was torn; in 21.25 per cent the ACL tear was associated with a medial meniscus tear, and in the remaining 7 per cent both menisci were torn. The most frequently associated meniscal injury was the bucket handle tear of the medial meniscus (9 per cent), followed by the posterior horn tear of the lateral meniscus, which showed the same frequency as the ragged (or degenerated) tear of the lateral meniscus (6 per cent). The horizontal tear of the posterior part of the lateral meniscus showed a prevalence of 4.3 per cent. This picture probably reflects the secondary referral nature of the centre surveyed, in which the average time between injury and arthroscopy was 23.3 months.
Zollanvari, Amin; Dougherty, Edward R
2016-12-01
In classification, prior knowledge is incorporated in a Bayesian framework by assuming that the feature-label distribution belongs to an uncertainty class of feature-label distributions governed by a prior distribution. A posterior distribution is then derived from the prior and the sample data. An optimal Bayesian classifier (OBC) minimizes the expected misclassification error relative to the posterior distribution. From an application perspective, prior construction is critical. The prior distribution is formed by mapping a set of mathematical relations among the features and labels, the prior knowledge, into a distribution governing the probability mass across the uncertainty class. In this paper, we consider prior knowledge in the form of stochastic differential equations (SDEs). We consider a vector SDE in integral form involving a drift vector and dispersion matrix. Having constructed the prior, we develop the optimal Bayesian classifier between two models and examine, via synthetic experiments, the effects of uncertainty in the drift vector and dispersion matrix. We apply the theory to a set of SDEs for the purpose of differentiating the evolutionary history between two species.
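A vector SDE in integral form with a drift vector and dispersion matrix can be simulated by Euler-Maruyama integration; the drift and dispersion below are arbitrary placeholders, not the paper's evolutionary models:

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate(x0, drift, disp, dt=0.01, steps=500):
    """Euler-Maruyama integration of dX = drift(X) dt + disp dW."""
    x = np.array(x0, dtype=float)
    path = [x.copy()]
    for _ in range(steps):
        dw = rng.normal(0.0, np.sqrt(dt), size=x.size)
        x = x + drift(x) * dt + disp @ dw
        path.append(x.copy())
    return np.array(path)

drift = lambda x: -0.5 * x                   # mean-reverting (OU-like) drift
disp = np.array([[0.1, 0.0], [0.05, 0.1]])   # constant dispersion matrix
path = simulate([1.0, -1.0], drift, disp)
print(path.shape)   # (501, 2)
```

Simulating trajectories like this from an SDE prior is one way to see what feature-label behavior the prior actually encodes before constructing the classifier from it.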
NASA Astrophysics Data System (ADS)
Chabdarov, Shamil M.; Nadeev, Adel F.; Chickrin, Dmitry E.; Faizullin, Rashid R.
2011-04-01
In this paper we discuss an unconventional detection technique also known as the "full resolution receiver". This receiver uses Gaussian probability mixtures to adapt to the interference structure. The full resolution receiver is an alternative to conventional matched-filter receivers in the case of non-Gaussian interference. For the DS-CDMA forward channel in the presence of complex interference, a substantial performance increase was shown.
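As a rough illustration of detection under Gaussian-mixture interference (not the authors' full resolution receiver; all parameters below are invented), a likelihood test for an antipodal symbol in heavy-tailed mixture noise can be sketched as:

```python
import math

# Illustrative sketch: detect s in {-1, +1} observed as y = s + noise,
# where the noise is a two-component Gaussian mixture (mostly sigma = 1,
# occasionally sigma = 5 impulses). Weights and sigmas are assumptions.

def mixture_pdf(v, weights=(0.9, 0.1), sigmas=(1.0, 5.0)):
    return sum(w * math.exp(-0.5 * (v / s) ** 2) / (s * math.sqrt(2 * math.pi))
               for w, s in zip(weights, sigmas))

def detect(y):
    # With equal symbol priors, the posterior test reduces to comparing
    # the mixture likelihood of the two residuals.
    return 1 if mixture_pdf(y - 1.0) >= mixture_pdf(y + 1.0) else -1
```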
Updegrove, Gary F; Armstrong, April D; Mosher, Timothy J; Kim, H Mike
2015-11-01
To characterize the orientation of the normal supraspinatus central tendon and describe the displacement patterns of the central tendon in rotator cuff tears using a magnetic resonance imaging (MRI)-based method. We performed a retrospective MRI and chart review of 183 patients with a rotator cuff tear (cuff tear group), 52 with a labral tear but no rotator cuff tear (labral tear group), and 74 with a normal shoulder (normal group). The orientation of the supraspinatus central tendon relative to the bicipital groove was evaluated based on axial MRI and was numerically represented by the shortest distance from the lateral extension line of the central tendon to the bicipital groove. Tear size, fatty degeneration, and involvement of the anterior supraspinatus were evaluated to identify the factors associated with orientation changes. The mean distance from the bicipital groove to the central tendon line was 0.7 mm and 1.3 mm in the normal group and labral tear group, respectively. Full-thickness cuff tears involving the anterior supraspinatus showed a significantly greater distance (17.7 mm) than those sparing the anterior supraspinatus (4.9 mm, P = .001). Fatty degeneration of the supraspinatus was significantly correlated with the distance (P = .006). Disruption of the anterior supraspinatus and fatty degeneration of the supraspinatus were independent predictors of posterior displacement. The supraspinatus central tendon has a constant orientation toward the bicipital groove in normal shoulders, and the central tendon is frequently displaced posteriorly in full-thickness rotator cuff tears involving the anterior leading edge of the supraspinatus. The degree of posterior displacement is proportional to tear size and severity of fatty degeneration of the supraspinatus muscle. 
A simple and quick assessment of the central tendon orientation on preoperative MRI can be a useful indicator of tear characteristics, potentially providing insight into the intraoperative repair strategy. Level IV, diagnostic case-control study. Copyright © 2015 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.
Razavian, Hamid; Kazemi, Shantia; Khazaei, Saber; Jahromi, Maryam Zare
2013-01-01
Background: Successful anesthesia during root canal therapy may be difficult to obtain. Intraosseous injection significantly improves anesthesia's success as a supplemental pulpal anesthesia, particularly in cases of irreversible pulpitis. The aim of this study was to compare the efficacy of X-tip intraosseous injection and inferior alveolar nerve (IAN) block in primary anesthesia for mandibular posterior teeth with irreversible pulpitis. Materials and Methods: Forty emergency patients with an irreversible pulpitis of mandibular posterior teeth were randomly assigned to receive either intraosseous injection using the X-tip intraosseous injection system or IAN block as the primary injection method for pulpal anesthesia. Pulpal anesthesia was evaluated using an electric pulp tester and endo ice at 5-min intervals for 15 min. Anesthesia's success or failure rates were recorded and analyzed using SPSS version 12 statistical software. Success or failure rates were compared using a Fisher's exact test, and the time duration for the onset of anesthesia was compared using Mann–Whitney U test. P < 0.05 was considered significant. Results: Intraosseous injection system resulted in successful anesthesia in 17 out of 20 patients (85%). Successful anesthesia was achieved with the IAN block in 14 out of 20 patients (70%). However, the difference (15%) was not statistically significant (P = 0.2). Conclusion: Considering the relatively expensive armamentarium, probability of penetrator separation, temporary tachycardia, and possibility of damage to root during drilling, the authors do not suggest intraosseous injection as a suitable primary technique. PMID:23946738
Razavian, Hamid; Kazemi, Shantia; Khazaei, Saber; Jahromi, Maryam Zare
2013-03-01
Successful anesthesia during root canal therapy may be difficult to obtain. Intraosseous injection significantly improves anesthesia's success as a supplemental pulpal anesthesia, particularly in cases of irreversible pulpitis. The aim of this study was to compare the efficacy of X-tip intraosseous injection and inferior alveolar nerve (IAN) block in primary anesthesia for mandibular posterior teeth with irreversible pulpitis. Forty emergency patients with an irreversible pulpitis of mandibular posterior teeth were randomly assigned to receive either intraosseous injection using the X-tip intraosseous injection system or IAN block as the primary injection method for pulpal anesthesia. Pulpal anesthesia was evaluated using an electric pulp tester and endo ice at 5-min intervals for 15 min. Anesthesia's success or failure rates were recorded and analyzed using SPSS version 12 statistical software. Success or failure rates were compared using a Fisher's exact test, and the time duration for the onset of anesthesia was compared using Mann-Whitney U test. P < 0.05 was considered significant. Intraosseous injection system resulted in successful anesthesia in 17 out of 20 patients (85%). Successful anesthesia was achieved with the IAN block in 14 out of 20 patients (70%). However, the difference (15%) was not statistically significant (P = 0.2). Considering the relatively expensive armamentarium, probability of penetrator separation, temporary tachycardia, and possibility of damage to root during drilling, the authors do not suggest intraosseous injection as a suitable primary technique.
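The study's success/failure comparison (17/20 vs 14/20) can be reproduced with a two-sided Fisher's exact test built from the hypergeometric distribution using only the standard library. The exact P value depends on sidedness and software conventions, so it need not match the reported P = 0.2, but the non-significant conclusion is the same.

```python
from math import comb

# Rows: X-tip intraosseous vs IAN block; columns: success vs failure.
table = [[17, 3], [14, 6]]

def fisher_exact_two_sided(t):
    (a, b), (c, d) = t
    r1, r2, n1 = a + b, c + d, a + c      # row sums and first column sum
    total = comb(r1 + r2, n1)
    def p(x):                             # hypergeometric P(A = x)
        return comb(r1, x) * comb(r2, n1 - x) / total
    p_obs = p(a)
    lo, hi = max(0, n1 - r2), min(r1, n1)
    # Sum probabilities of all tables at least as extreme as observed.
    return sum(p(x) for x in range(lo, hi + 1) if p(x) <= p_obs + 1e-12)

p_value = fisher_exact_two_sided(table)   # well above 0.05: not significant
```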
Bisous model: Detecting filamentary patterns in point processes
NASA Astrophysics Data System (ADS)
Tempel, E.; Stoica, R. S.; Kipper, R.; Saar, E.
2016-07-01
The cosmic web is a highly complex geometrical pattern, with galaxy clusters at the intersections of filaments and filaments at the intersections of walls. Identifying and describing the filamentary network is not a trivial task due to the overwhelming complexity of the structure, its connectivity and its intrinsic hierarchical nature. To detect and quantify galactic filaments we use the Bisous model, which is a marked point process built to model multi-dimensional patterns. The Bisous filament finder works directly with the galaxy distribution data and the model intrinsically takes into account the connectivity of the filamentary network. The Bisous model generates the visit map (the probability of finding a filament at a given point) together with the filament orientation field. Using these two fields, we can extract filament spines from the data. Together with this paper we publish the computer code for the Bisous model, which is made available on GitHub. The Bisous filament finder has been successfully used in several cosmological applications, and further development of the model will allow detection of the filamentary network also in photometric redshift surveys, using the full redshift posterior. We also encourage the astro-statistical community to use the model and to connect it with other existing methods for filamentary pattern detection and characterisation.
Evolutionary characterization of hemagglutinin gene of H9N2 influenza viruses isolated from Asia.
Shahsavandi, Shahla; Salmanian, Ali-Hatef; Ghorashi, Seyed Ali; Masoudi, Shahin; Ebrahimi, Mohammad Majid
2012-08-01
The full-length hemagglutinin (HA) genes of 287 H9N2 AI strains isolated from chickens in Asia during the period 1994-2009 were genetically analyzed. Phylogenetic analysis showed that G1-like viruses circulated in the Middle East and the countries of the Indian subcontinent, whereas other sublineages existed in Far East countries. It also revealed that G1-like viruses, with an average 96.7% identity, clustered into two subgroups largely based on their time of isolation. The Ka/Ks ratio was calculated as 0.34 for subgroup 1 and 0.57 for subgroup 2, indicating purifying/stabilizing selection; despite this, there is evidence of localized positive selection when comparing the subgroup 1 and 2 protein sequences. Five sites in the HA of H9N2 viruses had a posterior probability >0.5 using the Bayesian method, indicating that these sites were under positive selection. These sites were found to be associated with the globular head region of HA. To identify sites under positive selection, amino acid substitutions were classified according to their radicalism or neutrality. The results indicate that, although most positions in the HA were under purifying selection, under which substitutions tend to be eliminated, a few positions located in the antigenic regions and receptor binding sites were subject to positive selection. Copyright © 2011 Elsevier Ltd. All rights reserved.
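For orientation only, the synonymous/non-synonymous distinction behind a Ka/Ks ratio can be sketched by classifying codon differences with the standard genetic code. A real estimate (and the Bayesian site analysis in the abstract) additionally normalises by the numbers of synonymous and non-synonymous sites; this toy version only counts and classifies single-codon changes between two aligned coding sequences.

```python
from itertools import product

# Standard genetic code (NCBI translation table 1), built compactly.
BASES = "TCAG"
AMINO = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODON_TABLE = {"".join(c): aa for c, aa in zip(product(BASES, repeat=3), AMINO)}

def classify_substitutions(seq1, seq2):
    """Count synonymous vs non-synonymous codon differences (crude)."""
    syn = nonsyn = 0
    for i in range(0, len(seq1) - 2, 3):
        c1, c2 = seq1[i:i + 3], seq2[i:i + 3]
        if c1 != c2:
            if CODON_TABLE[c1] == CODON_TABLE[c2]:
                syn += 1      # same amino acid: synonymous change
            else:
                nonsyn += 1   # amino acid changed: non-synonymous
    return syn, nonsyn
```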
Hey, Jody; Nielsen, Rasmus
2004-01-01
The genetic study of diverging, closely related populations is required for basic questions on demography and speciation, as well as for biodiversity and conservation research. However, it is often unclear whether divergence is due simply to separation or whether populations have also experienced gene flow. These questions can be addressed with a full model of population separation with gene flow, by applying a Markov chain Monte Carlo method for estimating the posterior probability distribution of model parameters. We have generalized this method and made it applicable to data from multiple unlinked loci. These loci can vary in their modes of inheritance, and inheritance scalars can be implemented either as constants or as parameters to be estimated. By treating inheritance scalars as parameters it is also possible to address variation among loci in the impact via linkage of recurrent selective sweeps or background selection. These methods are applied to a large multilocus data set from Drosophila pseudoobscura and D. persimilis. The species are estimated to have diverged approximately 500,000 years ago. Several loci have nonzero estimates of gene flow since the initial separation of the species, with considerable variation in gene flow estimates among loci, in both directions between the species. PMID:15238526
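The Markov chain Monte Carlo machinery referred to above can be illustrated, in miniature, with a Metropolis sampler for a one-parameter posterior (a normal mean with flat prior and made-up data); the isolation-with-migration model itself has many more parameters and a far more complex likelihood.

```python
import math
import random

# Toy stand-in for MCMC posterior estimation: sample the posterior of a
# normal mean mu (known sigma = 1, flat prior) given invented data.
random.seed(1)
data = [1.2, 0.8, 1.1, 0.9, 1.0]

def log_post(mu, sigma=1.0):
    return -0.5 * sum((x - mu) ** 2 for x in data) / sigma ** 2

def metropolis(n_steps=20000, step=0.5):
    mu, lp = 0.0, log_post(0.0)
    samples = []
    for _ in range(n_steps):
        prop = mu + random.uniform(-step, step)    # symmetric proposal
        lp_prop = log_post(prop)
        if math.log(random.random()) < lp_prop - lp:
            mu, lp = prop, lp_prop                 # accept
        samples.append(mu)
    return samples[n_steps // 2:]                  # drop burn-in

posterior = metropolis()  # draws approximating p(mu | data)
```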
The complete mitochondrial genomes of five Eimeria species infecting domestic rabbits.
Liu, Guo-Hua; Tian, Si-Qin; Cui, Ping; Fang, Su-Fang; Wang, Chun-Ren; Zhu, Xing-Quan
2015-12-01
Rabbit coccidiosis, caused by members of the genus Eimeria, can have an enormous economic impact worldwide, but the genetics, epidemiology and biology of these parasites remain poorly understood. In the present study, we sequenced and annotated the complete mitochondrial (mt) genomes of five Eimeria species that commonly infect domestic rabbits. The complete mt genomes of Eimeria intestinalis, Eimeria flavescens, Eimeria media, Eimeria vejdovskyi and Eimeria irresidua were 6261 bp, 6258 bp, 6168 bp, 6254 bp and 6259 bp in length, respectively. All of the mt genomes consist of 3 protein-coding genes (cytb, cox1, and cox3), 14 gene fragments for the large subunit (LSU) rRNA and 11 gene fragments for the small subunit (SSU) rRNA, but no transfer RNA (tRNA) genes. The gene order of the mt genomes is similar to that of Plasmodium, but distinct from Haemosporida and Theileria. Phylogenetic analyses of the full nucleotide sequences using Bayesian inference revealed that the monophyly of the rabbit Eimeria was strongly supported, with high Bayesian posterior probabilities. These data provide novel mtDNA markers for studying the population genetics and molecular epidemiology of Eimeria species, and should have implications for the molecular diagnosis, prevention and control of coccidiosis in rabbits. Copyright © 2015 Elsevier Inc. All rights reserved.
The probability of object-scene co-occurrence influences object identification processes.
Sauvé, Geneviève; Harmand, Mariane; Vanni, Léa; Brodeur, Mathieu B
2017-07-01
Contextual information allows the human brain to make predictions about the identity of objects that might be seen, and irregularities between an object and its background slow down perception and identification processes. Bar and colleagues modeled the mechanisms underlying this beneficial effect, suggesting that the brain stores information about the statistical regularities of object and scene co-occurrence. Their model suggests that these recurring regularities could be conceptualized along a continuum in which the probability of seeing an object within a given scene can be high (probable condition), moderate (improbable condition) or null (impossible condition). In the present experiment, we propose to disentangle the electrophysiological correlates of these context effects by directly comparing object-scene pairs found along this continuum. We recorded the event-related potentials of 30 healthy participants (18-34 years old) and analyzed their brain activity in three time windows associated with context effects. We observed anterior negativities between 250 and 500 ms after object onset for the improbable and impossible conditions (improbable more negative than impossible) compared to the probable condition, as well as a parieto-occipital positivity (improbable more positive than impossible). The brain may use different processing pathways to identify objects depending on whether the probability of co-occurrence with the scene is moderate (relying more on top-down effects) or null (relying more on bottom-up influences). The posterior positivity could index error monitoring aimed at ensuring that no false information is integrated into mental representations of the world.
A new LDPC decoding scheme for PDM-8QAM BICM coherent optical communication system
NASA Astrophysics Data System (ADS)
Liu, Yi; Zhang, Wen-bo; Xi, Li-xia; Tang, Xian-feng; Zhang, Xiao-guang
2015-11-01
A new log-likelihood ratio (LLR) message estimation method is proposed for the polarization-division multiplexing eight-quadrature-amplitude-modulation (PDM-8QAM) bit-interleaved coded modulation (BICM) optical communication system. The formulation of the posterior probability is theoretically analyzed, and a way to reduce the pre-decoding bit error rate (BER) of the low-density parity-check (LDPC) decoder for PDM-8QAM constellations is presented. Simulation results show that it outperforms the traditional scheme: the new post-decoding BER is reduced to 50% of that of the traditional post-decoding algorithm.
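A minimal sketch of per-bit LLR computation from posterior probabilities for a single received 8QAM sample follows. The rectangular constellation and bit mapping below are assumptions for illustration and may differ from the paper's PDM-8QAM scheme.

```python
import math

# Assumed rectangular 8QAM constellation: bits -> complex symbol.
CONSTELLATION = {
    (0, 0, 0): -3 + 1j, (0, 0, 1): -1 + 1j, (0, 1, 1): 1 + 1j, (0, 1, 0): 3 + 1j,
    (1, 0, 0): -3 - 1j, (1, 0, 1): -1 - 1j, (1, 1, 1): 1 - 1j, (1, 1, 0): 3 - 1j,
}

def llrs(y, sigma2=0.5):
    # LLR_b = log P(bit_b = 0 | y) / P(bit_b = 1 | y) under equal priors
    # and AWGN: marginalise the Gaussian likelihood over the symbol sets.
    out = []
    for b in range(3):
        num = sum(math.exp(-abs(y - s) ** 2 / (2 * sigma2))
                  for bits, s in CONSTELLATION.items() if bits[b] == 0)
        den = sum(math.exp(-abs(y - s) ** 2 / (2 * sigma2))
                  for bits, s in CONSTELLATION.items() if bits[b] == 1)
        out.append(math.log(num / den))
    return out
```

A sample received exactly on a symbol should yield LLR signs matching that symbol's bits (positive for bit 0, negative for bit 1).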
On Extending Temporal Models in Timed Influence Networks
2009-06-01
among variables in a system. A situation where the impact of a variable takes some time to reach the affected variable(s) cannot be modeled by either of these approaches. [A garbled table of influence constants (h11-h14) and an accompanying plot are omitted here.] The posterior probability of B captures the impact of an affecting event on B and can be plotted as a function of time.
Escriva, Hector; Holland, Nicholas D; Gronemeyer, Hinrich; Laudet, Vincent; Holland, Linda Z
2002-06-01
Amphioxus, the closest living invertebrate relative of the vertebrates, has a notochord, segmental axial musculature, pharyngeal gill slits and dorsal hollow nerve cord, but lacks neural crest. In amphioxus, as in vertebrates, exogenous retinoic acid (RA) posteriorizes the embryo. The mouth and gill slits never form, AmphiPax1, which is normally downregulated where gill slits form, remains upregulated and AmphiHox1 expression shifts anteriorly in the nerve cord. To dissect the role of RA signaling in patterning chordate embryos, we have cloned the single retinoic acid receptor (AmphiRAR), retinoid X receptor (AmphiRXR) and an orphan receptor (AmphiTR2/4) from amphioxus. AmphiTR2/4 inhibits AmphiRAR-AmphiRXR-mediated transactivation in the presence of RA by competing for DR5 or IR7 retinoic acid response elements (RAREs). The 5' untranslated region of AmphiTR2/4 contains an IR7 element, suggesting possible auto- and RA-regulation. The patterns of AmphiTR2/4 and AmphiRAR expression during embryogenesis are largely complementary: AmphiTR2/4 is strongly expressed in the cerebral vesicle (homologous to the diencephalon plus anterior midbrain), while AmphiRAR expression is high in the equivalent of the hindbrain and spinal cord. Similarly, while AmphiTR2/4 is expressed most strongly in the anterior and posterior thirds of the endoderm, the highest AmphiRAR expression is in the middle third. Expression of AmphiRAR is upregulated by exogenous RA and completely downregulated by the RA antagonist BMS009. Moreover, BMS009 expands the pharynx posteriorly; the first three gill slit primordia are elongated and shifted posteriorly, but do not penetrate, and additional, non-penetrating gill slit primordia are induced. Thus, in an organism without neural crest, initiation and penetration of gill slits appear to be separate events mediated by distinct levels of RA signaling in the pharyngeal endoderm. 
Although these compounds have little effect on levels of AmphiTR2/4 expression, RA shifts pharyngeal expression of AmphiTR2/4 anteriorly, while BMS009 extends it posteriorly. Collectively, our results suggest a model for anteroposterior patterning of the amphioxus nerve cord and pharynx, which is probably applicable to vertebrates as well, in which a low anterior level of AmphiRAR (caused, at least in part, by competitive inhibition by AmphiTR2/4) is necessary for patterning the forebrain and formation of gill slits, the posterior extent of both being set by a sharp increase in the level of AmphiRAR. Supplemental data available on-line
NASA Technical Reports Server (NTRS)
Escriva, Hector; Holland, Nicholas D.; Gronemeyer, Hinrich; Laudet, Vincent; Holland, Linda Z.
2002-01-01
Amphioxus, the closest living invertebrate relative of the vertebrates, has a notochord, segmental axial musculature, pharyngeal gill slits and dorsal hollow nerve cord, but lacks neural crest. In amphioxus, as in vertebrates, exogenous retinoic acid (RA) posteriorizes the embryo. The mouth and gill slits never form, AmphiPax1, which is normally downregulated where gill slits form, remains upregulated and AmphiHox1 expression shifts anteriorly in the nerve cord. To dissect the role of RA signaling in patterning chordate embryos, we have cloned the single retinoic acid receptor (AmphiRAR), retinoid X receptor (AmphiRXR) and an orphan receptor (AmphiTR2/4) from amphioxus. AmphiTR2/4 inhibits AmphiRAR-AmphiRXR-mediated transactivation in the presence of RA by competing for DR5 or IR7 retinoic acid response elements (RAREs). The 5' untranslated region of AmphiTR2/4 contains an IR7 element, suggesting possible auto- and RA-regulation. The patterns of AmphiTR2/4 and AmphiRAR expression during embryogenesis are largely complementary: AmphiTR2/4 is strongly expressed in the cerebral vesicle (homologous to the diencephalon plus anterior midbrain), while AmphiRAR expression is high in the equivalent of the hindbrain and spinal cord. Similarly, while AmphiTR2/4 is expressed most strongly in the anterior and posterior thirds of the endoderm, the highest AmphiRAR expression is in the middle third. Expression of AmphiRAR is upregulated by exogenous RA and completely downregulated by the RA antagonist BMS009. Moreover, BMS009 expands the pharynx posteriorly; the first three gill slit primordia are elongated and shifted posteriorly, but do not penetrate, and additional, non-penetrating gill slit primordia are induced. Thus, in an organism without neural crest, initiation and penetration of gill slits appear to be separate events mediated by distinct levels of RA signaling in the pharyngeal endoderm. 
Although these compounds have little effect on levels of AmphiTR2/4 expression, RA shifts pharyngeal expression of AmphiTR2/4 anteriorly, while BMS009 extends it posteriorly. Collectively, our results suggest a model for anteroposterior patterning of the amphioxus nerve cord and pharynx, which is probably applicable to vertebrates as well, in which a low anterior level of AmphiRAR (caused, at least in part, by competitive inhibition by AmphiTR2/4) is necessary for patterning the forebrain and formation of gill slits, the posterior extent of both being set by a sharp increase in the level of AmphiRAR. Supplemental data available on-line.
Corneal donor tissue preparation for endothelial keratoplasty.
Woodward, Maria A; Titus, Michael; Mavin, Kyle; Shtein, Roni M
2012-06-12
Over the past ten years, corneal transplantation surgical techniques have undergone revolutionary changes. Since its inception, traditional full thickness corneal transplantation has been the treatment to restore sight in those limited by corneal disease. Some disadvantages to this approach include a high degree of post-operative astigmatism, lack of predictable refractive outcome, and disturbance to the ocular surface. The development of Descemet's stripping endothelial keratoplasty (DSEK), transplanting only the posterior corneal stroma, Descemet's membrane, and endothelium, has dramatically changed treatment of corneal endothelial disease. DSEK is performed through a smaller incision; this technique avoids 'open sky' surgery with its risk of hemorrhage or expulsion, decreases the incidence of postoperative wound dehiscence, reduces unpredictable refractive outcomes, and may decrease the rate of transplant rejection. Initially, cornea donor posterior lamellar dissection for DSEK was performed manually resulting in variable graft thickness and damage to the delicate corneal endothelial tissue during tissue processing. Automated lamellar dissection (Descemet's stripping automated endothelial keratoplasty, DSAEK) was developed to address these issues. Automated dissection utilizes the same technology as LASIK corneal flap creation with a mechanical microkeratome blade that helps to create uniform and thin tissue grafts for DSAEK surgery with minimal corneal endothelial cell loss in tissue processing. Eye banks have been providing full thickness corneas for surgical transplantation for many years. In 2006, eye banks began to develop methodologies for supplying precut corneal tissue for endothelial keratoplasty. With the input of corneal surgeons, eye banks have developed thorough protocols to safely and effectively prepare posterior lamellar tissue for DSAEK surgery. This can be performed preoperatively at the eye bank. 
Research shows no significant difference in terms of the quality of the tissue or patient outcomes using eye bank precut tissue versus surgeon-prepared tissue for DSAEK surgery. For most corneal surgeons, the availability of precut DSAEK corneal tissue saves time and money, and reduces the stress of performing the donor corneal dissection in the operating room. In part because of the ability of the eye banks to provide high quality posterior lamellar corneal in a timely manner, DSAEK has become the standard of care for surgical management of corneal endothelial disease. The procedure that we are describing is the preparation of the posterior lamellar cornea at the eye bank for transplantation in DSAEK surgery (Figure 1).
Cidambi, Krishna R; Robertson, Nicholas; Borges, Camille; Nassif, Nader A; Barnett, Steven L
2018-07-01
For establishing femoral component position, gap-balancing (GB) and measured resection (MR) techniques were compared using a force sensor. Ninety-one patients were randomized to undergo primary total knee arthroplasty using either the MR (n = 43) or GB (n = 48) technique with a single total knee arthroplasty design. GB was performed with an instrumented tensioner. Force sensor data were obtained before final implantation. GB resulted in a greater range of femoral component rotation vs MR (1.5° ± 2.9° vs 3.1° ± 0.5°, P < .05) and greater posterior condylar cut thickness medially (10.2 ± 2.0 mm vs 9.0 ± 1.3 mm) and laterally (8.5 ± 1.9 mm vs 6.4 ± 1.0 mm). Force sensor data showed a decreased intercompartmental force difference at full flexion in GB (0.8 ± 2.3 vs 2.0 ± 3.3 u, 1 u ≈ 15 N, P < .05). GB resulted in a greater range of femoral component rotation and thicker posterior condylar cuts, resulting in an increased flexion space relative to MR. The intercompartmental force difference trended toward a more uniform distribution between full extension and full flexion in the GB vs the MR group. Copyright © 2018 Elsevier Inc. All rights reserved.
Regional microstructural organization of the cerebral cortex is affected by preterm birth.
Bouyssi-Kobar, Marine; Brossard-Racine, Marie; Jacobs, Marni; Murnick, Jonathan; Chang, Taeun; Limperopoulos, Catherine
2018-01-01
To compare regional cerebral cortical microstructural organization between preterm infants at term-equivalent age (TEA) and healthy full-term newborns, and to examine the impact of clinical risk factors on cerebral cortical micro-organization in the preterm cohort. We prospectively enrolled very preterm infants (gestational age (GA) at birth<32 weeks; birthweight<1500 g) and healthy full-term controls. Using non-invasive 3T diffusion tensor imaging (DTI) metrics, we quantified regional micro-organization in ten cerebral cortical areas: medial/dorsolateral prefrontal cortex, anterior/posterior cingulate cortex, insula, posterior parietal cortex, motor/somatosensory/auditory/visual cortex. ANCOVA analyses were performed controlling for sex and postmenstrual age at MRI. We studied 91 preterm infants at TEA and 69 full-term controls. Preterm infants demonstrated significantly higher diffusivity in the prefrontal, parietal, motor, somatosensory, and visual cortices suggesting delayed maturation of these cortical areas. Additionally, postnatal hydrocortisone treatment was related to accelerated microstructural organization in the prefrontal and somatosensory cortices. Preterm birth alters regional microstructural organization of the cerebral cortex in both neurocognitive brain regions and areas with primary sensory/motor functions. We also report for the first time a potential protective effect of postnatal hydrocortisone administration on cerebral cortical development in preterm infants.
Asymptotic approximations to posterior distributions via conditional moment equations
Yee, J.L.; Johnson, W.O.; Samaniego, F.J.
2002-01-01
We consider asymptotic approximations to joint posterior distributions in situations where the full conditional distributions referred to in Gibbs sampling are asymptotically normal. Our development focuses on problems where data augmentation facilitates simpler calculations, but results hold more generally. Asymptotic mean vectors are obtained as simultaneous solutions to fixed point equations that arise naturally in the development. Asymptotic covariance matrices flow naturally from the work of Arnold & Press (1989) and involve the conditional asymptotic covariance matrices and first derivative matrices for conditional mean functions. When the fixed point equations admit an analytical solution, explicit formulae are subsequently obtained for the covariance structure of the joint limiting distribution, which may shed light on the use of the given statistical model. Two illustrations are given. © 2002 Biometrika Trust.
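The fixed-point idea can be made concrete with linear stand-in conditional mean functions (invented here; the paper treats the general case): the joint asymptotic means are the simultaneous solution of m1 = g1(m2) and m2 = g2(m1), found by iteration.

```python
# Stand-in conditional mean functions for the two full conditionals.
# Their linear form and coefficients are assumptions for illustration.
def g1(m2):
    return 1.0 + 0.5 * m2

def g2(m1):
    return 2.0 + 0.25 * m1

def solve_fixed_point(m1=0.0, m2=0.0, tol=1e-10, max_iter=1000):
    # Iterate the pair of conditional-mean equations to convergence.
    for _ in range(max_iter):
        n1, n2 = g1(m2), g2(m1)
        if abs(n1 - m1) < tol and abs(n2 - m2) < tol:
            break
        m1, m2 = n1, n2
    return m1, m2
```

The iteration converges here because the product of the slopes (0.5 × 0.25) is below 1 in absolute value; the exact solution is (16/7, 18/7).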
The ulnar collateral ligament of the human elbow joint. Anatomy, function and biomechanics.
Fuss, F K
1991-01-01
The posterior portion of the ulnar collateral ligament, which arises from the posterior surface of the medial epicondyle, is taut in maximal flexion. The anterior portion, which takes its origin from the anterior and inferior surfaces of the epicondyle, contains three functional fibre bundles. One of these is taut in maximal extension, another in intermediate positions between middle position and full flexion while the third bundle is always taut and serves as a guiding bundle. Movements of the elbow joint are checked by the ligaments well before the bony processes forming the jaws of the trochlear notch lock into the corresponding fossae on the humerus. PMID:2050566
Reich, Sven; Fischer, Sören; Sobotta, Bernhard; Klapper, Horst-Uwe; Gozdowski, Stephan
2010-01-01
The purpose of this preliminary study was to evaluate the clinical performance of chairside-generated crowns over a preliminary time period of 24 months. Forty-one posterior crowns made of a machinable lithium disilicate ceramic for full-contour crowns were inserted in 34 patients using a chairside computer-aided design/computer-assisted manufacturing technique. The crowns were evaluated at baseline and after 6, 12, and 24 months according to modified United States Public Health Service criteria. After 2 years, all reexamined crowns (n = 39) were in situ; one abutment exhibited secondary caries and two abutments received root canal treatment. Within the limited observation period, the crowns revealed clinically satisfying results.
Suh, Seung Woo; Modi, Hitesh N; Yang, Jaehyuk; Song, Hae-Ryong; Jang, Ki-Mo
2009-05-20
Prospective study. To determine the effectiveness of, and correction achieved with, posterior multilevel vertebral osteotomy in severe and rigid curves without anterior release. For the correction of severe and rigid scoliotic curves, combined anterior-posterior or posterior vertebral column resection (PVCR) procedures are used. Anterior procedures might compromise pulmonary function, and PVCR carries a risk of neurologic injury. Therefore, the authors developed a new technique that reduces both. Thirteen neuromuscular patients (7 cerebral palsy, 2 Duchenne muscular dystrophy, and 4 spinal muscular atrophy) who had rigid curves >100 degrees were prospectively selected. All were operated on with a posterior-only approach using a pedicle screw construct. To achieve the desired correction, posterior multilevel vertebral osteotomies were performed at 3 to 5 levels (the apex, and 1-2 levels above and below the apex) through partial laminotomy sites connecting the concave to the convex side, just above the pedicle; repeated cantilever manipulation was then applied over temporary short-segment fixation, above and below the apex, on the convex side. On the concave side, a rod was assembled with screws and a rod-derotation maneuver was performed. Finally, the short-segment fixation on the convex side was replaced with a full-length construct. Intraoperative MEP monitoring was applied in all patients. Mean age was 21 years and average follow-up was 25 months. Average preoperative flexibility was 20.3% (24.1 degrees). Average Cobb's angle, pelvic obliquity, and apical rotation were 118.2 degrees, 16.7 degrees, and 57 degrees preoperatively, and 48.8 degrees, 8 degrees, and 43 degrees after surgery, showing significant corrections of 59.4%, 46.1%, and 24.5%, respectively. Average number of osteotomy levels was 4.2 and average blood loss was 3356 +/- 884 mL. Mean operation time was 330 +/- 46 minutes.
None of the patients required postoperative ventilator support or displayed any signs of neurologic or vascular injury during or after the operation. This technique should be recommended because (1) it provides release of the anterior column without an anterior approach, and (2) our results support its superiority as a technique.
The 2-10 keV unabsorbed luminosity function of AGN from the LSS, CDFS, and COSMOS surveys
NASA Astrophysics Data System (ADS)
Ranalli, P.; Koulouridis, E.; Georgantopoulos, I.; Fotopoulou, S.; Hsu, L.-T.; Salvato, M.; Comastri, A.; Pierre, M.; Cappelluti, N.; Carrera, F. J.; Chiappetti, L.; Clerc, N.; Gilli, R.; Iwasawa, K.; Pacaud, F.; Paltani, S.; Plionis, E.; Vignali, C.
2016-05-01
The XMM-Large scale structure (XMM-LSS), XMM-Cosmological evolution survey (XMM-COSMOS), and XMM-Chandra deep field south (XMM-CDFS) surveys are complementary in terms of sky coverage and depth. Together, they form a clean sample with the least possible variance in instrument effective areas and point spread function. Therefore, this is one of the best samples available to determine the 2-10 keV luminosity function of active galactic nuclei (AGN) and their evolution. The samples and the relevant corrections for incompleteness are described. A total of 2887 AGN are used to build the LF in the luminosity interval 10^42-10^46 erg s^-1 and in the redshift interval 0.001-4. A new method to correct for absorption by considering the probability distribution for the column density conditioned on the hardness ratio is presented. The binned luminosity function and its evolution are determined with a variant of the Page-Carrera method, which is improved to include corrections for absorption and to account for the full probability distribution of photometric redshifts. Parametric models, namely a double power law with luminosity and density evolution (LADE) or luminosity-dependent density evolution (LDDE), are explored using Bayesian inference. We introduce the Watanabe-Akaike information criterion (WAIC) to compare the models and estimate their predictive power. Our data are best described by the LADE model, as hinted by the WAIC indicator. We also explore the recently proposed 15-parameter extended LDDE model and find that this extension is not supported by our data. The strength of our method is that it provides unabsorbed, non-parametric estimates, credible intervals for luminosity function parameters, and a model choice based on predictive power for future data.
Based on observations obtained with XMM-Newton, an ESA science mission with instruments and contributions directly funded by ESA member states and NASA. Tables with the samples of the posterior probability distributions are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/590/A80
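The WAIC comparison described in this abstract can be reproduced from posterior draws of the pointwise log-likelihood. The sketch below uses the standard variance-based penalty on the deviance scale; it is an illustration of the criterion, not the authors' pipeline, and the array shapes are assumptions:

```python
import numpy as np

def waic(loglik):
    """WAIC on the deviance scale from an (S, n) array of pointwise
    log-likelihoods: S posterior draws, n data points."""
    # log pointwise predictive density: log of the mean likelihood per point
    lppd = np.sum(np.log(np.mean(np.exp(loglik), axis=0)))
    # effective number of parameters: per-point variance of the log-likelihood
    p_waic = np.sum(np.var(loglik, axis=0, ddof=1))
    return -2.0 * (lppd - p_waic)
```

Lower WAIC indicates better expected predictive performance on future data, which is how the abstract's model choice (LADE over LDDE) is framed.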
A reversible-jump Markov chain Monte Carlo algorithm for 1D inversion of magnetotelluric data
NASA Astrophysics Data System (ADS)
Mandolesi, Eric; Ogaya, Xenia; Campanyà, Joan; Piana Agostinetti, Nicola
2018-04-01
This paper presents a new computer code developed to solve the 1D magnetotelluric (MT) inverse problem using a Bayesian trans-dimensional Markov chain Monte Carlo algorithm. MT data are sensitive to the depth-distribution of rock electric conductivity (or its reciprocal, resistivity). The solution provided is a probability distribution - the so-called posterior probability distribution (PPD) - for the conductivity at depth, together with the PPD of the interface depths. The PPD is sampled via a reversible-jump Markov chain Monte Carlo (rjMcMC) algorithm, using a modified Metropolis-Hastings (MH) rule to accept or discard candidate models along the chains. As the optimal parameterization for the inversion process is generally unknown, a trans-dimensional approach is used to allow the dataset itself to indicate the most probable number of parameters needed to sample the PPD. The algorithm is tested against two simulated datasets and a set of MT data acquired in the Clare Basin (County Clare, Ireland). For the simulated datasets the correct number of conductive layers at depth and the associated electrical conductivity values are retrieved, together with reasonable estimates of the uncertainties on the investigated parameters. Results from the inversion of field measurements are compared with results obtained using a deterministic method and with well-log data from a nearby borehole. The PPD is in good agreement with the well-log data, showing as its main structure a highly conductive layer associated with the Clare Shale formation. In this study, we demonstrate that our new code goes beyond algorithms developed using a linear inversion scheme, as it can be used: (1) to bypass the subjective choices in the 1D parameterization, i.e., the number of horizontal layers, and (2) to estimate realistic uncertainties on the retrieved parameters.
The algorithm is implemented using a simple MPI approach, in which independent chains run on isolated CPUs, to take full advantage of parallel computer architectures. In the case of a large number of data, a master/slave approach can be used, in which the master CPU samples the parameter space and the slave CPUs compute forward solutions.
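The Metropolis-Hastings acceptance rule at the core of the sampler described above can be sketched in a few lines. This toy version keeps the dimension fixed (the actual rjMcMC adds trans-dimensional birth/death moves that change the number of layers) and targets a hypothetical one-parameter posterior, so it illustrates the MH rule only:

```python
import numpy as np

def metropolis_hastings(log_post, x0, n_steps, step=0.5, seed=0):
    """Fixed-dimension Metropolis-Hastings with a Gaussian random-walk
    proposal. The paper's rjMcMC adds birth/death moves on the number
    of layers; this sketch omits them for brevity."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    lp = log_post(x)
    samples = []
    for _ in range(n_steps):
        prop = x + rng.normal(0.0, step, size=x.shape)
        lp_prop = log_post(prop)
        # MH rule: accept with probability min(1, posterior ratio)
        if np.log(rng.uniform()) < lp_prop - lp:
            x, lp = prop, lp_prop
        samples.append(x.copy())
    return np.array(samples)

# toy target: Gaussian posterior over the log-conductivity of one layer
chain = metropolis_hastings(lambda x: -0.5 * np.sum((x - 2.0) ** 2),
                            x0=[0.0], n_steps=5000)
```

After discarding burn-in, the chain's histogram approximates the PPD, from which credible intervals on the parameters can be read off.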
Angulated Dental Implants in Posterior Maxilla: FEA and Experimental Verification.
Hamed, Hamed A; Marzook, Hamdy A; Ghoneem, Nahed E; El-Anwar, Mohamed I
2018-02-15
This study aimed to evaluate the effect of different implant angulations in the posterior maxilla on stress distribution by finite element analysis and to verify its results experimentally. Two simplified models were prepared for an implant placed vertically and one tilted 25° piercing the maxillary sinus. The geometric models' components were prepared in Autodesk Inventor and then assembled in ANSYS for finite element analysis. The results of the finite element analysis were verified against experimental trial results, which were statistically analysed using Student's t-test (level of significance p < 0.05). The implant-abutment complex absorbed the load energy better in the case of the vertical implant than in the case of the angulated one. This was reflected in the cortical bone stress, although both cases showed stress levels within physiological limits. Comparing results between FEA and the experimental trials showed full agreement. It was found that an implant tilted by 25° can be utilised in the posterior maxilla for replacing the maxillary first molar while avoiding sinus penetration. The implant-bone interface and peri-implant bone received the highest von Mises stress. The implant-bone interface with the angulated implant received about 66% more stress than with the straight one.
AlBader, Bader; AlHelal, Abdulaziz; Proussaefs, Periklis; Garbacea, Antonela; Kattadiyil, Mathew T; Lozada, Jaime
Implant-supported fixed complete dentures, often referred to as hybrid prostheses, have been associated with high implant survival rates but also with a high incidence of mechanical prosthetic complications. The most frequent of these complications have been fracture and wear of the veneering material. The proposed design concept incorporates the occlusal surfaces of the posterior teeth as part of a digitally milled metal framework by designing the posterior first molars in full contour as part of the framework. The framework can be designed, scanned, and milled from a titanium blank using a milling machine. Acrylic resin teeth can then be placed on the framework by conventional protocol. The metal occlusal surfaces of the titanium-contoured molars will be in centric occlusion. It is hypothesized that metal occlusal surfaces in the posterior region may reduce occlusal wear in these types of prostheses. When the proposed design protocol is followed, the connection between the metal frame and the cantilever part of the prosthesis is reinforced, which may lead to fewer fractures of the metal framework.
Cerebellar development in childhood onset schizophrenia and non-psychotic siblings
Greenstein, Deanna; Lenroot, Rhoshel; Clausen, Liv; Gogtay, Nitin; Rapoport, Judith
2011-01-01
We explored regional and total volumetric cerebellar differences in probands and their unaffected full siblings relative to typically developing participants. Participants included 94 (51 males) patients diagnosed with childhood onset schizophrenia (COS), 80 related non-psychotic siblings (37 males) and 110 (64 males) typically developing participants scanned longitudinally. The sample mean age was 16.87 (SD = 4.7; range 6.5 to 29). We performed mixed model regressions to examine group differences in trajectory and volume. The COS group had smaller bilateral anterior lobes and anterior and total vermis volumes than controls. The COS group diverged from controls over time in total, left, right, and bilateral posterior inferior cerebellum. Siblings did not have any fixed volumetric differences relative to controls but differed from controls in developmental trajectories of total and right cerebellum, left inferior posterior, left superior posterior, and superior vermis. Results are consistent with previous COS findings and several reports of decreased cerebellar volume in adult onset schizophrenia. Sibling trajectories may represent a trait marker, although the effect size for volumetric differences in early adulthood may be small. PMID:21803550
Traffic Video Image Segmentation Model Based on Bayesian and Spatio-Temporal Markov Random Field
NASA Astrophysics Data System (ADS)
Zhou, Jun; Bao, Xu; Li, Dawei; Yin, Yongwen
2017-10-01
Traffic video images are dynamic: the background and foreground change over time, which gives rise to occlusion, and in this situation general methods have difficulty producing accurate image segmentation. A segmentation algorithm based on Bayesian inference and a spatio-temporal Markov random field (ST-MRF) is put forward. It builds energy function models of the observation field and of the label field for a motion image sequence with the Markov property; then, following Bayes' rule, it exploits the interaction between the label field and the observation field - that is, the relationship between the label field's prior probability and the observation field's likelihood - to obtain the maximum a posteriori estimate of the label field, and applies the ICM algorithm to extract the moving object, completing the segmentation. Finally, the ST-MRF method and the Bayesian method combined with ST-MRF were compared. Experimental results show that segmentation time with the Bayesian ST-MRF algorithm is shorter than with ST-MRF alone, the computational workload is small, and in heavy-traffic dynamic scenes in particular the method achieves a better segmentation result.
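The ICM step used to extract the moving object can be sketched as a generic iterated-conditional-modes pass: each pixel's label is set to the one minimizing a likelihood term plus a Potts-style neighbour term. This is an illustrative construction, not the paper's implementation; the class means and the smoothing weight `beta` are assumptions:

```python
import numpy as np

def icm_segment(obs, means, beta=1.0, n_iter=5):
    """Iterated conditional modes on a 4-connected grid.
    Per-pixel energy: 0.5 * (obs - mean[label])^2  (likelihood term)
    minus beta for every neighbour carrying the same label (prior term)."""
    # initialise from the likelihood alone (nearest class mean)
    labels = np.argmin((obs[..., None] - np.asarray(means)) ** 2, axis=-1)
    h, w = obs.shape
    for _ in range(n_iter):
        for i in range(h):
            for j in range(w):
                best, best_e = labels[i, j], np.inf
                for k in range(len(means)):
                    e = 0.5 * (obs[i, j] - means[k]) ** 2
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < h and 0 <= nj < w and labels[ni, nj] == k:
                            e -= beta  # reward agreement with neighbours
                    if e < best_e:
                        best, best_e = k, e
                labels[i, j] = best
    return labels
```

Each sweep can only lower the total energy, so ICM converges quickly to a local maximum of the posterior, which is what makes it attractive for the time-constrained traffic setting.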
Dosso, Stan E; Wilmut, Michael J; Nielsen, Peter L
2010-07-01
This paper applies Bayesian source tracking in an uncertain environment to Mediterranean Sea data, and investigates the resulting tracks and track uncertainties as a function of data information content (number of data time-segments, number of frequencies, and signal-to-noise ratio) and of prior information (environmental uncertainties and source-velocity constraints). To track low-level sources, acoustic data recorded for multiple time segments (corresponding to multiple source positions along the track) are inverted simultaneously. Environmental uncertainty is addressed by including unknown water-column and seabed properties as nuisance parameters in an augmented inversion. Two approaches are considered: Focalization-tracking maximizes the posterior probability density (PPD) over the unknown source and environmental parameters. Marginalization-tracking integrates the PPD over environmental parameters to obtain a sequence of joint marginal probability distributions over source coordinates, from which the most-probable track and track uncertainties can be extracted. Both approaches apply track constraints on the maximum allowable vertical and radial source velocity. The two approaches are applied for towed-source acoustic data recorded at a vertical line array at a shallow-water test site in the Mediterranean Sea where previous geoacoustic studies have been carried out.
A coupled hidden Markov model for disease interactions
Sherlock, Chris; Xifara, Tatiana; Telfer, Sandra; Begon, Mike
2013-01-01
To investigate interactions between parasite species in a host, a population of field voles was studied longitudinally, with presence or absence of six different parasites measured repeatedly. Although trapping sessions were regular, a different set of voles was caught at each session, leading to incomplete profiles for all subjects. We use a discrete time hidden Markov model for each disease with transition probabilities dependent on covariates via a set of logistic regressions. For each disease the hidden states for each of the other diseases at a given time point form part of the covariate set for the Markov transition probabilities from that time point. This allows us to gauge the influence of each parasite species on the transition probabilities for each of the other parasite species. Inference is performed via a Gibbs sampler, which cycles through each of the diseases, first using an adaptive Metropolis–Hastings step to sample from the conditional posterior of the covariate parameters for that particular disease given the hidden states for all other diseases and then sampling from the hidden states for that disease given the parameters. We find evidence for interactions between several pairs of parasites and of an acquired immune response for two of the parasites. PMID:24223436
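The building blocks described above - transition probabilities driven by logistic regressions on covariates (including the other diseases' hidden states), plus filtering over the hidden states of one disease - can be sketched for a single two-state presence/absence chain. This is an illustrative construction under assumed shapes, not the authors' Gibbs sampler:

```python
import numpy as np

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

def transition_matrix(beta_gain, beta_loss, covariates):
    """2-state (absent/present) transition matrix with logistic links,
    mirroring covariate-dependent transitions; covariates would include
    the hidden states of the other parasites at that time point."""
    p_gain = logistic(covariates @ beta_gain)   # absent -> present
    p_loss = logistic(covariates @ beta_loss)   # present -> absent
    return np.array([[1.0 - p_gain, p_gain],
                     [p_loss, 1.0 - p_loss]])

def forward_filter(obs_probs, trans_mats, init):
    """Normalised forward recursion: obs_probs[t][s] = P(y_t | state s),
    trans_mats[t-1] is the transition into time t."""
    alpha = init * obs_probs[0]
    alpha /= alpha.sum()
    for t in range(1, len(obs_probs)):
        alpha = (alpha @ trans_mats[t - 1]) * obs_probs[t]
        alpha /= alpha.sum()
    return alpha
```

In the incomplete-profile setting of the study, trapping gaps simply contribute uninformative observation terms (all-ones `obs_probs` rows), which the same recursion handles unchanged.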
Function of the medial meniscus in force transmission and stability.
Walker, Peter S; Arno, Sally; Bell, Christopher; Salvadore, Gaia; Borukhov, Ilya; Oh, Cheongeun
2015-06-01
We studied the combined role of the medial meniscus in distributing load and providing stability. Ten normal knees were loaded in combinations of compressive and shear loading as the knee was flexed over a full range. A digital camera tracked the motion, from which femoral-tibial contacts were determined by computer modelling. Load transmission was determined from the Tekscan pressure measurements for the anterior horn, central body, posterior horn, and the uncovered cartilage in the centre of the meniscus. For the three types of loading (compression only, compression and anterior shear, compression and posterior shear), between 40% and 80% of the total load was transmitted through the meniscus. The overall average was 58%, the remaining 42% being transmitted through the uncovered cartilage. The anterior horn was loaded only up to 30 degrees of flexion, but played a role in controlling anterior femoral displacement. The central body carried 10-20% of the load, which would provide some restraint to medial femoral subluxation. Overall, the posterior horn carried the highest percentage of the shear load, especially after 30 degrees of flexion when a posterior shear force was applied, where the meniscus was estimated to carry 50% of the shear force. This study added new insights into meniscal function during weight bearing conditions, particularly its role in early flexion and in transmitting shear forces.
A late neurological complication following posterior correction surgery of severe cervical kyphosis.
Hojo, Yoshihiro; Ito, Manabu; Abumi, Kuniyoshi; Kotani, Yoshihisa; Sudo, Hideki; Takahata, Masahiko; Minami, Akio
2011-06-01
Though a possible cause of late neurological deficits after posterior cervical reconstruction surgery was reported to be an iatrogenic foraminal stenosis caused not by implant malposition but probably by posterior shift of the lateral mass induced by tightening screws and plates, its clinical features and pathomechanisms remain unclear. The aim of this retrospective clinical review was to investigate the clinical features of these neurological complications and to analyze the pathomechanisms by reviewing pre- and post-operative imaging studies. Among 227 patients who underwent cervical stabilization using cervical pedicle screws (CPSs), six patients who underwent correction of cervical kyphosis showed postoperative late neurological complications without any malposition of CPS (ND group). The clinical courses of the patients with deficits were reviewed from the medical records. Radiographic assessment of the sagittal alignment was conducted using lateral radiographs. The diameter of the neural foramen was measured on preoperative CT images. These results were compared with the other 14 patients who underwent correction of cervical kyphosis without late postoperative neurological complications (non-ND group). The six patients in the ND group showed no deficits in the immediate postoperative periods, but unilateral muscle weakness of the deltoid and biceps brachii occurred at 2.8 days postoperatively on average. Preoperative sagittal alignment of fusion area showed significant kyphosis in the ND group. The average of kyphosis correction in the ND was 17.6° per fused segment (range 9.7°-35.0°), and 4.5° (range 1.3°-10.0°) in the non-ND group. A statistically significant difference was observed in the degree of preoperative kyphosis and the correction angles at C4-5 between the two groups. The diameter of the C4-5 foramen on the side of deficits was significantly smaller than that of the opposite side in the ND group. 
Late postoperative neurological complications after correction of cervical kyphosis were highly associated with a large amount of kyphosis correction, which may lead to foraminal stenosis and enhance posterior drift of the spinal cord. These factors may cause both compression and traction of the nerves, eventually producing late neurological deficits. To avoid such complications, excessive kyphosis correction should not be performed during posterior surgery, so as to prevent significant posterior shift of the spinal cord, and prophylactic foraminotomies are recommended if narrow neuroforamina are evident on preoperative CT images. Whether treated with revision decompression or observation, the majority of these late neurological complications showed complete recovery over time.
Spatially explicit models for inference about density in unmarked or partially marked populations
Chandler, Richard B.; Royle, J. Andrew
2013-01-01
Recently developed spatial capture–recapture (SCR) models represent a major advance over traditional capture–recapture (CR) models because they yield explicit estimates of animal density instead of population size within an unknown area. Furthermore, unlike nonspatial CR methods, SCR models account for heterogeneity in capture probability arising from the juxtaposition of animal activity centers and sample locations. Although the utility of SCR methods is gaining recognition, the requirement that all individuals can be uniquely identified excludes their use in many contexts. In this paper, we develop models for situations in which individual recognition is not possible, thereby allowing SCR concepts to be applied in studies of unmarked or partially marked populations. The data required for our model are spatially referenced counts made on one or more sample occasions at a collection of closely spaced sample units such that individuals can be encountered at multiple locations. Our approach includes a spatial point process for the animal activity centers and uses the spatial correlation in counts as information about the number and location of the activity centers. Camera-traps, hair snares, track plates, sound recordings, and even point counts can yield spatially correlated count data, and thus our model is widely applicable. A simulation study demonstrated that while the posterior mean exhibits frequentist bias on the order of 5–10% in small samples, the posterior mode is an accurate point estimator as long as adequate spatial correlation is present. Marking a subset of the population substantially increases posterior precision and is recommended whenever possible. We applied our model to avian point count data collected on an unmarked population of the northern parula (Parula americana) and obtained a density estimate (posterior mode) of 0.38 (95% CI: 0.19–1.64) birds/ha. 
Our paper challenges sampling and analytical conventions in ecology by demonstrating that neither spatial independence nor individual recognition is needed to estimate population density—rather, spatial dependence can be informative about individual distribution and density.
Isolated transient vertigo: posterior circulation ischemia or benign origin?
Blasberg, Tobias F; Wolf, Lea; Henke, Christian; Lorenz, Matthias W
2017-06-14
Isolated transient vertigo can be the only symptom of posterior circulation ischemia. Thus, it is important to differentiate isolated vertigo of a cerebrovascular origin from that of more benign origins, as patients with cerebral ischemia have a much higher risk for future stroke than do those with 'peripheral' vertigo. The current study aims to identify risk factors for cerebrovascular origin of isolated transient vertigo, and for future cerebrovascular events. From the files of 339 outpatients with isolated transient vertigo we extracted history, clinical and technical findings, diagnosis, and follow-up information on subsequent stroke or transient ischemic attack (TIA). Risk factors were analyzed using multivariate regression models (logistic or Cox) and reconfirmed in univariate analyses. On first presentation, 48 (14.2%) patients received the diagnosis 'probable or definite cerebrovascular vertigo'. During follow-up, 41 patients suffered stroke or TIA (event rate 7.9 per 100 person years, 95% confidence interval (CI) 5.5-10.4), 26 in the posterior circulation (event rate 4.8 per 100 person years, 95% CI 3.0-6.7). The diagnosis was not associated with follow-up cerebrovascular events. In multivariate models testing multiple potential determinants, only the presentation mode was consistently associated with the diagnosis and stroke risk: patients who presented because of vertigo (rather than reporting vertigo when they presented for other reasons) had a significantly higher risk for future stroke or TIA (p = 0.028, event rate 13.4 vs. 5.4 per 100 person years) and for future posterior circulation stroke or TIA (p = 0.044, event rate 7.8 vs. 3.5 per 100 person years). We here report for the first time follow-up stroke rates in patients with transient isolated vertigo. In such patients, the identification of those with cerebrovascular origin remains difficult, and presentation mode was found to be the only consistent risk factor. 
Confirmation in an independent prospective sample is needed.
Lenticular astigmatism after penetrating eye injury.
Rumelt, S; Jager, G; Rehany, U
1996-09-01
Lenticular astigmatism of 5.00 diopters developed after penetrating injury in the eye of a 16-year-old boy. Full visual acuity, refraction, and crystalline lens clarity remained stable for more than 2 years. The high astigmatism, in conjunction with a spherical cornea and posterior lens capsule striae, indicates the lenticular origin of the astigmatism.
Wang, Tingting; Chen, Yi-Ping Phoebe; Bowman, Phil J; Goddard, Michael E; Hayes, Ben J
2016-09-21
Bayesian mixture models in which the effects of SNPs are assumed to come from normal distributions with different variances are attractive for simultaneous genomic prediction and QTL mapping. These models are usually implemented with Markov chain Monte Carlo (MCMC) sampling, which requires long compute times with large genomic data sets. Here, we present an efficient approach (termed HyB_BR), which is a hybrid of an Expectation-Maximisation algorithm followed by a limited number of MCMC iterations without the requirement for burn-in. To test prediction accuracy from HyB_BR, dairy cattle and human disease trait data were used. In the dairy cattle data, there were four quantitative traits (milk volume, protein kg, fat% in milk and fertility) measured in 16,214 cattle from two breeds genotyped for 632,002 SNPs. Validation of genomic predictions was in a subset of cattle either from the reference set or in animals from a third breed that was not in the reference set. In all cases, HyB_BR gave almost identical accuracies to Bayesian mixture models implemented with full MCMC, while computational time was reduced to as little as 1/17 of that required by full MCMC. The SNPs with high posterior probability of a non-zero effect were also very similar between full MCMC and HyB_BR, with several known genes affecting milk production in this category, as well as some novel genes. HyB_BR was also applied to seven human diseases with 4890 individuals genotyped for around 300K SNPs in a case/control design, from the Wellcome Trust Case Control Consortium (WTCCC). In this data set, the results demonstrated again that HyB_BR performed as well as Bayesian mixture models with full MCMC for genomic predictions and genetic architecture inference, while reducing the computational time from 45 h with full MCMC to 3 h with HyB_BR.
The results for quantitative traits in cattle and disease in humans demonstrate that HyB_BR can perform as well as Bayesian mixture models implemented with full MCMC in terms of prediction accuracy, while running up to 17 times faster than the full MCMC implementations. The HyB_BR algorithm makes simultaneous genomic prediction, QTL mapping, and inference of genetic architecture feasible in large genomic data sets.
Labudda, Kirsten; Woermann, Friedrich G; Mertens, Markus; Pohlmann-Eden, Bernd; Markowitsch, Hans J; Brand, Matthias
2008-06-01
Recent functional neuroimaging and lesion studies demonstrate the involvement of the orbitofrontal/ventromedial prefrontal cortex as a key structure in decision making processes. This region seems to be particularly crucial when contingencies between options and consequences are unknown but have to be learned by the use of feedback following previous decisions (decision making under ambiguity). However, little is known about the neural correlates of decision making under risk conditions in which information about probabilities and potential outcomes is given. In the present study, we used functional magnetic resonance imaging to measure blood-oxygenation-level-dependent (BOLD) responses in 12 subjects during a decision making task. This task provided explicit information about probabilities and associated potential incentives. The responses were compared to BOLD signals in a control condition without information about incentives. In contrast to previous decision making studies, we completely removed the outcome phase following a decision to exclude the potential influence of feedback previously received on current decisions. The results indicate that the integration of information about probabilities and incentives leads to activations within the dorsolateral prefrontal cortex, the posterior parietal lobe, the anterior cingulate and the right lingual gyrus. We assume that this pattern of activation is due to the involvement of executive functions, conflict detection mechanisms and arithmetic operations during the deliberation phase of decisional processes that are based on explicit information.
Effects of stop-signal probability in the stop-signal paradigm: the N2/P3 complex further validated.
Ramautar, J R; Kok, A; Ridderinkhof, K R
2004-11-01
The aim of this study was to examine the effects of the frequency of occurrence of stop signals in the stop-signal paradigm. Presenting stop signals less frequently resulted in faster reaction times to the go stimulus and a lower probability of inhibition. Also, go stimuli elicited larger and somewhat earlier P3 responses when stop signals occurred less frequently. Since the amplitude effect was more pronounced on trials in which go signals were followed by fast rather than slow reactions, it probably reflected a stronger set to produce fast responses. N2 and P3 components to stop signals were observed to be larger and of longer latency when stop signals occurred less frequently. The amplitude enhancement of these N2 and P3 components was more pronounced for unsuccessful than for successful stop-signal trials. Moreover, successfully inhibited stop trials elicited a frontocentral P3, whereas unsuccessfully inhibited stop trials elicited a more posterior P3 that resembled the classical P3b. P3 amplitude in the unsuccessfully inhibited condition also differed between waveforms synchronized with the stop signal and waveforms synchronized with response onset, whereas N2 amplitude did not. Taken together, these findings suggest that N2 reflected a greater significance of failed inhibitions after low-probability stop signals, while P3 reflected continued processing of the erroneous response after response execution.
Constructive Epistemic Modeling: A Hierarchical Bayesian Model Averaging Method
NASA Astrophysics Data System (ADS)
Tsai, F. T. C.; Elshall, A. S.
2014-12-01
Constructive epistemic modeling is the idea that our understanding of a natural system through a scientific model is a mental construct that continually develops through learning about and from the model. Using the hierarchical Bayesian model averaging (HBMA) method [1], this study shows that segregating different uncertain model components through a BMA tree of posterior model probabilities, model prediction, within-model variance, between-model variance and total model variance serves as a learning tool [2]. First, the BMA tree of posterior model probabilities permits the comparative evaluation of the candidate propositions of each uncertain model component. Second, systemic model dissection is imperative for understanding the individual contribution of each uncertain model component to the model prediction and variance. Third, the hierarchical representation of the between-model variance facilitates the prioritization of the contribution of each uncertain model component to the overall model uncertainty. We illustrate these concepts using the groundwater modeling of a siliciclastic aquifer-fault system. The sources of uncertainty considered are from geological architecture, formation dip, boundary conditions and model parameters. The study shows that the HBMA analysis helps in advancing knowledge about the model rather than forcing the model to fit a particular understanding or merely averaging several candidate models. [1] Tsai, F. T.-C., and A. S. Elshall (2013), Hierarchical Bayesian model averaging for hydrostratigraphic modeling: Uncertainty segregation and comparative evaluation. Water Resources Research, 49, 5520-5536, doi:10.1002/wrcr.20428. [2] Elshall, A.S., and F. T.-C. Tsai (2014). Constructive epistemic modeling of groundwater flow with geological architecture and boundary condition uncertainty under Bayesian paradigm, Journal of Hydrology, 517, 105-119, doi: 10.1016/j.jhydrol.2014.05.027.
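The variance segregation at the heart of HBMA can be illustrated for a single level of the BMA tree: the model-averaged prediction is the probability-weighted mean, and the total predictive variance splits into a within-model term and a between-model term. A minimal sketch (illustrative, not the HBMA software):

```python
import numpy as np

def bma_moments(means, variances, probs):
    """Combine per-model predictive means and variances under posterior
    model probabilities. Returns (BMA mean, within-model variance,
    between-model variance, total variance)."""
    probs = np.asarray(probs, dtype=float)
    probs = probs / probs.sum()                  # normalise model probabilities
    means = np.asarray(means, dtype=float)
    variances = np.asarray(variances, dtype=float)
    m = np.sum(probs * means)                    # model-averaged prediction
    within = np.sum(probs * variances)           # average within-model variance
    between = np.sum(probs * (means - m) ** 2)   # spread of the model means
    return m, within, between, within + between
```

Applying this decomposition recursively, level by level of the tree, is what lets HBMA attribute the overall uncertainty to each uncertain model component (geology, dip, boundary conditions, parameters).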
Sea Ice Concentration Estimation Using Active and Passive Remote Sensing Data Fusion
NASA Astrophysics Data System (ADS)
Zhang, Y.; Li, F.; Zhang, S.; Zhu, T.
2017-12-01
In this abstract, a decision-level fusion method utilizing SAR and passive microwave remote sensing data for sea ice concentration estimation is investigated. Sea ice concentration products from passive microwave retrieval methods have large uncertainty within thin ice zones. Passive microwave data, including SSM/I, AMSR-E, and AMSR-2, provide daily, long time-series observations covering the whole polar sea ice scene, while SAR images provide rich sea ice detail at high spatial resolution, including deformation and polarimetric features. The proposed method draws on the merits of both passive microwave and SAR data. Sea ice concentration products from ASI and sea ice category labels derived within a CRF framework from SAR imagery are calibrated under a least-distance protocol. For SAR imagery, incident angle and azimuth angle were used to correct backscattering values from slant range to ground range in order to improve geocoding accuracy. The posterior probability distribution between the category label from SAR imagery and the passive microwave sea ice concentration product is modeled and integrated in a Bayesian network, where a Gaussian statistical distribution from the ASI sea ice concentration products serves as the prior term, representing the uncertainty of sea ice concentration. An empirical-model-based likelihood term is constructed under Bernoulli theory, meeting the non-negativity and monotonicity conditions. In the posterior probability estimation procedure, the final sea ice concentration is obtained using the MAP criterion, which is equivalent to minimizing the cost function and can be calculated with a nonlinear iteration method. The proposed algorithm is tested on multiple satellite SAR data sets including GF-3, Sentinel-1A, RADARSAT-2, and Envisat ASAR. Results show that the proposed algorithm can improve the accuracy of the ASI sea ice concentration products and reduce the uncertainty along the ice edge.
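The MAP combination of a Gaussian prior (from the passive-microwave product) with a likelihood term can be sketched on a discrete concentration grid. The likelihood here is a stand-in supplied by the caller, not the Bernoulli-based empirical model of the abstract, and the grid search replaces the nonlinear iteration:

```python
import numpy as np

def map_concentration(c_grid, prior_mean, prior_sd, loglik):
    """MAP estimate on a discrete concentration grid: Gaussian prior
    (e.g. from the ASI passive-microwave product, whose spread encodes
    its uncertainty) plus a caller-supplied log-likelihood term."""
    log_prior = -0.5 * ((c_grid - prior_mean) / prior_sd) ** 2
    log_post = log_prior + loglik(c_grid)
    return c_grid[np.argmax(log_post)]  # MAP = argmax of the log-posterior
```

With Gaussian prior and likelihood the MAP lands between the two sources, weighted by their precisions; a sharper SAR-derived likelihood pulls the estimate away from an uncertain thin-ice prior, which is the effect the fusion is designed to exploit.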
Identification of transmissivity fields using a Bayesian strategy and perturbative approach
NASA Astrophysics Data System (ADS)
Zanini, Andrea; Tanda, Maria Giovanna; Woodbury, Allan D.
2017-10-01
The paper deals with the crucial problem of groundwater parameter estimation, which is the basis for efficient modeling and reclamation activities. A hierarchical Bayesian approach is developed: it uses Akaike's Bayesian Information Criterion to estimate the hyperparameters (related to the chosen covariance model) and to quantify the unknown noise variance. The transmissivity identification proceeds in two steps: the first, called empirical Bayesian interpolation, uses Y* (Y = lnT) observations to interpolate Y values on a specified grid; the second, called empirical Bayesian update, improves the previous Y estimate through the addition of hydraulic head observations. The relationship between the head and lnT has been linearized through a perturbative solution of the flow equation. To test the proposed approach, synthetic aquifers from the literature have been considered. The aquifers in question contain a variety of boundary conditions (both Dirichlet and Neumann type) and scales of heterogeneity (σY2 = 1.0 and σY2 = 5.3). The estimated transmissivity fields were compared to the true ones. The joint use of Y* and head measurements improves the estimation of Y for both degrees of heterogeneity. Even if the variance of the strongly heterogeneous transmissivity field can be considered high for the application of the perturbative approach, the results show the same order of approximation as the non-linear methods proposed in the literature. The procedure allows computation of the posterior probability distribution of the target quantities and quantification of the uncertainty in the model prediction. Bayesian updating has advantages relative to both Monte Carlo (MC) and non-MC approaches: like MC methods, it computes the posterior probability distribution of the target quantities directly, and like non-MC methods it has computational times on the order of seconds.
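Because the head-lnT relation is linearized, the empirical Bayesian update step can be sketched as a standard linear-Gaussian (Kalman-style) update. The grid size, sensitivity matrix, covariances, and head residuals below are made-up placeholders, not values from the paper; the point is only the structure of the update.

```python
import numpy as np

# Minimal sketch (hypothetical numbers) of the two-step idea: a Gaussian prior
# on Y = ln T comes from the interpolation step, then is updated with head
# data through a *linearized* head-lnT relation h = H @ Y + e.  The posterior
# follows from the standard linear-Gaussian update formulas.
rng = np.random.default_rng(0)
n = 4                                        # illustrative grid of 4 cells
mu_prior = np.zeros(n)                       # prior mean of Y from interpolation
C_prior = 0.5 * np.eye(n) + 0.5              # assumed correlated prior covariance

H = rng.standard_normal((2, n))              # linearized sensitivity of heads to Y
R = 0.01 * np.eye(2)                         # head-observation noise covariance
h_obs = np.array([0.3, -0.1])                # hypothetical head residuals

# Kalman-style update: posterior mean and covariance of Y given head data.
S = H @ C_prior @ H.T + R
K = C_prior @ H.T @ np.linalg.inv(S)
mu_post = mu_prior + K @ (h_obs - H @ mu_prior)
C_post = C_prior - K @ H @ C_prior

print("posterior mean:", mu_post)
print("posterior variances:", np.diag(C_post))
```

The posterior variances are never larger than the prior ones, which is the formal counterpart of the abstract's finding that adding head measurements improves the estimation of Y.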
Harmouche, Rola; Subbanna, Nagesh K; Collins, D Louis; Arnold, Douglas L; Arbel, Tal
2015-05-01
In this paper, a fully automatic probabilistic method for multiple sclerosis (MS) lesion classification is presented, whereby the posterior probability density function over healthy tissues and two types of lesions (T1-hypointense and T2-hyperintense) is generated at every voxel. During training, the system explicitly models the spatial variability of the intensity distributions throughout the brain by first segmenting it into distinct anatomical regions and then building regional likelihood distributions for each tissue class based on multimodal magnetic resonance image (MRI) intensities. Local class smoothness is ensured by incorporating neighboring voxel information in the prior probability through Markov random fields. The system is tested on two datasets from real multisite clinical trials consisting of multimodal MRIs from a total of 100 patients with MS. Lesion classification results based on the framework are compared with and without the regional information, as well as with other state-of-the-art methods, against the labels from expert manual raters. The metrics for comparison include Dice overlap, sensitivity, and positive predictive rates for both voxel and lesion classifications. Statistically significant improvements in Dice values, in voxel-based and lesion-based sensitivity values, and in positive predictive rates are shown when the proposed method is compared to the method without regional information and to a widely used method [1]. This holds particularly true in the posterior fossa, an area where classification is very challenging. The proposed method allows us to provide clinicians with accurate tissue labels for T1-hypointense and T2-hyperintense lesions, two types of lesions that differ in appearance and clinical ramifications, and with a confidence level in the classification, which helps clinicians assess the classification results.
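A toy 1-D sketch of the general labeling idea behind such methods: a per-voxel Gaussian intensity likelihood is combined with a Markov-random-field smoothness prior, and the posterior labeling is maximized by iterated conditional modes (ICM). This illustrates the technique family only; the class means, noise level, and smoothness weight are invented, and the authors' pipeline is multimodal and regionally adaptive.

```python
import numpy as np

# Tiny 1-D illustration (not the authors' pipeline): combine a per-voxel
# Gaussian intensity likelihood with an MRF smoothness prior and maximize
# the posterior labeling with ICM sweeps.
rng = np.random.default_rng(5)
true_labels = np.repeat([0, 1, 0], [30, 30, 30])          # two tissue classes
means = np.array([0.0, 2.0])                              # class intensity means
intensity = means[true_labels] + rng.normal(0.0, 1.0, 90) # noisy observations

beta = 1.5                                                # MRF smoothness weight
labels = (intensity > 1.0).astype(int)                    # initial noisy labeling
for _ in range(10):                                       # ICM sweeps
    for i in range(len(labels)):
        best, best_energy = labels[i], np.inf
        for k in (0, 1):
            energy = 0.5 * (intensity[i] - means[k]) ** 2  # -log likelihood
            for j in (i - 1, i + 1):                       # neighbor disagreement
                if 0 <= j < len(labels):
                    energy += beta * (labels[j] != k)
            if energy < best_energy:
                best, best_energy = k, energy
        labels[i] = best

print("labeling accuracy:", (labels == true_labels).mean())
```

Without the MRF term, thresholding alone misclassifies a substantial fraction of voxels; the neighborhood prior removes most isolated errors, which is the role the Markov random field plays in the abstract.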
Epstein, Nancy E.
2014-01-01
What are the risks, benefits, alternatives, and pitfalls for operating on cervical ossification of the posterior longitudinal ligament (OPLL)? To successfully diagnose OPLL, it is important to obtain Magnetic Resonance Images (MR). These studies, particularly the T2 weighted images, provide the best soft-tissue documentation of cord/root compression and intrinsic cord abnormalities (e.g. edema vs. myelomalacia) on sagittal, axial, and coronal views. Obtaining Computed Tomographic (CT) scans is also critical, as they best demonstrate early OPLL, or hypertrophied posterior longitudinal ligament (HPLL: hypo-isodense with punctate ossification), or classic (frankly ossified) OPLL (hyperdense). Furthermore, CT scans reveal the "single layer" and "double layer" signs indicative of OPLL penetrating the dura. Documenting the full extent of OPLL with both MR and CT dictates whether anterior, posterior, or circumferential surgery is warranted. An adequate cervical lordosis allows for posterior cervical approaches (e.g. laminoplasty, laminectomy/fusion), which may facilitate addressing multiple levels while avoiding the risks of anterior procedures. However, without lordosis and with significant kyphosis, anterior surgery may be indicated. Rarely, this requires single/multilevel anterior cervical diskectomy/fusion (ACDF); as this approach typically fails to address retrovertebral OPLL, single or multilevel corpectomies are usually warranted. In short, successful OPLL surgery relies on careful patient selection (e.g. assessing comorbidities), accurate MR/CT documentation of OPLL, and weighing the pros, cons, and complications of these complex procedures when choosing the optimal surgical approach. Performing OPLL surgery requires stringent anesthetic protocols (awake intubation/positioning) and the following intraoperative monitoring: somatosensory evoked potentials (SSEP), motor evoked potentials (MEP), and electromyography (EMG). PMID:24843819
Material Properties of the Posterior Human Sclera
Grytz, Rafael; Fazio, Massimo A.; Girard, Michael J.A.; Libertiaux, Vincent; Bruno, Luigi; Gardiner, Stuart; Girkin, Christopher A.; Downs, J. Crawford
2013-01-01
To characterize the material properties of posterior and peripapillary sclera from human donors, and to investigate the macro- and micro-scale strains as potential control mechanisms governing mechanical homeostasis. Posterior scleral shells from 9 human donors aged 57–90 years were subjected to IOP elevations from 5 to 45 mmHg and the resulting full-field displacements were recorded using laser speckle interferometry. Eye-specific finite element models were generated based on experimentally measured scleral shell surface geometry and thickness. Inverse numerical analyses were performed to identify material parameters for each eye by matching experimental deformation measurements to model predictions using a microstructure-based constitutive formulation that incorporates the crimp response and anisotropic architecture of scleral collagen fibrils. The material property fitting produced models that fit both the overall and local deformation responses of posterior scleral shells very well. The nonlinear stiffening of the sclera with increasing IOP was well reproduced by the uncrimping of scleral collagen fibrils, and a circumferentially-aligned ring of collagen fibrils around the scleral canal was predicted in all eyes. Macroscopic in-plane strains were significantly higher in the peripapillary region than in the mid-periphery. In contrast, the meso- and micro-scale strains at the collagen network and collagen fibril level were not significantly different between regions. The elastic response of the posterior human sclera can be characterized by the anisotropic architecture and crimp response of scleral collagen fibrils. The similar collagen fibril strains in the peripapillary and mid-peripheral regions support the notion that the scleral collagen architecture, including the circumpapillary ring of collagen fibrils, evolved to establish optimal load bearing conditions at the collagen fibril level. PMID:23684352
Cacciamani, G; De Marco, V; Siracusano, S; De Marchi, D; Bizzotto, L; Cerruto, M A; Motton, G; Porcaro, A B; Artibani, W
2017-06-01
A training model is usually needed to teach robotic surgical technique successfully; ideally, such a model should mimic the "in vivo" procedure as closely as possible and allow several consecutive surgical simulations. The goal of this study was to create a "wet lab" model suitable for RARP training programs that provides simulation of the posterior fascial reconstruction. The second aim was to compare the original "Venezuelan" chicken model described by Sotelo to our training model. Our training model consists of performing an anastomosis between the proventriculus and the proximal portion of the esophagus, reproducing the "in vivo" surgical procedure of RARP. A posterior fascial reconstruction simulating Rocco's stitch is performed between the tissues located under the posterior surface of the esophagus and the serosa of the proventriculus. From 2014 to 2015, during 6 different full-immersion training courses, thirty-four surgeons performed the urethrovesical anastomosis using both our model and Sotelo's. After the training period, each surgeon was asked to fill out a non-validated questionnaire evaluating the differences between the two training models. Our model was judged the better model in terms of similarity with urethral tissue and with the anatomic unit of the urethra and pelvic wall. Our training model, as reported by all trainees, is easily reproducible and anatomically comparable with the urethrovesical anastomosis performed during radical prostatectomy in humans. It is suitable for performing the posterior fascial reconstruction reported by Rocco. In this context, our surgical training model could be routinely proposed in robotic training courses to develop specific expertise in urethrovesical anastomosis with reproduction of the Rocco stitch.
Lai, Yu-Shu; Chen, Wen-Chuan; Huang, Chang-Hung; Cheng, Cheng-Kung; Chan, Kam-Kong; Chang, Ting-Kuo
2015-01-01
Surgical reconstruction is generally recommended for posterior cruciate ligament (PCL) injuries; however, the use of grafts is still a controversial problem. In this study, a three-dimensional finite element model of the human tibiofemoral joint with articular cartilage layers, menisci, and four main ligaments was constructed to investigate the effects of graft strengths on knee kinematics and in-situ forces of PCL grafts. Nine different graft strengths with stiffness ranging from 0% (PCL rupture) to 200%, in increments of 25%, of an intact PCL’s strength were used to simulate the PCL reconstruction. A 100 N posterior tibial drawer load was applied to the knee joint at full extension. Results revealed that the maximum posterior translation of the PCL rupture model (0% stiffness) was 6.77 mm in the medial compartment, which resulted in tibial internal rotation of about 3.01°. After PCL reconstruction with any graft strength, the laxity of the medial tibial compartment was noticeably improved. Tibial translation and rotation were similar to the intact knee after PCL reconstruction with graft strengths ranging from 75% to 125% of an intact PCL. When the graft’s strength surpassed 150%, the medial tibia moved forward and external tibial rotation greatly increased. The in-situ forces generated in the PCL grafts ranged from 13.15 N to 75.82 N, depending on the stiffness. In conclusion, the strength of the PCL graft has a noticeable effect on anterior-posterior translation of the medial tibial compartment and on its in-situ force. A kinematic response similar to the intact knee may occur when the PCL graft’s strength lies between 75% and 125% of an intact PCL. PMID:26001045
Ioannidis, Alexis; Bindl, Andreas
2016-04-01
Only a few studies exist that assess the clinical long-term behavior of all-ceramic FDPs in the posterior region. The aim of the present prospective clinical study was to evaluate the clinical performance of posterior three-unit FDPs manufactured from Y-TZP after a service period of up to 10 years. 55 patients received 59 three-unit FDPs in the posterior region of the maxilla or mandible. Abutment teeth were prepared and full-arch impressions were taken. Definitive casts were fabricated and optically scanned. Frameworks were fabricated with computer-aided design (CAD) and manufacturing (CAM) technology. Y-TZP frameworks were veneered and adhesively luted to the abutment teeth. Baseline and follow-up examinations (service time: ≥ 48 months) were recorded by applying modified United States Public Health Service (USPHS) rating criteria. The cumulative survival rate was analyzed with Kaplan-Meier, and the percentages of biological and technical complications were calculated. Fifty-three patients with 57 FDPs attended the last follow-up visit, and a mean observation period of 6.3 ± 1.9 years was calculated. Biological complications occurred in 17.5% and technical complications in 28% of the FDPs. The 10-year cumulative survival rate amounted to 85.0%. Three FDPs failed to survive: two due to a root fracture of the abutment tooth and one due to secondary caries. Three-unit FDPs made from Y-TZP and veneered with ceramic offer a treatment option, albeit with a high rate of chipping; however, manufacturing processes have since been modified to avoid this complication. The results of the present investigation suggest that three-unit Y-TZP posterior FDPs are a possible treatment option, although a high rate of chipping can be expected. Copyright © 2016. Published by Elsevier Ltd.
Understanding seasonal variability of uncertainty in hydrological prediction
NASA Astrophysics Data System (ADS)
Li, M.; Wang, Q. J.
2012-04-01
Understanding uncertainty in hydrological prediction can be highly valuable for improving the reliability of streamflow prediction. In this study, a monthly water balance model, WAPABA, is combined with Bayesian error models to investigate the seasonal dependency of the prediction error structure. A seasonally invariant error model, analogous to traditional time series analysis, uses constant parameters for the model error and accounts for no seasonal variation. In contrast, a seasonally variant error model uses a different set of bias, variance, and autocorrelation parameters for each calendar month; because potential connections among model parameters from similar months are not considered, this model could suffer from over-fitting and over-parameterization. A hierarchical error model further applies distributional restrictions on the model parameters within a Bayesian hierarchical framework. An iterative algorithm is implemented to expedite the maximum a posteriori (MAP) estimation of the hierarchical error model. The three error models are applied to forecasting streamflow at a catchment in southeast Australia in a cross-validation analysis. This study also presents a number of statistical measures and graphical tools to compare the predictive skills of the different error models. From probability integral transform histograms and other diagnostic graphs, the hierarchical error model shows better reliability than the seasonally invariant error model. The hierarchical error model also generally provides the most accurate mean prediction in terms of the Nash-Sutcliffe model efficiency coefficient and the best probabilistic prediction in terms of the continuous ranked probability score (CRPS). The model parameters of the seasonally variant error model are very sensitive to each cross-validation fold, while the hierarchical error model produces much more robust and reliable parameters. Furthermore, the results of the hierarchical error model show that most model parameters are not seasonally variant, with the exception of the error bias. The seasonally variant error model is likely to use more parameters than necessary to maximize the posterior likelihood. This flexibility and robustness indicate that the hierarchical error model has great potential for future streamflow predictions.
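The hierarchical idea can be sketched with a toy Gaussian shrinkage example: each calendar month gets its own error bias, but the biases share a common hyper-distribution, so monthly estimates are shrunk toward each other instead of being fitted independently. All numbers below are illustrative, not from the paper, and the hyper-parameters are fixed rather than estimated.

```python
import numpy as np

# Toy sketch of the hierarchical error model idea: monthly biases
# b_m ~ N(mu, tau^2) share a hyper-distribution, guarding against the
# over-fitting of a fully seasonally variant model.  Hypothetical numbers.
rng = np.random.default_rng(1)
n_years, sigma = 20, 1.0                   # residual noise sd per observation
true_bias = rng.normal(0.0, 0.3, 12)       # latent monthly biases
resid = true_bias[:, None] + rng.normal(0.0, sigma, (12, n_years))

mu, tau = resid.mean(), 0.3                # hyper-mean and assumed hyper-sd
# Conditional MAP of each monthly bias: a precision-weighted average of the
# monthly sample mean and the hyper-mean (standard Gaussian shrinkage).
w = (n_years / sigma**2) / (n_years / sigma**2 + 1.0 / tau**2)
b_map = w * resid.mean(axis=1) + (1 - w) * mu

print("shrinkage weight:", round(w, 3))
print("MAP monthly biases:", np.round(b_map, 2))
```

The shrunk estimates vary less across months than the raw monthly means, mirroring the abstract's finding that the hierarchical model yields more robust parameters than the fully seasonally variant one.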
NASA Astrophysics Data System (ADS)
Liu, Y.; Pau, G. S. H.; Finsterle, S.
2015-12-01
Parameter inversion involves inferring model parameter values from sparse observations of some observables. To infer the posterior probability distributions of the parameters, Markov chain Monte Carlo (MCMC) methods are typically used. However, the large number of forward simulations needed, combined with limited computational resources, restricts the complexity of the hydrological model that can be used in these methods. In view of this, we studied the implicit sampling (IS) method, an efficient importance sampling technique that generates samples in the high-probability region of the posterior distribution and thus reduces the number of forward simulations that need to be run. For a pilot-point inversion of a heterogeneous permeability field based on a synthetic ponded infiltration experiment simulated with TOUGH2 (a subsurface modeling code), we showed that IS with a linear map provides an accurate Bayesian description of the parameterized permeability field at the pilot points with only approximately 500 forward simulations. We further studied the use of surrogate models to improve the computational efficiency of parameter inversion. We implemented two reduced-order models (ROMs) for the TOUGH2 forward model. One is based on polynomial chaos expansion (PCE), whose coefficients are obtained using the sparse Bayesian learning technique to mitigate the "curse of dimensionality" of the PCE terms. The other is Gaussian process regression (GPR), for which different covariance, likelihood, and inference models are considered. Preliminary results indicate that ROMs constructed over the prior parameter space perform poorly, so it is impractical to replace the hydrological model with a ROM directly in an MCMC method. However, the IS method can work with a ROM constructed for parameters in the close vicinity of the maximum a posteriori probability (MAP) estimate. We will discuss the accuracy and computational efficiency of using ROMs in the implicit sampling procedure for the hydrological problem considered. This work was supported, in part, by the U.S. Dept. of Energy under Contract No. DE-AC02-05CH11231.
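Implicit sampling with a linear map can be sketched on a 1-D toy posterior: samples are drawn from a Gaussian centred at the MAP with the inverse Hessian as variance, then re-weighted so the weighted ensemble targets the true (slightly non-Gaussian) posterior. The toy negative log-posterior below is an invented stand-in, not the authors' TOUGH2 setup.

```python
import numpy as np

# Sketch of implicit sampling with a linear map on a 1-D toy posterior.
# F(x) = -log posterior; samples come from a Gaussian approximation at the
# MAP and are re-weighted by the mismatch between F and its approximation.
def F(x):
    # Toy negative log-posterior: slightly non-Gaussian (quartic term).
    return 0.5 * x**2 + 0.1 * x**4

# MAP and Hessian at the MAP (here both analytic: x0 = 0, F''(0) = 1).
x0, hess = 0.0, 1.0
L = 1.0 / np.sqrt(hess)                    # "linear map" factor

rng = np.random.default_rng(2)
xi = rng.standard_normal(5000)             # reference Gaussian samples
x = x0 + L * xi                            # mapped samples
logw = F(x0) + 0.5 * xi**2 - F(x)          # implicit-sampling weights
w = np.exp(logw - logw.max())
w /= w.sum()

post_mean = np.sum(w * x)                  # weighted posterior-mean estimate
ess = 1.0 / np.sum(w**2)                   # effective sample size
print(f"posterior mean ~ {post_mean:.3f}, ESS = {ess:.0f} of {x.size}")
```

Because the proposal already sits in the high-probability region of the posterior, the effective sample size stays close to the raw sample count, which is exactly why IS needs far fewer forward simulations than a plain MCMC run.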
Tribus, Clifford B; Garvey, Kathleen E
2003-05-15
A case report describes unilateral complete laminar erosion of the caudal thoracic spine and a late-presenting infection in a patient 10 years after anteroposterior reconstruction for scoliosis. To present an unusual but significant complication that may occur after implantation of spinal instrumentation. The reported patient presented with a deep infection and persistent back pain 10 years after successful anteroposterior reconstruction for adult idiopathic scoliosis. Delayed-onset infections after implantation of spinal instrumentation are infrequent, yet when present often require hardware removal. The case of a 51-year-old woman who underwent irrigation and debridement for a late-presenting infection and removal of posterior hardware 10 years after her index procedure is presented. Intraoperatively, it was noted that full-thickness laminar erosion was present from T4 to T12. The patient was taken to the operating room for wound irrigation, debridement, and hardware removal. It was discovered that a Cotrel-Dubousset rod placed on the convexity of the curve had completely eroded through the lamina of T7-T12. Infectious material was found along the entire length of both the convex and concave Cotrel-Dubousset rods. Intraoperative cultures grew Staphylococcus epidermidis and Propionibacterium acnes. Intravenous and oral antibiotics were administered, resulting in resolution of the infection and preoperative pain. The exact role of the late-presenting infection in the laminar erosion and rod migration seen in this case remains to be elucidated. However, the authors believe the primary cause of bony erosion was mechanical in origin. Regardless, most spine surgeons will treat many patients who have had posterior spinal implants and will perform hardware removal on a significant number of these patients during their careers. A full-thickness laminar erosion exposes the spinal cord to traumatic injury during hardware removal and debridement.
This case is presented as a cautionary note to help surgeons become cognizant of a potentially devastating complication.
Li, Shi; Mukherjee, Bhramar; Batterman, Stuart; Ghosh, Malay
2013-12-01
Case-crossover designs are widely used to study short-term exposure effects on the risk of acute adverse health events. While the frequentist literature on this topic is vast, there is no Bayesian work in this general area. The contribution of this paper is twofold. First, the paper establishes Bayesian equivalence results that require characterization of the set of priors under which the posterior distributions of the risk ratio parameters based on a case-crossover and time-series analysis are identical. Second, the paper studies inferential issues under case-crossover designs in a Bayesian framework. Traditionally, a conditional logistic regression is used for inference on risk-ratio parameters in case-crossover studies. We consider instead a more general full likelihood-based approach which makes less restrictive assumptions on the risk functions. Formulation of a full likelihood leads to growth in the number of parameters proportional to the sample size. We propose a semi-parametric Bayesian approach using a Dirichlet process prior to handle the random nuisance parameters that appear in a full likelihood formulation. We carry out a simulation study to compare the Bayesian methods based on full and conditional likelihood with the standard frequentist approaches for case-crossover and time-series analysis. The proposed methods are illustrated through the Detroit Asthma Morbidity, Air Quality and Traffic study, which examines the association between acute asthma risk and ambient air pollutant concentrations. © 2013, The International Biometric Society.
Estimated Probability of a Cervical Spine Injury During an ISS Mission
NASA Technical Reports Server (NTRS)
Brooker, John E.; Weaver, Aaron S.; Myers, Jerry G.
2013-01-01
Introduction: The Integrated Medical Model (IMM) utilizes historical data, cohort data, and external simulations as input factors to provide estimates of crew health, resource utilization, and mission outcomes. The Cervical Spine Injury Module (CSIM) is an external simulation designed to provide the IMM with parameter estimates for 1) a probability distribution function (PDF) of the incidence rate, 2) the mean incidence rate, and 3) the standard deviation associated with the mean, resulting from injury/trauma of the neck. Methods: An injury mechanism based on an idealized low-velocity blunt impact to the superior posterior thorax of an ISS crewmember was used as the simulated mission environment. As a result of this impact, the cervical spine is inertially loaded by the mass of the head, producing an extension-flexion motion that deforms the soft tissues of the neck. A multibody biomechanical model was developed to estimate the kinematic and dynamic response of the head-neck system from a prescribed acceleration profile. Logistic regression was performed on a dataset containing AIS1 soft tissue neck injuries from rear-end automobile collisions with published Neck Injury Criterion values, producing an injury transfer function (ITF). An injury event scenario (IES) was constructed in which crew member 1, moving through a primary or standard translation path while transferring large-volume equipment, impacts stationary crew member 2. The incidence rate for this IES was estimated from in-flight data and used to calculate the probability of occurrence. The uncertainty in the model input factors was estimated from representative datasets and expressed in terms of probability distributions. A Monte Carlo method utilizing simple random sampling was employed to propagate both aleatory and epistemic uncertain factors. Scatterplots and partial correlation coefficients (PCC) were generated to determine input factor sensitivity.
CSIM was developed in the SimMechanics/Simulink environment with a Monte Carlo wrapper (MATLAB) used to integrate the components of the module. Results: The probability of generating an AIS1 soft tissue neck injury from the extension/flexion motion induced by a low-velocity blunt impact to the superior posterior thorax was fitted with a lognormal PDF with mean 0.26409, standard deviation 0.11353, standard error of the mean 0.00114, and 95% confidence interval [0.26186, 0.26631]. Combining the probability of an AIS1 injury with the probability of IES occurrence yielded a distribution fitted with a Johnson SI PDF with mean 0.02772, standard deviation 0.02012, standard error of the mean 0.00020, and 95% confidence interval [0.02733, 0.02812]. The input factor sensitivities, in descending order, were: IES incidence rate, ITF regression coefficient 1, impactor initial velocity, and ITF regression coefficient 2, with all others (equipment mass, crew 1 body mass, crew 2 body mass) insignificant. Verification and Validation (V&V): The IMM V&V process, based upon NASA STD 7009, was implemented, including an assessment of the data sets used to build CSIM. The documentation maintained includes source code comments and a technical report. The software code and documentation are under Subversion configuration management. Kinematic validation was performed by comparing the biomechanical model output to established corridors.
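The core propagation step can be sketched in a few lines: uncertainty in the logistic injury transfer function and in the event incidence rate is sampled by simple random sampling, and the two probabilities are combined by multiplication. Every distribution and parameter value below is a hypothetical placeholder, not CSIM's calibrated inputs.

```python
import numpy as np

# Hedged sketch of the module's core idea (all parameter values hypothetical):
# propagate uncertain ITF coefficients and an uncertain incidence rate by
# simple random sampling, then combine injury and occurrence probabilities.
rng = np.random.default_rng(3)
n = 100_000

# Injury transfer function: P(injury | NIC) = 1 / (1 + exp(-(b0 + b1*nic))),
# with uncertain logistic-regression coefficients (hypothetical means/sds).
b0 = rng.normal(-4.0, 0.3, n)
b1 = rng.normal(0.25, 0.03, n)
nic = rng.normal(12.0, 2.0, n)              # criterion value from the impact model
p_injury = 1.0 / (1.0 + np.exp(-(b0 + b1 * nic)))

# Event-occurrence probability from an uncertain incidence rate over one
# mission (Poisson model, hypothetical numbers).
rate = rng.lognormal(mean=np.log(0.2), sigma=0.5, size=n)   # events / mission
p_event = 1.0 - np.exp(-rate)

p_total = p_injury * p_event
print(f"mean combined probability: {p_total.mean():.4f}")
print(f"95% interval: [{np.quantile(p_total, 0.025):.4f}, "
      f"{np.quantile(p_total, 0.975):.4f}]")
```

The resulting ensemble of combined probabilities is what one would then fit with a parametric PDF (as the abstract does with lognormal and Johnson distributions) and probe with scatterplots and partial correlation coefficients for sensitivity.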
Aphasia in border-zone infarcts has a specific initial pattern and good long-term prognosis.
Flamand-Roze, C; Cauquil-Michon, C; Roze, E; Souillard-Scemama, R; Maintigneux, L; Ducreux, D; Adams, D; Denier, C
2011-12-01
While border-zone infarcts (BZI) account for about 10% of strokes, studies on the related aphasia are infrequent. The aim of this work was to define more specifically its early clinical pattern and evolution. We prospectively studied consecutive patients referred to our stroke unit within a 2-year period. Cases of aphasia in right-handed patients associated with an MRI-confirmed left-sided hemispheric BZI were included. These patients had a standardized language examination in the first 48 h, at discharge from the stroke unit, and between 6 and 18 months later. Eight patients were included. Three had anterior (MCA/ACA), two posterior (MCA/PCA), two both anterior and posterior, and one bilateral BZI. All patients initially presented with transcortical mixed aphasia, characterized by comprehension and naming difficulties with preserved repetition. In all patients, the aphasia rapidly improved. It fully recovered within a few days in three patients. Initial improvement was marked, although incomplete, in the five remaining patients: their aphasias evolved according to the stroke location, toward transcortical motor aphasia in the three patients with anterior BZI and transcortical sensory aphasia in the two patients with posterior BZI. All patients made a full language recovery within 18 months after stroke. We report a specific aphasic pattern associated with hemispheric BZI, including an excellent long-term outcome. These findings appear relevant for (i) clinically suspecting BZI and (ii) planning rehabilitation and informing the patient and family of the likelihood of full language recovery. © 2011 The Author(s). European Journal of Neurology © 2011 EFNS.
Martin, Raul
2018-01-01
Current corneal assessment technologies make the process of corneal evaluation extremely fast and simple, and several devices and technologies allow clinicians to explore the cornea and to manage patients. The purpose of this special issue is to present an update on the evaluation of the cornea and ocular surface; this second part reviews corneal topography and tomography techniques, providing updated information on the clinical recommendations for these techniques in eye care practice. Placido-based topographers initiated anterior corneal surface analysis and enabled the development of current corneal tomographers, which provide a full three-dimensional reconstruction of the cornea, including elevation, curvature, and pachymetry data for the anterior and posterior corneal surfaces. Although there is no accepted reference-standard technology for corneal topography and it is not possible to determine which device produces the most accurate topographic measurements, Placido-based topographers are a valuable technology for primary eye care, and corneal tomographers expand the possibilities for exploring the cornea and anterior eye, facilitating diagnosis and follow-up in several situations and improving knowledge of corneal anatomy. The main disadvantages of Placido-based topographers are the absence of information about the posterior corneal surface and limited corneal coverage, with no data from the para-central and/or peripheral corneal surface. In contrast, corneal tomographers show repeatable anterior and posterior corneal surface measurements and provide full corneal thickness data, improving corneal and anterior segment assessment. However, differences between devices suggest that they are not interchangeable in clinical practice. PMID:29480244
Target intersection probabilities for parallel-line and continuous-grid types of search
McCammon, R.B.
1977-01-01
The expressions for calculating the probability of intersection of hidden targets of different sizes and shapes for parallel-line and continuous-grid types of search can be formulated using the concept of conditional probability. When the prior probability of the orientation of a hidden target is represented by a uniform distribution, the calculated posterior probabilities are identical with the results obtained by the classic methods of probability. For hidden targets of different sizes and shapes, the following generalizations about the probability of intersection can be made: (1) to a first approximation, the probability of intersection of a hidden target is proportional to the ratio of the greatest dimension of the target (viewed in plane projection) to the minimum line spacing of the search pattern; (2) the shape of the hidden target does not greatly affect the probability of intersection when the largest dimension of the target is small relative to the minimum spacing of the search pattern; (3) the probability of intersecting a target twice for a particular type of search can be used as a lower bound if there is an element of uncertainty of detection for a particular type of tool; (4) the geometry of the search pattern becomes more critical when the largest dimension of the target equals or exceeds the minimum spacing of the search pattern; (5) for elongate targets, the probability of intersection is greater for parallel-line search than for an equivalent continuous square-grid search when the largest dimension of the target is less than the minimum spacing of the search pattern, whereas the opposite is true when the largest dimension exceeds the minimum spacing; (6) the probability of intersection for nonorthogonal continuous-grid search patterns is not greatly different from the probability of intersection for the equivalent orthogonal continuous-grid pattern when the orientation of the target is unknown.
The probability of intersection for an elliptically shaped target can be approximated by treating the ellipse as intermediate between a circle and a line. A search conducted along a continuous rectangular grid can be represented as intermediate between a search along parallel lines and along a continuous square grid. On this basis, an upper and lower bound for the probability of intersection of an elliptically shaped target for a continuous rectangular grid can be calculated. Charts have been constructed that permit the values for these probabilities to be obtained graphically. The use of conditional probability allows the explorationist greater flexibility in considering alternate search strategies for locating hidden targets. © 1977 Plenum Publishing Corp.
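Generalization (1) above — that the intersection probability is roughly proportional to the target's greatest dimension over the line spacing — can be checked with a small Monte Carlo sketch for a line-segment target and a parallel-line search (an assumed simplification; the paper's closed-form charts cover many more shapes):

```python
import math
import random

def intersection_probability(target_len, spacing, trials=200_000, seed=1):
    """Monte Carlo estimate of the chance that a randomly placed, randomly
    oriented line-segment target crosses a line of a parallel-line search
    pattern with the given spacing."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        theta = rng.uniform(0.0, math.pi)               # uniform prior on orientation
        y = rng.uniform(0.0, spacing)                   # midpoint offset between two lines
        half_span = 0.5 * target_len * math.sin(theta)  # extent perpendicular to the lines
        if y <= half_span or y >= spacing - half_span:  # segment reaches a bounding line
            hits += 1
    return hits / trials

# For a target smaller than the spacing the estimate tracks the Buffon-needle
# value 2L/(pi*S), i.e. proportional to target dimension over line spacing.
p_small = intersection_probability(0.1, 1.0)
p_large = intersection_probability(0.5, 1.0)
```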
Laptook, Abbot R; Shankaran, Seetha; Tyson, Jon E; Munoz, Breda; Bell, Edward F; Goldberg, Ronald N; Parikh, Nehal A; Ambalavanan, Namasivayam; Pedroza, Claudia; Pappas, Athina; Das, Abhik; Chaudhary, Aasma S; Ehrenkranz, Richard A; Hensman, Angelita M; Van Meurs, Krisa P; Chalak, Lina F; Khan, Amir M; Hamrick, Shannon E G; Sokol, Gregory M; Walsh, Michele C; Poindexter, Brenda B; Faix, Roger G; Watterberg, Kristi L; Frantz, Ivan D; Guillet, Ronnie; Devaskar, Uday; Truog, William E; Chock, Valerie Y; Wyckoff, Myra H; McGowan, Elisabeth C; Carlton, David P; Harmon, Heidi M; Brumbaugh, Jane E; Cotten, C Michael; Sánchez, Pablo J; Hibbs, Anna Maria; Higgins, Rosemary D
2017-10-24
Hypothermia initiated at less than 6 hours after birth reduces death or disability for infants with hypoxic-ischemic encephalopathy at 36 weeks' or later gestation. To our knowledge, hypothermia trials have not been performed in infants presenting after 6 hours. To estimate the probability that hypothermia initiated at 6 to 24 hours after birth reduces the risk of death or disability at 18 months among infants with hypoxic-ischemic encephalopathy. A randomized clinical trial was conducted between April 2008 and June 2016 among infants at 36 weeks' or later gestation with moderate or severe hypoxic-ischemic encephalopathy enrolled at 6 to 24 hours after birth. Twenty-one US Neonatal Research Network centers participated. Bayesian analyses were prespecified given the anticipated limited sample size. Targeted esophageal temperature was used in 168 infants. Eighty-three hypothermic infants were maintained at 33.5°C (acceptable range, 33°C-34°C) for 96 hours and then rewarmed. Eighty-five noncooled infants were maintained at 37.0°C (acceptable range, 36.5°C-37.3°C). The composite of death or disability (moderate or severe) at 18 to 22 months adjusted for level of encephalopathy and age at randomization. Hypothermic and noncooled infants were term (mean [SD], 39 [2] and 39 [1] weeks' gestation, respectively), and 47 of 83 (57%) and 55 of 85 (65%) were male, respectively. Both groups were acidemic at birth, predominantly transferred to the treating center with moderate encephalopathy, and were randomized at a mean (SD) of 16 (5) and 15 (5) hours for hypothermic and noncooled groups, respectively. The primary outcome occurred in 19 of 78 hypothermic infants (24.4%) and 22 of 79 noncooled infants (27.9%) (absolute difference, 3.5%; 95% CI, -1% to 17%). 
Bayesian analysis using a neutral prior indicated a 76% posterior probability of reduced death or disability with hypothermia relative to the noncooled group (adjusted posterior risk ratio, 0.86; 95% credible interval, 0.58-1.29). The probability that death or disability in cooled infants was at least 1%, 2%, or 3% less than noncooled infants was 71%, 64%, and 56%, respectively. Among term infants with hypoxic-ischemic encephalopathy, hypothermia initiated at 6 to 24 hours after birth compared with noncooling resulted in a 76% probability of any reduction in death or disability, and a 64% probability of at least 2% less death or disability at 18 to 22 months. Hypothermia initiated at 6 to 24 hours after birth may have benefit but there is uncertainty in its effectiveness. clinicaltrials.gov Identifier: NCT00614744.
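The posterior probability of benefit quoted above can be illustrated with a minimal conjugate sketch: independent flat Beta priors on each arm's event risk, updated with the reported counts. This is unadjusted (no covariate adjustment, no neutral informative prior), so it will not reproduce the trial's adjusted 76% exactly; it only shows the mechanics of the calculation.

```python
import random

def prob_cooling_better(events_t, n_t, events_c, n_c, draws=100_000, seed=7):
    """Posterior probability that event risk is lower in the treated arm,
    with independent flat Beta(1, 1) priors on each arm's risk."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(draws):
        p_t = rng.betavariate(1 + events_t, 1 + n_t - events_t)
        p_c = rng.betavariate(1 + events_c, 1 + n_c - events_c)
        if p_t < p_c:
            wins += 1
    return wins / draws

# Counts quoted above: 19/78 hypothermic vs 22/79 noncooled infants
p_benefit = prob_cooling_better(19, 78, 22, 79)
```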
Chen, Xiaoxian; Xia, Bin; Ge, Lihong
2015-04-21
Early transition from breastfeeding and non-nutritive sucking habits may be related to occlusofacial abnormalities as environmental factors. Previous studies have not taken into account the potential for interactions between feeding practice, non-nutritive sucking habits and occlusal traits. This study assessed the effects of breast-feeding duration, bottle-feeding duration and non-nutritive sucking habits on the occlusal characteristics of primary dentition in 3-6-year-old children in Peking city. This cross-sectional study was conducted via an examination of the occlusal characteristics of 734 children combined with a questionnaire completed by their parents/guardians. The examination was performed by a single, previously calibrated examiner and the following variables were evaluated: presence or absence of deep overbite, open bite, anterior crossbite, posterior crossbite, deep overjet, terminal plane relationship of the second primary molar, primary canine relationship, crowding and spacing. Univariate analysis and multiple logistic regressions were applied to analyze the associations. It was found that a short duration of breast-feeding (never or ≤ 6 months) was directly associated with posterior crossbite (OR = 3.13; 95% CI = 1.11-8.82; P = 0.031) and no maxillary space (OR = 1.63; 95% CI = 1.23-2.98; P = 0.038). In children breast-fed for ≤ 6 months, the probability of developing pacifier-sucking habits was 4 times that for those breast-fed for >6 months (OR = 4.21; 95% CI = 1.85-9.60; P = 0.0002). Children who were bottle-fed for over 18 months had a 1.45-fold higher risk of nonmesial step occlusion and a 1.43-fold higher risk of a class II canine relationship compared with those who were bottle-fed for up to 18 months.
Non-nutritive sucking habits were also found to affect occlusion: a prolonged digit-sucking habit increased the probability of an anterior open bite, while a pacifier-sucking habit was associated with excessive overjet and absence of lower arch developmental space. Breastfeeding duration was shown to be associated with the prevalence of posterior crossbite, no maxillary space in the deciduous dentition and development of a pacifier-sucking habit. Children who had a digit-sucking habit were more likely to develop an open bite.
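The odds ratios with 95% confidence intervals reported above come from 2x2 exposure/outcome tables. A minimal sketch of that calculation (Wald interval on the log odds ratio; the counts below are hypothetical, not the study's raw data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 table (exposed cases a, exposed controls b,
    unexposed cases c, unexposed controls d) with a Wald 95% CI."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts for illustration only:
# 12 exposed cases, 30 exposed controls, 8 unexposed cases, 60 unexposed controls
or_, lo, hi = odds_ratio_ci(12, 30, 8, 60)
```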
Rice, J P; Saccone, N L; Corbett, J
2001-01-01
The lod score method originated in a seminal article by Newton Morton in 1955. The method is broadly concerned with issues of power and the posterior probability of linkage, ensuring that a reported linkage has a high probability of being a true linkage. In addition, the method is sequential, so that pedigrees or lod curves may be combined from published reports to pool data for analysis. This approach has been remarkably successful for 50 years in identifying disease genes for Mendelian disorders. After discussing these issues, we consider the situation for complex disorders, where the maximum lod score (MLS) statistic shares some of the advantages of the traditional lod score approach but is limited by unknown power and the lack of sharing of the primary data needed to optimally combine analytic results. We may still learn from the lod score method as we explore new methods in molecular biology and genetic analysis to utilize the complete human DNA sequence and the cataloging of all human genes.
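The two-point lod score at the heart of Morton's method is simple to compute: the base-10 log of the likelihood of the observed meioses at a candidate recombination fraction theta versus free recombination (theta = 0.5), with a lod of 3 (1000:1 odds) as the classical evidence threshold. A minimal sketch for fully informative meioses:

```python
import math

def lod_score(recombinants, nonrecombinants, theta):
    """Two-point lod score: log10 likelihood of the meiotic data at
    recombination fraction theta versus free recombination (theta = 0.5)."""
    n = recombinants + nonrecombinants
    log_l_theta = (recombinants * math.log10(theta)
                   + nonrecombinants * math.log10(1.0 - theta))
    return log_l_theta - n * math.log10(0.5)

# 2 recombinants out of 20 informative meioses, evaluated at theta = 0.1:
z = lod_score(2, 18, 0.1)  # just above the classical threshold of 3
```

Because lod scores are log likelihood ratios, scores from independent pedigrees simply add, which is what makes the sequential pooling described above possible.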
Piéron’s Law and Optimal Behavior in Perceptual Decision-Making
van Maanen, Leendert; Grasman, Raoul P. P. P.; Forstmann, Birte U.; Wagenmakers, Eric-Jan
2012-01-01
Piéron’s Law is a psychophysical regularity in signal detection tasks that states that mean response times decrease as a power function of stimulus intensity. In this article, we extend Piéron’s Law to perceptual two-choice decision-making tasks, and demonstrate that the law holds as the discriminability between two competing choices is manipulated, even though the stimulus intensity remains constant. This result is consistent with predictions from a Bayesian ideal observer model. The model assumes that in order to respond optimally in a two-choice decision-making task, participants continually update the posterior probability of each response alternative, until the probability of one alternative crosses a criterion value. In addition to predictions for two-choice decision-making tasks, we extend the ideal observer model to predict Piéron’s Law in signal detection tasks. We conclude that Piéron’s Law is a general phenomenon that may be caused by optimality constraints. PMID:22232572
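The ideal-observer mechanism described above — accumulate posterior odds until one alternative crosses a criterion — can be sketched as a simplified random walk on the log posterior odds (an assumed discretization, not the authors' exact model). Easier discriminations produce larger mean increments and therefore faster decisions, the Piéron-like speedup:

```python
import random

def mean_decision_time(drift, criterion=3.0, trials=2000, seed=3):
    """Accumulate log posterior odds from noisy evidence samples until either
    alternative's criterion is crossed; return mean number of samples needed."""
    rng = random.Random(seed)
    total_steps = 0
    for _ in range(trials):
        log_odds, steps = 0.0, 0
        while abs(log_odds) < criterion:
            log_odds += rng.gauss(drift, 1.0)  # mean increment = discriminability
            steps += 1
        total_steps += steps
    return total_steps / trials

# Higher discriminability -> faster mean decisions, stimulus intensity constant
t_easy = mean_decision_time(0.5)
t_hard = mean_decision_time(0.1)
```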
Tomaiuolo, F; MacDonald, J D; Caramanos, Z; Posner, G; Chiavaras, M; Evans, A C; Petrides, M
1999-09-01
The pars opercularis occupies the posterior part of the inferior frontal gyrus. Electrical stimulation or damage of this region interferes with language production. The present study investigated the morphology and morphometry of the pars opercularis in 108 normal adult human cerebral hemispheres by means of magnetic resonance imaging. The brain images were transformed into a standardized proportional stereotaxic space (i.e. that of Talairach and Tournoux) in order to minimize interindividual brain size variability. There was considerable variability in the shape and location of the pars opercularis across brains and between cerebral hemispheres. There was no significant difference or correlation between left and right hemisphere grey matter volumes. There was also no significant difference between sex and side of asymmetry of the pars opercularis. A probability map of the pars opercularis was constructed by averaging its location and extent in each individual normalized brain into Talairach space to aid in localization of activity changes in functional neuroimaging studies.
A method of real-time fault diagnosis for power transformers based on vibration analysis
NASA Astrophysics Data System (ADS)
Hong, Kaixing; Huang, Hai; Zhou, Jianping; Shen, Yimin; Li, Yujie
2015-11-01
In this paper, a novel probability-based classification model is proposed for real-time fault detection of power transformers. First, the transformer vibration principle is introduced, and two effective feature extraction techniques are presented. Next, the details of the classification model based on support vector machine (SVM) are shown. The model also includes a binary decision tree (BDT) which divides transformers into different classes according to health state. The trained model produces posterior probabilities of membership to each predefined class for a tested vibration sample. During the experiments, the vibrations of transformers under different conditions are acquired, and the corresponding feature vectors are used to train the SVM classifiers. The effectiveness of this model is illustrated experimentally on typical in-service transformers. The consistency between the results of the proposed model and the actual condition of the test transformers indicates that the model can be used as a reliable method for transformer fault detection.
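The key output described above is a posterior probability of membership to each predefined health state for a tested vibration sample. The paper obtains these from SVM classifiers; as a lighter stand-in, the sketch below shows the same posterior-normalization step using Bayes' rule with Gaussian class-conditional likelihoods on a scalar feature (the class names and statistics are hypothetical):

```python
import math

def class_posteriors(x, class_stats, priors):
    """Posterior probability of membership to each predefined health state
    for a scalar vibration feature, assuming Gaussian class likelihoods."""
    def gauss_pdf(v, mu, sigma):
        return math.exp(-0.5 * ((v - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
    joint = {c: priors[c] * gauss_pdf(x, *class_stats[c]) for c in class_stats}
    z = sum(joint.values())          # evidence; normalizing makes posteriors sum to 1
    return {c: j / z for c, j in joint.items()}

# Hypothetical per-class feature statistics (mean, std) and equal priors:
stats = {"healthy": (1.0, 0.3), "winding-fault": (2.0, 0.4)}
post = class_posteriors(1.1, stats, {"healthy": 0.5, "winding-fault": 0.5})
```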
Risk forewarning model for rice grain Cd pollution based on Bayes theory.
Wu, Bo; Guo, Shuhai; Zhang, Lingyan; Li, Fengmei
2018-03-15
Cadmium (Cd) pollution of rice grain caused by Cd-contaminated soils is a common problem in southwest and central south China. In this study, utilizing the advantages of the Bayes classification statistical method, we established a risk forewarning model for rice grain Cd pollution, and put forward two parameters (the prior probability factor and data variability factor). The sensitivity analysis of the model parameters illustrated that sample size and standard deviation influenced the accuracy and applicable range of the model. The accuracy of the model was improved by the self-renewal of the model through adding the posterior data into the prior data. Furthermore, this method can be used to predict the risk probability of rice grain Cd pollution under similar soil environment, tillage and rice varietal conditions. The Bayes approach thus represents a feasible method for risk forewarning of heavy metals pollution of agricultural products caused by contaminated soils. Copyright © 2017 Elsevier B.V. All rights reserved.
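The self-renewal idea above — folding newly observed data back into the prior so the next forecast uses an enlarged evidence base — is easiest to see in a conjugate Beta-binomial sketch (a simplification of the paper's classification model; the counts are hypothetical):

```python
def beta_update(alpha, beta, polluted, clean):
    """Self-renewal step: fold new field observations into the Beta prior
    pseudo-counts so the next forecast uses the enlarged evidence base."""
    return alpha + polluted, beta + clean

def risk_probability(alpha, beta):
    """Posterior mean probability that a grain sample exceeds the Cd limit."""
    return alpha / (alpha + beta)

# Hypothetical counts: prior pseudo-data of 3 polluted / 7 clean samples,
# then a new season contributes 12 polluted / 18 clean samples.
a, b = beta_update(3, 7, 12, 18)
risk = risk_probability(a, b)
```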
A Bayesian approach to microwave precipitation profile retrieval
NASA Technical Reports Server (NTRS)
Evans, K. Franklin; Turk, Joseph; Wong, Takmeng; Stephens, Graeme L.
1995-01-01
A multichannel passive microwave precipitation retrieval algorithm is developed. Bayes theorem is used to combine statistical information from numerical cloud models with forward radiative transfer modeling. A multivariate lognormal prior probability distribution contains the covariance information about hydrometeor distribution that resolves the nonuniqueness inherent in the inversion process. Hydrometeor profiles are retrieved by maximizing the posterior probability density for each vector of observations. The hydrometeor profile retrieval method is tested with data from the Advanced Microwave Precipitation Radiometer (10, 19, 37, and 85 GHz) of convection over ocean and land in Florida. The CP-2 multiparameter radar data are used to verify the retrieved profiles. The results show that the method can retrieve approximate hydrometeor profiles, with larger errors over land than water. There is considerably greater accuracy in the retrieval of integrated hydrometeor contents than of profiles. Many of the retrieval errors are traced to problems with the cloud model microphysical information, and future improvements to the algorithm are suggested.
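The retrieval step above — maximize the posterior density built from a lognormal prior and a forward-modeled likelihood — can be sketched in one dimension with a toy linear "forward model" standing in for radiative transfer (the numbers and model form are assumptions, not the paper's):

```python
import math

def map_retrieval(obs, forward, mu_log, sigma_log, noise_sd, grid):
    """Grid-search MAP estimate of a hydrometeor amount q: a lognormal prior
    on q combined with a Gaussian likelihood of the observed brightness
    temperature under a forward model; returns the posterior mode."""
    def log_posterior(q):
        log_prior = -math.log(q) - (math.log(q) - mu_log) ** 2 / (2 * sigma_log ** 2)
        log_like = -(obs - forward(q)) ** 2 / (2 * noise_sd ** 2)
        return log_prior + log_like
    return max(grid, key=log_posterior)

# Toy linear forward model (assumed; the paper uses radiative transfer):
forward = lambda q: 280.0 - 30.0 * q       # brightness temp drops with content
grid = [0.05 * k for k in range(1, 100)]   # candidate contents 0.05 .. 4.95
q_map = map_retrieval(265.0, forward, 0.0, 0.5, 2.0, grid)
```

Here the likelihood alone would put q at (280 - 265) / 30 = 0.5, and with this prior the mode stays there; a tighter prior centered elsewhere would pull the estimate away, which is how the covariance information resolves nonuniqueness.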
The Joker: A Custom Monte Carlo Sampler for Binary-star and Exoplanet Radial Velocity Data
NASA Astrophysics Data System (ADS)
Price-Whelan, Adrian M.; Hogg, David W.; Foreman-Mackey, Daniel; Rix, Hans-Walter
2017-03-01
Given sparse or low-quality radial velocity measurements of a star, there are often many qualitatively different stellar or exoplanet companion orbit models that are consistent with the data. The consequent multimodality of the likelihood function leads to extremely challenging search, optimization, and Markov chain Monte Carlo (MCMC) posterior sampling over the orbital parameters. Here we create a custom Monte Carlo sampler for sparse or noisy radial velocity measurements of two-body systems that can produce posterior samples for orbital parameters even when the likelihood function is poorly behaved. The six standard orbital parameters for a binary system can be split into four nonlinear parameters (period, eccentricity, argument of pericenter, phase) and two linear parameters (velocity amplitude, barycenter velocity). We capitalize on this by building a sampling method in which we densely sample the prior probability density function (pdf) in the nonlinear parameters and perform rejection sampling using a likelihood function marginalized over the linear parameters. With sparse or uninformative data, the sampling obtained by this rejection sampling is generally multimodal and dense. With informative data, the sampling becomes effectively unimodal but too sparse: in these cases we follow the rejection sampling with standard MCMC. The method produces correct samplings in orbital parameters for data that include as few as three epochs. The Joker can therefore be used to produce proper samplings of multimodal pdfs, which are still informative and can be used in hierarchical (population) modeling. We give some examples that show how the posterior pdf depends sensitively on the number and time coverage of the observations and their uncertainties.
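The core sampling step described above — densely sample the prior over the nonlinear parameters, then rejection-sample against a (marginalized) likelihood — can be sketched for a one-parameter toy problem. The period likelihood and log-uniform prior below are assumptions for illustration, not The Joker's actual radial-velocity model:

```python
import math
import random

def rejection_sample(log_like, prior_draw, n_prior=20_000, seed=11):
    """Densely sample the prior, then keep each sample with probability
    proportional to its likelihood (normalized by the maximum)."""
    rng = random.Random(seed)
    draws = [prior_draw(rng) for _ in range(n_prior)]
    logs = [log_like(p) for p in draws]
    log_max = max(logs)
    return [p for p, lg in zip(draws, logs) if rng.random() < math.exp(lg - log_max)]

# Toy stand-in: a period whose likelihood peaks at 10 days, under a
# log-uniform prior over 1..100 days.
log_like = lambda period: -0.5 * ((period - 10.0) / 1.0) ** 2
prior_draw = lambda rng: math.exp(rng.uniform(math.log(1.0), math.log(100.0)))
posterior = rejection_sample(log_like, prior_draw)
```

With an uninformative likelihood most prior draws survive (a dense, possibly multimodal sampling); with an informative one, as here, few survive, which is why the paper follows up with standard MCMC in that regime.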
NASA Astrophysics Data System (ADS)
Hermans, Thomas; Nguyen, Frédéric; Caers, Jef
2015-07-01
In inverse problems, investigating uncertainty in the posterior distribution of model parameters is as important as matching data. In recent years, most efforts have focused on techniques to sample the posterior distribution with reasonable computational costs. Within a Bayesian context, this posterior depends on the prior distribution. However, most of the studies ignore modeling the prior with realistic geological uncertainty. In this paper, we propose a workflow inspired by a Popper-Bayes philosophy that data should first be used to falsify models, then only be considered for matching. We propose a workflow consisting of three steps: (1) in defining the prior, we interpret multiple alternative geological scenarios from literature (architecture of facies) and site-specific data (proportions of facies). Prior spatial uncertainty is modeled using multiple-point geostatistics, where each scenario is defined using a training image. (2) We validate these prior geological scenarios by simulating electrical resistivity tomography (ERT) data on realizations of each scenario and comparing them to field ERT in a lower dimensional space. In this second step, the idea is to probabilistically falsify scenarios with ERT, meaning that scenarios which are incompatible receive an updated probability of zero while compatible scenarios receive a nonzero updated belief. (3) We constrain the hydrogeological model with hydraulic head and ERT using a stochastic search method. The workflow is applied to synthetic and field case studies in an alluvial aquifer. This study highlights the importance of considering and estimating prior uncertainty (without data) through a process of probabilistic falsification.
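Step (2), probabilistic falsification, can be sketched in miniature: score each scenario by how much density its simulated responses place at the field observation, so an incompatible scenario gets essentially zero updated belief. The 1-D summaries and scenario names below are hypothetical stand-ins for the paper's lower-dimensional ERT comparison:

```python
import math

def falsify_scenarios(sim_responses, field_obs, bandwidth=1.0):
    """Updated belief per geological scenario, proportional to a Gaussian
    kernel density of its simulated responses evaluated at the field data."""
    beliefs = {}
    for name, sims in sim_responses.items():
        k = sum(math.exp(-0.5 * ((field_obs - s) / bandwidth) ** 2) for s in sims)
        beliefs[name] = k / len(sims)
    z = sum(beliefs.values())
    return {name: b / z for name, b in beliefs.items()}

# Hypothetical 1-D summaries of simulated ERT responses per scenario:
sims = {"braided": [9.5, 10.2, 10.8, 9.9], "meandering": [25.0, 27.1, 26.3, 24.8]}
beliefs = falsify_scenarios(sims, field_obs=10.1)  # "meandering" is falsified
```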
Werner, Liliana; Pandey, Suresh K; Izak, Andrea M; Vargas, Luis G; Trivedi, Rupal H; Apple, David J; Mamalis, Nick
2004-05-01
To evaluate the development of capsular bag opacification in rabbit eyes after implantation of an intraocular lens (IOL) designed to minimize contact between the anterior capsule and the IOL and ensure expansion of the capsular bag. David J. Apple, MD Laboratories for Ophthalmic Devices Research, John A. Moran Eye Center, University of Utah, Salt Lake City, Utah, USA. Ten New Zealand white rabbits had a study IOL (new accommodating silicone IOL [Synchrony, Visiogen, Inc.]) implanted in 1 eye and a control IOL (1-piece plate silicone IOL with large fixation holes) implanted in the other eye. Intraocular lens position, anterior capsule opacification (ACO), and posterior capsule opacification (PCO) were qualitatively assessed using slitlamp retroillumination photographs of the dilated eyes. Anterior capsule opacification and PCO were graded on a 0 to 4 scale after the eyes were enucleated (Miyake-Apple posterior and anterior views after excision of the cornea and iris). The eyes were also evaluated histopathologically. The rate of ACO and PCO was significantly higher in the control group. Fibrosis and ACO were almost absent in the study group; the control group exhibited extensive capsulorhexis contraction, including capsulorhexis occlusion. Postoperative IOL dislocation into the anterior chamber and pupillary block syndrome were observed in some eyes in the study group. The special design features associated with the study IOL appeared to help prevent PCO. Complications in the study group were probably caused by the increased posterior vitreous pressure in rabbit eyes compared to human eyes and the relatively large size of the study IOL relative to the anterior segment of rabbit eyes.
Ammar, El-Desouky; Hentz, Matthew; Hall, David G.; Shatters, Robert G.
2015-01-01
The melaleuca psyllid, Boreioglycaspis melaleucae (Hemiptera: Psyllidae), was introduced to Florida as a biological control agent against Melaleuca quinquenervia, an invasive evergreen tree that has invaded large areas of Florida Everglades. Colonies of B. melaleucae nymphs are normally covered by white waxy secretions, and nymphs of various instars produce long bundles of white waxy filaments extending laterally and posteriorly from their abdomen. Scanning electron microscopy of ‘naturally waxed’ and ‘dewaxed’ nymphs (cleaned from wax) revealed two types of wax pore plates located dorsally and laterally on the integument of posterior abdominal segments starting with the 4th segment. Type-1 wax pore plates, with raised rim, peripheral groove, slits and pits, produce long ribbons and filaments of waxy secretions that are wound together forming long wax bundles, whereas type-2 wax pore plates, with slits only, produce shorter wax curls. Additionally, in both nymphs and adult females, the circumanal ring contained ornate rows of wax pores that produce wax filaments covering their honeydew excretions. Video recordings with stereomicroscopy showed that adult females produce whitish honeydew balls, powerfully propelled away from their body, probably to get these sticky excretions away from their eggs and newly hatched nymphs. Adult males, however, produce clear droplets of honeydew immediately behind them, simply by bending the posterior end of the abdomen downward. The possible role(s) of waxy secretions by nymphs and adults of B. melaleucae in reducing contamination of their colonies with honeydew, among other possibilities, are discussed. PMID:25793934
Pinacho-Pinacho, Carlos Daniel; Sereno-Uribe, Ana L; García-Varela, Martín
2014-12-01
Neoechinorhynchus (Neoechinorhynchus) mexicoensis sp. n. is described from the intestine of Dormitator maculatus (Bloch 1792) collected in 5 coastal localities from the Gulf of Mexico. The new species is mainly distinguished from the other 33 described species of Neoechinorhynchus from the Americas associated with freshwater, marine and brackish fishes by having smaller middle and posterior hooks and possessing a small proboscis with three rows of six hooks each, apical hooks longer than other hooks and extending to the same level as the posterior hooks, 1 giant nucleus in the ventral body wall and females with eggs longer than other congeneric species. Sequences of the internal transcribed spacer (ITS) and the large subunit (LSU) of ribosomal DNA including the domain D2+D3 were used independently to corroborate the morphological distinction among the new species and other congeneric species associated with freshwater and brackish water fish from Mexico. The genetic divergence estimated among congeneric species ranged from 7.34 to 44% for ITS and from 1.65 to 32.9% for LSU. Maximum likelihood and Bayesian inference analyses with each dataset showed that the 25 specimens analyzed from 5 localities of the coast of the Gulf of Mexico parasitizing D. maculatus represent an independent clade with strong bootstrap support and posterior probabilities. The morphological evidence, plus the monophyly in the phylogenetic analyses, indicates that the acanthocephalans collected from intestine of D. maculatus from the Gulf of Mexico represent a new species, herein named N. (N.) mexicoensis sp. n. Copyright © 2014. Published by Elsevier Ireland Ltd.
NASA Astrophysics Data System (ADS)
Sheet, Debdoot; Karamalis, Athanasios; Kraft, Silvan; Noël, Peter B.; Vag, Tibor; Sadhu, Anup; Katouzian, Amin; Navab, Nassir; Chatterjee, Jyotirmoy; Ray, Ajoy K.
2013-03-01
Breast cancer is the most common form of cancer in women. Early diagnosis can significantly improve life expectancy and allow different treatment options. Clinicians favor 2D ultrasonography for breast tissue abnormality screening due to high sensitivity and specificity compared to competing technologies. However, inter- and intra-observer variability in visual assessment and reporting of lesions often handicaps its performance. Existing Computer Assisted Diagnosis (CAD) systems though being able to detect solid lesions are often restricted in performance. These restrictions include the inability to (1) detect lesions of multiple sizes and shapes and (2) differentiate hypo-echoic lesions from their posterior acoustic shadowing. In this work we present a completely automatic system for detection and segmentation of breast lesions in 2D ultrasound images. We employ random forests for learning of tissue specific primal to discriminate breast lesions from surrounding normal tissues. This enables it to detect lesions of multiple shapes and sizes, as well as discriminate hypo-echoic lesions from associated posterior acoustic shadowing. The primal comprises of (i) multiscale estimated ultrasonic statistical physics and (ii) scale-space characteristics. The random forest learns lesion vs. background primal from a database of 2D ultrasound images with labeled lesions. For segmentation, the posterior probabilities of lesion pixels estimated by the learnt random forest are hard thresholded to provide a random walks segmentation stage with starting seeds. Our method achieves detection with 99.19% accuracy and segmentation with mean contour-to-contour error < 3 pixels on a set of 40 images with 49 lesions.
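The hand-off from classification to segmentation described above — hard-thresholding the random forest's per-pixel posterior map into seeds for random walks — reduces to a few lines (the tiny posterior map below is illustrative, not model output):

```python
def seeds_from_posterior(prob_map, thresh=0.9):
    """Hard-threshold a per-pixel lesion posterior map into (row, col) seed
    coordinates for a subsequent random-walks segmentation stage."""
    return [(i, j) for i, row in enumerate(prob_map)
            for j, p in enumerate(row) if p >= thresh]

# Tiny 3x3 posterior map (illustrative values):
post_map = [[0.10, 0.20, 0.10],
            [0.30, 0.95, 0.92],
            [0.20, 0.97, 0.40]]
seeds = seeds_from_posterior(post_map)
```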
Ho, Lam Si Tung; Xu, Jason; Crawford, Forrest W; Minin, Vladimir N; Suchard, Marc A
2018-03-01
Birth-death processes track the size of a univariate population, but many biological systems involve interaction between populations, necessitating models for two or more populations simultaneously. A lack of efficient methods for evaluating finite-time transition probabilities of bivariate processes, however, has restricted statistical inference in these models. Researchers rely on computationally expensive methods such as matrix exponentiation or Monte Carlo approximation, restricting likelihood-based inference to small systems, or indirect methods such as approximate Bayesian computation. In this paper, we introduce the birth/birth-death process, a tractable bivariate extension of the birth-death process, where rates are allowed to be nonlinear. We develop an efficient algorithm to calculate its transition probabilities using a continued fraction representation of their Laplace transforms. Next, we identify several exemplary models arising in molecular epidemiology, macro-parasite evolution, and infectious disease modeling that fall within this class, and demonstrate advantages of our proposed method over existing approaches to inference in these models. Notably, the ubiquitous stochastic susceptible-infectious-removed (SIR) model falls within this class, and we emphasize that computable transition probabilities newly enable direct inference of parameters in the SIR model. We also propose a very fast method for approximating the transition probabilities under the SIR model via a novel branching process simplification, and compare it to the continued fraction representation method with application to the 17th century plague in Eyam. Although the two methods produce similar maximum a posteriori estimates, the branching process approximation fails to capture the correlation structure in the joint posterior distribution.
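For context, the "computationally expensive" baseline the paper improves on — matrix exponentiation of a truncated generator — can be sketched for a univariate linear birth-death process in pure Python (truncation level, rates, and the plain Taylor-series exponential are simplifying assumptions; the paper's continued-fraction method avoids building this matrix at all):

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def expm(A, terms=30):
    """Matrix exponential by truncated Taylor series; adequate for the small,
    pre-scaled generator used below."""
    n = len(A)
    P = [[float(i == j) for j in range(n)] for i in range(n)]   # identity
    term = [row[:] for row in P]
    for k in range(1, terms):
        term = [[sum(term[i][m] * A[m][j] for m in range(n)) / k
                 for j in range(n)] for i in range(n)]
        P = [[P[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return P

def birth_death_transition(birth, death, n_max, t):
    """Finite-time transition matrix P(t) of a linear birth-death process
    (per-capita rates), on a state space truncated at n_max."""
    size = n_max + 1
    Q = [[0.0] * size for _ in range(size)]
    for n in range(size):
        up = birth * n if n + 1 < size else 0.0   # truncate births at the boundary
        down = death * n if n > 0 else 0.0
        if n + 1 < size:
            Q[n][n + 1] = up
        if n > 0:
            Q[n][n - 1] = down
        Q[n][n] = -(up + down)
    # scaling and squaring for stability: exp(Qt) = exp(Qt/16)^16
    P = expm([[q * t / 16.0 for q in row] for row in Q])
    for _ in range(4):
        P = matmul(P, P)
    return P

P = birth_death_transition(birth=0.2, death=0.3, n_max=12, t=1.0)
```

The cost grows cubically in the truncated state-space size, and a bivariate process squares that size again, which is exactly the bottleneck motivating the continued-fraction approach.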
The maximum entropy method of moments and Bayesian probability theory
NASA Astrophysics Data System (ADS)
Bretthorst, G. Larry
2013-08-01
The problem of density estimation occurs in many disciplines. For example, in MRI it is often necessary to classify the types of tissues in an image. To perform this classification one must first identify the characteristics of the tissues to be classified. These characteristics might be the intensity of a T1-weighted image; in MRI, many other types of characteristic weightings (classifiers) may be generated. In a given tissue type there is no single intensity that characterizes the tissue; rather, there is a distribution of intensities. Often these distributions can be characterized by a Gaussian, but just as often they are much more complicated. Either way, estimating the distribution of intensities is an inference problem. In the case of a Gaussian distribution, one must estimate the mean and standard deviation. However, in the non-Gaussian case the shape of the density function itself must be inferred. Three common techniques for estimating density functions are binned histograms [1, 2], kernel density estimation [3, 4], and the maximum entropy method of moments [5, 6]. In the introduction, the maximum entropy method of moments will be reviewed. Some of its problems and conditions under which it fails will be discussed. Then in later sections, the functional form of the maximum entropy method of moments probability distribution will be incorporated into Bayesian probability theory. It will be shown that Bayesian probability theory solves all of the problems with the maximum entropy method of moments. One obtains posterior probabilities for the Lagrange multipliers, and, finally, one can put error bars on the resulting estimated density function.
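The maximum entropy method of moments referred to above takes the density p(x) ∝ exp(Σ_k λ_k x^k) and solves for the Lagrange multipliers λ_k that reproduce the target moments. A minimal numerical sketch (grid, target moments, and starting point are illustrative choices) uses the convex dual of the problem. With only a mean and a variance constrained, the maxent density is a Gaussian, which gives a closed-form check: λ = (0, -1/2) for mean 0 and variance 1.

```python
import numpy as np
from scipy.optimize import minimize

# Grid approximation of the support; tails beyond +/-6 are negligible here.
x = np.linspace(-6.0, 6.0, 2001)
dx = x[1] - x[0]
m1, m2 = 0.0, 1.0  # target first and second moments

def dual(lam):
    # Convex dual of the maxent problem: log Z(lambda) - lambda . m,
    # minimised over the Lagrange multipliers lambda.
    logp = lam[0] * x + lam[1] * x**2
    logZ = np.log(np.sum(np.exp(logp)) * dx)
    return logZ - lam[0] * m1 - lam[1] * m2

res = minimize(dual, x0=[0.0, -0.4], method="Nelder-Mead")
lam = res.x

# Reconstruct the estimated density from the fitted multipliers.
p = np.exp(lam[0] * x + lam[1] * x**2)
p /= np.sum(p) * dx
```

The recovered multipliers are close to (0, -1/2), and the reconstructed density reproduces the imposed moments. The ill-conditioning that appears when many higher moments are constrained is one of the failure modes the abstract alludes to.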
Mucosal Melanoma Originating From the Eustachian Tube.
Kim, Jeong Hong; Lim, Gil-Chai; Kang, Ju Wan
2017-11-01
A 77-year-old man was referred with a 4-month history of hearing impairment and ear fullness of the left ear. Otoscopic examination revealed an effusion in the left middle ear, and nasal endoscopic examination revealed a dark polypoid lesion at the opening of the left Eustachian tube. In addition to the lesion of the Eustachian tube, a dark mucosal lesion was seen at the posterior choana and the posterior end of the nasal septum. Endoscopic biopsy was performed, and the pathologic result was consistent with malignant melanoma. Wide surgical excision with postoperative radiotherapy was performed; multiple metastases were detected 4 months after the treatment. Mucosal melanoma originating from the nasopharynx is extremely rare, but careful examination of the nasopharyngeal area should be considered when a patient presents with unilateral middle ear effusion, especially at an older age.
Coulier, Bruno; Gogoase, Monica; Ramboux, Adrien; Pierard, Frederic
2012-12-01
Extra-abdominal abscesses of gastrointestinal origin developing within the lumbar subcutaneous tissues are extremely rare. We report two cases of retroperitoneal bowel perforation presenting spontaneously at admission with a lumbar abscess trespassing the lumbar triangle of Petit, a classical "locus minoris resistentiae" of the posterior abdominal wall. The first case was caused by perforation of a retrocecal appendicitis, which was concomitantly responsible for necrotizing fasciitis of the thigh; in the second case, perforation was caused by left colonic diverticulitis. In both cases, the full diagnosis was made with abdominal CT. The patients were treated with a two-step surgical approach comprising direct posterior percutaneous drainage of the abscess followed by classical laparotomy.
Bodin, Frédéric; Dissaux, Caroline; Steib, Jean-Paul; Massard, Gilbert
2016-03-01
Radical resection of an extended malignant sarcoma of the chest wall requires full-thickness thoracic chest wall reconstruction. Reconstruction is tedious in the case of posteriorly located tumours, because the ipsilateral pedicled myocutaneous latissimus dorsi flap is involved and hence not usable for soft tissue coverage. We report an original case of a left giant dorsal chondrosarcoma originating from the 11th costovertebral joint. After extended resection and skeletal reconstruction, soft tissue coverage was achieved with an original contralateral free flap encompassing both latissimus dorsi and serratus anterior muscles. The flap pedicle was anastomosed to the ipsilateral thoracodorsal vessels. © The Author 2015. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.
NASA Astrophysics Data System (ADS)
de Wit, Ralph W. L.; Valentine, Andrew P.; Trampert, Jeannot
2013-10-01
How do body-wave traveltimes constrain the Earth's radial (1-D) seismic structure? Existing 1-D seismological models underpin 3-D seismic tomography and earthquake location algorithms. It is therefore crucial to assess the quality of such 1-D models, yet quantifying uncertainties in seismological models is challenging and thus often ignored. Ideally, quality assessment should be an integral part of the inverse method. Our aim in this study is twofold: (i) we show how to solve a general Bayesian non-linear inverse problem and quantify model uncertainties, and (ii) we investigate the constraint on spherically symmetric P-wave velocity (VP) structure provided by body-wave traveltimes from the EHB bulletin (phases Pn, P, PP and PKP). Our approach is based on artificial neural networks, which are very common in pattern recognition problems and can be used to approximate an arbitrary function. We use a Mixture Density Network to obtain 1-D marginal posterior probability density functions (pdfs), which provide a quantitative description of our knowledge on the individual Earth parameters. No linearization or model damping is required, which allows us to infer a model which is constrained purely by the data. We present 1-D marginal posterior pdfs for the 22 VP parameters and seven discontinuity depths in our model. P-wave velocities in the inner core, outer core and lower mantle are resolved well, with standard deviations of ~0.2 to 1 per cent with respect to the mean of the posterior pdfs. The maximum likelihoods of VP are in general similar to the corresponding ak135 values, which lie within one or two standard deviations from the posterior means, thus providing an independent validation of ak135 in this part of the radial model. Conversely, the data contain little or no information on P-wave velocity in the D'' layer, the upper mantle and the homogeneous crustal layers. Further, the data do not constrain the depth of the discontinuities in our model.
Using additional phases available in the ISC bulletin, such as PcP, PKKP and the converted phases SP and ScP, may enhance the resolvability of these parameters. Finally, we show how the method can be extended to obtain a posterior pdf for a multidimensional model space. This enables us to investigate correlations between model parameters.
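A Mixture Density Network of the kind described above outputs, for each model parameter, the weights, means and standard deviations of a 1-D Gaussian mixture approximating the marginal posterior pdf, from which the posterior mean, standard deviation and maximum-likelihood value are then read off. The sketch below illustrates that last step only; the three-component mixture is invented for illustration, not a fit to the EHB traveltime data.

```python
import numpy as np

# Hypothetical MDN output for one V_P parameter: mixture weights, means
# (e.g. km/s) and standard deviations of the 1-D marginal posterior pdf.
w = np.array([0.6, 0.3, 0.1])
mu = np.array([8.0, 8.1, 7.9])
sigma = np.array([0.05, 0.1, 0.2])

def pdf(v):
    # Gaussian-mixture density evaluated at the points in v.
    v = np.atleast_1d(v)[:, None]
    comp = w * np.exp(-0.5 * ((v - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return comp.sum(axis=1)

grid = np.linspace(7.0, 9.0, 4001)
p = pdf(grid)

post_mean = (w * mu).sum()                               # mixture mean
post_std = np.sqrt((w * (sigma**2 + mu**2)).sum() - post_mean**2)
max_like = grid[np.argmax(p)]                            # ML value of V_P

print(round(post_mean, 3), round(post_std, 3))
```

The posterior mean and standard deviation follow in closed form from the mixture parameters, while the maximum-likelihood value is located numerically on a grid; these are the kinds of summaries compared against ak135 in the study.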
Probabilistic combination of static and dynamic gait features for verification
NASA Astrophysics Data System (ADS)
Bazin, Alex I.; Nixon, Mark S.
2005-03-01
This paper describes a novel probabilistic framework for biometric identification and data fusion. Based on intra- and inter-class variation extracted from training data, posterior probabilities describing the similarity between two feature vectors may be directly calculated from the data using the logistic function and Bayes rule. Using a large publicly available database we show that two imbalanced gait modalities may be fused using this framework. All fusion methods tested provide an improvement over the best modality, with the weighted sum rule giving the best performance, hence showing that highly imbalanced classifiers may be fused in a probabilistic setting, improving not only the performance but also the generalized application capability.
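The two ingredients named above (a logistic map from a match score to a posterior probability, and weighted-sum fusion of two modalities) can be sketched as follows. The logistic parameters and distances are hypothetical stand-ins; in the paper they would be fitted from the intra- and inter-class distance distributions of the training data.

```python
import numpy as np

def match_posterior(d, a, b):
    # Logistic map from a feature-space distance d to P(same subject | d);
    # a and b would be fitted to intra- vs inter-class training distances.
    return 1.0 / (1.0 + np.exp(a * d + b))

# Hypothetical fitted parameters for a "static" and a "dynamic" gait
# modality, and distances for one probe/gallery comparison.
p_static = match_posterior(0.8, a=4.0, b=-3.0)
p_dynamic = match_posterior(1.5, a=2.0, b=-2.5)

# Weighted-sum fusion, weighting the stronger modality more heavily
# (the weight here is an illustrative choice, not a trained value).
w = 0.7
p_fused = w * p_static + (1 - w) * p_dynamic
accept = p_fused > 0.5
```

Because both modality outputs live on a common probability scale, the fusion rule remains meaningful even when the two classifiers are highly imbalanced in accuracy, which is the point the abstract makes.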
Space-based sensor management and geostationary satellites tracking
NASA Astrophysics Data System (ADS)
El-Fallah, A.; Zatezalo, A.; Mahler, R.; Mehra, R. K.; Donatelli, D.
2007-04-01
Sensor management for space situational awareness presents a daunting theoretical and practical challenge, as it requires the use of multiple types of sensors on a variety of platforms to ensure that the space environment is continuously monitored. We demonstrate a new approach utilizing the Posterior Expected Number of Targets (PENT) as the sensor management objective function, an observation model for a space-based EO/IR sensor platform, and a Probability Hypothesis Density Particle Filter (PHD-PF) tracker. Simulations and results using actual geostationary satellites are presented. We also demonstrate enhanced performance by applying the Progressive Weighting Correction (PWC) method for regularization in the implementation of the PHD-PF tracker.