Bayes factor and posterior probability: Complementary statistical evidence to p-value.
Lin, Ruitao; Yin, Guosheng
2015-09-01
As a convention, a p-value is computed in hypothesis testing and compared with the nominal level of 0.05 to determine whether to reject the null hypothesis. Although a smaller p-value indicates a more significant test result, the p-value is difficult to interpret on a probability scale and to quantify as the strength of the evidence against the null hypothesis. In contrast, the Bayesian posterior probability of the null hypothesis has an explicit interpretation of how strongly the data support the null. We compare the p-value and the posterior probability using a recent clinical trial. The results show that even when we reject the null hypothesis, there is still a substantial probability (around 20%) that the null is true. Not only should we examine whether the data would have rarely occurred under the null hypothesis, but we also need to know whether the data would be rare under the alternative. The p-value provides only one side of this information, for which the Bayes factor and posterior probability may offer complementary evidence.
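To make the contrast concrete, here is a minimal sketch (with invented trial numbers, not the study re-analyzed above) comparing a two-sided binomial p-value with the posterior probability of a point null under a uniform Beta(1, 1) prior on the alternative and equal prior odds:

```python
from scipy.stats import binom

n, x = 200, 115                        # hypothetical trial: 115 responses in 200
p_value = 2 * binom.sf(x - 1, n, 0.5)  # two-sided tail probability under H0: p = 0.5

m0 = binom.pmf(x, n, 0.5)              # marginal likelihood under H0
m1 = 1.0 / (n + 1)                     # under H1: pmf integrated over a uniform p
bf01 = m0 / m1                         # Bayes factor in favour of H0
post_h0 = bf01 / (1.0 + bf01)          # posterior P(H0 | data) with equal prior odds

print(f"p-value = {p_value:.3f}, BF01 = {bf01:.2f}, P(H0|data) = {post_h0:.2f}")
```

With these numbers the test rejects at the 0.05 level, yet the posterior probability of the null stays above one half, illustrating why the two quantities convey different evidence.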
Bayesian model checking: A comparison of tests
NASA Astrophysics Data System (ADS)
Lucy, L. B.
2018-06-01
Two procedures for checking Bayesian models are compared using a simple test problem based on the local Hubble expansion. Over four orders of magnitude, p-values derived from a global goodness-of-fit criterion for posterior probability density functions agree closely with posterior predictive p-values. The former can therefore serve as an effective proxy for the difficult-to-calculate posterior predictive p-values.
Estimating the Probability of Traditional Copying, Conditional on Answer-Copying Statistics.
Allen, Jeff; Ghattas, Andrew
2016-06-01
Statistics for detecting copying on multiple-choice tests produce p values measuring the probability of a value at least as large as that observed, under the null hypothesis of no copying. The posterior probability of copying is arguably more relevant than the p value, but cannot be derived from Bayes' theorem unless the population probability of copying and probability distribution of the answer-copying statistic under copying are known. In this article, the authors develop an estimator for the posterior probability of copying that is based on estimable quantities and can be used with any answer-copying statistic. The performance of the estimator is evaluated via simulation, and the authors demonstrate how to apply the formula using actual data. Potential uses, generalizability to other types of cheating, and limitations of the approach are discussed.
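The Bayes' theorem calculation the authors build on can be sketched as follows; the population copying rate and the distribution of the statistic under copying are invented placeholders here, since estimating those quantities is precisely the paper's contribution:

```python
from scipy.stats import norm

def posterior_copy(t, pi=0.01, mu_copy=3.0):
    """Posterior P(copying | T = t) via Bayes' theorem, assuming T ~ N(0, 1)
    under no copying and T ~ N(mu_copy, 1) under copying (both assumptions)."""
    f0 = norm.pdf(t, 0.0, 1.0)
    f1 = norm.pdf(t, mu_copy, 1.0)
    return pi * f1 / (pi * f1 + (1.0 - pi) * f0)

for t in (2.0, 3.0, 4.0):
    print(f"t={t}: p-value={norm.sf(t):.4f}, P(copying|t)={posterior_copy(t):.3f}")
```

Even at t = 3, where the one-sided p-value is about 0.001, the posterior probability of copying stays below one half under this 1% base rate, which is exactly the gap between the two quantities that the estimator targets.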
Posterior probability of linkage and maximal lod score.
Génin, E; Martinez, M; Clerget-Darpoux, F
1995-01-01
To detect linkage between a trait and a marker, Morton (1955) proposed calculating the lod score z(theta1) at a given value theta1 of the recombination fraction; if z(theta1) reaches +3, linkage is concluded. In practice, however, lod scores are calculated for different values of the recombination fraction between 0 and 0.5, and the test is based on the maximum value of the lod score, Zmax. The impact of this deviation on the probability that linkage does not in fact exist, when linkage was concluded, is documented here. This posterior probability of no linkage can be derived using Bayes' theorem. It is less than 5% when the lod score at a predetermined theta1 is used for the test, but for a Zmax of +3 we show that it can reach 16.4%. Thus, considering a composite alternative hypothesis instead of a single one decreases the reliability of the test, and the reliability decreases rapidly when Zmax is less than +3: given a Zmax of +2.5, there is a 33% chance that linkage does not exist. Moreover, the posterior probability depends not only on the value of Zmax but also jointly on the family structures and on the genetic model, so for a given Zmax the chance that linkage exists may vary.
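A minimal sketch of the fixed-theta calculation described above: the lod score is converted to a likelihood ratio of 10^Z and combined with an assumed prior probability of linkage via Bayes' theorem. The abstract's 16.4% and 33% figures for Zmax additionally account for the maximization over theta, which this sketch does not.

```python
def posterior_no_linkage(z, prior_linkage=0.05):
    """P(no linkage | lod score z) from Bayes' theorem: a lod score of z
    corresponds to a likelihood ratio of 10**z in favour of linkage. The
    prior is an assumption for illustration, not a value from the paper."""
    odds = (prior_linkage / (1.0 - prior_linkage)) * 10.0 ** z
    return 1.0 / (1.0 + odds)

for z in (2.5, 3.0):
    print(f"Z = {z}: P(no linkage | Z) = {posterior_no_linkage(z):.3f}")
```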
A Bayesian predictive two-stage design for phase II clinical trials.
Sambucini, Valeria
2008-04-15
In this paper, we propose a Bayesian two-stage design for phase II clinical trials, which represents a predictive version of the single threshold design (STD) recently introduced by Tan and Machin. The STD two-stage sample sizes are determined specifying a minimum threshold for the posterior probability that the true response rate exceeds a pre-specified target value and assuming that the observed response rate is slightly higher than the target. Unlike the STD, we do not refer to a fixed experimental outcome, but take into account the uncertainty about future data. In both stages, the design aims to control the probability of getting a large posterior probability that the true response rate exceeds the target value. Such a probability is expressed in terms of prior predictive distributions of the data. The performance of the design is based on the distinction between analysis and design priors, recently introduced in the literature. The properties of the method are studied when all the design parameters vary.
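The basic ingredient of such designs, the posterior probability that the true response rate exceeds the target under a conjugate Beta prior, can be computed in one line; the prior, target and counts below are illustrative assumptions, not the STD's design values:

```python
from scipy.stats import beta

a0, b0 = 1.0, 1.0          # assumed flat Beta prior
target = 0.20              # pre-specified target response rate
x, n = 9, 30               # hypothetical stage data: 9 responses in 30 patients

post_prob = beta.sf(target, a0 + x, b0 + n - x)   # P(p > target | data)
print(f"P(p > {target} | data) = {post_prob:.3f}")
```

A design of this family compares such a posterior probability against a pre-specified threshold; the predictive version averages it over the prior predictive distribution of future data rather than fixing one observed outcome.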
Generative adversarial networks for brain lesion detection
NASA Astrophysics Data System (ADS)
Alex, Varghese; Safwan, K. P. Mohammed; Chennamsetty, Sai Saketh; Krishnamurthi, Ganapathy
2017-02-01
Manual segmentation of brain lesions from Magnetic Resonance Images (MRI) is cumbersome and introduces errors due to inter-rater variability. This paper introduces a semi-supervised technique for detecting brain lesions from MRI using Generative Adversarial Networks (GANs). A GAN comprises a Generator network and a Discriminator network trained simultaneously, each with the objective of bettering the other. The networks were trained using non-lesion patches (n = 13,000) from 4 different MR sequences; the network was trained on the BraTS dataset, with patches extracted from regions excluding the tumor. The Generator generates data by modeling the underlying probability distribution of the training data, P(Data). The Discriminator learns the posterior probability P(Label | Data) by classifying training data and generated data as "Real" or "Fake", respectively. Upon learning the data distribution, the Generator produces images/patches on which the performance of the Discriminator is random, i.e., P(Label | Data = GeneratedData) = 0.5. During testing, the Discriminator assigns posterior probability values close to 0.5 to patches from non-lesion regions, while patches centered on lesions arise from a different distribution (P_Lesion) and hence are assigned lower posterior probability values by the Discriminator. On the test set (n = 14), the proposed technique achieves a whole-tumor dice score of 0.69, sensitivity of 91% and specificity of 59%. Additionally, the Generator network was capable of generating non-lesion patches from the various MR sequences.
Seismic imaging of Q structures by a trans-dimensional coda-wave analysis
NASA Astrophysics Data System (ADS)
Takahashi, Tsutomu
2017-04-01
Wave scattering and intrinsic attenuation are important processes for describing the incoherent and complex wave trains of high-frequency seismic waves (> 1 Hz). The multiple lapse time window analysis (MLTWA) has been used to estimate scattering and intrinsic Q values by assuming constant Q in a study area (e.g., Hoshiba 1993). This study generalizes MLTWA to estimate lateral variations of Q values under the Bayesian framework in a variable-dimension space. The study area is partitioned into small areas by means of Voronoi tessellation, with scattering and intrinsic Q assumed constant within each small area. We define a misfit function for spatiotemporal variations of wave energy as in the original MLTWA, and maximize the posterior probability by changing not only the Q values but also the number and spatial layout of the Voronoi cells. This maximization is conducted by means of the reversible jump Markov chain Monte Carlo (rjMCMC) method (Green 1995), since the number of unknown parameters (i.e., the dimension of the posterior probability) is variable. After convergence to the maximum posterior, we estimate Q structures from ensemble averages of the MCMC samples around the maximum posterior probability. Synthetic tests showed stable reconstruction of input structures with reasonable error distributions. We applied this method to seismic waveform data recorded by ocean bottom seismographs in the outer-rise area off Tohoku, and estimated Q values at 4-8 Hz, 8-16 Hz and 16-32 Hz. Intrinsic Q is nearly constant in all frequency bands, whereas scattering Q shows two distinct strong-scattering regions: a petit-spot area and a high-seismicity area. These strong-scattering regions are probably related to magma inclusions and fractured structures, respectively. The difference between the two areas becomes clearer at higher frequencies, which means that the scale dependence of inhomogeneities, or smaller-scale inhomogeneity, is important for discussing medium properties and the origins of structural variations. Although the generalized MLTWA is based on classical waveform modeling in a constant-Q medium, the method can serve as a fundamental basis for imaging Q structures in the crust.
Bayesian operational modal analysis with asynchronous data, Part II: Posterior uncertainty
NASA Astrophysics Data System (ADS)
Zhu, Yi-Chen; Au, Siu-Kui
2018-01-01
A Bayesian modal identification method has been proposed in the companion paper that allows the most probable values of modal parameters to be determined using asynchronous ambient vibration data. This paper investigates the identification uncertainty of modal parameters in terms of their posterior covariance matrix. Computational issues are addressed. Analytical expressions are derived to allow the posterior covariance matrix to be evaluated accurately and efficiently. Synthetic, laboratory and field data examples are presented to verify the consistency, investigate potential modelling error and demonstrate practical applications.
The Estimation of Tree Posterior Probabilities Using Conditional Clade Probability Distributions
Larget, Bret
2013-01-01
In this article I introduce the idea of conditional independence of separated subtrees as a principle by which to estimate the posterior probability of trees using conditional clade probability distributions rather than simple sample relative frequencies. I describe an algorithm for these calculations and software which implements these ideas. I show that these alternative calculations are very similar to simple sample relative frequencies for high probability trees but are substantially more accurate for relatively low probability trees. The method allows the posterior probability of unsampled trees to be calculated when these trees contain only clades that are in other sampled trees. Furthermore, the method can be used to estimate the total probability of the set of sampled trees which provides a measure of the thoroughness of a posterior sample. [Bayesian phylogenetics; conditional clade distributions; improved accuracy; posterior probabilities of trees.] PMID:23479066
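A toy sketch of the conditional clade idea, assuming a nested-tuple representation of rooted trees (an assumption of this sketch, not Larget's software): conditional split frequencies are tabulated from a posterior sample and multiplied to estimate a tree's posterior probability.

```python
from collections import Counter

def tree_resolutions(tree):
    """Return (clade, list of (clade, split) pairs) for a nested-tuple tree.
    Leaves are strings; internal nodes are 2-tuples."""
    if isinstance(tree, str):
        return frozenset([tree]), []
    left, right = tree
    lc, lres = tree_resolutions(left)
    rc, rres = tree_resolutions(right)
    clade = lc | rc
    return clade, lres + rres + [(clade, frozenset([lc, rc]))]

def ccd_probability(target, sample):
    """Estimate P(target tree) as a product of conditional clade probabilities
    learned from a posterior sample of trees (a sketch of the CCD idea)."""
    clade_n, split_n = Counter(), Counter()
    for t in sample:
        _, res = tree_resolutions(t)
        for clade, split in res:
            clade_n[clade] += 1
            split_n[(clade, split)] += 1
    p = 1.0
    _, res = tree_resolutions(target)
    for clade, split in res:
        if split_n[(clade, split)] == 0:
            return 0.0                      # split never observed for this clade
        p *= split_n[(clade, split)] / clade_n[clade]
    return p

sample = [(("a", "b"), ("c", "d"))] * 7 + [((("a", "b"), "c"), "d")] * 3
print(ccd_probability((("a", "b"), ("c", "d")), sample))
```

Because each factor is conditioned only on its parent clade, the same tables also assign non-zero probability to trees never sampled, provided all of their clades appear somewhere in the sample.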
Posterior semicircular canal dehiscence: value of VEMP and multidetector CT.
Vanspauwen, R; Salembier, L; Van den Hauwe, L; Parizel, P; Wuyts, F L; Van de Heyning, P H
2006-01-01
To illustrate that posterior semicircular canal dehiscence can present similarly to superior semicircular canal dehiscence. The symptomatology initially presented as probable Menière's disease evolving into a mixed conductive hearing loss with a Carhart notch-type perceptive component suggestive of otosclerosis-type stapes fixation. A small hole stapedotomy resulted in a dead ear and a horizontal semicircular canal hypofunction. Recurrent incapacitating vertigo attacks developed. Vestibular evoked myogenic potential (VEMP) testing demonstrated intact vestibulocollic reflexes. Additional evaluation with high resolution multidetector computed tomography (MDCT) of the temporal bone showed a dehiscence of the left posterior semicircular canal. Besides superior semicircular canal dehiscence, posterior semicircular canal dehiscence has to be included in the differential diagnosis of atypical Menière's disease and/or low tone conductive hearing loss. The value of performing MDCT before otosclerosis-type surgery is stressed. VEMP might contribute to establishing the differential diagnosis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
La Russa, D
Purpose: The purpose of this project is to develop a robust method of parameter estimation for a Poisson-based TCP model using Bayesian inference. Methods: Bayesian inference was performed using the PyMC3 probabilistic programming framework written in Python. A Poisson-based TCP regression model that accounts for clonogen proliferation was fit to observed rates of local relapse as a function of equivalent dose in 2 Gy fractions for a population of 623 stage-I non-small-cell lung cancer patients. The Slice Markov Chain Monte Carlo sampling algorithm was used to sample the posterior distributions, and was initiated using the maximum of the posterior distributions found by optimization. The calculation of TCP at each sample step required integration over the free parameter α, which was performed using an adaptive 24-point Gauss-Legendre quadrature. Convergence was verified via inspection of the trace plot and posterior distribution for each of the fit parameters, as well as with comparisons of the most probable parameter values with their respective maximum likelihood estimates. Results: Posterior distributions for α, the standard deviation of α (σ), the average tumour cell-doubling time (Td), and the repopulation delay time (Tk) were generated assuming α/β = 10 Gy and a fixed clonogen density of 10^7 cm^-3. Posterior predictive plots generated from samples from these posterior distributions are in excellent agreement with the observed rates of local relapse used in the Bayesian inference. The most probable values of the model parameters also agree well with maximum likelihood estimates. Conclusion: A robust method of performing Bayesian inference of TCP data using a complex TCP model has been established.
Meuwissen, Theo H E; Indahl, Ulf G; Ødegård, Jørgen
2017-12-27
Non-linear Bayesian genomic prediction models such as BayesA/B/C/R involve iteration and mostly Markov chain Monte Carlo (MCMC) algorithms, which are computationally expensive, especially when whole-genome sequence (WGS) data are analyzed. Singular value decomposition (SVD) of the genotype matrix can facilitate genomic prediction in large datasets, and can be used to estimate marker effects and their prediction error variances (PEV) in a computationally efficient manner. Here, we developed, implemented, and evaluated a direct, non-iterative method for the estimation of marker effects for the BayesC genomic prediction model. The BayesC model assumes a priori that markers have normally distributed effects with probability π and no effect with probability (1 − π). Marker effects and their PEV are estimated by using SVD and the posterior probability of the marker having a non-zero effect is calculated. These posterior probabilities are used to obtain marker-specific effect variances, which are subsequently used to approximate BayesC estimates of marker effects in a linear model. A computer simulation study was conducted to compare alternative genomic prediction methods, where a single reference generation was used to estimate marker effects, which were subsequently used for 10 generations of forward prediction, for which accuracies were evaluated. SVD-based posterior probabilities of markers having non-zero effects were generally lower than MCMC-based posterior probabilities, but for some regions the opposite occurred, resulting in clear signals for QTL-rich regions. The accuracies of breeding values estimated using SVD- and MCMC-based BayesC analyses were similar across the 10 generations of forward prediction. For an intermediate number of generations (2 to 5) of forward prediction, accuracies obtained with the BayesC model tended to be slightly higher than accuracies obtained using the best linear unbiased prediction of SNP effects (SNP-BLUP model). When reducing marker density from WGS data to 30 K, SNP-BLUP tended to yield the highest accuracies, at least in the short term. Based on SVD of the genotype matrix, we developed a direct method for the calculation of BayesC estimates of marker effects. Although SVD- and MCMC-based marker effects differed slightly, their prediction accuracies were similar. Assuming that the SVD of the marker genotype matrix is already performed for other reasons (e.g. for SNP-BLUP), computation times for the BayesC predictions were comparable to those of SNP-BLUP.
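The final mixing step described above, converting an estimate and its PEV into a posterior probability of a non-zero effect, can be sketched as a two-component normal mixture; the hyperparameter values below are assumptions for illustration, not values from the paper:

```python
from scipy.stats import norm

def bayesc_inclusion_prob(ghat, pev, var_effect, pi=0.01):
    """Posterior probability that a marker effect is non-zero, sketched as a
    two-component normal mixture: the estimated effect ghat ~ N(0, pev) if
    the marker has no effect, and ghat ~ N(0, pev + var_effect) if it does.
    pi and var_effect are assumed hyperparameters."""
    like0 = norm.pdf(ghat, 0.0, pev ** 0.5)
    like1 = norm.pdf(ghat, 0.0, (pev + var_effect) ** 0.5)
    return pi * like1 / (pi * like1 + (1.0 - pi) * like0)

print(bayesc_inclusion_prob(ghat=0.15, pev=0.002, var_effect=0.01))
```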
NASA Technical Reports Server (NTRS)
Cheeseman, Peter; Stutz, John
2005-01-01
A long-standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
Topics in inference and decision-making with partial knowledge
NASA Technical Reports Server (NTRS)
Safavian, S. Rasoul; Landgrebe, David
1990-01-01
Two essential elements needed in the process of inference and decision-making are prior probabilities and likelihood functions. When both of these components are known accurately and precisely, the Bayesian approach provides a consistent and coherent solution to the problems of inference and decision-making. In many situations, however, either one or both of the above components may not be known, or at least may not be known precisely. This problem of partial knowledge about prior probabilities and likelihood functions is addressed. There are at least two ways to cope with this lack of precise knowledge: robust methods, and interval-valued methods. First, ways of modeling imprecision and indeterminacies in prior probabilities and likelihood functions are examined; then how imprecision in the above components carries over to the posterior probabilities is examined. Finally, the problem of decision making with imprecise posterior probabilities and the consequences of such actions are addressed. Application areas where the above problems may occur are in statistical pattern recognition problems, for example, the problem of classification of high-dimensional multispectral remote sensing image data.
Multiclass Posterior Probability Twin SVM for Motor Imagery EEG Classification.
She, Qingshan; Ma, Yuliang; Meng, Ming; Luo, Zhizeng
2015-01-01
Motor imagery electroencephalography is widely used in brain-computer interface systems. Due to the inherent characteristics of electroencephalography signals, accurate and real-time multiclass classification is always challenging. To solve this problem, a multiclass posterior probability solution for the twin SVM is proposed in this paper, based on ranking continuous outputs and pairwise coupling. First, a two-class posterior probability model is constructed to approximate the posterior probability using the ranking continuous output technique and Platt's estimation method. Second, a solution for multiclass probabilistic outputs of the twin SVM is obtained by combining every pair of class probabilities via pairwise coupling. Finally, the proposed method is compared with multiclass SVM and twin SVM via voting, and with multiclass posterior probability SVM using different coupling approaches. The efficacy of the proposed method, in terms of classification accuracy and time complexity, is demonstrated on the UCI benchmark datasets and on real-world EEG data from BCI Competition IV Dataset 2a.
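A generic sketch of the pairwise-coupling step (Hastie and Tibshirani's iterative algorithm), which the paper applies to Platt-scaled twin-SVM outputs; the pairwise matrix below is made up:

```python
import numpy as np

def pairwise_coupling(r, n_iter=100, tol=1e-8):
    """Combine pairwise posterior estimates r[i, j] ~ P(class i | class i or j)
    into K class probabilities via Hastie-Tibshirani iterative coupling."""
    k = r.shape[0]
    p = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        mu = p[:, None] / (p[:, None] + p[None, :] + 1e-300)
        num = np.array([r[i, [j for j in range(k) if j != i]].sum() for i in range(k)])
        den = np.array([mu[i, [j for j in range(k) if j != i]].sum() for i in range(k)])
        p_new = p * num / den
        p_new /= p_new.sum()
        if np.abs(p_new - p).max() < tol:
            return p_new
        p = p_new
    return p

r = np.array([[0.0, 0.7, 0.8],     # r[i, j]: pairwise P(class i | i or j)
              [0.3, 0.0, 0.6],
              [0.2, 0.4, 0.0]])
print(pairwise_coupling(r))        # coupled posterior over the 3 classes
```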
PARTS: Probabilistic Alignment for RNA joinT Secondary structure prediction
Harmanci, Arif Ozgun; Sharma, Gaurav; Mathews, David H.
2008-01-01
A novel method is presented for joint prediction of alignment and common secondary structures of two RNA sequences. The joint consideration of common secondary structures and alignment is accomplished by structural alignment over a search space defined by the newly introduced motif called matched helical regions. The matched helical region formulation generalizes previously employed constraints for structural alignment and thereby better accommodates the structural variability within RNA families. A probabilistic model based on pseudo free energies obtained from precomputed base pairing and alignment probabilities is utilized for scoring structural alignments. Maximum a posteriori (MAP) common secondary structures, sequence alignment and joint posterior probabilities of base pairing are obtained from the model via a dynamic programming algorithm called PARTS. The advantage of the more general structural alignment of PARTS is seen in secondary structure predictions for the RNase P family. For this family, the PARTS MAP predictions of secondary structures and alignment perform significantly better than prior methods that utilize a more restrictive structural alignment model. For the tRNA and 5S rRNA families, the richer structural alignment model of PARTS does not offer a benefit and the method therefore performs comparably with existing alternatives. For all RNA families studied, the posterior probability estimates obtained from PARTS offer an improvement over posterior probability estimates from a single sequence prediction. When considering the base pairings predicted over a threshold value of confidence, the combination of sensitivity and positive predictive value is superior for PARTS than for the single sequence prediction. PARTS source code is available for download under the GNU public license at http://rna.urmc.rochester.edu. PMID:18304945
Serang, Oliver; MacCoss, Michael J.; Noble, William Stafford
2010-01-01
The problem of identifying proteins from a shotgun proteomics experiment has not been definitively solved. Identifying the proteins in a sample requires ranking them, ideally with interpretable scores. In particular, “degenerate” peptides, which map to multiple proteins, have made such a ranking difficult to compute. The problem of computing posterior probabilities for the proteins, which can be interpreted as confidence in a protein’s presence, has been especially daunting. Previous approaches have either ignored the peptide degeneracy problem completely, addressed it by computing a heuristic set of proteins or heuristic posterior probabilities, or by estimating the posterior probabilities with sampling methods. We present a probabilistic model for protein identification in tandem mass spectrometry that recognizes peptide degeneracy. We then introduce graph-transforming algorithms that facilitate efficient computation of protein probabilities, even for large data sets. We evaluate our identification procedure on five different well-characterized data sets and demonstrate our ability to efficiently compute high-quality protein posteriors. PMID:20712337
Wijeysundera, Duminda N; Austin, Peter C; Hux, Janet E; Beattie, W Scott; Laupacis, Andreas
2009-01-01
Randomized trials generally use "frequentist" statistics based on P-values and 95% confidence intervals. Frequentist methods have limitations that might be overcome, in part, by Bayesian inference. To illustrate these advantages, we re-analyzed randomized trials published in four general medical journals during 2004. We used Medline to identify randomized superiority trials with two parallel arms, individual-level randomization and dichotomous or time-to-event primary outcomes. Studies with P<0.05 in favor of the intervention were deemed "positive"; otherwise, they were "negative." We used several prior distributions and exact conjugate analyses to calculate Bayesian posterior probabilities for clinically relevant effects. Of 88 included studies, 39 were positive using a frequentist analysis. Although the Bayesian posterior probabilities of any benefit (relative risk or hazard ratio<1) were high in positive studies, these probabilities were lower and variable for larger benefits. The positive studies had only moderate probabilities for exceeding the effects that were assumed for calculating the sample size. By comparison, there were moderate probabilities of any benefit in negative studies. Bayesian and frequentist analyses complement each other when interpreting the results of randomized trials. Future reports of randomized trials should include both.
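An exact conjugate analysis of the kind described can be sketched with Beta posteriors per arm and Monte Carlo sampling; the event counts and flat priors are illustrative assumptions, not taken from any of the re-analyzed trials:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-arm trial: events / patients per arm.
e_trt, n_trt = 30, 150
e_ctl, n_ctl = 45, 150

# Conjugate Beta(1, 1) posteriors for each arm's event risk (assumed flat priors).
p_trt = rng.beta(1 + e_trt, 1 + n_trt - e_trt, 100_000)
p_ctl = rng.beta(1 + e_ctl, 1 + n_ctl - e_ctl, 100_000)
rr = p_trt / p_ctl

print("P(any benefit, RR < 1)      =", np.mean(rr < 1.0))
print("P(large benefit, RR < 0.75) =", np.mean(rr < 0.75))
```

This is the pattern the abstract highlights: the probability of any benefit can be high while the probability of a clinically large benefit remains only moderate.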
NASA Astrophysics Data System (ADS)
Zhang, Feng-Liang; Ni, Yan-Chun; Au, Siu-Kui; Lam, Heung-Fai
2016-03-01
The identification of modal properties from field testing of civil engineering structures is becoming economically viable, thanks to the advent of modern sensor and data acquisition technology. Its demand is driven by innovative structural designs and increased performance requirements of dynamic-prone structures that call for a close cross-checking or monitoring of their dynamic properties and responses. Existing instrumentation capabilities and modal identification techniques allow structures to be tested under free vibration, forced vibration (known input) or ambient vibration (unknown broadband loading). These tests can be considered complementary rather than competing as they are based on different modeling assumptions in the identification model and have different implications on costs and benefits. Uncertainty arises naturally in the dynamic testing of structures due to measurement noise, sensor alignment error, modeling error, etc. This is especially relevant in field vibration tests because the test condition in the field environment can hardly be controlled. In this work, a Bayesian statistical approach is developed for modal identification using the free vibration response of structures. A frequency domain formulation is proposed that makes statistical inference based on the Fast Fourier Transform (FFT) of the data in a selected frequency band. This significantly simplifies the identification model because only the modes dominating the frequency band need to be included. It also legitimately ignores the information in the excluded frequency bands that are either irrelevant or difficult to model, thereby significantly reducing modeling error risk. The posterior probability density function (PDF) of the modal parameters is derived rigorously from modeling assumptions and Bayesian probability logic. Computational difficulties associated with calculating the posterior statistics, including the most probable value (MPV) and the posterior covariance matrix, are addressed. Fast computational algorithms for determining the MPV are proposed so that the method can be practically implemented. In the companion paper (Part II), analytical formulae are derived for the posterior covariance matrix so that it can be evaluated without resorting to finite difference method. The proposed method is verified using synthetic data. It is also applied to modal identification of full-scale field structures.
Value of Weather Information in Cranberry Marketing Decisions.
NASA Astrophysics Data System (ADS)
Morzuch, Bernard J.; Willis, Cleve E.
1982-04-01
Econometric techniques are used to establish a functional relationship between cranberry yields and important precipitation, temperature, and sunshine variables. Crop forecasts are derived from the model and are used to establish posterior probabilities to be used in a Bayesian decision context pertaining to leasing space for the storage of the berries.
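A toy sketch of this Bayesian decision setting, with invented payoffs and probabilities: the model-based forecast updates the prior over crop outcomes, and the storage-lease decision maximizes expected payoff under the posterior.

```python
import numpy as np

states = ["small", "large"]              # possible crop outcomes
prior = np.array([0.4, 0.6])             # assumed prior P(state)
like_large = np.array([0.2, 0.8])        # assumed P(forecast "large" | state)

payoff = np.array([[0.0, -3.0],          # no lease: loss if the crop is large
                   [-1.0, 2.0]])         # lease: fixed cost, gain if large crop

post = prior * like_large                # Bayes update given forecast "large"
post /= post.sum()

for action, row in zip(["no lease", "lease"], payoff):
    print(action, "expected payoff:", row @ post)
```

The value of the weather information is the gain in expected payoff from deciding with the posterior rather than the prior.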
Creation of the BMA ensemble for SST using a parallel processing technique
NASA Astrophysics Data System (ADS)
Kim, Kwangjin; Lee, Yang Won
2013-10-01
Although they serve the same purpose, satellite products differ in value because of their inherent uncertainties. Moreover, satellite products have accumulated over long periods, and the kinds of products are numerous and the data volumes enormous, so efforts to reduce the uncertainty and to handle such large datasets are necessary. In this paper, we create an ensemble Sea Surface Temperature (SST) product using MODIS Aqua, MODIS Terra and COMS (Communication Ocean and Meteorological Satellite). We used Bayesian Model Averaging (BMA) as the ensemble method. The principle of BMA is to synthesize the conditional probability density function (PDF) using posterior probabilities as weights; the posterior probabilities are estimated using the EM algorithm, and the BMA PDF is obtained as the weighted average. As a result, the ensemble SST showed the lowest RMSE and MAE, which demonstrates the applicability of BMA to satellite data ensembles. As future work, parallel processing techniques using the Hadoop framework will be adopted for more efficient computation of very big satellite data.
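A compact sketch of the BMA machinery described above, using a single common error variance for simplicity (operational BMA often uses per-member variances and bias correction); the "satellite products" here are synthetic:

```python
import numpy as np
from scipy.stats import norm

def bma_em(forecasts, obs, n_iter=200):
    """EM for Gaussian BMA: estimate member weights and a common error
    variance from past (forecast, observation) pairs.
    forecasts: (n_cases, n_members); obs: (n_cases,)."""
    n, k = forecasts.shape
    w = np.full(k, 1.0 / k)
    sigma = np.std(obs[:, None] - forecasts)
    for _ in range(n_iter):
        dens = w * norm.pdf(obs[:, None], forecasts, sigma)   # E step
        z = dens / dens.sum(axis=1, keepdims=True)            # posterior weights
        w = z.mean(axis=0)                                    # M step
        sigma = np.sqrt((z * (obs[:, None] - forecasts) ** 2).sum() / n)
    return w, sigma

rng = np.random.default_rng(1)
truth = rng.normal(20.0, 2.0, 500)                      # synthetic "true SST"
members = np.stack([truth + rng.normal(b, s, 500)       # 3 biased, noisy products
                    for b, s in ((0.1, 0.4), (-0.3, 0.6), (0.2, 1.0))], axis=1)
w, sigma = bma_em(members, truth)
print("weights:", w.round(3), "sigma:", round(float(sigma), 3))
ensemble = members @ w        # the ensemble SST is the weighted average
```

As expected, EM assigns the largest weight to the least noisy member, which is what drives the RMSE reduction of the ensemble.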
A Bayesian pick-the-winner design in a randomized phase II clinical trial.
Chen, Dung-Tsa; Huang, Po-Yu; Lin, Hui-Yi; Chiappori, Alberto A; Gabrilovich, Dmitry I; Haura, Eric B; Antonia, Scott J; Gray, Jhanelle E
2017-10-24
Many phase II clinical trials evaluate unique experimental drugs/combinations through a multi-arm design to expedite the screening process (early termination of ineffective drugs) and to identify the most effective drug (pick the winner) to warrant a phase III trial. Various statistical approaches have been developed for the pick-the-winner design but have been criticized for lack of objective comparison among the drug agents. We developed a Bayesian pick-the-winner design by integrating a Bayesian posterior probability with the Simon two-stage design in a randomized two-arm clinical trial. The Bayesian posterior probability, as the rule to pick the winner, is defined as the probability that the response rate in one arm is higher than in the other arm. The posterior probability aims to determine the winner when both arms pass the second stage of the Simon two-stage design. When both arms are competitive (i.e., both pass the second stage), the Bayesian posterior probability performs better at correctly identifying the winner than the Fisher exact test in a simulation study. In comparison with a standard two-arm randomized design, the Bayesian pick-the-winner design has higher power to determine a clear winner. In application to two studies, the approach is able to perform statistical comparison of the two treatment arms and provides a winner probability (Bayesian posterior probability) to statistically justify the winning arm. We developed an integrated design that combines Bayesian posterior probability, the Simon two-stage design, and randomization in a unique setting, giving objective comparisons between the arms to determine the winner.
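The winner probability itself reduces to P(p_A > p_B | data) under independent Beta posteriors, sketched below with assumed flat priors and made-up stage-2 counts:

```python
import numpy as np

def winner_probability(x_a, n_a, x_b, n_b, a0=1.0, b0=1.0, draws=200_000):
    """Posterior probability that arm A's response rate exceeds arm B's,
    under independent conjugate Beta posteriors. A sketch of the
    'pick the winner' rule applied after both arms pass Simon's second
    stage; the priors and counts are assumptions."""
    rng = np.random.default_rng(42)
    pa = rng.beta(a0 + x_a, b0 + n_a - x_a, draws)
    pb = rng.beta(a0 + x_b, b0 + n_b - x_b, draws)
    return np.mean(pa > pb)

print(winner_probability(x_a=20, n_a=43, x_b=15, n_b=43))
```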
Fayed, Nicolás; Modrego, Pedro J; García-Martí, Gracián; Sanz-Requena, Roberto; Marti-Bonmatí, Luis
2017-05-01
To assess the accuracy of magnetic resonance spectroscopy (1H-MRS) and brain volumetry in mild cognitive impairment (MCI) for predicting conversion to probable Alzheimer's disease (AD). Forty-eight patients fulfilling the criteria of amnestic MCI underwent conventional magnetic resonance imaging (MRI) followed by MRS and a T1-3D sequence on a 1.5 Tesla MR unit. At baseline the patients underwent neuropsychological examination. 1H-MRS of the brain was carried out by exploring the left medial occipital lobe and the ventral posterior cingulate cortex (vPCC) using the LCModel software. A high-resolution T1-3D sequence was acquired for the volumetric measurements, with a cortical and subcortical parcellation strategy used to obtain the volume of each area within the brain. The patients were followed up to detect conversion to probable AD. After a 3-year follow-up, 15 (31.2%) patients converted to AD. Myo-inositol in the occipital cortex and glutamate+glutamine (Glx) in the posterior cingulate cortex predicted conversion to probable AD with 46.1% sensitivity and 90.6% specificity. The positive predictive value was 66.7% and the negative predictive value was 80.6%, with an overall cross-validated classification accuracy of 77.8%. The volume of the third ventricle, the total white matter and the entorhinal cortex predicted conversion to probable AD with 46.7% sensitivity and 90.9% specificity. The positive predictive value was 70% and the negative predictive value was 78.9%, with an overall cross-validated classification accuracy of 77.1%. Combining the volumetric measures with the MRS measures, the prediction of conversion to probable AD had 38.5% sensitivity and 87.5% specificity, with a positive predictive value of 55.6%, a negative predictive value of 77.8% and an overall accuracy of 73.3%. MRS and brain volumetric measures are each markers of cognitive decline and may serve as noninvasive tools to monitor cognitive changes and progression to dementia in patients with amnestic MCI, but the results do not support their routine use in clinical settings.
Intervals for posttest probabilities: a comparison of 5 methods.
Mossman, D; Berger, J O
2001-01-01
Several medical articles discuss methods of constructing confidence intervals for single proportions and the likelihood ratio, but scant attention has been given to the systematic study of intervals for the posterior odds, or the positive predictive value, of a test. The authors describe 5 methods of constructing confidence intervals for posttest probabilities when estimates of sensitivity, specificity, and the pretest probability of a disorder are derived from empirical data. They then evaluate each method to determine how well the intervals' coverage properties correspond to their nominal value. When the estimates of pretest probabilities, sensitivity, and specificity are derived from more than 80 subjects and are not close to 0 or 1, all methods generate intervals with appropriate coverage properties. When these conditions are not met, however, the best-performing method is an objective Bayesian approach implemented by a simple simulation using a spreadsheet. Physicians and investigators can generate accurate confidence intervals for posttest probabilities in small-sample situations using the objective Bayesian approach.
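The simple simulation the authors recommend can be sketched in a few lines: sample sensitivity, specificity and prevalence from Beta posteriors (Jeffreys priors here stand in for the article's objective Bayesian choice), push the samples through Bayes' theorem, and take percentiles. The counts are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(7)
draws = 100_000

# Hypothetical counts: sensitivity 45/50, specificity 90/100, pretest 30/150.
sens = rng.beta(0.5 + 45, 0.5 + 5, draws)     # Jeffreys Beta(0.5, 0.5) priors
spec = rng.beta(0.5 + 90, 0.5 + 10, draws)
prev = rng.beta(0.5 + 30, 0.5 + 120, draws)

# Posttest probability (positive predictive value) via Bayes' theorem:
ppv = sens * prev / (sens * prev + (1.0 - spec) * (1.0 - prev))
lo, hi = np.percentile(ppv, [2.5, 97.5])
print(f"median PPV = {np.median(ppv):.3f}, 95% interval = ({lo:.3f}, {hi:.3f})")
```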
Nonlinear Demodulation and Channel Coding in EBPSK Scheme
Chen, Xianqing; Wu, Lenan
2012-01-01
The extended binary phase shift keying (EBPSK) is an efficient modulation technique, and a special impacting filter (SIF) is used in its demodulator to improve the bit error rate (BER) performance. However, the conventional threshold decision cannot achieve the optimum performance, and the SIF brings more difficulty in obtaining the posterior probability for LDPC decoding. In this paper, we concentrate not only on reducing the BER of demodulation, but also on providing accurate posterior probability estimates (PPEs). A new approach for the nonlinear demodulation based on the support vector machine (SVM) classifier is introduced. The SVM method which selects only a few sampling points from the filter output was used for getting PPEs. The simulation results show that the accurate posterior probability can be obtained with this method and the BER performance can be improved significantly by applying LDPC codes. Moreover, we analyzed the effect of getting the posterior probability with different methods and different sampling rates. We show that there are more advantages of the SVM method under bad condition and it is less sensitive to the sampling rate than other methods. Thus, SVM is an effective method for EBPSK demodulation and getting posterior probability for LDPC decoding. PMID:23213281
Neural dynamics of reward probability coding: a Magnetoencephalographic study in humans
Thomas, Julie; Vanni-Mercier, Giovanna; Dreher, Jean-Claude
2013-01-01
Prediction of future rewards and the discrepancy between actual and expected outcomes (prediction error) are crucial signals for adaptive behavior. In humans, a number of fMRI studies have demonstrated that reward probability modulates these two signals in a large brain network. Yet the spatio-temporal dynamics underlying the neural coding of reward probability remain unknown. Here, using magnetoencephalography, we investigated the neural dynamics of prediction and reward prediction error computations while subjects learned to associate cues of slot machines with monetary rewards of different probabilities. We showed that event-related magnetic fields (ERFs) arising from the visual cortex coded the expected reward value 155 ms after the cue, demonstrating that reward value signals emerge early in the visual stream. Moreover, a prediction error was reflected in an ERF peaking 300 ms after the rewarded outcome, whose amplitude decreased with higher reward probability. This prediction error signal was generated in a network including the anterior and posterior cingulate cortex. These findings pinpoint the spatio-temporal characteristics underlying reward probability coding. Together, our results provide insights into the neural dynamics underlying the ability to learn probabilistic stimuli-reward contingencies. PMID:24302894
Maximum entropy approach to statistical inference for an ocean acoustic waveguide.
Knobles, D P; Sagers, J D; Koch, R A
2012-02-01
A conditional probability distribution suitable for estimating the statistical properties of ocean seabed parameter values inferred from acoustic measurements is derived from a maximum entropy principle. The specification of the expectation value for an error function constrains the maximization of an entropy functional. This constraint determines the sensitivity factor (β) to the error function of the resulting probability distribution, which is a canonical form that provides a conservative estimate of the uncertainty of the parameter values. From the conditional distribution, marginal distributions for individual parameters can be determined from integration over the other parameters. The approach is an alternative to obtaining the posterior probability distribution without an intermediary determination of the likelihood function followed by an application of Bayes' rule. In this paper the expectation value that specifies the constraint is determined from the values of the error function for the model solutions obtained from a sparse number of data samples. The method is applied to ocean acoustic measurements taken on the New Jersey continental shelf. The marginal probability distribution for the values of the sound speed ratio at the surface of the seabed and the source levels of a towed source are examined for different geoacoustic model representations.
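The canonical construction described above can be sketched as follows: given error-function values for a set of model solutions, the sensitivity factor beta is solved from the expectation constraint and yields the conditional probabilities. The error values below are made up:

```python
import numpy as np
from scipy.optimize import brentq

def maxent_posterior(errors, e_target):
    """Canonical MaxEnt distribution over candidate models: p_i ~ exp(-beta*E_i),
    with beta chosen so that the expected error equals e_target. A generic
    sketch of the construction in the abstract, not the authors' code."""
    def expected_error(beta):
        w = np.exp(-beta * (errors - errors.min()))   # shift for numerical stability
        p = w / w.sum()
        return p @ errors
    beta = brentq(lambda b: expected_error(b) - e_target, 0.0, 1e4)
    w = np.exp(-beta * (errors - errors.min()))
    return beta, w / w.sum()

errors = np.array([1.2, 1.5, 1.9, 2.4, 3.0])   # error-function value per model
beta, p = maxent_posterior(errors, e_target=1.5)
print(beta, p.round(3))
```

Marginal distributions for individual parameters then follow by summing these probabilities over the other parameters, as the abstract describes.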
Abdul-Latiff, Muhammad Abu Bakar; Ruslin, Farhani; Fui, Vun Vui; Abu, Mohd-Hashim; Rovie-Ryan, Jeffrine Japning; Abdul-Patah, Pazil; Lakim, Maklarin; Roos, Christian; Yaakop, Salmah; Md-Zain, Badrul Munir
2014-01-01
Phylogenetic relationships among Malaysia’s long-tailed macaques have yet to be established, despite abundant genetic studies of the species worldwide. The aims of this study are to examine the phylogenetic relationships of Macaca fascicularis in Malaysia and to test its classification as a morphological subspecies. A total of 25 genetic samples of M. fascicularis yielding 383 bp of Cytochrome b (Cyt b) sequences were used in phylogenetic analysis along with one sample each of M. nemestrina and M. arctoides used as outgroups. Sequence character analysis reveals that Cyt b locus is a highly conserved region with only 23% parsimony informative character detected among ingroups. Further analysis indicates a clear separation between populations originating from different regions; the Malay Peninsula versus Borneo Insular, the East Coast versus West Coast of the Malay Peninsula, and the island versus mainland Malay Peninsula populations. Phylogenetic trees (NJ, MP and Bayesian) portray a consistent clustering paradigm as Borneo’s population was distinguished from Peninsula’s population (99% and 100% bootstrap value in NJ and MP respectively and 1.00 posterior probability in Bayesian trees). The East coast population was separated from other Peninsula populations (64% in NJ, 66% in MP and 0.53 posterior probability in Bayesian). West coast populations were divided into 2 clades: the North-South (47%/54% in NJ, 26/26% in MP and 1.00/0.80 posterior probability in Bayesian) and Island-Mainland (93% in NJ, 90% in MP and 1.00 posterior probability in Bayesian). The results confirm the previous morphological assignment of 2 subspecies, M. f. fascicularis and M. f. argentimembris, in the Malay Peninsula. These populations should be treated as separate genetic entities in order to conserve the genetic diversity of Malaysia’s M. fascicularis. These findings are crucial in aiding the conservation management and translocation process of M. fascicularis populations in Malaysia. PMID:24899832
A Bayesian Method for Evaluating and Discovering Disease Loci Associations
Jiang, Xia; Barmada, M. Michael; Cooper, Gregory F.; Becich, Michael J.
2011-01-01
Background A genome-wide association study (GWAS) typically involves examining representative SNPs in individuals from some population. A GWAS data set can concern a million SNPs and may soon concern billions. Researchers investigate the association of each SNP individually with a disease, and it is becoming increasingly commonplace to also analyze multi-SNP associations. Techniques for handling so many hypotheses include the Bonferroni correction and recently developed Bayesian methods. These methods can encounter problems. Most importantly, they are not applicable to a complex multi-locus hypothesis which has several competing hypotheses rather than only a null hypothesis. A method that computes the posterior probability of complex hypotheses is a pressing need. Methodology/Findings We introduce the Bayesian network posterior probability (BNPP) method which addresses the difficulties. The method represents the relationship between a disease and SNPs using a directed acyclic graph (DAG) model, and computes the likelihood of such models using a Bayesian network scoring criterion. The posterior probability of a hypothesis is computed based on the likelihoods of all competing hypotheses. The BNPP can not only be used to evaluate a hypothesis that has previously been discovered or suspected, but also to discover new disease loci associations. The results of experiments using simulated and real data sets are presented. Our results concerning simulated data sets indicate that the BNPP exhibits both better evaluation and discovery performance than does a p-value based method. For the real data sets, previous findings in the literature are confirmed and additional findings are found. Conclusions/Significance We conclude that the BNPP resolves a pressing problem by providing a way to compute the posterior probability of complex multi-locus hypotheses. A researcher can use the BNPP to determine the expected utility of investigating a hypothesis further. Furthermore, we conclude that the BNPP is a promising method for discovering disease loci associations. PMID:21853025
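The core posterior computation, normalizing each hypothesis's marginal likelihood against all competing hypotheses, can be sketched generically; the scores and priors below are placeholders rather than output of the paper's Bayesian network scoring criterion:

```python
import numpy as np

def hypothesis_posteriors(log_likelihoods, priors):
    """Posterior probability of each competing hypothesis from its marginal
    log-likelihood: P(H | D) ~ P(D | H) P(H), normalized over all hypotheses.
    Log-likelihoods are shifted by their maximum for numerical stability."""
    ll = np.asarray(log_likelihoods)
    w = np.exp(ll - ll.max()) * np.asarray(priors)
    return w / w.sum()

# e.g. H0: no association, H1: SNP1 only, H2: SNP2 only, H3: interaction
print(hypothesis_posteriors([-1210.3, -1205.1, -1208.7, -1202.9],
                            [0.7, 0.1, 0.1, 0.1]))
```

Unlike a p-value against a single null, this directly weighs several competing multi-locus hypotheses against one another, which is the problem the BNPP is designed to solve.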
Ting, Chih-Chung; Yu, Chia-Chen; Maloney, Laurence T.
2015-01-01
In Bayesian decision theory, knowledge about the probabilities of possible outcomes is captured by a prior distribution and a likelihood function. The prior reflects past knowledge and the likelihood summarizes current sensory information. The two combined (integrated) form a posterior distribution that allows estimation of the probability of different possible outcomes. In this study, we investigated the neural mechanisms underlying Bayesian integration using a novel lottery decision task in which both prior knowledge and likelihood information about reward probability were systematically manipulated on a trial-by-trial basis. Consistent with Bayesian integration, as sample size increased, subjects tended to weigh likelihood information more compared with prior information. Using fMRI in humans, we found that the medial prefrontal cortex (mPFC) correlated with the mean of the posterior distribution, a statistic that reflects the integration of prior knowledge and likelihood of reward probability. Subsequent analysis revealed that both prior and likelihood information were represented in mPFC and that the neural representations of prior and likelihood in mPFC reflected changes in the behaviorally estimated weights assigned to these different sources of information in response to changes in the environment. Together, these results establish the role of mPFC in prior-likelihood integration and highlight its involvement in representing and integrating these distinct sources of information. PMID:25632152
NASA Technical Reports Server (NTRS)
Sidik, S. M.
1972-01-01
The error variance of the process, prior multivariate normal distributions of the parameters of the models, and prior probabilities of the models being correct are assumed to be specified. A rule for termination of sampling is proposed. Upon termination, the model with the largest posterior probability is chosen as correct. If sampling is not terminated, posterior probabilities of the models and posterior distributions of the parameters are computed, and the next experiment is chosen to maximize the expected Kullback-Leibler information function. Monte Carlo simulation experiments were performed to investigate the large- and small-sample behavior of the sequential adaptive procedure.
Hierarchical Bayes approach for subgroup analysis.
Hsu, Yu-Yi; Zalkikar, Jyoti; Tiwari, Ram C
2017-01-01
In clinical data analysis, both treatment effect estimation and consistency assessment are important for a better understanding of the drug efficacy for the benefit of subjects in individual subgroups. The linear mixed-effects model has been used for subgroup analysis to describe treatment differences among subgroups with great flexibility. The hierarchical Bayes approach has been applied to linear mixed-effects model to derive the posterior distributions of overall and subgroup treatment effects. In this article, we discuss the prior selection for variance components in hierarchical Bayes, estimation and decision making of the overall treatment effect, as well as consistency assessment of the treatment effects across the subgroups based on the posterior predictive p-value. Decision procedures are suggested using either the posterior probability or the Bayes factor. These decision procedures and their properties are illustrated using a simulated example with normally distributed response and repeated measurements.
The utility of Bayesian predictive probabilities for interim monitoring of clinical trials
Connor, Jason T.; Ayers, Gregory D; Alvarez, JoAnn
2014-01-01
Background Bayesian predictive probabilities can be used for interim monitoring of clinical trials to estimate the probability of observing a statistically significant treatment effect if the trial were to continue to its predefined maximum sample size. Purpose We explore settings in which Bayesian predictive probabilities are advantageous for interim monitoring compared to Bayesian posterior probabilities, p-values, conditional power, or group sequential methods. Results For interim analyses that address prediction hypotheses, such as futility monitoring and efficacy monitoring with lagged outcomes, only predictive probabilities properly account for the amount of data remaining to be observed in a clinical trial and have the flexibility to incorporate additional information via auxiliary variables. Limitations Computational burdens limit the feasibility of predictive probabilities in many clinical trial settings. The specification of prior distributions brings additional challenges for regulatory approval. Conclusions The use of Bayesian predictive probabilities enables the choice of logical interim stopping rules that closely align with the clinical decision making process. PMID:24872363
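A minimal beta-binomial sketch of such a predictive probability, assuming a single-arm binary endpoint with illustrative design numbers: enumerate the possible futures, check whether each would make the final analysis succeed, and weight by the posterior predictive distribution.

```python
from scipy.stats import beta, betabinom

def predictive_probability(x, n, n_max, p0=0.3, post_thresh=0.95, a0=1.0, b0=1.0):
    """Bayesian predictive probability that the trial succeeds at the final
    analysis, where success = posterior P(p > p0) > post_thresh once all
    n_max patients are observed. All design numbers are assumptions."""
    m = n_max - n                                  # patients still to be observed
    a, b = a0 + x, b0 + n - x                      # current Beta posterior
    pp = 0.0
    for y in range(m + 1):                         # possible future responses
        if beta.sf(p0, a + y, b + m - y) > post_thresh:
            pp += betabinom.pmf(y, m, a, b)        # predictive weight of outcome y
    return pp

print(predictive_probability(x=12, n=30, n_max=60))
```

This is what distinguishes predictive from posterior monitoring: the quantity explicitly averages over the data remaining to be observed, so a low value supports stopping for futility.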
Bayesian structural inference for hidden processes.
Strelioff, Christopher C; Crutchfield, James P
2014-04-01
We introduce a Bayesian approach to discovering patterns in structurally complex processes. The proposed method of Bayesian structural inference (BSI) relies on a set of candidate unifilar hidden Markov model (uHMM) topologies for inference of process structure from a data series. We employ a recently developed exact enumeration of topological ε-machines. (A sequel then removes the topological restriction.) This subset of the uHMM topologies has the added benefit that inferred models are guaranteed to be ε-machines, irrespective of estimated transition probabilities. Properties of ε-machines and uHMMs allow for the derivation of analytic expressions for estimating transition probabilities, inferring start states, and comparing the posterior probability of candidate model topologies, despite process internal structure being only indirectly present in data. We demonstrate BSI's effectiveness in estimating a process's randomness, as reflected by the Shannon entropy rate, and its structure, as quantified by the statistical complexity. We also compare using the posterior distribution over candidate models and the single, maximum a posteriori model for point estimation and show that the former more accurately reflects uncertainty in estimated values. We apply BSI to in-class examples of finite- and infinite-order Markov processes, as well to an out-of-class, infinite-state hidden process.
Learning about Posterior Probability: Do Diagrams and Elaborative Interrogation Help?
ERIC Educational Resources Information Center
Clinton, Virginia; Alibali, Martha Wagner; Nathan, Mitchel J.
2016-01-01
To learn from a text, students must make meaningful connections among related ideas in that text. This study examined the effectiveness of two methods of improving connections--elaborative interrogation and diagrams--in written lessons about posterior probability. Undergraduate students (N = 198) read a lesson in one of three questioning…
Skinner, Sarah
2012-11-01
Magnetic resonance imaging (MRI) is the gold standard in noninvasive investigation of knee pain. It has a very high negative predictive value and may assist in avoiding unnecessary knee arthroscopy; its accuracy in the diagnosis of meniscal and anterior cruciate ligament (ACL) tears is greater than 89%; it has a greater than 90% sensitivity for the detection of medial meniscal tears; and it is probably better at assessing the posterior horn than arthroscopy.
Brůžek, Jaroslav; Santos, Frédéric; Dutailly, Bruno; Murail, Pascal; Cunha, Eugenia
2017-10-01
A new tool for skeletal sex estimation based on measurements of the human os coxae is presented using skeletons from a metapopulation of identified adult individuals from twelve independent population samples. For reliable sex estimation, a posterior probability greater than 0.95 was considered to be the classification threshold: below this value, estimates are considered indeterminate. By providing free software, we aim to develop an even more disseminated method for sex estimation. Ten metric variables collected from 2,040 ossa coxa of adult subjects of known sex were recorded between 1986 and 2002 (reference sample). To test both the validity and reliability, a target sample consisting of two series of adult ossa coxa of known sex (n = 623) was used. The DSP2 software (Diagnose Sexuelle Probabiliste v2) is based on Linear Discriminant Analysis, and the posterior probabilities are calculated using an R script. For the reference sample, any combination of four dimensions provides a correct sex estimate in at least 99% of cases. The percentage of individuals for whom sex can be estimated depends on the number of dimensions; for all ten variables it is higher than 90%. Those results are confirmed in the target sample. Our posterior probability threshold of 0.95 for sex estimate corresponds to the traditional sectioning point used in osteological studies. DSP2 software is replacing the former version that should not be used anymore. DSP2 is a robust and reliable technique for sexing adult os coxae, and is also user friendly. © 2017 Wiley Periodicals, Inc.
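The published decision rule is straightforward to emulate with standard tools. The following hedged sketch (not the DSP2 code itself) fits a linear discriminant model on a reference sample and reports sex only when the posterior probability clears the 0.95 threshold; variable names are illustrative.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def estimate_sex(X_ref, y_ref, X_new, threshold=0.95):
        lda = LinearDiscriminantAnalysis().fit(X_ref, y_ref)   # reference sample
        post = lda.predict_proba(X_new)                        # posterior per class
        best = post.max(axis=1)
        labels = lda.classes_[post.argmax(axis=1)].astype(object)
        labels[best < threshold] = "indeterminate"             # below 0.95: no call
        return labels, best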
2009-01-01
Background Marginal posterior genotype probabilities need to be computed for genetic analyses such as genetic counseling in humans and selective breeding in animal and plant species. Methods In this paper, we describe a peeling-based, deterministic, exact algorithm to efficiently compute genotype probabilities for every member of a pedigree with loops, without recourse to junction-tree methods from graph theory. The efficiency in computing the likelihood by peeling comes from storing intermediate results in multidimensional tables called cutsets. Computing marginal genotype probabilities for individual i requires recomputing the likelihood for each of the possible genotypes of individual i. This can be done efficiently by storing intermediate results in two types of cutsets, called anterior and posterior cutsets, and reusing these intermediate results to compute the likelihood. Examples A small example is used to illustrate the theoretical concepts discussed in this paper, and marginal genotype probabilities are computed at a monogenic disease locus for every member in a real cattle pedigree. PMID:19958551
Cumulative probability of neodymium:YAG laser posterior capsulotomy after phacoemulsification.
Ando, Hiroshi; Ando, Nobuyo; Oshika, Tetsuro
2003-11-01
To retrospectively analyze the cumulative probability of neodymium:YAG (Nd:YAG) laser posterior capsulotomy after phacoemulsification and to evaluate the risk factors. Ando Eye Clinic, Kanagawa, Japan. In 3997 eyes that had phacoemulsification with an intact continuous curvilinear capsulorhexis, the cumulative probability of posterior capsulotomy was computed by Kaplan-Meier survival analysis and risk factors were analyzed using the Cox proportional hazards regression model. The variables tested were sex; age; type of cataract; preoperative best corrected visual acuity (BCVA); presence of diabetes mellitus, diabetic retinopathy, or retinitis pigmentosa; type of intraocular lens (IOL); and the year the operation was performed. The IOLs were categorized as 3-piece poly(methyl methacrylate) (PMMA), 1-piece PMMA, 3-piece silicone, and acrylic foldable. The cumulative probability of capsulotomy after cataract surgery was 1.95%, 18.50%, and 32.70% at 1, 3, and 5 years, respectively. Positive risk factors included a better preoperative BCVA (P =.0005; risk ratio [RR], 1.7; 95% confidence interval [CI], 1.3-2.5) and the presence of retinitis pigmentosa (P<.0001; RR, 6.6; 95% CI, 3.7-11.6). Women had a significantly greater probability of Nd:YAG laser posterior capsulotomy (P =.016; RR, 1.4; 95% CI, 1.1-1.8). The type of IOL was significantly related to the probability of Nd:YAG laser capsulotomy, with the foldable acrylic IOL having a significantly lower probability of capsulotomy. The 1-piece PMMA IOL had a significantly higher risk than 3-piece PMMA and 3-piece silicone IOLs. The probability of Nd:YAG laser capsulotomy was higher in women, in eyes with a better preoperative BCVA, and in patients with retinitis pigmentosa. The foldable acrylic IOL had a significantly lower probability of capsulotomy.
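For readers unfamiliar with the method, the cumulative probability of capsulotomy reported here is one minus the Kaplan-Meier survival estimate. A minimal self-contained sketch, assuming follow-up times in years and an event flag marking capsulotomy:

    import numpy as np

    def cumulative_capsulotomy(times, events):
        """Return (time, cumulative probability) pairs at each event time."""
        times, events = np.asarray(times, float), np.asarray(events, int)
        s, out = 1.0, []
        for t in np.unique(times[events == 1]):
            at_risk = np.sum(times >= t)                 # still under follow-up at t
            d = np.sum((times == t) & (events == 1))     # capsulotomies at t
            s *= 1.0 - d / at_risk                       # Kaplan-Meier survival update
            out.append((t, 1.0 - s))                     # 1 - S(t) = cumulative probability
        return out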
optBINS: Optimal Binning for histograms
NASA Astrophysics Data System (ADS)
Knuth, Kevin H.
2018-03-01
optBINS (optimal binning) determines the optimal number of bins in a uniform bin-width histogram by deriving the posterior probability for the number of bins in a piecewise-constant density model after assigning a multinomial likelihood and a non-informative prior. The maximum of the posterior probability occurs at a point where the prior probability and the joint likelihood are balanced. The interplay between these opposing factors effectively implements Occam's razor by selecting the simplest model that best describes the data.
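A compact sketch of the criterion, following the relative log-posterior for M equal-width bins that this model implies (multinomial likelihood with a Jeffreys-type prior); the data and the search range for M are illustrative:

    import numpy as np
    from scipy.special import gammaln

    def log_posterior_bins(data, M):
        N = len(data)
        counts, _ = np.histogram(data, bins=M)
        return (N * np.log(M)
                + gammaln(M / 2.0) - M * gammaln(0.5)
                - gammaln(N + M / 2.0)
                + np.sum(gammaln(counts + 0.5)))     # one term per bin count

    data = np.random.default_rng(0).standard_normal(1000)
    best_M = max(range(1, 101), key=lambda M: log_posterior_bins(data, M))
    print(best_M)

The first term rewards resolution (more bins) while the gamma-function terms penalize complexity; the maximum of their sum is the Occam's-razor balance the abstract describes.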
Approximate Bayesian estimation of extinction rate in the Finnish Daphnia magna metapopulation.
Robinson, John D; Hall, David W; Wares, John P
2013-05-01
Approximate Bayesian computation (ABC) is useful for parameterizing complex models in population genetics. In this study, ABC was applied to simultaneously estimate parameter values for a model of metapopulation coalescence and test two alternatives to a strict metapopulation model in the well-studied network of Daphnia magna populations in Finland. The models shared four free parameters: the subpopulation genetic diversity (θS), the rate of gene flow among patches (4Nm), the founding population size (N0) and the metapopulation extinction rate (e) but differed in the distribution of extinction rates across habitat patches in the system. The three models had either a constant extinction rate in all populations (strict metapopulation), one population that was protected from local extinction (i.e. a persistent source), or habitat-specific extinction rates drawn from a distribution with specified mean and variance. Our model selection analysis favoured the model including a persistent source population over the two alternative models. Of the closest 750,000 data sets in Euclidean space, 78% were simulated under the persistent source model (estimated posterior probability = 0.769). This fraction increased to more than 85% when only the closest 150,000 data sets were considered (estimated posterior probability = 0.774). Approximate Bayesian computation was then used to estimate parameter values that might produce the observed set of summary statistics. Our analysis provided posterior distributions for e that included the point estimate obtained from previous data from the Finnish D. magna metapopulation. Our results support the use of ABC and population genetic data for testing the strict metapopulation model and parameterizing complex models of demography. © 2013 Blackwell Publishing Ltd.
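The model-selection step lends itself to a short illustration. Below is a generic ABC rejection sketch in which posterior model probabilities are approximated by each model's share among the retained closest simulations; `prior` and `simulate` are hypothetical user-supplied callables, not the study's actual simulation code.

    import numpy as np

    def abc_model_probabilities(obs_stats, models, n_sim=100000, keep=0.01):
        records = []
        for name, (prior, simulate) in models.items():
            for _ in range(n_sim):
                theta = prior()                              # draw parameters from the prior
                dist = np.linalg.norm(simulate(theta) - obs_stats)
                records.append((dist, name))
        records.sort(key=lambda r: r[0])                     # closest in Euclidean space first
        accepted = [name for _, name in records[:int(keep * len(records))]]
        return {m: accepted.count(m) / len(accepted) for m in models}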
ERIC Educational Resources Information Center
Dardick, William R.; Mislevy, Robert J.
2016-01-01
A new variant of the iterative "data = fit + residual" data-analytical approach described by Mosteller and Tukey is proposed and implemented in the context of item response theory psychometric models. Posterior probabilities from a Bayesian mixture model of a Rasch item response theory model and an unscalable latent class are expressed…
Learn-as-you-go acceleration of cosmological parameter estimates
NASA Astrophysics Data System (ADS)
Aslanyan, Grigor; Easther, Richard; Price, Layne C.
2015-09-01
Cosmological analyses can be accelerated by approximating slow calculations using a training set, which is either precomputed or generated dynamically. However, this approach is only safe if the approximations are well understood and controlled. This paper surveys issues associated with the use of machine-learning based emulation strategies for accelerating cosmological parameter estimation. We describe a learn-as-you-go algorithm that is implemented in the Cosmo++ code and (1) trains the emulator while simultaneously estimating posterior probabilities; (2) identifies unreliable estimates, computing the exact numerical likelihoods if necessary; and (3) progressively learns and updates the error model as the calculation progresses. We explicitly describe and model the emulation error and show how this can be propagated into the posterior probabilities. We apply these techniques to the Planck likelihood and the calculation of ΛCDM posterior probabilities. The computation is significantly accelerated without a pre-defined training set and uncertainties in the posterior probabilities are subdominant to statistical fluctuations. We have obtained a speedup factor of 6.5 for Metropolis-Hastings and 3.5 for nested sampling. Finally, we discuss the general requirements for a credible error model and show how to update them on-the-fly.
Neural substrates of the impaired effort expenditure decision making in schizophrenia.
Huang, Jia; Yang, Xin-Hua; Lan, Yong; Zhu, Cui-Ying; Liu, Xiao-Qun; Wang, Ye-Fei; Cheung, Eric F C; Xie, Guang-Rong; Chan, Raymond C K
2016-09-01
Unwillingness to expend more effort to pursue high-value rewards has been associated with motivational anhedonia in schizophrenia (SCZ) and abnormal dopamine activity in the nucleus accumbens (NAcc). The authors hypothesized that dysfunction of the NAcc and the associated forebrain regions is involved in the impaired effort-expenditure decision-making of SCZ. A 2 (reward magnitude: low vs. high) × 3 (probability: 20% vs. 50% vs. 80%) event-related fMRI design in the effort-expenditure for reward task (EEfRT) was used to examine the neural responses of 23 SCZ patients and 23 demographically matched control participants when the participants made effort-expenditure decisions to pursue uncertain rewards. SCZ patients were significantly less likely to expend a high level of effort in the medium (50%) and high (80%) probability conditions than healthy controls. The neural responses in the NAcc, the posterior cingulate gyrus, and the left medial frontal gyrus were weaker in SCZ patients than in healthy controls and did not increase linearly with increases in reward magnitude and probability. Moreover, NAcc activity was positively correlated with the willingness to expend high-level effort and with concrete consummatory pleasure experience. NAcc and posterior cingulate dysfunctions in SCZ patients may be involved in their impaired effort-expenditure decision-making. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
To P or Not to P: Backing Bayesian Statistics.
Buchinsky, Farrel J; Chadha, Neil K
2017-12-01
In biomedical research, it is imperative to differentiate chance variation from truth before we generalize what we see in a sample of subjects to the wider population. For decades, we have relied on null hypothesis significance testing, where we calculate P values for our data to decide whether to reject a null hypothesis. This methodology is subject to substantial misinterpretation and errant conclusions. Instead of working backward by calculating the probability of our data if the null hypothesis were true, Bayesian statistics allow us instead to work forward, calculating the probability of our hypothesis given the available data. This methodology gives us a mathematical means of incorporating our "prior probabilities" from previous study data (if any) to produce new "posterior probabilities." Bayesian statistics tell us how confidently we should believe what we believe. It is time to embrace and encourage their use in our otolaryngology research.
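The forward logic is a one-line application of Bayes' theorem in odds form: posterior odds equal prior odds times the Bayes factor. A small sketch with illustrative numbers:

    def posterior_probability(prior_prob, bayes_factor):
        prior_odds = prior_prob / (1.0 - prior_prob)
        post_odds = prior_odds * bayes_factor        # Bayes' theorem in odds form
        return post_odds / (1.0 + post_odds)

    # A sceptical prior of 0.10 and a Bayes factor of 6 in favour of the hypothesis
    print(posterior_probability(0.10, 6.0))          # ~0.40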
Multiple model cardinalized probability hypothesis density filter
NASA Astrophysics Data System (ADS)
Georgescu, Ramona; Willett, Peter
2011-09-01
The Probability Hypothesis Density (PHD) filter propagates the first-moment approximation to the multi-target Bayesian posterior distribution while the Cardinalized PHD (CPHD) filter propagates both the posterior likelihood of (an unlabeled) target state and the posterior probability mass function of the number of targets. Extensions of the PHD filter to the multiple model (MM) framework have been published and were implemented either with a Sequential Monte Carlo or a Gaussian Mixture approach. In this work, we introduce the multiple model version of the more elaborate CPHD filter. We present the derivation of the prediction and update steps of the MMCPHD particularized for the case of two target motion models and proceed to show that in the case of a single model, the new MMCPHD equations reduce to the original CPHD equations.
Brain tumor segmentation from multimodal magnetic resonance images via sparse representation.
Li, Yuhong; Jia, Fucang; Qin, Jing
2016-10-01
Accurately segmenting and quantifying brain gliomas from magnetic resonance (MR) images remains a challenging task because of the large spatial and structural variability among brain tumors. To develop a fully automatic and accurate brain tumor segmentation algorithm, we present a probabilistic model of multimodal MR brain tumor segmentation. This model combines sparse representation and the Markov random field (MRF) to solve the spatial and structural variability problem. We formulate the tumor segmentation problem as a multi-classification task by labeling each voxel as the maximum posterior probability. We estimate the maximum a posteriori (MAP) probability by introducing the sparse representation into a likelihood probability and a MRF into the prior probability. Considering the MAP as an NP-hard problem, we convert the maximum posterior probability estimation into a minimum energy optimization problem and employ graph cuts to find the solution to the MAP estimation. Our method is evaluated using the Brain Tumor Segmentation Challenge 2013 database (BRATS 2013) and obtained Dice coefficient metric values of 0.85, 0.75, and 0.69 on the high-grade Challenge data set, 0.73, 0.56, and 0.54 on the high-grade Challenge LeaderBoard data set, and 0.84, 0.54, and 0.57 on the low-grade Challenge data set for the complete, core, and enhancing regions. The experimental results show that the proposed algorithm is valid and ranks 2nd compared with the state-of-the-art tumor segmentation algorithms in the MICCAI BRATS 2013 challenge. Copyright © 2016 Elsevier B.V. All rights reserved.
Asking better questions: How presentation formats influence information search.
Wu, Charley M; Meder, Björn; Filimon, Flavia; Nelson, Jonathan D
2017-08-01
While the influence of presentation formats has been widely studied in Bayesian reasoning tasks, we present the first systematic investigation of how presentation formats influence information search decisions. Four experiments were conducted across different probabilistic environments, where subjects (N = 2,858) chose between 2 possible search queries, each with binary probabilistic outcomes, with the goal of maximizing classification accuracy. We studied 14 different numerical and visual formats for presenting information about the search environment, constructed across 6 design features that have been prominently related to improvements in Bayesian reasoning accuracy (natural frequencies, posteriors, complement, spatial extent, countability, and part-to-whole information). The posterior variants of the icon array and bar graph formats led to the highest proportion of correct responses, and were substantially better than the standard probability format. Results suggest that presenting information in terms of posterior probabilities and visualizing natural frequencies using spatial extent (a perceptual feature) were especially helpful in guiding search decisions, although environments with a mixture of probabilistic and certain outcomes were challenging across all formats. Subjects who made more accurate probability judgments did not perform better on the search task, suggesting that simple decision heuristics may be used to make search decisions without explicitly applying Bayesian inference to compute probabilities. We propose a new take-the-difference (TTD) heuristic that identifies the accuracy-maximizing query without explicit computation of posterior probabilities. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Hepatitis disease detection using Bayesian theory
NASA Astrophysics Data System (ADS)
Maseleno, Andino; Hidayati, Rohmah Zahroh
2017-02-01
This paper presents hepatitis disease diagnosis using Bayesian theory, for a better understanding of the theory. In this research, we used Bayesian theory for detecting hepatitis disease and displaying the result of the diagnosis process. Bayes' theorem, rediscovered and perfected by Laplace, rests on a basic idea: use the known prior probability and conditional probability density of the parameters to calculate the corresponding posterior probability via Bayes' theorem, and then use that posterior probability for inference and decision making. Bayesian methods combine existing knowledge, the prior probabilities, with additional knowledge derived from new data, the likelihood function. The initial symptoms of hepatitis include malaise, fever, and headache, so we compute the probability of hepatitis given the presence of malaise, fever, and headache. The results reveal that Bayesian theory successfully identified the existence of hepatitis disease.
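A minimal sketch of the calculation the abstract describes, assuming conditionally independent symptoms (naive Bayes); every probability below is an illustrative placeholder, not a clinical estimate from the paper.

    def hepatitis_posterior(prior, p_sym_h, p_sym_not_h, present):
        like_h = like_not = 1.0
        for s in present:                                  # independence assumption
            like_h *= p_sym_h[s]
            like_not *= p_sym_not_h[s]
        num = prior * like_h
        return num / (num + (1.0 - prior) * like_not)      # Bayes' theorem

    p_h = {"malaise": 0.8, "fever": 0.7, "headache": 0.6}    # P(symptom | hepatitis)
    p_not = {"malaise": 0.2, "fever": 0.1, "headache": 0.3}  # P(symptom | no hepatitis)
    print(hepatitis_posterior(0.05, p_h, p_not, ["malaise", "fever", "headache"]))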
Tentori, Katya; Chater, Nick; Crupi, Vincenzo
2016-04-01
Inductive reasoning requires exploiting links between evidence and hypotheses. This can be done focusing either on the posterior probability of the hypothesis when updated on the new evidence or on the impact of the new evidence on the credibility of the hypothesis. But are these two cognitive representations equally reliable? This study investigates this question by comparing probability and impact judgments on the same experimental materials. The results indicate that impact judgments are more consistent in time and more accurate than probability judgments. Impact judgments also predict the direction of errors in probability judgments. These findings suggest that human inductive reasoning relies more on estimating evidential impact than on posterior probability. Copyright © 2015 Cognitive Science Society, Inc.
Efficiency of nuclear and mitochondrial markers recovering and supporting known amniote groups.
Lambret-Frotté, Julia; Perini, Fernando Araújo; de Moraes Russo, Claudia Augusta
2012-01-01
We have analysed the efficiency of all mitochondrial protein-coding genes and six nuclear markers (Adora3, Adrb2, Bdnf, Irbp, Rag2 and Vwf) in reconstructing and statistically supporting known amniote groups (murines, rodents, primates, eutherians, metatherians, therians). The efficiencies of maximum likelihood, Bayesian inference, maximum parsimony, neighbor-joining and UPGMA were also evaluated, by assessing the number of correctly and incorrectly recovered groupings. In addition, we have compared support values using the conservative bootstrap test and Bayesian posterior probabilities. First, no correlation was observed between gene size and marker efficiency in recovering or supporting correct nodes. As expected, tree-building methods performed similarly; even UPGMA, in some cases, outperformed other more extensively used methods. Bayesian posterior probabilities tend to show much higher support values than the conservative bootstrap test, for correct and incorrect nodes alike. Our results also suggest that nuclear markers do not necessarily show a better performance than mitochondrial genes. The so-called dependency among mitochondrial markers was not observed when comparing genome performances. Finally, the amniote groups with the lowest recovery rates were therians and rodents, despite the morphological support for their monophyletic status. We suggest that, regardless of the tree-building method, a few carefully selected genes are able to unfold a detailed and robust scenario of phylogenetic hypotheses, particularly if taxon sampling is increased.
The ranking probability approach and its usage in design and analysis of large-scale studies.
Kuo, Chia-Ling; Zaykin, Dmitri
2013-01-01
In experiments with many statistical tests there is a need to balance type I and type II error rates while taking multiplicity into account. In the traditional approach, the nominal α-level such as 0.05 is adjusted by the number of tests, m, i.e., as 0.05/m. Assuming that some proportion of tests represent "true signals", that is, originate from a scenario where the null hypothesis is false, power depends on the number of true signals and the respective distribution of effect sizes. One way to define power is for it to be the probability of making at least one correct rejection at the assumed α-level. We advocate an alternative way of establishing how "well-powered" a study is. In our approach, useful for studies with multiple tests, the ranking probability P(u) is controlled, defined as the probability of making at least u correct rejections while rejecting the hypotheses with the u smallest P-values. The two approaches are statistically related. The probability that the smallest P-value is a true signal (i.e., P(1)) is equal to the power at the level α/m, to a very good approximation. Ranking probabilities are also related to the false discovery rate and to the Bayesian posterior probability of the null hypothesis. We study properties of our approach when the effect size distribution is replaced for convenience by a single "typical" value taken to be the mean of the underlying distribution. We conclude that its performance is often satisfactory under this simplification; however, substantial imprecision is to be expected when the number of tests m is very large and u is small. Precision is largely restored when three values with the respective abundances are used instead of a single typical effect size value.
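Since the symbols above are reconstructed from a garbled source, the sketch below uses matching notation and estimates the ranking probability P(u) by simulation under an illustrative normal-means model with a single "typical" effect size; it is a toy stand-in for the paper's calculations, not its method.

    import numpy as np
    from scipy.stats import norm

    def ranking_probability(m, n_signal, effect, u, n_rep=20000, seed=0):
        """P(u): all of the u smallest P-values come from true signals."""
        rng = np.random.default_rng(seed)
        hits = 0
        for _ in range(n_rep):
            z = rng.standard_normal(m)
            z[:n_signal] += effect                 # true signals get a mean shift
            p = 2.0 * norm.sf(np.abs(z))           # two-sided p-values
            hits += np.all(np.argsort(p)[:u] < n_signal)
        return hits / n_rep

    print(ranking_probability(m=1000, n_signal=20, effect=4.0, u=5))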
Back to Normal! Gaussianizing posterior distributions for cosmological probes
NASA Astrophysics Data System (ADS)
Schuhmann, Robert L.; Joachimi, Benjamin; Peiris, Hiranya V.
2014-05-01
We present a method to map multivariate non-Gaussian posterior probability densities into Gaussian ones via nonlinear Box-Cox transformations, and generalizations thereof. This is analogous to the search for normal parameters in the CMB, but can in principle be applied to any probability density that is continuous and unimodal. The search for the optimally Gaussianizing transformation amongst the Box-Cox family is performed via a maximum likelihood formalism. We can judge the quality of the found transformation a posteriori: qualitatively via statistical tests of Gaussianity, and more illustratively by how well it reproduces the credible regions. The method permits an analytical reconstruction of the posterior from a sample, e.g. a Markov chain, and simplifies the subsequent joint analysis with other experiments. Furthermore, it permits the characterization of a non-Gaussian posterior in a compact and efficient way. The expression for the non-Gaussian posterior can be employed to find analytic formulae for the Bayesian evidence, and consequently be used for model comparison.
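The core ingredient is easy to demonstrate in one dimension with standard tools; the multivariate, correlated case treated in the paper is more involved. A sketch, assuming positive-valued samples:

    import numpy as np
    from scipy import stats

    sample = np.random.default_rng(1).gamma(shape=2.0, scale=1.0, size=5000)
    transformed, lam = stats.boxcox(sample)            # ML estimate of the Box-Cox lambda
    print(lam, stats.normaltest(transformed).pvalue)   # check Gaussianity after the mapping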
Inference of reaction rate parameters based on summary statistics from experiments
Khalil, Mohammad; Chowdhary, Kamaljit Singh; Safta, Cosmin; ...
2016-10-15
Here, we present the results of an application of Bayesian inference and maximum entropy methods for the estimation of the joint probability density for the Arrhenius rate parameters of the rate coefficient of the H2/O2-mechanism chain branching reaction H + O2 → OH + O. Available published data is in the form of summary statistics in terms of nominal values and error bars of the rate coefficient of this reaction at a number of temperature values obtained from shock-tube experiments. Our approach relies on generating data, in this case OH concentration profiles, consistent with the given summary statistics, using Approximate Bayesian Computation methods and a Markov Chain Monte Carlo procedure. The approach permits the forward propagation of parametric uncertainty through the computational model in a manner that is consistent with the published statistics. A consensus joint posterior on the parameters is obtained by pooling the posterior parameter densities given each consistent data set. To expedite this process, we construct efficient surrogates for the OH concentration using a combination of Padé and polynomial approximants. These surrogate models adequately represent forward model observables and their dependence on input parameters and are computationally efficient to allow their use in the Bayesian inference procedure. We also utilize Gauss-Hermite quadrature with Gaussian proposal probability density functions for moment computation resulting in orders of magnitude speedup in data likelihood evaluation. Despite the strong non-linearity in the model, the consistent data sets all result in nearly Gaussian conditional parameter probability density functions. The technique also accounts for nuisance parameters in the form of Arrhenius parameters of other rate coefficients with prescribed uncertainty. The resulting pooled parameter probability density function is propagated through stoichiometric hydrogen-air auto-ignition computations to illustrate the need to account for correlation among the Arrhenius rate parameters of one reaction and across rate parameters of different reactions.
Naive Probability: A Mental Model Theory of Extensional Reasoning.
ERIC Educational Resources Information Center
Johnson-Laird, P. N.; Legrenzi, Paolo; Girotto, Vittorio; Legrenzi, Maria Sonino; Caverni, Jean-Paul
1999-01-01
Outlines a theory of naive probability in which individuals who are unfamiliar with the probability calculus can infer the probabilities of events in an "extensional" way. The theory accommodates reasoning based on numerical premises, and explains how naive reasoners can infer posterior probabilities without relying on Bayes's theorem.…
Gaussianization for fast and accurate inference from cosmological data
NASA Astrophysics Data System (ADS)
Schuhmann, Robert L.; Joachimi, Benjamin; Peiris, Hiranya V.
2016-06-01
We present a method to transform multivariate unimodal non-Gaussian posterior probability densities into approximately Gaussian ones via non-linear mappings, such as Box-Cox transformations and generalizations thereof. This permits an analytical reconstruction of the posterior from a point sample, like a Markov chain, and simplifies the subsequent joint analysis with other experiments. This way, a multivariate posterior density can be reported efficiently, by compressing the information contained in Markov Chain Monte Carlo samples. Further, the model evidence integral (i.e., the marginal likelihood) can be computed analytically. This method is analogous to the search for normal parameters in the cosmic microwave background, but is more general. The search for the optimally Gaussianizing transformation is performed computationally through a maximum-likelihood formalism; its quality can be judged by how well the credible regions of the posterior are reproduced. We demonstrate that our method outperforms kernel density estimates in this objective. Further, we select marginal posterior samples from Planck data with several distinct strongly non-Gaussian features, and verify the reproduction of the marginal contours. To demonstrate evidence computation, we Gaussianize the joint distribution of data from weak lensing and baryon acoustic oscillations, for different cosmological models, and find a preference for flat Λ cold dark matter. Comparing to values computed with the Savage-Dickey density ratio, and Population Monte Carlo, we find good agreement of our method within the spread of the other two.
Broekhuizen, Henk; IJzerman, Maarten J; Hauber, A Brett; Groothuis-Oudshoorn, Catharina G M
2017-03-01
The need for patient engagement has been recognized by regulatory agencies, but there is no consensus about how to operationalize this. One approach is the formal elicitation and use of patient preferences for weighing clinical outcomes. The aim of this study was to demonstrate how patient preferences can be used to weigh clinical outcomes when both preferences and clinical outcomes are uncertain by applying a probabilistic value-based multi-criteria decision analysis (MCDA) method. Probability distributions were used to model random variation and parameter uncertainty in preferences, and parameter uncertainty in clinical outcomes. The posterior value distributions and rank probabilities for each treatment were obtained using Monte-Carlo simulations. The probability of achieving the first rank is the probability that a treatment represents the highest value to patients. We illustrated our methodology for a simplified case on six HIV treatments. Preferences were modeled with normal distributions and clinical outcomes were modeled with beta distributions. The treatment value distributions showed the rank order of treatments according to patients and illustrate the remaining decision uncertainty. This study demonstrated how patient preference data can be used to weigh clinical evidence using MCDA. The model takes into account uncertainty in preferences and clinical outcomes. The model can support decision makers during the aggregation step of the MCDA process and provides a first step toward preference-based personalized medicine, yet requires further testing regarding its appropriate use in real-world settings.
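A compact sketch of the aggregation step: preference weights are drawn from normal distributions, clinical outcome probabilities from beta distributions, treatment values are computed as weighted sums, and first-rank probabilities are read off the simulations. All numbers, and the three-criteria setup, are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    n_draw, n_treat = 10000, 3
    w_mu, w_sd = np.array([0.5, 0.3, 0.2]), np.array([0.05, 0.05, 0.05])
    a = np.array([[80, 20, 5], [60, 30, 2], [90, 40, 9]])     # beta "successes"
    b = np.array([[20, 80, 95], [40, 70, 98], [10, 60, 91]])  # beta "failures"

    w = rng.normal(w_mu, w_sd, size=(n_draw, 3))              # uncertain preferences
    x = rng.beta(a, b, size=(n_draw, n_treat, 3))             # uncertain outcomes
    value = np.einsum('dk,dtk->dt', w, x)                     # value of each treatment
    rank1 = np.bincount(value.argmax(axis=1), minlength=n_treat) / n_draw
    print(rank1)    # probability each treatment represents the highest value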
Boos, Moritz; Seer, Caroline; Lange, Florian; Kopp, Bruno
2016-01-01
Cognitive determinants of probabilistic inference were examined using hierarchical Bayesian modeling techniques. A classic urn-ball paradigm served as experimental strategy, involving a factorial two (prior probabilities) by two (likelihoods) design. Five computational models of cognitive processes were compared with the observed behavior. Parameter-free Bayesian posterior probabilities and parameter-free base rate neglect provided inadequate models of probabilistic inference. The introduction of distorted subjective probabilities yielded more robust and generalizable results. A general class of (inverted) S-shaped probability weighting functions had been proposed; however, the possibility of large differences in probability distortions not only across experimental conditions, but also across individuals, seems critical for the model's success. It also seems advantageous to consider individual differences in parameters of probability weighting as being sampled from weakly informative prior distributions of individual parameter values. Thus, the results from hierarchical Bayesian modeling converge with previous results in revealing that probability weighting parameters show considerable task dependency and individual differences. Methodologically, this work exemplifies the usefulness of hierarchical Bayesian modeling techniques for cognitive psychology. Theoretically, human probabilistic inference might be best described as the application of individualized strategic policies for Bayesian belief revision. PMID:27303323
Generalized Wishart Mixtures for Unsupervised Classification of PolSAR Data
NASA Astrophysics Data System (ADS)
Li, Lan; Chen, Erxue; Li, Zengyuan
2013-01-01
This paper presents an unsupervised clustering algorithm based upon the expectation maximization (EM) algorithm for finite mixture modelling, using the complex Wishart probability density function (PDF) for the probabilities. The mixture model makes it possible to represent heterogeneous thematic classes that are not well fitted by a unimodal Wishart distribution. To make the computation fast and robust, we use the recently proposed generalized gamma distribution (GΓD) for the single-polarization intensity data to form the initial partition. We then use the Wishart probability density function for the corresponding sample covariance matrix to calculate the posterior class probabilities for each pixel. The posterior class probabilities are used for the prior probability estimates of each class and as weights for all class parameter updates. The proposed method is evaluated and compared with the Wishart H-Alpha-A classification. Preliminary results show that the proposed method has better performance.
Calibration of micromechanical parameters for DEM simulations by using the particle filter
NASA Astrophysics Data System (ADS)
Cheng, Hongyang; Shuku, Takayuki; Thoeni, Klaus; Yamamoto, Haruyuki
2017-06-01
The calibration of DEM models is typically accomplished by trial and error. However, the procedure lacks objectivity and carries several uncertainties. To deal with these issues, the particle filter is employed as a novel approach to calibrate DEM models of granular soils. The posterior probability distribution of the micro-parameters that give numerical results in good agreement with the experimental response of a Toyoura sand specimen is approximated by independent model trajectories, referred to as 'particles', based on Monte Carlo sampling. The soil specimen is modeled by polydisperse packings with different numbers of spherical grains. Prepared in 'stress-free' states, the packings are subjected to triaxial quasistatic loading. Given the experimental data, the posterior probability distribution is incrementally updated until convergence is reached. The resulting 'particles' with higher weights are identified as the calibration results. The evolutions of the weighted averages and the posterior probability distribution of the micro-parameters are plotted to show the advantage of using a particle filter, i.e., multiple solutions are identified for each parameter with known probabilities of reproducing the experimental response.
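A schematic of one particle-filter update in this spirit: each particle is a candidate micro-parameter set, its weight is multiplied by the likelihood of the newest experimental observation, and a degenerate ensemble is resampled. `run_dem_model` is a hypothetical stand-in for the expensive DEM simulation, and the Gaussian error model is an assumption.

    import numpy as np

    def pf_update(particles, weights, y_obs, sigma, run_dem_model, rng):
        pred = np.array([run_dem_model(p) for p in particles])
        weights = weights * np.exp(-0.5 * ((pred - y_obs) / sigma) ** 2)  # Gaussian error model
        weights /= weights.sum()
        ess = 1.0 / np.sum(weights ** 2)                # effective sample size
        if ess < 0.5 * len(particles):                  # resample when degenerate
            idx = rng.choice(len(particles), size=len(particles), p=weights)
            particles = particles[idx]
            weights = np.full(len(particles), 1.0 / len(particles))
        return particles, weights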
Effect of posterior crown margin placement on gingival health.
Reitemeier, Bernd; Hänsel, Kristina; Walter, Michael H; Kastner, Christian; Toutenburg, Helge
2002-02-01
The clinical impact of posterior crown margin placement on gingival health has not been thoroughly quantified. This study evaluated the effect of posterior crown margin placement with multivariate analysis. Ten general dentists reviewed 240 patients with 480 metal-ceramic crowns in a prospective clinical trial. The alloy was randomly selected from 2 high gold, 1 low gold, and 1 palladium alloy. Variables were the alloy used, oral hygiene index score before treatment, location of crown margins at baseline, and plaque index and sulcus bleeding index scores recorded for restored and control teeth after 1 year. The effect of crown margin placement on sulcular bleeding and plaque accumulation was analyzed with regression models (P<.05). The probability of plaque at 1 year increased with increasing oral hygiene index score before treatment. The lingual surfaces demonstrated the highest probability of plaque. The risk of bleeding at intrasulcular posterior crown margins was approximately twice that at supragingival margins. Poor oral hygiene before treatment and plaque also were associated with sulcular bleeding. Facial sites exhibited a lower probability of sulcular bleeding than lingual surfaces. Type of alloy did not influence sulcular bleeding. In this study, placement of crown margins was one of several parameters that affected gingival health.
Pig Data and Bayesian Inference on Multinomial Probabilities
ERIC Educational Resources Information Center
Kern, John C.
2006-01-01
Bayesian inference on multinomial probabilities is conducted based on data collected from the game Pass the Pigs[R]. Prior information on these probabilities is readily available from the instruction manual, and is easily incorporated in a Dirichlet prior. Posterior analysis of the scoring probabilities quantifies the discrepancy between empirical…
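The Dirichlet-multinomial update at the heart of such an analysis is a one-liner; the counts below are illustrative, not the game's actual scoring data.

    import numpy as np

    prior = np.array([1.0, 1.0, 1.0, 1.0])    # Dirichlet prior parameters
    counts = np.array([23, 11, 5, 2])         # observed outcome frequencies
    posterior = prior + counts                # conjugacy: just add the counts
    print(posterior / posterior.sum())        # posterior mean of each probability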
Danaei, Goodarz; Finucane, Mariel M; Lin, John K; Singh, Gitanjali M; Paciorek, Christopher J; Cowan, Melanie J; Farzadfar, Farshad; Stevens, Gretchen A; Lim, Stephen S; Riley, Leanne M; Ezzati, Majid
2011-02-12
Data for trends in blood pressure are needed to understand the effects of its dietary, lifestyle, and pharmacological determinants; set intervention priorities; and evaluate national programmes. However, few worldwide analyses of trends in blood pressure have been done. We estimated worldwide trends in population mean systolic blood pressure (SBP). We estimated trends and their uncertainties in mean SBP for adults 25 years and older in 199 countries and territories. We obtained data from published and unpublished health examination surveys and epidemiological studies (786 country-years and 5·4 million participants). For each sex, we used a Bayesian hierarchical model to estimate mean SBP by age, country, and year, accounting for whether a study was nationally representative. In 2008, age-standardised mean SBP worldwide was 128·1 mm Hg (95% uncertainty interval 126·7-129·4) in men and 124·4 mm Hg (123·0-125·9) in women. Globally, between 1980 and 2008, SBP decreased by 0·8 mm Hg per decade (-0·4 to 2·2, posterior probability of being a true decline=0·90) in men and 1·0 mm Hg per decade (-0·3 to 2·3, posterior probability=0·93) in women. Female SBP decreased by 3·5 mm Hg or more per decade in western Europe and Australasia (posterior probabilities ≥0·999). Male SBP fell most in high-income North America, by 2·8 mm Hg per decade (1·3-4·5, posterior probability >0·999), followed by Australasia and western Europe where it decreased by more than 2·0 mm Hg per decade (posterior probabilities >0·98). SBP rose in Oceania, east Africa, and south and southeast Asia for both sexes, and in west Africa for women, with the increases ranging 0·8-1·6 mm Hg per decade in men (posterior probabilities 0·72-0·91) and 1·0-2·7 mm Hg per decade for women (posterior probabilities 0·75-0·98). Female SBP was highest in some east and west African countries, with means of 135 mm Hg or greater. Male SBP was highest in Baltic and east and west African countries, where mean SBP reached 138 mm Hg or more. Men and women in western Europe had the highest SBP in high-income regions. On average, global population SBP decreased slightly since 1980, but trends varied significantly across regions and countries. SBP is currently highest in low-income and middle-income countries. Effective population-based and personal interventions should be targeted towards low-income and middle-income countries. Funding Bill & Melinda Gates Foundation and WHO. Copyright © 2011 Elsevier Ltd. All rights reserved.
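For readers unfamiliar with this style of reporting, a "posterior probability of being a true decline" is simply the share of posterior draws in which the trend has the stated sign. A toy sketch with simulated draws chosen to roughly match the male figure above:

    import numpy as np

    # simulated posterior draws of the decline in SBP, mm Hg per decade
    draws = np.random.default_rng(2).normal(0.8, 0.66, size=20000)
    print(np.mean(draws > 0.0))    # ~0.9: posterior probability of a true decline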
Chang, Edward F; Breshears, Jonathan D; Raygor, Kunal P; Lau, Darryl; Molinaro, Annette M; Berger, Mitchel S
2017-01-01
OBJECTIVE Functional mapping using direct cortical stimulation is the gold standard for the prevention of postoperative morbidity during resective surgery in dominant-hemisphere perisylvian regions. Its role is necessitated by the significant interindividual variability that has been observed for essential language sites. The aim in this study was to determine the statistical probability distribution of eliciting aphasic errors for any given stereotactically based cortical position in a patient cohort and to quantify the variability at each cortical site. METHODS Patients undergoing awake craniotomy for dominant-hemisphere primary brain tumor resection between 1999 and 2014 at the authors' institution were included in this study, which included counting and picture-naming tasks during dense speech mapping via cortical stimulation. Positive and negative stimulation sites were collected using an intraoperative frameless stereotactic neuronavigation system and were converted to Montreal Neurological Institute coordinates. Data were iteratively resampled to create mean and standard deviation probability maps for speech arrest and anomia. Patients were divided into groups with a "classic" or an "atypical" location of speech function, based on the resultant probability maps. Patient and clinical factors were then assessed for their association with an atypical location of speech sites by univariate and multivariate analysis. RESULTS Across 102 patients undergoing speech mapping, the overall probabilities of speech arrest and anomia were 0.51 and 0.33, respectively. Speech arrest was most likely to occur with stimulation of the posterior inferior frontal gyrus (maximum probability from individual bin = 0.025), and variance was highest in the dorsal premotor cortex and the posterior superior temporal gyrus. In contrast, stimulation within the posterior perisylvian cortex resulted in the maximum mean probability of anomia (maximum probability = 0.012), with large variance in the regions surrounding the posterior superior temporal gyrus, including the posterior middle temporal, angular, and supramarginal gyri. Patients with atypical speech localization were far more likely to have tumors in canonical Broca's or Wernicke's areas (OR 7.21, 95% CI 1.67-31.09, p < 0.01) or to have multilobar tumors (OR 12.58, 95% CI 2.22-71.42, p < 0.01) than were patients with classic speech localization. CONCLUSIONS This study provides statistical probability distribution maps for aphasic errors during cortical stimulation mapping in a patient cohort. Thus, the authors provide an expected probability of inducing speech arrest and anomia from specific 10-mm² cortical bins in an individual patient. In addition, they highlight key regions of interindividual mapping variability that should be considered preoperatively. They believe these results will aid surgeons in their preoperative planning of eloquent cortex resection.
ERIC Educational Resources Information Center
Satake, Eiki; Amato, Philip P.
2008-01-01
This paper presents an alternative version of formulas of conditional probabilities and Bayes' rule that demonstrate how the truth table of elementary mathematical logic applies to the derivations of the conditional probabilities of various complex, compound statements. This new approach is used to calculate the prior and posterior probabilities…
A computer program for estimation from incomplete multinomial data
NASA Technical Reports Server (NTRS)
Credeur, K. R.
1978-01-01
Coding is given for maximum likelihood and Bayesian estimation of the vector p of multinomial cell probabilities from incomplete data. Also included is coding to calculate and approximate elements of the posterior mean and covariance matrices. The program is written in FORTRAN 4 language for the Control Data CYBER 170 series digital computer system with network operating system (NOS) 1.1. The program requires approximately 44000 octal locations of core storage. A typical case requires from 72 seconds to 92 seconds on CYBER 175 depending on the value of the prior parameter.
Neural Signatures of Intransitive Preferences
Kalenscher, Tobias; Tobler, Philippe N.; Huijbers, Willem; Daselaar, Sander M.; Pennartz, Cyriel M.A.
2010-01-01
It is often assumed that decisions are made by rank-ordering and thus comparing the available choice options based on their subjective values. Rank-ordering requires that the alternatives’ subjective values are mentally represented at least on an ordinal scale. Because one alternative cannot be at the same time better and worse than another alternative, choices should satisfy transitivity (if alternative A is preferred over B, and B is preferred over C, A should be preferred over C). Yet, individuals often demonstrate striking violations of transitivity (preferring C over A). We used functional magnetic resonance imaging to study the neural correlates of intransitive choices between gambles varying in magnitude and probability of financial gains. Behavioral intransitivities were common. They occurred because participants did not evaluate the gambles independently, but in comparison with the alternative gamble presented. Neural value signals in prefrontal and parietal cortex were not ordinal-scaled and transitive, but reflected fluctuations in the gambles’ local, pairing-dependent preference-ranks. Detailed behavioral analysis of gamble preferences showed that, depending on the difference in the offered gambles’ attributes, participants gave variable priority to magnitude or probability and thus shifted between preferring richer or safer gambles. The variable, context-dependent priority given to magnitude and probability was tracked by insula (magnitude) and posterior cingulate (probability). Their activation-balance may reflect the individual decision rules leading to intransitivities. Thus, the phenomenon of intransitivity is reflected in the organization of the neural systems involved in risky decision-making. PMID:20814565
NASA Astrophysics Data System (ADS)
Zhang, G.; Lu, D.; Ye, M.; Gunzburger, M.
2011-12-01
Markov Chain Monte Carlo (MCMC) methods have been widely used in many fields of uncertainty analysis to estimate the posterior distributions of parameters and credible intervals of predictions in the Bayesian framework. However, in practice, MCMC may be computationally unaffordable due to slow convergence and the excessive number of forward model executions required, especially when the forward model is expensive to compute. Both disadvantages arise from the curse of dimensionality, i.e., the posterior distribution is usually a multivariate function of parameters. Recently, sparse grid method has been demonstrated to be an effective technique for coping with high-dimensional interpolation or integration problems. Thus, in order to accelerate the forward model and avoid the slow convergence of MCMC, we propose a new method for uncertainty analysis based on sparse grid interpolation and quasi-Monte Carlo sampling. First, we construct a polynomial approximation of the forward model in the parameter space by using the sparse grid interpolation. This approximation then defines an accurate surrogate posterior distribution that can be evaluated repeatedly at minimal computational cost. Second, instead of using MCMC, a quasi-Monte Carlo method is applied to draw samples in the parameter space. Then, the desired probability density function of each prediction is approximated by accumulating the posterior density values of all the samples according to the prediction values. Our method has the following advantages: (1) the polynomial approximation of the forward model on the sparse grid provides a very efficient evaluation of the surrogate posterior distribution; (2) the quasi-Monte Carlo method retains the same accuracy in approximating the PDF of predictions but avoids all disadvantages of MCMC. The proposed method is applied to a controlled numerical experiment of groundwater flow modeling. The results show that our method attains the same accuracy much more efficiently than traditional MCMC.
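A toy version of the idea, with a cheap grid interpolant standing in for the sparse-grid surrogate and Sobol points standing in for the quasi-Monte Carlo sample; the "slow" posterior here is a hypothetical stand-in for an expensive forward model, not the groundwater application itself.

    import numpy as np
    from scipy.stats import qmc
    from scipy.interpolate import RegularGridInterpolator

    def slow_log_post(x):                       # expensive model stand-in
        return -0.5 * np.sum((x - 0.3) ** 2, axis=-1) / 0.01

    g = np.linspace(0.0, 1.0, 25)               # coarse grid (a sparse grid in high dim)
    X, Y = np.meshgrid(g, g)
    vals = slow_log_post(np.stack([X, Y], axis=-1))
    surrogate = RegularGridInterpolator((g, g), vals.T)

    pts = qmc.Sobol(d=2, scramble=True, seed=0).random(4096)  # quasi-Monte Carlo sample
    w = np.exp(surrogate(pts))                  # surrogate posterior density at each point
    w /= w.sum()
    print(np.sum(pts * w[:, None], axis=0))     # posterior mean estimate (~0.3, 0.3)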
Jasra, Ajay; Law, Kody J. H.; Zhou, Yan
2016-01-01
Our paper considers uncertainty quantification for an elliptic nonlocal equation. In particular, it is assumed that the parameters which define the kernel in the nonlocal operator are uncertain and a priori distributed according to a probability measure. It is shown that the induced probability measure on some quantities of interest arising from functionals of the solution to the equation with random inputs is well-defined, as is the posterior distribution on parameters given observations. As the elliptic nonlocal equation cannot be solved exactly, approximate posteriors are constructed. The multilevel Monte Carlo (MLMC) and multilevel sequential Monte Carlo (MLSMC) sampling algorithms are used for a priori and a posteriori estimation, respectively, of quantities of interest. Furthermore, these algorithms reduce the amount of work to estimate posterior expectations, for a given level of error, relative to Monte Carlo and i.i.d. sampling from the posterior at a given level of approximation of the solution of the elliptic nonlocal equation.
Bayesian approach to inverse statistical mechanics.
Habeck, Michael
2014-05-01
Inverse statistical mechanics aims to determine particle interactions from ensemble properties. This article looks at this inverse problem from a Bayesian perspective and discusses several statistical estimators to solve it. In addition, a sequential Monte Carlo algorithm is proposed that draws the interaction parameters from their posterior probability distribution. The posterior probability involves an intractable partition function that is estimated along with the interactions. The method is illustrated for inverse problems of varying complexity, including the estimation of a temperature, the inverse Ising problem, maximum entropy fitting, and the reconstruction of molecular interaction potentials.
NASA Astrophysics Data System (ADS)
Al-Mudhafar, W. J.
2013-12-01
Precise prediction of rock facies leads to adequate reservoir characterization by improving the porosity-permeability relationships used to estimate properties in non-cored intervals. It also helps to accurately identify the spatial facies distribution, so that an accurate reservoir model can be built for optimal future reservoir performance. In this paper, facies estimation has been done through multinomial logistic regression (MLR) with respect to the well logs and core data in a well in the upper sandstone formation of the South Rumaila oil field. The independent variables are gamma ray, formation density, water saturation, shale volume, log porosity, core porosity, and core permeability. Firstly, a robust sequential imputation algorithm has been used to impute the missing data. This algorithm starts from a complete subset of the dataset and sequentially estimates the missing values in an incomplete observation by minimizing the determinant of the covariance of the augmented data matrix. The observation is then added to the complete data matrix and the algorithm continues with the next observation with missing values. The MLR has been chosen to estimate the maximum likelihood and minimize the standard error for the nonlinear relationships between facies and the core and log data. The MLR predicts the probabilities of the different possible facies given each independent variable by constructing a linear predictor function with a set of weights that are linearly combined with the independent variables via a dot product. A beta distribution of facies has been considered as prior knowledge, and the resulting predicted probability (posterior) has been estimated from the MLR based on Bayes' theorem, which relates the predicted probability (posterior) to the conditional probability and the prior knowledge. To assess the statistical accuracy of the model, the bootstrap is carried out to estimate the extra-sample prediction error by randomly drawing datasets with replacement from the training data. Each sample has the same size as the original training set, and the procedure can be repeated N times to produce N bootstrap datasets to re-fit the model accordingly, decreasing the squared difference between the estimated and observed categorical variables (facies) and thereby reducing the degree of uncertainty.
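A hedged sketch of the estimation step with standard tools (not the author's code): a multinomial logistic regression fitted on cored intervals, returning per-facies posterior probabilities for uncored intervals. Inputs and names are illustrative.

    from sklearn.linear_model import LogisticRegression

    def facies_probabilities(X_cored, facies, X_uncored):
        mlr = LogisticRegression(max_iter=5000).fit(X_cored, facies)  # multinomial fit
        return mlr.classes_, mlr.predict_proba(X_uncored)   # P(facies | logs) per class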
Screening for SNPs with Allele-Specific Methylation based on Next-Generation Sequencing Data.
Hu, Bo; Ji, Yuan; Xu, Yaomin; Ting, Angela H
2013-05-01
Allele-specific methylation (ASM) has long been studied but mainly documented in the context of genomic imprinting and X chromosome inactivation. Taking advantage of the next-generation sequencing technology, we conduct a high-throughput sequencing experiment with four prostate cell lines to survey the whole genome and identify single nucleotide polymorphisms (SNPs) with ASM. A Bayesian approach is proposed to model the counts of short reads for each SNP conditional on its genotypes of multiple subjects, leading to a posterior probability of ASM. We flag SNPs with high posterior probabilities of ASM by accounting for multiple comparisons based on posterior false discovery rates. Applying the Bayesian approach to the in-house prostate cell line data, we identify 269 SNPs as candidates of ASM. A simulation study is carried out to demonstrate the quantitative performance of the proposed approach.
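The flagging step described above can be illustrated with a short sketch: given posterior probabilities of ASM, flag the largest set of SNPs whose average posterior probability of being null stays below a target posterior false discovery rate. The thresholding convention is a common one and is not claimed to match the paper's exact procedure.

import numpy as np

def flag_by_posterior_fdr(p_asm, alpha=0.05):
    # p_asm: posterior probability of ASM for each SNP
    order = np.argsort(-p_asm)                # most confident first
    local_fdr = 1.0 - p_asm[order]            # Pr(no ASM | data)
    cum_fdr = np.cumsum(local_fdr) / np.arange(1, len(p_asm) + 1)
    n_flag = int(np.sum(cum_fdr <= alpha))    # largest set with posterior FDR <= alpha
    flags = np.zeros(len(p_asm), dtype=bool)
    flags[order[:n_flag]] = True
    return flags

p = np.random.beta(0.5, 0.5, size=1000)       # toy posterior probabilities
print(flag_by_posterior_fdr(p).sum(), "SNPs flagged")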
Inference of emission rates from multiple sources using Bayesian probability theory.
Yee, Eugene; Flesch, Thomas K
2010-03-01
The determination of atmospheric emission rates from multiple sources using inversion (regularized least-squares or best-fit technique) is known to be very susceptible to measurement and model errors in the problem, rendering the solution unusable. In this paper, a new perspective is offered for this problem: namely, it is argued that the problem should be addressed as one of inference rather than inversion. Towards this objective, Bayesian probability theory is used to estimate the emission rates from multiple sources. The posterior probability distribution for the emission rates is derived, accounting fully for the measurement errors in the concentration data and the model errors in the dispersion model used to interpret the data. The Bayesian inferential methodology for emission rate recovery is validated against real dispersion data, obtained from a field experiment involving various source-sensor geometries (scenarios) consisting of four synthetic area sources and eight concentration sensors. The recovery of discrete emission rates from three different scenarios obtained using Bayesian inference and singular value decomposition inversion are compared and contrasted.
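The contrast drawn above between inversion and inference can be sketched for a linear dispersion model c ≈ A s with Gaussian noise: the SVD pseudo-inverse gives the regularized least-squares rates, while a Gaussian prior on the source rates yields a posterior mean and covariance. All matrices and variances below are toy values, not the field-experiment configuration.

import numpy as np

rng = np.random.default_rng(9)
n_sensors, n_sources = 8, 4
A = rng.uniform(0.1, 1.0, (n_sensors, n_sources))  # dispersion matrix (toy)
s_true = np.array([1.0, 2.0, 0.5, 1.5])            # true emission rates
c = A @ s_true + rng.normal(0, 0.05, n_sensors)    # noisy concentrations

s_svd = np.linalg.pinv(A) @ c                      # inversion (best fit)

sigma2, tau2 = 0.05**2, 10.0**2                    # noise / prior variances
P = np.linalg.inv(A.T @ A / sigma2 + np.eye(n_sources) / tau2)
s_bayes = P @ (A.T @ c) / sigma2                   # posterior mean
print(s_svd, s_bayes)                              # P is the posterior covariance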
Shephard, R W; Morton, J M
2018-01-01
To determine the sensitivity (Se) and specificity (Sp) of pregnancy diagnosis using transrectal ultrasonography and an ELISA for pregnancy-associated glycoprotein (PAG) in milk, in lactating dairy cows in seasonally calving herds approximately 85-100 days after the start of the herd's breeding period. Paired results were used from pregnancy diagnosis using transrectal ultrasonography and ELISA for PAG in milk carried out approximately 85 and 100 days after the start of the breeding period, respectively, from 879 cows from four herds in Victoria, Australia. A Bayesian latent class model was used to estimate the proportion of cows pregnant, the Se and Sp of each test, and covariances between test results in pregnant and non-pregnant cows. Prior probability estimates were defined using beta distributions for the expected proportion of cows pregnant, Se and Sp for each test, and covariances between tests. Markov Chain Monte Carlo iterations identified posterior distributions for each of the unknown variables. Posterior distributions for each parameter were described using medians and 95% probability (i.e. credible) intervals (PrI). The posterior median estimates for Se and Sp for each test were used to estimate positive predictive and negative predictive values across a range of pregnancy proportions. The estimate for proportion pregnant was 0.524 (95% PrI = 0.485-0.562). For pregnancy diagnosis using transrectal ultrasonography, Se and Sp were 0.939 (95% PrI = 0.890-0.974) and 0.943 (95% PrI = 0.885-0.984), respectively; for ELISA, Se and Sp were 0.963 (95% PrI = 0.919-0.990) and 0.870 (95% PrI = 0.806-0.931), respectively. The estimated covariance between test results was 0.033 (95% PrI = 0.008-0.046) and 0.035 (95% PrI = 0.018-0.078) for pregnant and non-pregnant cows, respectively. Pregnancy diagnosis results using transrectal ultrasonography had a higher positive predictive value but lower negative predictive value than results from the ELISA across the range of pregnancy proportions assessed. Pregnancy diagnosis using transrectal ultrasonography and ELISA for PAG in milk had similar Se but differed in predictive values. Pregnancy diagnosis in seasonally calving herds around 85-100 days after the start of the breeding period using the ELISA is expected to result in a higher negative predictive value but lower positive predictive value than pregnancy diagnosis using transrectal ultrasonography. Thus, with the ELISA, a higher proportion of the cows with negative results will be non-pregnant, relative to results from transrectal ultrasonography, but a lower proportion of cows with positive results will be pregnant.
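The predictive values quoted above follow from Bayes' theorem given sensitivity, specificity, and the proportion of cows pregnant. A small Python sketch using the posterior median estimates reported in the abstract (the range of proportions is illustrative):

def predictive_values(se, sp, prev):
    ppv = se * prev / (se * prev + (1 - sp) * (1 - prev))
    npv = sp * (1 - prev) / (sp * (1 - prev) + (1 - se) * prev)
    return ppv, npv

for prev in (0.3, 0.524, 0.7):
    print("ultrasonography:", predictive_values(0.939, 0.943, prev),
          "ELISA:", predictive_values(0.963, 0.870, prev))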
Smith, Bruce W; Mitchell, Derek G V; Hardin, Michael G; Jazbec, Sandra; Fridberg, Daniel; Blair, R James R; Ernst, Monique
2009-01-15
Economic decision-making involves the weighting of magnitude and probability of potential gains/losses. While previous work has examined the neural systems involved in decision-making, there is a need to understand how the parameters associated with decision-making (e.g., magnitude of expected reward, probability of expected reward and risk) modulate activation within these neural systems. In the current fMRI study, we modified the monetary wheel of fortune (WOF) task [Ernst, M., Nelson, E.E., McClure, E.B., Monk, C.S., Munson, S., Eshel, N., et al. (2004). Choice selection and reward anticipation: an fMRI study. Neuropsychologia 42(12), 1585-1597.] to examine in 25 healthy young adults the neural responses to selections of different reward magnitudes, probabilities, or risks. Selection of high, relative to low, reward magnitude increased activity in insula, amygdala, middle and posterior cingulate cortex, and basal ganglia. Selection of low-probability, as opposed to high-probability reward, increased activity in anterior cingulate cortex, as did selection of risky, relative to safe reward. In summary, decision-making that did not involve conflict, as in the magnitude contrast, recruited structures known to support the coding of reward values, and those that integrate motivational and perceptual information for behavioral responses. In contrast, decision-making under conflict, as in the probability and risk contrasts, engaged the dorsal anterior cingulate cortex whose role in conflict monitoring is well established. However, decision-making under conflict failed to activate the structures that track reward values per se. Thus, the presence of conflict in decision-making seemed to significantly alter the pattern of neural responses to simple rewards. In addition, this paradigm further clarifies the functional specialization of the cingulate cortex in processes of decision-making.
Structural Information from Single-molecule FRET Experiments Using the Fast Nano-positioning System
Röcker, Carlheinz; Nagy, Julia; Michaelis, Jens
2017-01-01
Single-molecule Förster Resonance Energy Transfer (smFRET) can be used to obtain structural information on biomolecular complexes in real-time. Thereby, multiple smFRET measurements are used to localize an unknown dye position inside a protein complex by means of trilateration. In order to obtain quantitative information, the Nano-Positioning System (NPS) uses probabilistic data analysis to combine structural information from X-ray crystallography with single-molecule fluorescence data to calculate not only the most probable position but the complete three-dimensional probability distribution, termed the posterior, which indicates the experimental uncertainty. The concept has been generalized for the analysis of smFRET networks containing numerous dye molecules. The latest version of NPS, Fast-NPS, features a new algorithm using Bayesian parameter estimation based on Markov chain Monte Carlo sampling and parallel tempering that allows for the analysis of large smFRET networks in a comparably short time. Moreover, Fast-NPS allows the calculation of the posterior by choosing one of five different models for each dye that account for the different spatial and orientational behavior exhibited by the dye molecules due to their local environment. Here we present a detailed protocol for obtaining smFRET data and applying Fast-NPS. We provide detailed instructions for the acquisition of the three input parameters of Fast-NPS: the smFRET values, as well as the quantum yield and anisotropy of the dye molecules. Recently, the NPS has been used to elucidate the architecture of an archaeal open promoter complex. These data are used to demonstrate the influence of the five different dye models on the posterior distribution. PMID:28287526
An agglomerative hierarchical clustering approach to visualisation in Bayesian clustering problems
Dawson, Kevin J.; Belkhir, Khalid
2009-01-01
Clustering problems (including the clustering of individuals into outcrossing populations, hybrid generations, full-sib families and selfing lines) have recently received much attention in population genetics. In these clustering problems, the parameter of interest is a partition of the set of sampled individuals: the sample partition. In a fully Bayesian approach to clustering problems of this type, our knowledge about the sample partition is represented by a probability distribution on the space of possible sample partitions. Since the number of possible partitions grows very rapidly with the sample size, we cannot visualise this probability distribution in its entirety, unless the sample is very small. As a solution to this visualisation problem, we recommend using an agglomerative hierarchical clustering algorithm, which we call the exact linkage algorithm. This algorithm is a special case of the maximin clustering algorithm that we introduced previously. The exact linkage algorithm is now implemented in our software package Partition View. The exact linkage algorithm takes the posterior co-assignment probabilities as input, and yields as output a rooted binary tree or, more generally, a forest of such trees. Each node of this forest defines a set of individuals, and the node height is the posterior co-assignment probability of this set. This provides a useful visual representation of the uncertainty associated with the assignment of individuals to categories. It is also a useful starting point for a more detailed exploration of the posterior distribution in terms of the co-assignment probabilities. PMID:19337306
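A rough way to reproduce this kind of tree from co-assignment probabilities is complete-linkage clustering on the distance 1 - P, which merges the pair of clusters whose minimum co-assignment probability is largest; this is in the spirit of, but not necessarily identical to, the exact linkage algorithm implemented in Partition View. The matrix below is a toy example.

import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform

P = np.array([[1.0, 0.9, 0.2, 0.1],
              [0.9, 1.0, 0.3, 0.2],
              [0.2, 0.3, 1.0, 0.8],
              [0.1, 0.2, 0.8, 1.0]])    # toy posterior co-assignment probabilities

D = squareform(1.0 - P, checks=False)   # condensed distance matrix
Z = linkage(D, method='complete')       # node height = 1 - min co-assignment
tree = dendrogram(Z, no_plot=True)      # set no_plot=False to draw the forest
print(Z)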
Al-Mezaine, Hani S
2010-01-01
We report a 55-year-old man with unusually dense, unilateral central posterior capsule pigmentation associated with the characteristic clinical features of pigment dispersion syndrome, including a Krukenberg's spindle and dense trabecular pigmentation in both eyes. A history of an old blunt ocular trauma probably caused separation of the anterior hyaloid from the back of the lens, thereby creating an avenue by which pigment could reach the potential space of Berger's from the posterior chamber.
Al-Shayyab, Mohammad H
2017-01-01
The aim of this study was to evaluate the efficacy of, and patients' subjective responses to, periodontal ligament (PDL) anesthetic injection compared to traditional local-anesthetic infiltration injection for the nonsurgical extraction of one posterior maxillary permanent tooth. All patients scheduled for nonsurgical symmetrical maxillary posterior permanent tooth extraction in the Department of Oral and Maxillofacial Surgery at the University of Jordan Hospital, Amman, Jordan over a 7-month period were invited to participate in this prospective randomized double-blinded split-mouth study. Every patient received the recommended volume of 2% lidocaine with 1:100,000 epinephrine for PDL injection on the experimental side and for local infiltration on the control side. A visual analog scale (VAS) and verbal rating scale (VRS) were used to describe pain felt during injection and extraction, respectively. Statistical significance was based on probability values <0.05 and measured using χ2 and Student t-tests and nonparametric Mann-Whitney and Kruskal-Wallis tests. Of the 73 patients eligible for this study, 55 met the inclusion criteria: 32 males and 23 females, with a mean age of 34.87±14.93 years. Differences in VAS scores and VRS data between the two techniques were statistically significant (P<0.001) and in favor of the infiltration injection. The PDL injection may not be the alternative anesthetic technique of choice to routine local infiltration for the nonsurgical extraction of one posterior maxillary permanent tooth.
Harmouche, Rola; Subbanna, Nagesh K; Collins, D Louis; Arnold, Douglas L; Arbel, Tal
2015-05-01
In this paper, a fully automatic probabilistic method for multiple sclerosis (MS) lesion classification is presented, whereby the posterior probability density function over healthy tissues and two types of lesions (T1-hypointense and T2-hyperintense) is generated at every voxel. During training, the system explicitly models the spatial variability of the intensity distributions throughout the brain by first segmenting it into distinct anatomical regions and then building regional likelihood distributions for each tissue class based on multimodal magnetic resonance image (MRI) intensities. Local class smoothness is ensured by incorporating neighboring voxel information in the prior probability through Markov random fields. The system is tested on two datasets from real multisite clinical trials consisting of multimodal MRIs from a total of 100 patients with MS. Lesion classification results based on the framework are compared with and without the regional information, as well as with other state-of-the-art methods, against the labels from expert manual raters. The metrics for comparison include Dice overlap, sensitivity, and positive predictive rates for both voxel and lesion classifications. Statistically significant improvements in Dice values, in voxel-based and lesion-based sensitivity values, and in positive predictive rates are shown when the proposed method is compared to the method without regional information, and to a widely used method [1]. This holds particularly true in the posterior fossa, an area where classification is very challenging. The proposed method allows us to provide clinicians with accurate tissue labels for T1-hypointense and T2-hyperintense lesions, two types of lesions that differ in appearance and clinical ramifications, and with a confidence level in the classification, which helps clinicians assess the classification results.
Graphical methods for the sensitivity analysis in discriminant analysis
Kim, Youngil; Anderson-Cook, Christine M.; Dae-Heung, Jang
2015-09-30
Similar to regression, many measures have been developed to detect influential data points in discriminant analysis. Many follow principles similar to those of the diagnostic measures used in linear regression. Here we focus on the impact on the predicted classification posterior probability when a data point is omitted. The new method is intuitive and easily interpretable compared to existing methods. We also propose a graphical display to show the individual movement of the posterior probability of other data points when a specific data point is omitted. This enables the summaries to capture the overall pattern of the change.
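The case-deletion idea can be sketched with an off-the-shelf discriminant classifier: fit with and without observation i and track how the predicted classification posterior probabilities of all points move. The data and the choice of linear discriminant analysis are illustrative, not the paper's setting.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(2, 1, (30, 2))])
y = np.repeat([0, 1], 30)

full = LinearDiscriminantAnalysis().fit(X, y).predict_proba(X)

i = 5                                    # omit observation i
mask = np.arange(len(y)) != i
loo = LinearDiscriminantAnalysis().fit(X[mask], y[mask]).predict_proba(X)

delta = loo[:, 1] - full[:, 1]           # movement of the posterior probabilities
print("largest shift:", np.abs(delta).max())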
Yang, Ziheng; Zhu, Tianqi
2018-02-20
The Bayesian method is noted to produce spuriously high posterior probabilities for phylogenetic trees in analysis of large datasets, but the precise reasons for this overconfidence are unknown. In general, the performance of Bayesian selection of misspecified models is poorly understood, even though this is of great scientific interest since models are never true in real data analysis. Here we characterize the asymptotic behavior of Bayesian model selection and show that when the competing models are equally wrong, Bayesian model selection exhibits surprising and polarized behaviors in large datasets, supporting one model with full force while rejecting the others. If one model is slightly less wrong than the other, the less wrong model will eventually win when the amount of data increases, but the method may become overconfident before it becomes reliable. We suggest that this extreme behavior may be a major factor for the spuriously high posterior probabilities for evolutionary trees. The philosophical implications of our results to the application of Bayesian model selection to evaluate opposing scientific hypotheses are yet to be explored, as are the behaviors of non-Bayesian methods in similar situations.
A Bayesian Approach to Evaluating Consistency between Climate Model Output and Observations
NASA Astrophysics Data System (ADS)
Braverman, A. J.; Cressie, N.; Teixeira, J.
2010-12-01
Like other scientific and engineering problems that involve physical modeling of complex systems, climate models can be evaluated and diagnosed by comparing their output to observations of similar quantities. Though the global remote sensing data record is relatively short by climate research standards, these data offer opportunities to evaluate model predictions in new ways. For example, remote sensing data are spatially and temporally dense enough to provide distributional information that goes beyond simple moments to allow quantification of temporal and spatial dependence structures. In this talk, we propose a new method for exploiting these rich data sets using a Bayesian paradigm. For a collection of climate models, we calculate posterior probabilities that its members best represent the physical system each seeks to reproduce. The posterior probability is based on the likelihood that a chosen summary statistic, computed from observations, would be obtained when the model's output is considered as a realization from a stochastic process. By exploring how posterior probabilities change with different statistics, we may paint a more quantitative and complete picture of the strengths and weaknesses of the models relative to the observations. We demonstrate our method using model output from the CMIP archive, and observations from NASA's Atmospheric Infrared Sounder.
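A stripped-down version of the proposed computation: approximate the sampling distribution of the chosen summary statistic under each model from an ensemble of model runs, evaluate the likelihood of the observed statistic, and normalize over models. The Gaussian density approximation and equal prior model weights are assumptions of this sketch.

import numpy as np

rng = np.random.default_rng(2)

def statistic_likelihood(model_runs, t_obs):
    # Sampling density of the statistic T under the model, approximated
    # as Gaussian from the run-level values of T (here T = the mean).
    t = np.array([run.mean() for run in model_runs])
    mu, sd = t.mean(), t.std(ddof=1)
    return np.exp(-0.5 * ((t_obs - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

models = [[rng.normal(m, 1, 100) for _ in range(50)] for m in (0.0, 0.5)]
t_obs = 0.4                                       # statistic from observations

like = np.array([statistic_likelihood(m, t_obs) for m in models])
posterior = like / like.sum()                     # equal prior model weights
print(posterior)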
Bae, Ji-Hoon; Paik, Nak Hwan; Park, Gyu-Won; Yoon, Jung-Ro; Chae, Dong-Ju; Kwon, Jae Ho; Kim, Jong In; Nha, Kyung-Wook
2013-03-01
The purpose of this study was to determine the accuracy, sensitivity, specificity, and predictive values of a single event of painful popping in the presence of a posterior root tear of the medial meniscus in middle-aged to older Asian patients. We conducted a retrospective review of medical records of 936 patients who underwent arthroscopic surgeries for an isolated medial meniscus tear between January 2000 and December 2010. There were 332 men and 604 women with a mean age of 41 years (range, 25 to 66 years). The accuracy, sensitivity, specificity, positive predictive value, and negative predictive value of a painful popping sensation for a posterior root tear of the medial meniscus were calculated. Arthroscopy confirmed the presence of posterior root tears of the medial menisci in 237 of 936 patients (25.3%). A single event of a painful popping sensation was present in 86 of these 936 patients (9.1%). Of these 86 patients with a painful popping sensation, 83 (96.5%) were categorized as having an isolated posterior root tear of the medial meniscus. The positive predictive value of a painful popping sensation in identifying a posterior root tear of the medial meniscus was 96.5%, the negative predictive value was 81.8%, the sensitivity was 35.0%, the specificity was 99.5%, and the diagnostic accuracy was 77.9%. A single event of painful popping can be a highly predictive clinical sign of a posterior root tear of the medial meniscus in the middle-aged to older Asian population. However, it has low sensitivity for the detection of a posterior root tear of the medial meniscus. Level IV, therapeutic case series. Copyright © 2013 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Sun, J.; Shen, Z.; Burgmann, R.; Liang, F.
2012-12-01
We develop a three-step maximum a posteriori probability (MAP) method for coseismic rupture inversion, which aims at maximizing the a posteriori probability density function (PDF) of elastic solutions of earthquake rupture. The method originates from the fully Bayesian inversion (FBI) and mixed linear-nonlinear Bayesian inversion (MBI) methods; it shares the same a posteriori PDF with them and keeps most of their merits, while overcoming their convergence difficulty when large numbers of low-quality data are used, and greatly improving the convergence rate using optimization procedures. A highly efficient global optimization algorithm, adaptive simulated annealing (ASA), is used to search for the maximum posterior probability in the first step. The non-slip parameters are determined by the global optimization method, and the slip parameters are inverted for using the least squares method, initially without a positivity constraint and then damped to a physically reasonable range. This first-step MAP inversion brings the solution close to the 'true' solution quickly and jumps over local maximum regions in the high-dimensional parameter space. The second-step inversion approaches the 'true' solution further, with positivity constraints subsequently applied on the slip parameters using the Monte Carlo inversion (MCI) technique, with all parameters obtained from step one as the initial solution. The slip artifacts are then eliminated from the slip models in the third-step MAP inversion with the fault geometry parameters fixed. We first used a designed model with a 45-degree dipping angle and oblique slip, and corresponding synthetic InSAR data sets, to validate the efficiency and accuracy of the method. We then applied the method to four recent large earthquakes in Asia, namely the 2010 Yushu, China earthquake, the 2011 Burma earthquake, the 2011 New Zealand earthquake and the 2008 Qinghai, China earthquake, and compared our results with those from other groups. Our results show the effectiveness of the method in earthquake studies and a number of its advantages over other methods. Details will be reported at the meeting.
NASA Astrophysics Data System (ADS)
Maymandi, Nahal; Kerachian, Reza; Nikoo, Mohammad Reza
2018-03-01
This paper presents a new methodology for optimizing Water Quality Monitoring (WQM) networks of reservoirs and lakes using the concept of the value of information (VOI) and utilizing results of a calibrated numerical water quality simulation model. With reference to the value of information theory, the water quality of every checkpoint, with a specific prior probability, varies in time. After analyzing water quality samples taken from potential monitoring points, the posterior probabilities are updated using Bayes' theorem, and the VOI of the samples is calculated. In the next step, the stations with maximum VOI are selected as optimal stations. This process is repeated for each sampling interval to obtain optimal monitoring network locations for each interval. The results of the proposed VOI-based methodology are compared with those obtained using an entropy-theoretic approach. As the results of the two methodologies can be partially different, they are then combined using a weighting method. Finally, the optimal sampling interval and locations of WQM stations are chosen using the Evidential Reasoning (ER) decision-making method. The efficiency and applicability of the methodology are evaluated using available water quantity and quality data of the Karkheh Reservoir in southwestern Iran.
Statistical Inference in Hidden Markov Models Using k-Segment Constraints
Titsias, Michalis K.; Holmes, Christopher C.; Yau, Christopher
2016-01-01
Hidden Markov models (HMMs) are one of the most widely used statistical methods for analyzing sequence data. However, the reporting of output from HMMs has largely been restricted to the presentation of the most-probable (MAP) hidden state sequence, found via the Viterbi algorithm, or the sequence of most probable marginals using the forward–backward algorithm. In this article, we expand the amount of information we could obtain from the posterior distribution of an HMM by introducing linear-time dynamic programming recursions that, conditional on a user-specified constraint in the number of segments, allow us to (i) find MAP sequences, (ii) compute posterior probabilities, and (iii) simulate sample paths. We collectively call these recursions k-segment algorithms and illustrate their utility using simulated and real examples. We also highlight the prospective and retrospective use of k-segment constraints for fitting HMMs or exploring existing model fits. Supplementary materials for this article are available online. PMID:27226674
Elastic K-means using posterior probability.
Zheng, Aihua; Jiang, Bo; Li, Yan; Zhang, Xuehan; Ding, Chris
2017-01-01
The widely used K-means clustering is a hard clustering algorithm. Here we propose an Elastic K-means clustering model (EKM) using posterior probability, with a soft capability whereby each data point can belong to multiple clusters fractionally, and show the benefit of the proposed Elastic K-means. Furthermore, in many applications, besides vector attribute information, pairwise relations (graph information) are also available. Thus we integrate EKM with Normalized Cut graph clustering into a single clustering formulation. Finally, we provide several matrix inequalities which are useful for matrix formulations of learning models. Based on these results, we prove the correctness and the convergence of the EKM algorithms. Experimental results on six benchmark datasets demonstrate the effectiveness of the proposed EKM and its integrated model.
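The soft-assignment core of such a model can be sketched as posterior responsibilities under an isotropic Gaussian view of K-means; this illustrates fractional membership only and is not the paper's exact EKM objective or its Normalized Cut integration.

import numpy as np

def soft_kmeans_step(X, centers, beta=1.0):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)  # squared distances
    r = np.exp(-beta * d2)
    r /= r.sum(axis=1, keepdims=True)    # posterior probability of each cluster
    new_centers = (r.T @ X) / r.sum(axis=0)[:, None]
    return r, new_centers

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
centers = X[rng.choice(len(X), 2, replace=False)]
for _ in range(20):
    r, centers = soft_kmeans_step(X, centers)
print(centers)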
Fancher, Chris M.; Han, Zhen; Levin, Igor; Page, Katharine; Reich, Brian J.; Smith, Ralph C.; Wilson, Alyson G.; Jones, Jacob L.
2016-01-01
A Bayesian inference method for refining crystallographic structures is presented. The distribution of model parameters is stochastically sampled using Markov chain Monte Carlo. Posterior probability distributions are constructed for all model parameters to properly quantify uncertainty by appropriately modeling the heteroskedasticity and correlation of the error structure. The proposed method is demonstrated by analyzing a National Institute of Standards and Technology silicon standard reference material. The results obtained by Bayesian inference are compared with those determined by Rietveld refinement. Posterior probability distributions of model parameters provide both estimates and uncertainties. The new method better estimates the true uncertainties in the model as compared to the Rietveld method. PMID:27550221
Chen, Kevin T; Izquierdo-Garcia, David; Poynton, Clare B; Chonde, Daniel B; Catana, Ciprian
2017-03-01
To propose an MR-based method for generating continuous-valued head attenuation maps and to assess its accuracy and reproducibility. Demonstrating that novel MR-based photon attenuation correction methods are both accurate and reproducible is essential prior to using them routinely in research and clinical studies on integrated PET/MR scanners. Continuous-valued linear attenuation coefficient maps ("μ-maps") were generated by combining atlases that provided the prior probability of voxel positions belonging to a certain tissue class (air, soft tissue, or bone) and an MR intensity-based likelihood classifier to produce posterior probability maps of tissue classes. These probabilities were used as weights to generate the μ-maps. The accuracy of this probabilistic atlas-based continuous-valued μ-map ("PAC-map") generation method was assessed by calculating the voxel-wise absolute relative change (RC) between the MR-based and scaled CT-based attenuation-corrected PET images. To assess reproducibility, we performed pair-wise comparisons of the RC values obtained from the PET images reconstructed using the μ-maps generated from the data acquired at three time points. The proposed method produced continuous-valued μ-maps that qualitatively reflected the variable anatomy in patients with brain tumor and agreed well with the scaled CT-based μ-maps. The absolute RC comparing the resulting PET volumes was 1.76 ± 2.33 %, quantitatively demonstrating that the method is accurate. Additionally, we also showed that the method is highly reproducible, the mean RC value for the PET images reconstructed using the μ-maps obtained at the three visits being 0.65 ± 0.95 %. Accurate and highly reproducible continuous-valued head μ-maps can be generated from MR data using a probabilistic atlas-based approach.
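The weighting described above reduces, per voxel, to multiplying atlas priors by intensity likelihoods, normalizing, and taking a probability-weighted sum of class attenuation coefficients. The sketch below uses random maps and nominal 511 keV linear attenuation coefficients; it only illustrates the combination rule, not the authors' atlas or classifier.

import numpy as np

rng = np.random.default_rng(10)
shape = (4, 4, 4)                             # toy image grid
prior = rng.dirichlet([1, 1, 1], size=shape)  # atlas P(air, soft tissue, bone)
like = rng.dirichlet([1, 1, 1], size=shape)   # MR intensity likelihoods (toy)

post = prior * like
post /= post.sum(axis=-1, keepdims=True)      # posterior tissue probabilities

mu_class = np.array([0.0, 0.096, 0.151])      # nominal LACs at 511 keV, 1/cm
mu_map = post @ mu_class                      # continuous-valued μ-map
print(mu_map.shape)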
On the use of Bayesian Monte-Carlo in evaluation of nuclear data
NASA Astrophysics Data System (ADS)
De Saint Jean, Cyrille; Archier, Pascal; Privas, Edwin; Noguere, Gilles
2017-09-01
As model parameters, necessary ingredients of theoretical models, are not always predicted by theory, a formal mathematical framework associated with the evaluation work is needed to obtain the best set of parameters (resonance parameters, optical models, fission barriers, average widths, multigroup cross sections) by Bayesian statistical inference, comparing theory to experiment. The formal rule of this methodology is to estimate the posterior probability density function of a set of parameters by solving an equation of the following type: pdf(posterior) ∝ pdf(prior) × likelihood. A fitting procedure can thus be seen as an estimation of the posterior probability density of a set of parameters (referred to as the parameter vector x), knowing prior information on these parameters and a likelihood which gives the probability density function of observing a data set given x. To solve this problem, two major paths can be taken: add approximations and hypotheses to obtain an equation to be solved numerically (minimization of a cost function, or the generalized least squares method, referred to as GLS), or use Monte-Carlo sampling of all prior distributions and estimate the final posterior distribution. Monte-Carlo methods are a natural solution for Bayesian inference problems. They avoid the approximations present in traditional adjustment procedures based on chi-square minimization, and offer alternatives in the choice of probability density distributions for priors and likelihoods. This paper proposes the use of what we call Bayesian Monte Carlo (referred to as BMC in the rest of the manuscript) over the whole energy range, from the thermal, resonance and continuum ranges, for all nuclear reaction models at these energies. Algorithms based on Monte-Carlo sampling and Markov chains will be presented. The objectives of BMC are to propose a reference calculation for validating the GLS calculations and approximations, to test the effects of probability density distributions, and to provide a framework for finding the global minimum if several local minima exist. Applications to resolved resonance, unresolved resonance and continuum evaluations, as well as multigroup cross section data assimilation, will be presented.
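A toy illustration of sampling pdf(posterior) ∝ pdf(prior) × likelihood with a random-walk Metropolis chain, in the spirit of the BMC scheme; the one-parameter model, prior, and noise level are all invented for the sketch.

import numpy as np

rng = np.random.default_rng(4)
data = rng.normal(1.0, 0.2, size=20)         # hypothetical measurements

def log_prior(x):
    return -0.5 * x**2                       # standard-normal prior (assumed)

def log_like(x):
    return -0.5 * np.sum(((data - x) / 0.2) ** 2)

x, chain = 0.0, []
for _ in range(5000):
    prop = x + 0.1 * rng.normal()            # random-walk proposal
    if np.log(rng.uniform()) < (log_prior(prop) + log_like(prop)
                                - log_prior(x) - log_like(x)):
        x = prop                             # accept
    chain.append(x)
print("posterior mean ~", np.mean(chain[1000:]))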
Assessment of accident severity in the construction industry using the Bayesian theorem.
Alizadeh, Seyed Shamseddin; Mortazavi, Seyed Bagher; Mehdi Sepehri, Mohammad
2015-01-01
Construction is a major source of employment in many countries. In construction, workers perform a great diversity of activities, each one with a specific associated risk. The aim of this paper is to identify workers who are at risk of accidents with severe consequences and classify these workers to determine appropriate control measures. We defined 48 groups of workers and used Bayes' theorem to estimate posterior probabilities about the severity of accidents at the level of individuals in the construction sector. First, the posterior probabilities of injuries based on four variables were provided. Then the probabilities of injury for the 48 groups of workers were determined. With regard to the marginal frequency of injury, slight injury (0.856), fatal injury (0.086) and severe injury (0.058) had the highest probabilities of occurrence. It was observed that workers with <1 year's work experience (0.168) had the highest probability of injury occurrence. The group of workers most extensively exposed to the risk of severe and fatal accidents comprised workers ≥ 50 years old, married, with 1-5 years' work experience, who had no past accident experience. The findings provide a direction for more effective safety strategies and occupational accident prevention and emergency programmes.
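The core calculation is a discrete application of Bayes' theorem. The sketch below reuses the marginal severity frequencies reported above as the prior; the attribute likelihoods are hypothetical placeholders, not values from the paper.

import numpy as np

severities = ["slight", "severe", "fatal"]
prior = np.array([0.856, 0.058, 0.086])   # marginal frequencies from the abstract

# Hypothetical Pr(worker attribute | severity), e.g. "<1 year experience"
# within each severity class; illustrative numbers only.
like = np.array([0.15, 0.20, 0.25])

post = prior * like
post /= post.sum()                        # posterior severity probabilities
print(dict(zip(severities, post.round(3))))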
Odegård, J; Jensen, J; Madsen, P; Gianola, D; Klemetsdal, G; Heringstad, B
2003-11-01
The distribution of somatic cell scores could be regarded as a mixture of at least two components depending on a cow's udder health status. A heteroscedastic two-component Bayesian normal mixture model with random effects was developed and implemented via Gibbs sampling. The model was evaluated using datasets consisting of simulated somatic cell score records. Somatic cell score was simulated as a mixture representing two alternative udder health statuses ("healthy" or "diseased"). Animals were assigned randomly to the two components according to the probability of group membership (Pm). Random effects (additive genetic and permanent environment), when included, had identical distributions across mixture components. Posterior probabilities of putative mastitis were estimated for all observations, and model adequacy was evaluated using measures of sensitivity, specificity, and posterior probability of misclassification. Fitting different residual variances in the two mixture components caused some bias in estimation of parameters. When the components were difficult to disentangle, so were their residual variances, causing bias in estimation of Pm and of location parameters of the two underlying distributions. When all variance components were identical across mixture components, the mixture model analyses returned parameter estimates essentially without bias and with a high degree of precision. Including random effects in the model increased the probability of correct classification substantially. No sizable differences in probability of correct classification were found between models in which a single cow effect (ignoring relationships) was fitted and models where this effect was split into genetic and permanent environmental components, utilizing relationship information. When genetic and permanent environmental effects were fitted, the between-replicate variance of estimates of posterior means was smaller because the model accounted for random genetic drift.
Musella, Vincenzo; Rinaldi, Laura; Lagazio, Corrado; Cringoli, Giuseppe; Biggeri, Annibale; Catelan, Dolores
2014-09-15
Model-based geostatistics and Bayesian approaches are appropriate in the context of Veterinary Epidemiology when point data have been collected by valid study designs. The aim is to predict a continuous infection risk surface. Little work has been done on the use of predictive infection probabilities at the farm level. In this paper we show how to use the predictive infection probability and related uncertainty from a Bayesian kriging model to draw informative samples from the 8794 geo-referenced sheep farms of the Campania region (southern Italy). Parasitological data come from a first cross-sectional survey carried out to study the spatial distribution of selected helminths in sheep farms. A grid sampling was performed to select the farms for coprological examination. Faecal samples were collected from 121 sheep farms and the presence of 21 different helminths was investigated using the FLOTAC technique. The 21 responses are very different in terms of geographical distribution and prevalence of infection. The observed prevalence ranges from 0.83% to 96.69%. The distributions of the posterior predictive probabilities for the 21 parasites are very heterogeneous. We show how the results of the Bayesian kriging model can be used to plan a second-wave survey. Several alternatives can be chosen depending on the purposes of the second survey: weight by the posterior predictive probabilities, by their uncertainty, or by a combination of both. The proposed Bayesian kriging model is simple, and the proposed sampling strategy represents a useful tool to target infection control treatments and surveillance campaigns. It is easily extendable to other fields of research. Copyright © 2014 Elsevier B.V. All rights reserved.
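A sketch of the second-wave sampling idea: weight each farm by its posterior predictive infection probability, by the posterior uncertainty, or by a product of the two, then sample without replacement. Farm-level values are simulated placeholders.

import numpy as np

rng = np.random.default_rng(5)
n_farms = 8794
p = rng.beta(1, 5, n_farms)            # posterior predictive probabilities (toy)
sd = rng.uniform(0.01, 0.2, n_farms)   # posterior uncertainty (toy)

def draw(weights, size=200):
    w = weights / weights.sum()
    return rng.choice(n_farms, size=size, replace=False, p=w)

sample_by_prob = draw(p)               # target likely-infected farms
sample_by_unc = draw(sd)               # target the most uncertain farms
sample_combined = draw(p * sd)         # combine both criteria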
Hydrologic Model Selection using Markov chain Monte Carlo methods
NASA Astrophysics Data System (ADS)
Marshall, L.; Sharma, A.; Nott, D.
2002-12-01
Estimation of parameter uncertainty (and in turn model uncertainty) allows assessment of the risk in likely applications of hydrological models. Bayesian statistical inference provides an ideal means of assessing parameter uncertainty whereby prior knowledge about the parameter is combined with information from the available data to produce a probability distribution (the posterior distribution) that describes uncertainty about the parameter and serves as a basis for selecting appropriate values for use in modelling applications. Widespread use of Bayesian techniques in hydrology has been hindered by difficulties in summarizing and exploring the posterior distribution. These difficulties have been largely overcome by recent advances in Markov chain Monte Carlo (MCMC) methods that involve random sampling of the posterior distribution. This study presents an adaptive MCMC sampling algorithm whose characteristics are well suited to model parameters with a high degree of correlation and interdependence, as is often evident in hydrological models. The MCMC sampling technique is used to compare six alternative configurations of a commonly used conceptual rainfall-runoff model, the Australian Water Balance Model (AWBM), using 11 years of daily rainfall-runoff data from the Bass river catchment in Australia. The alternative configurations considered fall into two classes: those that consider model errors to be independent of prior values, and those that model the errors as an autoregressive process. Each class consists of three formulations that represent increasing levels of complexity (and parameterisation) of the original model structure. The results from this study point both to the importance of using Bayesian approaches in evaluating model performance and to the simplicity of the MCMC sampling framework, which can bring such approaches within the reach of the applied hydrological community.
NASA Astrophysics Data System (ADS)
Saputro, D. R. S.; Amalia, F.; Widyaningsih, P.; Affan, R. C.
2018-05-01
The Bayesian method can be used to estimate the parameters of a multivariate multiple regression model. It involves two distributions: the prior and the posterior. The posterior distribution is influenced by the selection of the prior distribution. Jeffreys' prior is a kind of non-informative prior distribution, used when information about the parameters is not available. The non-informative Jeffreys' prior is combined with the sample information, resulting in the posterior distribution, which is then used to estimate the parameters. The purpose of this research is to estimate the parameters of a multivariate regression model using the Bayesian method with a non-informative Jeffreys' prior. Based on the results and discussion, the parameter estimates of β and Σ are obtained as the expected values of the marginal posterior distribution functions. The marginal posterior distributions for β and Σ are multivariate normal and inverse Wishart, respectively. However, calculating these expected values involves integrals of functions whose values are difficult to determine analytically. Therefore, an approach is needed that generates random samples according to the posterior distribution characteristics of each parameter, using the Markov chain Monte Carlo (MCMC) Gibbs sampling algorithm.
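Under the standard conjugate results for multivariate regression with a Jeffreys-type prior, Σ has an inverse Wishart marginal posterior and β | Σ a matrix normal posterior, so draws can be generated directly. The sketch below uses toy data and these textbook forms, without claiming they match the paper's exact derivation.

import numpy as np
from scipy.stats import invwishart, matrix_normal

rng = np.random.default_rng(6)
n, p, q = 100, 3, 2                      # observations, predictors, responses
X = rng.normal(size=(n, p))
B_true = rng.normal(size=(p, q))
Y = X @ B_true + rng.normal(0, 0.5, size=(n, q))

XtX_inv = np.linalg.inv(X.T @ X)
B_hat = XtX_inv @ X.T @ Y                # least-squares estimate
S = (Y - X @ B_hat).T @ (Y - X @ B_hat)  # residual cross-product matrix

draws = []
for _ in range(1000):
    Sigma = invwishart.rvs(df=n - p, scale=S, random_state=rng)
    B = matrix_normal.rvs(mean=B_hat, rowcov=XtX_inv, colcov=Sigma,
                          random_state=rng)
    draws.append(B)
print("posterior mean of beta:\n", np.mean(draws, axis=0))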
Supernova Cosmology Inference with Probabilistic Photometric Redshifts (SCIPPR)
NASA Astrophysics Data System (ADS)
Peters, Christina; Malz, Alex; Hlozek, Renée
2018-01-01
The Bayesian Estimation Applied to Multiple Species (BEAMS) framework employs probabilistic supernova type classifications to do photometric SN cosmology. This work extends BEAMS to replace high-confidence spectroscopic redshifts with photometric redshift probability density functions, a capability that will be essential in the era of the Large Synoptic Survey Telescope and other next-generation photometric surveys, where it will not be possible to perform spectroscopic follow-up on every SN. We present the Supernova Cosmology Inference with Probabilistic Photometric Redshifts (SCIPPR) Bayesian hierarchical model for constraining the cosmological parameters from photometric lightcurves and host galaxy photometry, which includes selection effects and is extensible to uncertainty in the redshift-dependent supernova type proportions. We create a pair of realistic mock catalogs of joint posteriors over supernova type, redshift, and distance modulus informed by photometric supernova lightcurves, and over redshift from simulated host galaxy photometry. We perform inference under our model to obtain a joint posterior probability distribution over the cosmological parameters and compare our results with other methods, namely: a spectroscopic subset, a subset of high-probability photometrically classified supernovae, and reducing the photometric redshift probability to a single measurement and error bar.
A Bayesian approach to the modelling of α Cen A
NASA Astrophysics Data System (ADS)
Bazot, M.; Bourguignon, S.; Christensen-Dalsgaard, J.
2012-12-01
Determining the physical characteristics of a star is an inverse problem consisting of estimating the parameters of models for the stellar structure and evolution, given certain observable quantities. We use a Bayesian approach to solve this problem for α Cen A, which allows us to incorporate prior information on the parameters to be estimated, in order to better constrain the problem. Our strategy is based on the use of a Markov chain Monte Carlo (MCMC) algorithm to estimate the posterior probability densities of the stellar parameters: mass, age, initial chemical composition, etc. We use the stellar evolutionary code ASTEC to model the star. To constrain this model both seismic and non-seismic observations were considered. Several different strategies were tested to fit these values, using either two or five free parameters in ASTEC. We are thus able to show evidence that MCMC methods become efficient with respect to more classical grid-based strategies when the number of parameters increases. The results of our MCMC algorithm allow us to derive estimates for the stellar parameters and robust uncertainties thanks to the statistical analysis of the posterior probability densities. We are also able to compute odds for the presence of a convective core in α Cen A. When using core-sensitive seismic observational constraints, these can rise above ~40 per cent. The comparison of results to previous studies also indicates that these seismic constraints are of critical importance for our knowledge of the structure of this star.
Zarafshan, Hadi; Khaleghi, Ali; Mohammadi, Mohammad Reza; Moeini, Mahdi; Malmir, Nastaran
2016-01-01
The aim of this study was to investigate electroencephalogram (EEG) dynamics using complexity analysis in children with attention-deficit/hyperactivity disorder (ADHD) compared with healthy control children when performing a cognitive task. Thirty 7-12-year-old children meeting Diagnostic and Statistical Manual of Mental Disorders-Fifth Edition (DSM-5) criteria for ADHD and 30 healthy control children underwent an EEG evaluation during a cognitive task, and Lempel-Ziv complexity (LZC) values were computed. There were no significant differences between the ADHD and control groups in age or gender. The mean LZC of the ADHD children was significantly larger than that of the healthy children over the right anterior and right posterior regions during the cognitive performance. In the ADHD group, the complexity of the right hemisphere was higher than that of the left hemisphere, whereas in the normal group the complexity of the left hemisphere was higher than that of the right. Although fronto-striatal dysfunction is considered conclusive evidence for the pathophysiology of ADHD, our mental arithmetic task has provided evidence of structural and functional changes in the posterior regions and probably the cerebellum in ADHD.
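Lempel-Ziv complexity is typically computed by binarizing the signal about its median and counting new patterns with the Kaspar-Schuster (LZ76) parsing; a sketch on a synthetic signal, with the usual log-normalization, follows. The EEG here is random noise, purely for illustration.

import numpy as np

def lz76_complexity(s):
    # Kaspar-Schuster counting of distinct LZ76 patterns in a binary string.
    n = len(s)
    i, k, l = 0, 1, 1
    c, k_max = 1, 1
    while True:
        if s[i + k - 1] == s[l + k - 1]:
            k += 1
            if l + k > n:
                c += 1
                break
        else:
            k_max = max(k, k_max)
            i += 1
            if i == l:                # no earlier match: a new pattern starts
                c += 1
                l += k_max
                if l + 1 > n:
                    break
                i, k, k_max = 0, 1, 1
            else:
                k = 1
    return c

rng = np.random.default_rng(11)
eeg = rng.normal(size=1000)           # toy single-channel signal
s = ''.join('1' if x > np.median(eeg) else '0' for x in eeg)
lzc = lz76_complexity(s) * np.log2(len(s)) / len(s)   # normalized LZC
print(lzc)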
Controlling quantum memory-assisted entropic uncertainty in non-Markovian environments
NASA Astrophysics Data System (ADS)
Zhang, Yanliang; Fang, Maofa; Kang, Guodong; Zhou, Qingping
2018-03-01
The quantum memory-assisted entropic uncertainty relation (QMA EUR) shows that the lower bound of Maassen and Uffink's entropic uncertainty relation (without quantum memory) can be broken. In this paper, we investigated the dynamical features of the QMA EUR in Markovian and non-Markovian dissipative environments. It is found that the dynamics of the QMA EUR are oscillatory in a non-Markovian environment, and that strong interaction is favorable for suppressing the amount of entropic uncertainty. Furthermore, we presented two schemes, based on prior weak measurement and posterior weak measurement reversal, to control the amount of entropic uncertainty of Pauli observables in dissipative environments. The numerical results show that the prior weak measurement can effectively reduce the wave peak values of the QMA EUR dynamics in a non-Markovian environment for long periods of time, but it is ineffectual on the wave minima of the dynamics. The posterior weak measurement reversal, however, has the opposite effect on the dynamics. Moreover, the success probability entirely depends on the quantum measurement strength. We hope that our proposal could be verified experimentally and might possibly have future applications in quantum information processing.
Bayesian calibration of the Community Land Model using surrogates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ray, Jaideep; Hou, Zhangshuan; Huang, Maoyi
2014-02-01
We present results from the Bayesian calibration of hydrological parameters of the Community Land Model (CLM), which is often used in climate simulations and Earth system models. A statistical inverse problem is formulated for three hydrological parameters, conditional on observations of latent heat surface fluxes over 48 months. Our calibration method uses polynomial and Gaussian process surrogates of the CLM, and solves the parameter estimation problem using a Markov chain Monte Carlo sampler. Posterior probability densities for the parameters are developed for two sites with different soil and vegetation covers. Our method also allows us to examine the structural error in CLM under two error models. We find that surrogate models can be created for CLM in most cases. The posterior distributions are more predictive than the default parameter values in CLM. Climatologically averaging the observations does not modify the parameters' distributions significantly. The structural error model reveals a correlation time-scale which can be used to identify the physical process that could be contributing to it. While the calibrated CLM has a higher predictive skill, the calibration is under-dispersive.
ERIC Educational Resources Information Center
Sueiro, Manuel J.; Abad, Francisco J.
2011-01-01
The distance between nonparametric and parametric item characteristic curves has been proposed as an index of goodness of fit in item response theory in the form of a root integrated squared error index. This article proposes to use the posterior distribution of the latent trait as the nonparametric model and compares the performance of an index…
Nested Sampling for Bayesian Model Comparison in the Context of Salmonella Disease Dynamics
Dybowski, Richard; McKinley, Trevelyan J.; Mastroeni, Pietro; Restif, Olivier
2013-01-01
Understanding the mechanisms underlying the observed dynamics of complex biological systems requires the statistical assessment and comparison of multiple alternative models. Although this has traditionally been done using maximum likelihood-based methods such as Akaike's Information Criterion (AIC), Bayesian methods have gained in popularity because they provide more informative output in the form of posterior probability distributions. However, comparison between multiple models in a Bayesian framework is made difficult by the computational cost of numerical integration over large parameter spaces. A new, efficient method for the computation of posterior probabilities has recently been proposed and applied to complex problems from the physical sciences. Here we demonstrate how nested sampling can be used for inference and model comparison in biological sciences. We present a reanalysis of data from experimental infection of mice with Salmonella enterica showing the distribution of bacteria in liver cells. In addition to confirming the main finding of the original analysis, which relied on AIC, our approach provides: (a) integration across the parameter space, (b) estimation of the posterior parameter distributions (with visualisations of parameter correlations), and (c) estimation of the posterior predictive distributions for goodness-of-fit assessments of the models. The goodness-of-fit results suggest that alternative mechanistic models and a relaxation of the quasi-stationary assumption should be considered. PMID:24376528
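For orientation, a toy nested-sampling loop for the evidence Z = ∫ L(θ) π(θ) dθ with a uniform prior on [0, 1] and a Gaussian likelihood; real implementations replace the rejection step with constrained moves and add the final live-point contribution, both omitted here for brevity.

import numpy as np

rng = np.random.default_rng(8)

def loglike(theta):
    return -0.5 * ((theta - 0.5) / 0.05) ** 2

n_live, n_iter = 100, 500
live = rng.uniform(size=n_live)          # live points drawn from the prior
logL = loglike(live)
logZ = -np.inf
logw = np.log(1.0 - np.exp(-1.0 / n_live))

for i in range(n_iter):
    worst = np.argmin(logL)
    # Prior mass shrinks geometrically: X_i ≈ exp(-i / n_live).
    logZ = np.logaddexp(logZ, logw - i / n_live + logL[worst])
    while True:                          # redraw from the prior with L > L_worst
        cand = rng.uniform()
        if loglike(cand) > logL[worst]:
            live[worst], logL[worst] = cand, loglike(cand)
            break
print("log-evidence estimate:", logZ)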
Bayesian inference of nonlinear unsteady aerodynamics from aeroelastic limit cycle oscillations
NASA Astrophysics Data System (ADS)
Sandhu, Rimple; Poirel, Dominique; Pettit, Chris; Khalil, Mohammad; Sarkar, Abhijit
2016-07-01
A Bayesian model selection and parameter estimation algorithm is applied to investigate the influence of nonlinear and unsteady aerodynamic loads on the limit cycle oscillation (LCO) of a pitching airfoil in the transitional Reynolds number regime. At small angles of attack, laminar boundary layer trailing edge separation causes negative aerodynamic damping leading to the LCO. The fluid-structure interaction of the rigid, but elastically mounted, airfoil and nonlinear unsteady aerodynamics is represented by two coupled nonlinear stochastic ordinary differential equations containing uncertain parameters and model approximation errors. Several plausible aerodynamic models with increasing complexity are proposed to describe the aeroelastic system leading to LCO. The likelihood in the posterior parameter probability density function (pdf) is available semi-analytically using the extended Kalman filter for the state estimation of the coupled nonlinear structural and unsteady aerodynamic model. The posterior parameter pdf is sampled using a parallel and adaptive Markov Chain Monte Carlo (MCMC) algorithm. The posterior probability of each model is estimated using the Chib-Jeliazkov method that directly uses the posterior MCMC samples for evidence (marginal likelihood) computation. The Bayesian algorithm is validated through a numerical study and then applied to model the nonlinear unsteady aerodynamic loads using wind-tunnel test data at various Reynolds numbers.
Exact Bayesian p-values for a test of independence in a 2 × 2 contingency table with missing data.
Lin, Yan; Lipsitz, Stuart R; Sinha, Debajyoti; Fitzmaurice, Garrett; Lipshultz, Steven
2017-01-01
Altham (Altham PME. Exact Bayesian analysis of a 2 × 2 contingency table, and Fisher's "exact" significance test. J R Stat Soc B 1969; 31: 261-269) showed that a one-sided p-value from Fisher's exact test of independence in a 2 × 2 contingency table is equal to the posterior probability of negative association in the 2 × 2 contingency table under a Bayesian analysis using an improper prior. We derive an extension of Fisher's exact test p-value in the presence of missing data, assuming the missing data mechanism is ignorable (i.e., missing at random or completely at random). Further, we propose Bayesian p-values for a test of independence in a 2 × 2 contingency table with missing data using alternative priors; we also present results from a simulation study exploring the Type I error rate and power of the proposed exact test p-values. An example, using data on the association between blood pressure and a cardiac enzyme, is presented to illustrate the methods.
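The Bayesian side of Altham's correspondence can be checked numerically. A minimal sketch: the one-sided Fisher p-value (against the positive-association alternative) for a hypothetical 2 × 2 table, next to a Monte Carlo estimate of the posterior probability of negative or null association under independent beta priors. Note the exact equality holds only for the particular improper prior in Altham's paper; the uniform Beta(1, 1) priors below are an illustrative assumption, so the two numbers will be close but not identical:

import numpy as np
from scipy.stats import fisher_exact, beta

a, b, c, d = 8, 2, 3, 7                     # hypothetical 2 x 2 table counts
_, p_one_sided = fisher_exact([[a, b], [c, d]], alternative="greater")

rng = np.random.default_rng(1)
p1 = beta.rvs(a + 1, b + 1, size=200_000, random_state=rng)  # posterior, row 1
p2 = beta.rvs(c + 1, d + 1, size=200_000, random_state=rng)  # posterior, row 2
print("Fisher one-sided p-value:", p_one_sided)
print("posterior P(p1 <= p2):", np.mean(p1 <= p2))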
Elastic K-means using posterior probability
Zheng, Aihua; Jiang, Bo; Li, Yan; Zhang, Xuehan; Ding, Chris
2017-01-01
The widely used K-means clustering is a hard clustering algorithm. Here we propose an Elastic K-means clustering model (EKM) using posterior probability with a soft-assignment capability, in which each data point can belong to multiple clusters fractionally, and we show the benefit of the proposed Elastic K-means. Furthermore, in many applications, besides vector attribute information, pairwise relations (graph information) are also available. We therefore integrate EKM with Normalized Cut graph clustering into a single clustering formulation. Finally, we provide several matrix inequalities that are useful for matrix formulations of learning models. Based on these results, we prove the correctness and the convergence of the EKM algorithms. Experimental results on six benchmark datasets demonstrate the effectiveness of the proposed EKM and its integrated model. PMID:29240756
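For intuition, a minimal soft-assignment sketch in the spirit of EKM (this is generic soft K-means with Gaussian-style posterior responsibilities, not the authors' exact EKM formulation or its Normalized Cut extension):

import numpy as np

def soft_kmeans(X, k, beta=5.0, n_iter=50, seed=0):
    # Each point belongs to every cluster fractionally via posterior responsibilities
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        r = np.exp(-beta * d2)                        # unnormalized posteriors
        r /= r.sum(axis=1, keepdims=True)             # rows sum to one
        centers = (r.T @ X) / r.sum(axis=0)[:, None]  # responsibility-weighted means
    return r, centers

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
resp, centers = soft_kmeans(X, k=2)
print("cluster centers:\n", centers)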
Kuselman, Ilya; Pennecchi, Francesca R; da Silva, Ricardo J N B; Hibbert, D Brynn
2017-11-01
The probability of a false decision on conformity of a multicomponent material due to measurement uncertainty is discussed for the case of correlated test results. Specification limits on the components' content of such a material generate a multivariate specification interval/domain. When the true values of the components' content and the corresponding test results are modelled by multivariate distributions (e.g. multivariate normal distributions), a total global risk of a false decision on the material's conformity can be evaluated by calculating integrals of their joint probability density function. No transformation of the raw data is required for that. A total specific risk can be evaluated as the joint posterior probability that the true values of a specific batch or lot lie outside the multivariate specification domain when the vector of test results obtained for the lot is inside this domain. It was shown, using a case study of four components under control in a drug, that the influence of correlation on the risk value is not easily predictable. To assess this influence, the evaluated total risk values were compared with those calculated for independent test results and also with those assuming much stronger correlation than that observed. While the observed statistically significant correlation did not lead to a visible difference in the total risk values in comparison to the independent test results, the stronger correlation among the variables caused the total risk either to decrease or to increase, depending on the actual values of the test results. Copyright © 2017 Elsevier B.V. All rights reserved.
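A Monte Carlo sketch of the total global risk calculation for two correlated components (the specification limits, covariances and measurement uncertainties below are invented for illustration; the paper evaluates the corresponding integrals of the joint pdf directly):

import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000

mu = np.array([10.0, 5.0])                            # true contents (hypothetical)
cov_true = np.array([[0.25, 0.10], [0.10, 0.16]])     # correlated true values
true = rng.multivariate_normal(mu, cov_true, size=n)

cov_meas = np.array([[0.04, 0.01], [0.01, 0.04]])     # correlated measurement errors
meas = true + rng.multivariate_normal(np.zeros(2), cov_meas, size=n)

lo, hi = np.array([9.0, 4.0]), np.array([11.0, 6.0])  # multivariate specification domain
conf_true = np.all((true >= lo) & (true <= hi), axis=1)
conf_meas = np.all((meas >= lo) & (meas <= hi), axis=1)

print("total global consumer's risk:", np.mean(~conf_true & conf_meas))  # accepted but nonconforming
print("total global producer's risk:", np.mean(conf_true & ~conf_meas))  # rejected but conforming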
NASA Astrophysics Data System (ADS)
Zhang, D.; Liao, Q.
2016-12-01
Bayesian inference provides a convenient framework for solving statistical inverse problems. In this method, the parameters to be identified are treated as random variables. The prior knowledge, the system nonlinearity, and the measurement errors can be directly incorporated in the posterior probability density function (PDF) of the parameters. The Markov chain Monte Carlo (MCMC) method is a powerful tool for generating samples from the posterior PDF. However, since MCMC usually requires thousands or even millions of forward simulations, it can be a computationally intensive endeavor, particularly when faced with large-scale flow and transport models. To address this issue, we construct a surrogate system for the model responses in the form of polynomials using the stochastic collocation method. In addition, we employ interpolation based on nested sparse grids that takes into account the differing importance of the parameters, which is effective when the stochastic space has high random dimension. Furthermore, in cases of low regularity, such as a discontinuous or unsmooth relation between the input parameters and the output responses, we introduce an additional transform process to improve the accuracy of the surrogate model. Once the surrogate system is built, the likelihood can be evaluated at very little computational cost. We analyzed the convergence rate of the forward solution and of the surrogate posterior using the Kullback-Leibler divergence, which quantifies the difference between probability distributions. Fast convergence of the forward solution implies fast convergence of the surrogate posterior to the true posterior. We also tested the proposed algorithm on water-flooding two-phase flow reservoir examples. The posterior PDF calculated from a very long chain with direct forward simulation is assumed to be accurate. The posterior PDF calculated using the surrogate model is in reasonable agreement with this reference, revealing a great improvement in computational efficiency.
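The surrogate-accelerated MCMC idea can be shown in a few lines. A minimal sketch with a one-dimensional stand-in forward model, a Chebyshev polynomial surrogate built from a handful of collocation-style evaluations, and a Metropolis-Hastings chain run entirely on the surrogate (the forward model, noise level and prior bounds are assumptions for illustration):

import numpy as np

def forward(theta):
    # Stand-in for an expensive forward simulation (an assumption)
    return np.sin(3.0 * theta) + theta ** 2

nodes = np.linspace(-1.0, 1.0, 9)                     # collocation-style nodes
coef = np.polynomial.chebyshev.chebfit(nodes, forward(nodes), deg=8)
surrogate = lambda t: np.polynomial.chebyshev.chebval(t, coef)

obs, sigma = forward(0.4) + 0.01, 0.05                # synthetic observation
def log_post(t):
    if not -1.0 <= t <= 1.0:                          # uniform prior on [-1, 1]
        return -np.inf
    return -0.5 * ((obs - surrogate(t)) / sigma) ** 2

rng = np.random.default_rng(4)
theta, samples = 0.0, []
for _ in range(20_000):                               # cheap: no forward runs in the loop
    prop = theta + 0.1 * rng.normal()
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    samples.append(theta)
print("surrogate posterior mean:", np.mean(samples[5_000:]))  # close to the true 0.4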
Baig, Aftab Ahmed Mirza; Ahmed, Syed Imran; Ali, Syed Shahzad; Rahmani, Asim; Siddiqui, Faizan
2018-01-01
Low back pain (LBP) is the foremost cause of impaired functional activity in Pakistan. Its impact on quality of life and work routine makes it a major reason for therapeutic consultations. About 90% of cases of LBP are non-specific. Various options are available for the treatment of LBP. Posterior-anterior vertebral mobilization (PAVM), a manual therapy technique, and thermotherapy are both used in clinical practice; however, evidence to gauge their relative efficacy has yet to be synthesised. This study aimed to compare the effectiveness of posterior-anterior vertebral mobilization versus thermotherapy, each combined with general stretching exercises, in the management of non-specific low back pain. A randomised controlled trial with a two-group pretest-posttest design was conducted at IPM&R, Dow University of Health Sciences (DUHS). A total of 60 non-specific low back pain (NSLBP) patients aged 18 to 35 years were recruited through a non-probability, purposive sampling technique. Baseline screening was done using an assessment form (Appendix-I). Subjects were allocated to two groups through systematic random sampling. Group A (experimental group) received posterior-anterior vertebral mobilization with general stretching exercises, while Group B (control group) received thermotherapy with general stretching exercises. Pain and functional disability were assessed using the NPRS and RMDQ, respectively, and pre- and post-treatment scores were documented. A maximum drop-out rate of 20% was assumed. Recorded data were entered into SPSS V-19. Frequencies and percentages were calculated for categorical variables. Intragroup and intergroup analyses were done using the Wilcoxon signed-rank test and the Mann-Whitney test, respectively. A p-value of less than 0.05 was considered statistically significant. Pre- and post-treatment analysis revealed p-values for both pain and disability of less than 0.05, indicating a significant difference in NPRS and RMDQ scores; median scores for both pain and disability decreased by 75% in the experimental group and by 50% in the control group. In the intergroup analysis, p-values for both pain and disability were also less than 0.05. Both physiotherapeutic interventions, PAVM and thermotherapy, have significant effects on NSLBP in terms of relieving pain and improving functional disability. However, PAVM appeared to be more effective than thermotherapy.
Estimation from incomplete multinomial data. Ph.D. Thesis - Harvard Univ.
NASA Technical Reports Server (NTRS)
Credeur, K. R.
1978-01-01
The vector of multinomial cell probabilities was estimated from incomplete data, incomplete in that it contains partially classified observations. Each such partially classified observation was observed to fall in one of two or more selected categories but was not classified further into a single category. The data were assumed to be incomplete at random. The estimation criterion was minimization of risk for quadratic loss. The estimators were the classical maximum likelihood estimate, the Bayesian posterior mode, and the posterior mean. An approximation was developed for the posterior mean. The Dirichlet, the conjugate prior for the multinomial distribution, was assumed for the prior distribution.
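A minimal sketch of the conjugate machinery involved: the Dirichlet posterior mean for three cells, with a block of partially classified observations allocated by an EM-style iteration (the counts and the simple proportional-allocation scheme are illustrative assumptions; the thesis develops a formal approximation for the posterior mean):

import numpy as np

alpha = np.ones(3)                    # Dirichlet prior (uniform; an assumption)
full = np.array([30.0, 20.0, 10.0])   # fully classified counts for cells 1-3
partial = {(0, 1): 15.0}              # 15 observations known only to be in cell 1 or 2

p = (alpha + full) / (alpha + full).sum()
for _ in range(100):                  # EM-style fixed-point iteration
    expected = full.copy()
    for cells, count in partial.items():
        idx = list(cells)
        expected[idx] += count * p[idx] / p[idx].sum()  # allocate proportionally
    p = (alpha + expected) / (alpha + expected).sum()   # posterior-mean update
print("estimated cell probabilities:", p.round(4))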
Little Bayesians or Little Einsteins? Probability and Explanatory Virtue in Children's Inferences
ERIC Educational Resources Information Center
Johnston, Angie M.; Johnson, Samuel G. B.; Koven, Marissa L.; Keil, Frank C.
2017-01-01
Like scientists, children seek ways to explain causal systems in the world. But are children scientists in the strict Bayesian tradition of maximizing posterior probability? Or do they attend to other explanatory considerations, as laypeople and scientists--such as Einstein--do? Four experiments support the latter possibility. In particular, we…
Analysis of immune-related loci identifies 48 new susceptibility variants for multiple sclerosis
Beecham, Ashley H; Patsopoulos, Nikolaos A; Xifara, Dionysia K; Davis, Mary F; Kemppinen, Anu; Cotsapas, Chris; Shahi, Tejas S; Spencer, Chris; Booth, David; Goris, An; Oturai, Annette; Saarela, Janna; Fontaine, Bertrand; Hemmer, Bernhard; Martin, Claes; Zipp, Frauke; D’alfonso, Sandra; Martinelli-Boneschi, Filippo; Taylor, Bruce; Harbo, Hanne F; Kockum, Ingrid; Hillert, Jan; Olsson, Tomas; Ban, Maria; Oksenberg, Jorge R; Hintzen, Rogier; Barcellos, Lisa F; Agliardi, Cristina; Alfredsson, Lars; Alizadeh, Mehdi; Anderson, Carl; Andrews, Robert; Søndergaard, Helle Bach; Baker, Amie; Band, Gavin; Baranzini, Sergio E; Barizzone, Nadia; Barrett, Jeffrey; Bellenguez, Céline; Bergamaschi, Laura; Bernardinelli, Luisa; Berthele, Achim; Biberacher, Viola; Binder, Thomas M C; Blackburn, Hannah; Bomfim, Izaura L; Brambilla, Paola; Broadley, Simon; Brochet, Bruno; Brundin, Lou; Buck, Dorothea; Butzkueven, Helmut; Caillier, Stacy J; Camu, William; Carpentier, Wassila; Cavalla, Paola; Celius, Elisabeth G; Coman, Irène; Comi, Giancarlo; Corrado, Lucia; Cosemans, Leentje; Cournu-Rebeix, Isabelle; Cree, Bruce A C; Cusi, Daniele; Damotte, Vincent; Defer, Gilles; Delgado, Silvia R; Deloukas, Panos; di Sapio, Alessia; Dilthey, Alexander T; Donnelly, Peter; Dubois, Bénédicte; Duddy, Martin; Edkins, Sarah; Elovaara, Irina; Esposito, Federica; Evangelou, Nikos; Fiddes, Barnaby; Field, Judith; Franke, Andre; Freeman, Colin; Frohlich, Irene Y; Galimberti, Daniela; Gieger, Christian; Gourraud, Pierre-Antoine; Graetz, Christiane; Graham, Andrew; Grummel, Verena; Guaschino, Clara; Hadjixenofontos, Athena; Hakonarson, Hakon; Halfpenny, Christopher; Hall, Gillian; Hall, Per; Hamsten, Anders; Harley, James; Harrower, Timothy; Hawkins, Clive; Hellenthal, Garrett; Hillier, Charles; Hobart, Jeremy; Hoshi, Muni; Hunt, Sarah E; Jagodic, Maja; Jelčić, Ilijas; Jochim, Angela; Kendall, Brian; Kermode, Allan; Kilpatrick, Trevor; Koivisto, Keijo; Konidari, Ioanna; Korn, Thomas; Kronsbein, Helena; Langford, Cordelia; Larsson, Malin; Lathrop, Mark; Lebrun-Frenay, Christine; Lechner-Scott, Jeannette; Lee, Michelle H; Leone, Maurizio A; Leppä, Virpi; Liberatore, Giuseppe; Lie, Benedicte A; Lill, Christina M; Lindén, Magdalena; Link, Jenny; Luessi, Felix; Lycke, Jan; Macciardi, Fabio; Männistö, Satu; Manrique, Clara P; Martin, Roland; Martinelli, Vittorio; Mason, Deborah; Mazibrada, Gordon; McCabe, Cristin; Mero, Inger-Lise; Mescheriakova, Julia; Moutsianas, Loukas; Myhr, Kjell-Morten; Nagels, Guy; Nicholas, Richard; Nilsson, Petra; Piehl, Fredrik; Pirinen, Matti; Price, Siân E; Quach, Hong; Reunanen, Mauri; Robberecht, Wim; Robertson, Neil P; Rodegher, Mariaemma; Rog, David; Salvetti, Marco; Schnetz-Boutaud, Nathalie C; Sellebjerg, Finn; Selter, Rebecca C; Schaefer, Catherine; Shaunak, Sandip; Shen, Ling; Shields, Simon; Siffrin, Volker; Slee, Mark; Sorensen, Per Soelberg; Sorosina, Melissa; Sospedra, Mireia; Spurkland, Anne; Strange, Amy; Sundqvist, Emilie; Thijs, Vincent; Thorpe, John; Ticca, Anna; Tienari, Pentti; van Duijn, Cornelia; Visser, Elizabeth M; Vucic, Steve; Westerlind, Helga; Wiley, James S; Wilkins, Alastair; Wilson, James F; Winkelmann, Juliane; Zajicek, John; Zindler, Eva; Haines, Jonathan L; Pericak-Vance, Margaret A; Ivinson, Adrian J; Stewart, Graeme; Hafler, David; Hauser, Stephen L; Compston, Alastair; McVean, Gil; De Jager, Philip; Sawcer, Stephen; McCauley, Jacob L
2013-01-01
Using the ImmunoChip custom genotyping array, we analysed 14,498 multiple sclerosis subjects and 24,091 healthy controls for 161,311 autosomal variants and identified 135 potentially associated regions (p-value < 1.0 × 10^-4). In a replication phase, we combined these data with previous genome-wide association study (GWAS) data from an independent 14,802 multiple sclerosis subjects and 26,703 healthy controls. In these 80,094 individuals of European ancestry we identified 48 new susceptibility variants (p-value < 5.0 × 10^-8); three were found after conditioning on previously identified variants. Thus, there are now 110 established multiple sclerosis risk variants in 103 discrete loci outside of the Major Histocompatibility Complex. With high-resolution Bayesian fine-mapping, we identified five regions where one variant accounted for more than 50% of the posterior probability of association. This study enhances the catalogue of multiple sclerosis risk variants and illustrates the value of fine-mapping in the resolution of GWAS signals. PMID:24076602
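For the fine-mapping step, a common device (not necessarily the exact pipeline of this paper) is to convert each variant's effect estimate into an approximate Bayes factor and normalize within the region, so that one variant's share of the posterior can be read off directly. A sketch using Wakefield's asymptotic Bayes factor with invented summary statistics:

import numpy as np

def log_abf(beta_hat, se, w=0.04):
    # Wakefield's approximate Bayes factor; w is the prior effect-size variance
    v = se ** 2
    z2 = (beta_hat / se) ** 2
    return 0.5 * (np.log(v / (v + w)) + z2 * w / (v + w))

beta_hat = np.array([0.10, 0.25, 0.22, 0.05])  # hypothetical per-variant effects
se = np.full(4, 0.05)

labf = log_abf(beta_hat, se)
pip = np.exp(labf - np.logaddexp.reduce(labf))  # assumes one causal variant per region
print("posterior probabilities of association:", pip.round(3))
print(f"top variant accounts for {pip.max():.0%} of the posterior")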
Planning spatial sampling of the soil from an uncertain reconnaissance variogram
NASA Astrophysics Data System (ADS)
Lark, R. Murray; Hamilton, Elliott M.; Kaninga, Belinda; Maseka, Kakoma K.; Mutondo, Moola; Sakala, Godfrey M.; Watts, Michael J.
2017-12-01
An estimated variogram of a soil property can be used to support a rational choice of sampling intensity for geostatistical mapping. However, it is known that estimated variograms are subject to uncertainty. In this paper we address two practical questions. First, how can we make a robust decision on sampling intensity, given the uncertainty in the variogram? Second, what are the costs incurred in terms of oversampling because of uncertainty in the variogram model used to plan sampling? To achieve this we show how samples of the posterior distribution of variogram parameters, from a computational Bayesian analysis, can be used to characterize the effects of variogram parameter uncertainty on sampling decisions. We show how one can select a sample intensity so that a target value of the kriging variance is not exceeded with some specified probability. This will lead to oversampling, relative to the sampling intensity that would be specified if there were no uncertainty in the variogram parameters. One can estimate the magnitude of this oversampling by treating the tolerable grid spacing for the final sample as a random variable, given the target kriging variance and the posterior sample values. We illustrate these concepts with some data on total uranium content in a relatively sparse sample of soil from agricultural land near mine tailings in the Copperbelt Province of Zambia.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sigeti, David E.; Pelak, Robert A.
We present a Bayesian statistical methodology for identifying improvement in predictive simulations, including an analysis of the number of (presumably expensive) simulations that will need to be made in order to establish with a given level of confidence that an improvement has been observed. Our analysis assumes the ability to predict (or postdict) the same experiments with legacy and new simulation codes and uses a simple binomial model for the probability, θ, that, in an experiment chosen at random, the new code will provide a better prediction than the old. This model makes it possible to do statistical analysis with an absolute minimum of assumptions about the statistics of the quantities involved, at the price of discarding some potentially important information in the data. In particular, the analysis depends only on whether or not the new code predicts better than the old in any given experiment, and not on the magnitude of the improvement. We show how the posterior distribution for θ may be used, in a kind of Bayesian hypothesis testing, both to decide if an improvement has been observed and to quantify our confidence in that decision. We quantify the predictive probability that should be assigned, prior to taking any data, to the possibility of achieving a given level of confidence, as a function of sample size. We show how this predictive probability depends on the true value of θ and, in particular, how there will always be a region around θ = 1/2 where it is highly improbable that we will be able to identify an improvement in predictive capability, although the width of this region will shrink to zero as the sample size goes to infinity. We show how the posterior standard deviation may be used as a kind of 'plan B metric' in the case that the analysis shows that θ is close to 1/2, and argue that such a plan B should generally be part of hypothesis testing. All the analysis presented in the paper is done with a general beta-function prior for θ, enabling sequential analysis in which a small number of new simulations may be done and the resulting posterior for θ used as a prior to inform the next stage of power analysis.
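The beta-binomial calculations described here are short enough to write out. A sketch of the posterior test and the pre-data predictive power, with a uniform Beta(1, 1) prior and invented trial counts:

import numpy as np
from scipy.stats import beta, binom

a0, b0 = 1.0, 1.0            # beta prior for theta (uniform; an assumption)
n, k = 20, 15                # hypothetical: new code better in 15 of 20 experiments

post = beta(a0 + k, b0 + n - k)
print("P(theta > 1/2 | data):", post.sf(0.5))       # Bayesian evidence of improvement
print("posterior standard deviation:", post.std())  # the 'plan B' metric

def predictive_power(theta_true, n, level=0.95):
    # Pre-data probability of concluding improvement at the given confidence level
    ks = np.arange(n + 1)
    conclude = beta(a0 + ks, b0 + n - ks).sf(0.5) > level
    return binom.pmf(ks, n, theta_true)[conclude].sum()

for t in (0.5, 0.6, 0.8):    # improvement is hard to detect near theta = 1/2
    print(f"theta = {t}: predictive power with n = {n}:", round(predictive_power(t, n), 3))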
Lovelock, Paul K; Spurdle, Amanda B; Mok, Myth T S; Farrugia, Daniel J; Lakhani, Sunil R; Healey, Sue; Arnold, Stephen; Buchanan, Daniel; Couch, Fergus J; Henderson, Beric R; Goldgar, David E; Tavtigian, Sean V; Chenevix-Trench, Georgia; Brown, Melissa A
2007-01-01
Many of the DNA sequence variants identified in the breast cancer susceptibility gene BRCA1 remain unclassified in terms of their potential pathogenicity. Both multifactorial likelihood analysis and functional approaches have been proposed as a means to elucidate likely clinical significance of such variants, but analysis of the comparative value of these methods for classifying all sequence variants has been limited. We have compared the results from multifactorial likelihood analysis with those from several functional analyses for the four BRCA1 sequence variants A1708E, G1738R, R1699Q, and A1708V. Our results show that multifactorial likelihood analysis, which incorporates sequence conservation, co-inheritance, segregation, and tumour immunohistochemical analysis, may improve classification of variants. For A1708E, previously shown to be functionally compromised, analysis of oestrogen receptor, cytokeratin 5/6, and cytokeratin 14 tumour expression data significantly strengthened the prediction of pathogenicity, giving a posterior probability of pathogenicity of 99%. For G1738R, shown to be functionally defective in this study, immunohistochemistry analysis confirmed previous findings of inconsistent 'BRCA1-like' phenotypes for the two tumours studied, and the posterior probability for this variant was 96%. The posterior probabilities of R1699Q and A1708V were 54% and 69%, respectively, only moderately suggestive of increased risk. Interestingly, results from functional analyses suggest that both of these variants have only partial functional activity. R1699Q was defective in foci formation in response to DNA damage and displayed intermediate transcriptional transactivation activity but showed no evidence for centrosome amplification. In contrast, A1708V displayed an intermediate transcriptional transactivation activity and a normal foci formation response in response to DNA damage but induced centrosome amplification. These data highlight the need for a range of functional studies to be performed in order to identify variants with partially compromised function. The results also raise the possibility that A1708V and R1699Q may be associated with a low or moderate risk of cancer. While data pooling strategies may provide more information for multifactorial analysis to improve the interpretation of the clinical significance of these variants, it is likely that the development of current multifactorial likelihood approaches and the consideration of alternative statistical approaches will be needed to determine whether these individually rare variants do confer a low or moderate risk of breast cancer.
Piéron’s Law and Optimal Behavior in Perceptual Decision-Making
van Maanen, Leendert; Grasman, Raoul P. P. P.; Forstmann, Birte U.; Wagenmakers, Eric-Jan
2012-01-01
Piéron’s Law is a psychophysical regularity in signal detection tasks that states that mean response times decrease as a power function of stimulus intensity. In this article, we extend Piéron’s Law to perceptual two-choice decision-making tasks, and demonstrate that the law holds as the discriminability between two competing choices is manipulated, even though the stimulus intensity remains constant. This result is consistent with predictions from a Bayesian ideal observer model. The model assumes that in order to respond optimally in a two-choice decision-making task, participants continually update the posterior probability of each response alternative, until the probability of one alternative crosses a criterion value. In addition to predictions for two-choice decision-making tasks, we extend the ideal observer model to predict Piéron’s Law in signal detection tasks. We conclude that Piéron’s Law is a general phenomenon that may be caused by optimality constraints. PMID:22232572
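A minimal simulation of the ideal observer makes the predicted regularity visible: posterior updating until a criterion is crossed yields mean decision times that fall steeply (roughly as a power function) as discriminability grows. All settings below are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(5)

def mean_decision_time(d, criterion=0.95, n_trials=2000):
    # Evidence samples ~ N(+d/2, 1) under H1; accumulate log odds of H1 vs H2
    bound = np.log(criterion / (1 - criterion))   # posterior criterion as log odds
    times = []
    for _ in range(n_trials):
        log_odds, t = 0.0, 0
        while abs(log_odds) < bound:
            x = rng.normal(d / 2, 1.0)
            log_odds += d * x                     # log LR of N(d/2,1) vs N(-d/2,1)
            t += 1
        times.append(t)
    return np.mean(times)

for d in (0.2, 0.4, 0.8, 1.6):                    # discriminability levels
    print(f"d = {d}: mean decision time = {mean_decision_time(d):.1f}")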
Comparison of sampling techniques for Bayesian parameter estimation
NASA Astrophysics Data System (ADS)
Allison, Rupert; Dunkley, Joanna
2014-02-01
The posterior probability distribution for a set of model parameters encodes all that the data have to tell us in the context of a given model; it is the fundamental quantity for Bayesian parameter estimation. In order to infer the posterior probability distribution we have to decide how to explore parameter space. Here we compare three prescriptions for how parameter space is navigated, discussing their relative merits. We consider Metropolis-Hastings sampling, nested sampling and affine-invariant ensemble Markov chain Monte Carlo (MCMC) sampling. We focus on their performance on toy-model Gaussian likelihoods and on a real-world cosmological data set. We outline the sampling algorithms themselves and elaborate on performance diagnostics such as convergence time, scope for parallelization, dimensional scaling, requisite tunings and suitability for non-Gaussian distributions. We find that nested sampling delivers high-fidelity estimates for posterior statistics at low computational cost, and should be adopted in favour of Metropolis-Hastings in many cases. Affine-invariant MCMC is competitive when computing clusters can be utilized for massive parallelization. Affine-invariant MCMC and existing extensions to nested sampling naturally probe multimodal and curving distributions.
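Of the three samplers compared, the affine-invariant ensemble move is the least standard, so a compact sketch of the Goodman-Weare "stretch move" on a correlated two-dimensional Gaussian may be useful (the target, tuning constant a = 2 and ensemble size are illustrative assumptions):

import numpy as np

rng = np.random.default_rng(6)
cov = np.array([[1.0, 0.9], [0.9, 1.0]])           # strongly correlated target
log_p = lambda x: -0.5 * x @ np.linalg.solve(cov, x)

n_walkers, n_steps, a, d = 20, 2000, 2.0, 2
walkers = rng.normal(size=(n_walkers, d))
chain = []
for _ in range(n_steps):
    for k in range(n_walkers):
        j = rng.choice([i for i in range(n_walkers) if i != k])
        z = ((a - 1.0) * rng.uniform() + 1.0) ** 2 / a   # z ~ g(z) proportional to 1/sqrt(z)
        prop = walkers[j] + z * (walkers[k] - walkers[j])
        log_acc = (d - 1) * np.log(z) + log_p(prop) - log_p(walkers[k])
        if np.log(rng.uniform()) < log_acc:
            walkers[k] = prop
    chain.append(walkers.copy())
samples = np.concatenate(chain[500:])              # discard burn-in
print("sample covariance:\n", np.cov(samples.T))   # should approach the target cov

Because the stretch move is affine invariant, no proposal covariance has to be tuned for this skewed target, which is one reason such samplers are attractive.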
Efficient Posterior Probability Mapping Using Savage-Dickey Ratios
Penny, William D.; Ridgway, Gerard R.
2013-01-01
Statistical Parametric Mapping (SPM) is the dominant paradigm for mass-univariate analysis of neuroimaging data. More recently, a Bayesian approach termed Posterior Probability Mapping (PPM) has been proposed as an alternative. PPM offers two advantages: (i) inferences can be made about effect size thus lending a precise physiological meaning to activated regions, (ii) regions can be declared inactive. This latter facility is most parsimoniously provided by PPMs based on Bayesian model comparisons. To date these comparisons have been implemented by an Independent Model Optimization (IMO) procedure which separately fits null and alternative models. This paper proposes a more computationally efficient procedure based on Savage-Dickey approximations to the Bayes factor, and Taylor-series approximations to the voxel-wise posterior covariance matrices. Simulations show the accuracy of this Savage-Dickey-Taylor (SDT) method to be comparable to that of IMO. Results on fMRI data show excellent agreement between SDT and IMO for second-level models, and reasonable agreement for first-level models. This Savage-Dickey test is a Bayesian analogue of the classical SPM-F and allows users to implement model comparison in a truly interactive manner. PMID:23533640
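The Savage-Dickey identity itself is compact: for a nested null, the Bayes factor is the ratio of the posterior to the prior density at the null value. A self-contained sketch in a conjugate normal model (all numbers hypothetical; the paper's contribution is the efficient voxel-wise approximation of exactly these densities):

import numpy as np
from scipy.stats import norm

# Model: y_i ~ N(theta, sigma^2), sigma known; prior theta ~ N(0, tau^2); null theta = 0
sigma, tau = 1.0, 1.0
y = np.array([0.30, 0.10, 0.40, 0.25, 0.50])
n = len(y)

post_var = 1.0 / (1.0 / tau ** 2 + n / sigma ** 2)    # conjugate normal update
post_mean = post_var * y.sum() / sigma ** 2

bf01 = norm.pdf(0.0, post_mean, np.sqrt(post_var)) / norm.pdf(0.0, 0.0, tau)
print("Savage-Dickey Bayes factor for the null:", bf01)  # < 1 favours the alternative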
NASA Astrophysics Data System (ADS)
Zeng, X.
2015-12-01
A large number of model executions are required to obtain alternative conceptual models' predictions and their posterior probabilities in Bayesian model averaging (BMA). The posterior model probability is estimated from a model's marginal likelihood and prior probability. This heavy computational burden hinders the implementation of BMA prediction, especially for elaborate marginal likelihood estimators. To overcome the computational burden of BMA, an adaptive sparse grid (SG) stochastic collocation method is used to build surrogates for the alternative conceptual models in a numerical experiment with a synthetic groundwater model. BMA predictions depend on the model posterior weights (or marginal likelihoods), so this study also evaluated four marginal likelihood estimators: the arithmetic mean estimator (AME), harmonic mean estimator (HME), stabilized harmonic mean estimator (SHME), and thermodynamic integration estimator (TIE). The results demonstrate that TIE is accurate in estimating the conceptual models' marginal likelihoods, and BMA-TIE has better predictive performance than the other BMA predictions. TIE is also highly stable: repeated estimates of a conceptual model's marginal likelihood obtained with TIE show significantly less variability than those obtained with the other estimators. In addition, the SG surrogates efficiently facilitate BMA predictions, especially for BMA-TIE. The number of model executions needed to build the surrogates is 4.13%, 6.89%, 3.44%, and 0.43% of the model executions required by BMA-AME, BMA-HME, BMA-SHME, and BMA-TIE, respectively.
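The estimators compared here are easy to contrast on a conjugate toy model, where the exact marginal likelihood is available in closed form. A sketch for a normal mean with a normal prior, showing the AME, the HME (famously unstable) and a thermodynamic-integration estimate over a temperature ladder of power posteriors (all settings are illustrative assumptions):

import numpy as np
from scipy.stats import multivariate_normal, norm

rng = np.random.default_rng(7)
s, t, n = 1.0, 2.0, 20                       # data sd, prior sd, sample size
y = rng.normal(0.7, s, n)

def loglike(mu):
    return norm.logpdf(y[:, None], mu, s).sum(axis=0)

cov = s ** 2 * np.eye(n) + t ** 2 * np.ones((n, n))
log_z_true = multivariate_normal.logpdf(y, np.zeros(n), cov)   # exact evidence

m = 20_000
log_z_ame = np.logaddexp.reduce(loglike(rng.normal(0, t, m))) - np.log(m)  # prior draws

v1 = 1 / (1 / t ** 2 + n / s ** 2)           # exact posterior (conjugate)
m1 = v1 * y.sum() / s ** 2
log_z_hme = -(np.logaddexp.reduce(-loglike(rng.normal(m1, np.sqrt(v1), m))) - np.log(m))

betas = np.linspace(0.0, 1.0, 21)            # temperature ladder
e_logl = []
for b in betas:
    vb = 1 / (1 / t ** 2 + b * n / s ** 2)   # power posterior is also normal here
    mb = vb * b * y.sum() / s ** 2
    e_logl.append(loglike(rng.normal(mb, np.sqrt(vb), m)).mean())
e_logl = np.array(e_logl)
log_z_tie = np.sum(np.diff(betas) * (e_logl[1:] + e_logl[:-1]) / 2)  # trapezoid rule

print(f"exact {log_z_true:.3f}  AME {log_z_ame:.3f}  HME {log_z_hme:.3f}  TIE {log_z_tie:.3f}")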
Bayesian analysis of the astrobiological implications of life’s early emergence on Earth
Spiegel, David S.; Turner, Edwin L.
2012-01-01
Life arose on Earth sometime in the first few hundred million years after the young planet had cooled to the point that it could support water-based organisms on its surface. The early emergence of life on Earth has been taken as evidence that the probability of abiogenesis is high, if starting from young Earth-like conditions. We revisit this argument quantitatively in a Bayesian statistical framework. By constructing a simple model of the probability of abiogenesis, we calculate a Bayesian estimate of its posterior probability, given the data that life emerged fairly early in Earth’s history and that, billions of years later, curious creatures noted this fact and considered its implications. We find that, given only this very limited empirical information, the choice of Bayesian prior for the abiogenesis probability parameter has a dominant influence on the computed posterior probability. Although terrestrial life's early emergence provides evidence that life might be abundant in the universe if early-Earth-like conditions are common, the evidence is inconclusive and indeed is consistent with an arbitrarily low intrinsic probability of abiogenesis for plausible uninformative priors. Finding a single case of life arising independently of our lineage (on Earth, elsewhere in the solar system, or on an extrasolar planet) would provide much stronger evidence that abiogenesis is not extremely rare in the universe. PMID:22198766
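The prior-dominance point can be reproduced with a few lines of grid arithmetic. A sketch with a Poisson-rate model for abiogenesis, comparing a prior uniform in the rate with one uniform in its logarithm (the emergence time, rate range and reported threshold are illustrative assumptions, not the paper's exact parameterization):

import numpy as np

t_e = 0.5                                  # Gyr within which life emerged (illustrative)
lam = np.logspace(-3, 3, 4000)             # candidate abiogenesis rates per Gyr
dlam = np.gradient(lam)
like = 1.0 - np.exp(-lam * t_e)            # P(life by t_e | lam) under a Poisson model

for name, prior in [("uniform in lam", np.ones_like(lam)),
                    ("uniform in log(lam)", 1.0 / lam)]:
    post = like * prior * dlam
    post /= post.sum()
    print(f"{name:>20}: P(lam < 1 | data) = {post[lam < 1.0].sum():.4f}")

The two "uninformative" priors give posterior probabilities of a low rate that differ by orders of magnitude, which is the sensitivity the authors report.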
Value of Information Analysis for Time-lapse Seismic Data by Simulation-Regression
NASA Astrophysics Data System (ADS)
Dutta, G.; Mukerji, T.; Eidsvik, J.
2016-12-01
A novel method to estimate the Value of Information (VOI) of time-lapse seismic data in the context of reservoir development is proposed. VOI is a decision-analytic metric quantifying the incremental value that would be created by collecting information prior to making a decision under uncertainty. The VOI has to be computed before collecting the information and can be used to justify its collection. Previous work on estimating the VOI of geophysical data has involved explicitly approximating the posterior distribution of reservoir properties given the data and then evaluating the prospect values for that posterior distribution. Here, we propose to estimate the prospect values given the data directly, by building a statistical relationship between them using regression. Various regression techniques, such as Partial Least Squares Regression (PLSR), Multivariate Adaptive Regression Splines (MARS) and k-Nearest Neighbors (k-NN), are used to estimate the VOI, and the results are compared. For a univariate Gaussian case, the VOI obtained from simulation-regression has been shown to be close to the analytical solution. Estimating VOI by simulation-regression is much less computationally expensive, since the posterior distribution of reservoir properties need not be modeled for each possible dataset and the prospect values need not be evaluated for each such posterior distribution. The method is also flexible, since it does not require a rigid specification of the posterior model but rather fits conditional expectations non-parametrically from samples of values and data.
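A minimal simulation-regression VOI sketch for a binary develop/walk-away decision (the porosity-to-value mapping, noise level and polynomial regression below are invented stand-ins for the reservoir model and the PLSR/MARS/k-NN regressions used in the study):

import numpy as np

rng = np.random.default_rng(8)
n = 50_000

poro = rng.normal(0.20, 0.05, n)           # uncertain reservoir property
value = 400.0 * (poro - 0.15)              # prospect value in M$ (toy economics)
data = poro + rng.normal(0.0, 0.02, n)     # noisy time-lapse seismic attribute

v_prior = max(np.mean(value), 0.0)         # best decision without the data

coef = np.polyfit(data, value, deg=3)      # regression: E[value | data]
cond_exp = np.polyval(coef, data)
v_post = np.mean(np.maximum(cond_exp, 0.0))  # decide optimally for each dataset

print("VOI estimate (M$):", v_post - v_prior)

Note that no posterior distribution of porosity is ever formed: the regression goes straight from simulated data to prospect value, which is the computational saving the abstract describes.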
NASA Astrophysics Data System (ADS)
Sun, Y.; Hou, Z.; Huang, M.; Tian, F.; Leung, L. Ruby
2013-12-01
This study demonstrates the possibility of inverting hydrologic parameters using surface flux and runoff observations in version 4 of the Community Land Model (CLM4). Previous studies showed that surface flux and runoff calculations are sensitive to major hydrologic parameters in CLM4 over different watersheds, and illustrated the necessity and possibility of parameter calibration. Both deterministic least-square fitting and stochastic Markov-chain Monte Carlo (MCMC)-Bayesian inversion approaches are evaluated by applying them to CLM4 at selected sites with different climate and soil conditions. The unknowns to be estimated include surface and subsurface runoff generation parameters and vadose zone soil water parameters. We find that using model parameters calibrated by the sampling-based stochastic inversion approaches provides significant improvements in the model simulations compared to using default CLM4 parameter values, and that as more information comes in, the predictive intervals (ranges of posterior distributions) of the calibrated parameters become narrower. In general, parameters that are identified to be significant through sensitivity analyses and statistical tests are better calibrated than those with weak or nonlinear impacts on flux or runoff observations. Temporal resolution of observations has larger impacts on the results of inverse modeling using heat flux data than runoff data. Soil and vegetation cover have important impacts on parameter sensitivities, leading to different patterns of posterior distributions of parameters at different sites. Overall, the MCMC-Bayesian inversion approach effectively and reliably improves the simulation of CLM under different climates and environmental conditions. Bayesian model averaging of the posterior estimates with different reference acceptance probabilities can smooth the posterior distribution and provide more reliable parameter estimates, but at the expense of wider uncertainty bounds.
The known unknowns: neural representation of second-order uncertainty, and ambiguity
Bach, Dominik R.; Hulme, Oliver; Penny, William D.; Dolan, Raymond J.
2011-01-01
Predictions provided by action-outcome probabilities entail a degree of (first-order) uncertainty. However, these probabilities themselves can be imprecise and embody second-order uncertainty. Tracking second-order uncertainty is important for optimal decision making and reinforcement learning. Previous functional magnetic resonance imaging investigations of second-order uncertainty in humans have drawn on an economic concept of ambiguity, where action-outcome associations in a gamble are either known (unambiguous) or completely unknown (ambiguous). Here, we relaxed the constraints associated with a purely categorical concept of ambiguity and varied the second-order uncertainty of gambles continuously, quantified as entropy over second-order probabilities. We show that second-order uncertainty influences decisions in a pessimistic way by biasing second-order probabilities, and that second-order uncertainty is negatively correlated with posterior cingulate cortex activity. The category of ambiguous (compared to non-ambiguous) gambles also biased choice in a similar direction, but was associated with distinct activation of a posterior parietal cortical area; an activation that we show reflects a different computational mechanism. Our findings indicate that behavioural and neural responses to second-order uncertainty are distinct from those associated with ambiguity and may call for a reappraisal of previous data. PMID:21451019
Fractional Gaussian model in global optimization
NASA Astrophysics Data System (ADS)
Dimri, V. P.; Srivastava, R. P.
2009-12-01
The Earth system is inherently non-linear, and it can be characterized well only if we incorporate this non-linearity in the formulation and solution of the problem. A general tool often used to characterize the Earth system is inversion. Traditionally, inverse problems are solved using least-squares-based inversion after linearizing the formulation. The initial model in such inversion schemes is often assumed to follow a Gaussian posterior probability distribution. It is now well established that most physical properties of the Earth follow a power-law (fractal) distribution; thus, selecting the initial model from a power-law probability distribution will provide a more realistic solution. We present a new method which can draw samples from the posterior probability density function very efficiently using fractal-based statistics. The application of the method is demonstrated by inverting band-limited seismic data with well control. We used a fractal-based probability density function, parameterized by the mean, variance and Hurst coefficient of the model space, to draw the initial model. This initial model is then used in a global optimization inversion scheme. Inversion results using initial models generated by our method give higher-resolution estimates of the model parameters than the hitherto used gradient-based linear inversion method.
LaPrade, Robert F; Ho, Charles P; James, Evan; Crespo, Bernardo; LaPrade, Christopher M; Matheny, Lauren M
2015-01-01
The purpose of this study was to determine the diagnostic accuracy of 3 T MRI, including sensitivity, specificity, and negative and positive predictive values, for detection of posterior medial and lateral meniscus root tears and avulsions. All patients who had a 3 T MRI of the knee, followed by arthroscopic surgery, were included in this study. Arthroscopy was considered the gold standard. Meniscus root tears diagnosed at arthroscopy and on MRI were defined as a complete meniscus root detachment within 9 mm of the root. All surgical data were collected prospectively and stored in a data registry. MRI exams were reported prospectively by a musculoskeletal radiologist and reviewed retrospectively. There were 287 consecutive patients (156 males, 131 females; mean age 41.7 years) in this study. Prevalence of meniscus posterior root tears identified at arthroscopy was 9.1% overall: 5.9% for medial and 3.5% for lateral root tears (one patient had both). Sensitivity was 0.770 (95% CI 0.570, 0.901), specificity was 0.729 (95% CI 0.708, 0.741), positive predictive value was 0.220 (95% CI 0.163, 0.257) and negative predictive value was 0.970 (95% CI 0.943, 0.987). For medial root tears, sensitivity was 0.824 (95% CI 0.569, 0.953), specificity was 0.800 (95% CI 0.784, 0.808), positive predictive value was 0.206 (95% CI 0.142, 0.238) and negative predictive value was 0.986 (95% CI 0.967, 0.996). For lateral meniscus posterior root tears, sensitivity was 0.600 (95% CI 0.281, 0.860), specificity was 0.903 (95% CI 0.891, 0.912), positive predictive value was 0.181 (95% CI 0.085, 0.261) and negative predictive value was 0.984 (95% CI 0.972, 0.994). This study demonstrated moderate sensitivity and specificity of 3 T MRI for detecting posterior meniscus root tears. The negative predictive value of 3 T MRI for detecting posterior meniscus root tears was high; however, the positive predictive value was low. Sensitivity was higher for medial root tears, indicating a higher risk of missing lateral root tears on MRI. Imaging has an important role in identifying meniscus posterior horn root tears; however, some root tears may not be identified until arthroscopy. Prognostic study (diagnostic), Level II.
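The reported quantities are linked by Bayes' theorem, and the low PPV follows directly from the low prevalence. A sketch using a confusion matrix back-calculated (approximately) from the reported overall figures:

import numpy as np

tp, fn = 20, 6        # tears present at arthroscopy (approximate reconstruction)
fp, tn = 71, 190      # tears absent; low prevalence makes false positives dominate

sens = tp / (tp + fn)
spec = tn / (tn + fp)
prev = (tp + fn) / (tp + fn + fp + tn)
print(f"prevalence {prev:.3f}  sensitivity {sens:.3f}  specificity {spec:.3f}")
print(f"PPV {tp / (tp + fp):.3f}  NPV {tn / (tn + fn):.3f}")

# PPV as the posterior probability of a tear given a positive MRI
ppv_bayes = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
print("PPV from Bayes' theorem:", round(ppv_bayes, 3))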
Markolf, K L; Kochan, A; Amstutz, H C
1984-02-01
Thirty-five patients with documented absence of the anterior cruciate ligament were tested on the University of California, Los Angeles, instrumented clinical knee-testing apparatus and we measured the response curves for the following testing modes: anterior-posterior force versus displacement at full extension and at 20 and 90 degrees of flexion; varus-valgus moment versus angulation at full extension and 20 degrees of flexion; and tibial torque versus rotation at 20 degrees of flexion. Absolute values of stiffness and laxity and right-left differences for these injured knees were compared with identical quantities measured previously for a control population of forty-nine normal subjects with no history of treatment for injury to the knee. For both the uninjured knees and the knees without an anterior cruciate ligament, at 20 and 90 degrees of flexion the anterior-posterior laxity was greatest at approximately 15 degrees of external rotation of the foot. The injured knees demonstrated significantly increased total anterior-posterior laxity and decreased anterior stiffness when compared with the uninjured knees in all tested positions of the foot and knee. The mean increase in paired anterior-posterior laxity for the injured knees in this group of patients at +/- 200 newtons of applied anterior-posterior force was 3.1 millimeters (+39 per cent) at full extension, 5.5 millimeters (+57 per cent) at 20 degrees of flexion, and 2.5 millimeters (+34 per cent) at 90 degrees of flexion. The mean reduction in anterior stiffness for injured knees was also greatest (-54 per cent) at 20 degrees of knee flexion. Only slight reduction in posterior stiffness (-16 per cent) was measured at 20 degrees of flexion, and this probably reflected the presence of associated capsular and meniscal injuries. In the group of anterior cruciate-deficient knees, the patients with an absent medial meniscus showed greater total anterior-posterior laxity in all three positions of knee flexion than did the patients with an intact or torn meniscus. Varus-valgus laxity at full extension increased an average of 1.7 degrees (+36 per cent) for the injured knees, while varus and valgus stiffness decreased 21 per cent and 24 per cent. Absence of the medial meniscus (in a knee with absence of the anterior cruciate ligament) increased varus-valgus laxity at zero and 20 degrees of flexion.(ABSTRACT TRUNCATED AT 400 WORDS)
Expert Financial Advice Neurobiologically “Offloads” Financial Decision-Making under Risk
Engelmann, Jan B.; Capra, C. Monica; Noussair, Charles; Berns, Gregory S.
2009-01-01
Background: Financial advice from experts is commonly sought during times of uncertainty. While the field of neuroeconomics has made considerable progress in understanding the neurobiological basis of risky decision-making, the neural mechanisms through which external information, such as advice, is integrated during decision-making are poorly understood. In the current experiment, we investigated the neurobiological basis of the influence of expert advice on financial decisions under risk. Methodology/Principal Findings: While undergoing fMRI scanning, participants made a series of financial choices between a certain payment and a lottery. Choices were made in two conditions: 1) advice from a financial expert about which choice to make was displayed (MES condition); and 2) no advice was displayed (NOM condition). Behavioral results showed a significant effect of expert advice. Specifically, probability weighting functions changed in the direction of the expert's advice. This was paralleled by neural activation patterns. Brain activations showing significant correlations with valuation (parametric modulation by value of lottery/sure win) were obtained in the absence of the expert's advice (NOM) in intraparietal sulcus, posterior cingulate cortex, cuneus, precuneus, inferior frontal gyrus and middle temporal gyrus. Notably, no significant correlations with value were obtained in the presence of advice (MES). These findings were corroborated by region of interest analyses. Neural equivalents of probability weighting functions showed significant flattening in the MES compared to the NOM condition in regions associated with probability weighting, including anterior cingulate cortex, dorsolateral PFC, thalamus, medial occipital gyrus and anterior insula. Finally, during the MES condition, significant activations in temporoparietal junction and medial PFC were obtained. Conclusions/Significance: These results support the hypothesis that one effect of expert advice is to “offload” the calculation of value of decision options from the individual's brain. PMID:19308261
Aerosol-type retrieval and uncertainty quantification from OMI data
NASA Astrophysics Data System (ADS)
Kauppi, Anu; Kolmonen, Pekka; Laine, Marko; Tamminen, Johanna
2017-11-01
We discuss uncertainty quantification for aerosol-type selection in satellite-based atmospheric aerosol retrieval. The retrieval procedure uses precalculated aerosol microphysical models stored in look-up tables (LUTs) and top-of-atmosphere (TOA) spectral reflectance measurements to retrieve the aerosol characteristics. The forward model approximations cause systematic differences between the modelled and observed reflectance. Acknowledging this model discrepancy as a source of uncertainty allows us to produce more realistic uncertainty estimates and assists the selection of the most appropriate LUTs for each individual retrieval. This paper focuses on the aerosol microphysical model selection and characterisation of uncertainty in the retrieved aerosol type and aerosol optical depth (AOD). The concept of model evidence is used as a tool for model comparison. The method is based on a Bayesian inference approach, in which all uncertainties are described as a posterior probability distribution. When there is no single best-matching aerosol microphysical model, we use a statistical technique based on Bayesian model averaging to combine the AOD posterior probability densities of the best-fitting models to obtain an averaged AOD estimate. We also determine the shared evidence of the best-matching models of a certain main aerosol type in order to quantify how plausible it is that it represents the underlying atmospheric aerosol conditions. The developed method is applied to Ozone Monitoring Instrument (OMI) measurements using a multiwavelength approach for retrieving the aerosol type and AOD estimate with uncertainty quantification for cloud-free over-land pixels. Several larger pixel set areas were studied in order to investigate the robustness of the developed method. We evaluated the retrieved AOD by comparison with ground-based measurements at example sites. We found that the uncertainty of AOD expressed by the posterior probability distribution reflects the difficulty in model selection. The posterior probability distribution can provide a comprehensive characterisation of the uncertainty in this kind of aerosol-type selection problem. As a result, the proposed method can account for the model error and also include the model selection uncertainty in the total uncertainty budget.
Optimal observation network design for conceptual model discrimination and uncertainty reduction
NASA Astrophysics Data System (ADS)
Pham, Hai V.; Tsai, Frank T.-C.
2016-02-01
This study expands the Box-Hill discrimination function to design an optimal observation network to discriminate conceptual models and, in turn, identify a most favored model. The Box-Hill discrimination function measures the expected decrease in Shannon entropy (for model identification) before and after the optimal design for one additional observation. This study modifies the discrimination function to account for multiple future observations that are assumed spatiotemporally independent and Gaussian-distributed. Bayesian model averaging (BMA) is used to incorporate existing observation data and quantify future observation uncertainty arising from conceptual and parametric uncertainties in the discrimination function. In addition, the BMA method is adopted to predict future observation data in a statistical sense. The design goal is to find optimal locations and least data via maximizing the Box-Hill discrimination function value subject to a posterior model probability threshold. The optimal observation network design is illustrated using a groundwater study in Baton Rouge, Louisiana, to collect additional groundwater heads from USGS wells. The sources of uncertainty creating multiple groundwater models are geological architecture, boundary condition, and fault permeability architecture. Impacts of considering homoscedastic and heteroscedastic future observation data and the sources of uncertainties on potential observation areas are analyzed. Results show that heteroscedasticity should be considered in the design procedure to account for various sources of future observation uncertainty. After the optimal design is obtained and the corresponding data are collected for model updating, total variances of head predictions can be significantly reduced by identifying a model with a superior posterior model probability.
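The core of the Box-Hill criterion, the expected decrease in Shannon entropy of the model probabilities from one additional observation, can be sketched compactly for a single candidate location (the three models' predictive means/standard deviations and the prior model probabilities are invented):

import numpy as np
from scipy.stats import norm

p_model = np.array([0.5, 0.3, 0.2])                 # prior model probabilities
mu, sd = np.array([10.0, 12.0, 15.0]), np.array([1.5, 1.5, 2.0])  # model predictions

H = lambda p: -(p * np.log(p)).sum(-1)              # Shannon entropy

ys = np.linspace(0.0, 25.0, 2001)
dy = ys[1] - ys[0]
pred = norm.pdf(ys[:, None], mu, sd)                # p(y | M_k) at the location
mix = pred @ p_model                                # BMA predictive density p(y)
post = pred * p_model / mix[:, None]                # P(M_k | y) for each possible y

expected_decrease = H(p_model) - np.sum(mix * H(post)) * dy
print("expected entropy decrease from one observation:", round(expected_decrease, 4))

Maximizing this quantity over candidate locations, subject to the posterior-probability threshold described above, is the design step the study performs.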
Bayesian Model Selection in Geophysics: The evidence
NASA Astrophysics Data System (ADS)
Vrugt, J. A.
2016-12-01
Bayesian inference has found widespread application and use in science and engineering to reconcile Earth system models with data, including prediction in space (interpolation), prediction in time (forecasting), assimilation of observations and deterministic/stochastic model output, and inference of the model parameters. Per Bayes' theorem, the posterior probability, P(H|D), of a hypothesis, H, given the data, D, is equivalent to the product of its prior probability, P(H), and likelihood, L(H|D), divided by a normalization constant, P(D). In geophysics, the hypothesis, H, often constitutes a description (parameterization) of the subsurface for some entity of interest (e.g. porosity, moisture content). The normalization constant, P(D), is not required for inference of the subsurface structure, yet is of great value for model selection. Unfortunately, it is not particularly easy to estimate P(D) in practice. Here, I will introduce the various building blocks of a general-purpose method which provides robust and unbiased estimates of the evidence, P(D). This method uses multi-dimensional numerical integration of the posterior (parameter) distribution. I will then illustrate this new estimator by application to three competing subsurface models (hypotheses) using GPR travel time data from the South Oyster Bacterial Transport Site in Virginia, USA. The three subsurface models differ in their treatment of the porosity distribution and use (a) horizontal layering with fixed layer thicknesses, (b) vertical layering with fixed layer thicknesses and (c) a multi-Gaussian field. The results of the new estimator are compared against the brute-force Monte Carlo method and the Laplace-Metropolis method.
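In low dimensions the evidence can indeed be obtained by direct numerical integration, which is the essence of the estimator sketched here on a toy problem (the normal likelihood and prior are illustrative assumptions, not the GPR models):

import numpy as np
from scipy.stats import norm

y = np.array([1.2, 0.8, 1.5, 0.9])          # toy data from N(theta, 1)
theta = np.linspace(-10.0, 10.0, 10_001)    # parameter grid
dtheta = theta[1] - theta[0]

log_like = norm.logpdf(y[:, None], theta, 1.0).sum(axis=0)
prior = norm.pdf(theta, 0.0, 2.0)           # prior theta ~ N(0, 2^2)

evidence = np.sum(np.exp(log_like) * prior) * dtheta   # P(D) by direct integration
posterior = np.exp(log_like) * prior / evidence        # Bayes' theorem on the grid
print("log P(D):", np.log(evidence))
print("posterior mean:", np.sum(theta * posterior) * dtheta)

The practical challenge the abstract addresses is doing this robustly when the parameter space has many dimensions and a grid is no longer feasible.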
Efficient pairwise RNA structure prediction using probabilistic alignment constraints in Dynalign
2007-01-01
Background: Joint alignment and secondary structure prediction of two RNA sequences can significantly improve the accuracy of the structural predictions. Methods addressing this problem, however, are forced to employ constraints that reduce computation by restricting the alignments and/or structures (i.e. folds) that are permissible. In this paper, a new methodology is presented for the purpose of establishing alignment constraints based on nucleotide alignment and insertion posterior probabilities. Using a hidden Markov model, posterior probabilities of alignment and insertion are computed for all possible pairings of nucleotide positions from the two sequences. These alignment and insertion posterior probabilities are additively combined to obtain probabilities of co-incidence for nucleotide position pairs. A suitable alignment constraint is obtained by thresholding the co-incidence probabilities. The constraint is integrated with Dynalign, a free energy minimization algorithm for joint alignment and secondary structure prediction. The resulting method is benchmarked against the previous version of Dynalign and against other programs for pairwise RNA structure prediction. Results: The proposed technique eliminates manual parameter selection in Dynalign and provides significant computational time savings in comparison to prior constraints in Dynalign, while simultaneously providing a small improvement in structural prediction accuracy. Savings are also realized in memory. In experiments over a 5S RNA dataset with average sequence length of approximately 120 nucleotides, the method reduces computation by a factor of 2. The method performs favorably in comparison to other programs for pairwise RNA structure prediction, yielding better accuracy on average and requiring significantly fewer computational resources. Conclusion: Probabilistic analysis can be utilized to automate the determination of alignment constraints for pairwise RNA structure prediction methods in a principled fashion. These constraints can reduce the computational and memory requirements of these methods while maintaining or improving their accuracy of structural prediction. This extends the practical reach of these methods to longer sequences. The revised Dynalign code is freely available for download. PMID:17445273
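A toy version of the constraint-construction step (the probability matrices below are random stand-ins for the HMM forward-backward output; in the real method they come from the alignment model):

import numpy as np

rng = np.random.default_rng(10)
n1, n2 = 8, 10
p_aligned = rng.dirichlet(np.ones(n2), size=n1)        # P(position i aligned to j)
p_insert = 0.5 * rng.dirichlet(np.ones(n2), size=n1)   # insertion contributions (toy)

co_incidence = p_aligned + p_insert    # additive combination, as described above
allowed = co_incidence >= 0.10         # thresholding yields the alignment constraint
print(f"retained {allowed.sum()} of {allowed.size} position pairs")

Only the retained (i, j) pairs need to be considered by the joint folding-alignment dynamic program, which is how the constraints cut computation and memory.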
Neitlich, Peter N; Ver Hoef, Jay M; Berryman, Shanti D; Mines, Anaka; Geiser, Linda H; Hasselbach, Linda M; Shiel, Alyssa E
2017-01-01
Spatial patterns of Zn, Pb and Cd deposition in Cape Krusenstern National Monument (CAKR), Alaska, adjacent to the Red Dog Mine haul road, were characterized in 2001 and 2006 using Hylocomium moss tissue as a biomonitor. Elevated concentrations of Cd, Pb, and Zn in moss tissue decreased logarithmically away from the haul road and the marine port. The metals concentrations in the two years were compared using Bayesian posterior predictions on a new sampling grid to which both data sets were fit. Posterior predictions were simulated 200 times both on a coarse grid of 2,357 points and by distance-based strata including subsets of these points. Compared to 2001, Zn and Pb concentrations in 2006 were 31 to 54% lower in the 3 sampling strata closest to the haul road (0-100, 100-2000 and 2000-4000 m). Pb decreased by 40% in the stratum 4,000-5,000 m from the haul road. Cd decreased significantly by 38% immediately adjacent to the road (0-100m), had an 89% probability of a small decrease 100-2000 m from the road, and showed moderate probabilities (56-71%) for increase at greater distances. There was no significant change over time (with probabilities all ≤ 85%) for any of the 3 elements in more distant reference areas (40-60 km). As in 2001, elemental concentrations in 2006 were higher on the north side of the road. Reductions in deposition have followed a large investment in infrastructure to control fugitive dust escapement at the mine and port sites, operational controls, and road dust mitigation. Fugitive dust escapement, while much reduced, is still resulting in elevated concentrations of Zn, Pb and Cd out to 5,000 m from the haul road. Zn and Pb levels were slightly above arctic baseline values in southern CAKR reference areas.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, S; Tianjin University, Tianjin; Hara, W
Purpose: MRI has a number of advantages over CT as a primary modality for radiation treatment planning (RTP). However, one key bottleneck remains: the lack of electron density information in MRI. In this work, a reliable method to map electron density is developed by leveraging the differential contrast of multi-parametric MRI. Methods: We propose a probabilistic Bayesian approach for electron density mapping based on T1- and T2-weighted MRI, using multiple patients as atlases. For each voxel, we compute two conditional probabilities: (1) electron density given its image intensity on T1- and T2-weighted MR images, and (2) electron density given its geometric location in a reference anatomy. The two sources of information (image intensity and spatial location) are combined into a unifying posterior probability density function using the Bayesian formalism. The mean value of the posterior probability density function provides the estimated electron density. Results: We evaluated the method on 10 head and neck patients and performed leave-one-out cross-validation (9 patients as atlases and the remaining 1 as test). The proposed method significantly reduced the errors in electron density estimation, with a mean absolute HU error of 138, compared with 193 for the T1-weighted intensity approach and 261 without density correction. For bone detection (HU>200), the proposed method had an accuracy of 84% and a sensitivity of 73% at a specificity of 90% (AUC = 87%). In comparison, the AUC for bone detection was 73% and 50% using the intensity approach and without density correction, respectively. Conclusion: The proposed unifying method provides accurate electron density estimation and bone detection based on multi-parametric MRI of the head with highly heterogeneous anatomy. This could allow for accurate dose calculation and reference image generation for patient setup in MRI-based radiation treatment planning.
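As a hedged illustration of the per-voxel fusion step described above, the sketch below combines two Gaussian conditionals, one from image intensity and one from spatial location, into a single posterior whose mean is the density estimate. The Gaussian form and all numbers are assumptions for illustration; the paper estimates these conditionals from atlas patients rather than assuming a parametric form.

```python
import numpy as np

def fuse_gaussians(mu_int, var_int, mu_loc, var_loc):
    """Multiply p(rho | intensity) and p(rho | location), both Gaussian,
    into a single Gaussian posterior (precision-weighted average)."""
    prec = 1.0 / var_int + 1.0 / var_loc
    mu = (mu_int / var_int + mu_loc / var_loc) / prec
    return mu, 1.0 / prec

# Example voxel: intensity suggests ~300 HU, the location prior ~150 HU.
mu, var = fuse_gaussians(mu_int=300.0, var_int=80.0**2,
                         mu_loc=150.0, var_loc=120.0**2)
print(f"posterior mean {mu:.1f} HU, sd {np.sqrt(var):.1f} HU")
```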
Stochastic static fault slip inversion from geodetic data with non-negativity and bounds constraints
NASA Astrophysics Data System (ADS)
Nocquet, J.-M.
2018-04-01
Although surface displacements observed by geodesy are linear combinations of slip at faults in an elastic medium, determining the spatial distribution of fault slip remains an ill-posed inverse problem. A widely used approach to circumvent the ill-posedness of the inversion is to add regularization constraints in terms of smoothing and/or damping so that the linear system becomes invertible. However, the choice of regularization parameters is often arbitrary and sometimes leads to significantly different results. Furthermore, the resolution analysis is usually empirical and cannot be made independently of the regularization. The stochastic approach to inverse problems (Tarantola & Valette 1982; Tarantola 2005) provides a rigorous framework in which the a priori information about the sought parameters is combined with the observations in order to derive posterior probabilities of the unknown parameters. Here, I investigate an approach where the prior probability density function (pdf) is a multivariate Gaussian, with single truncation to impose positivity of slip or double truncation to impose positivity and upper bounds on slip for interseismic modeling. I show that the joint posterior pdf is similar to the linear untruncated Gaussian case and can be expressed as a Truncated Multi-Variate Normal (TMVN) distribution. The TMVN form can then be used to obtain semi-analytical formulas for the single, two-dimensional or n-dimensional marginal pdfs. The semi-analytical formula involves the product of a Gaussian with an integral term that can be evaluated using recent developments in TMVN probability calculations (e.g. Genz & Bretz 2009). Posterior mean and covariance can also be efficiently derived. I show that the Maximum A Posteriori (MAP) estimate can be obtained using a Non-Negative Least-Squares algorithm (Lawson & Hanson 1974) for the single-truncated case or the Bounded-Variable Least-Squares algorithm (Stark & Parker 1995) for the double-truncated case. I also show that the case of independent uniform priors can be approximated using the TMVN. The numerical equivalence to Bayesian inversions using Monte Carlo Markov Chain (MCMC) sampling is demonstrated for a synthetic example and a real case of interseismic modeling in Central Peru. The TMVN method overcomes several limitations of the Bayesian approach using MCMC sampling. First, the required computing power is greatly reduced. Second, unlike MCMC-based Bayesian approaches, the marginal pdfs, mean, variance and covariance are obtained independently of one another. Third, the probability and cumulative density functions can be obtained with any density of points. Finally, determining the MAP estimate is extremely fast.
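The abstract notes that the MAP of the truncated posterior reduces to a non-negative or bounded-variable least-squares problem. Below is a minimal sketch under the assumptions of diagonal Gaussian data and prior covariances and a random stand-in Green's function matrix G; all numbers are illustrative, not the paper's Peru case.

```python
import numpy as np
from scipy.optimize import nnls, lsq_linear

# Synthetic setup: Green's function matrix G, data d, prior mean m0.
rng = np.random.default_rng(0)
n_obs, n_slip = 40, 10
G = rng.normal(size=(n_obs, n_slip))
true_slip = np.clip(rng.normal(1.0, 0.5, n_slip), 0.0, None)
d = G @ true_slip + 0.05 * rng.normal(size=n_obs)

sd_d, sd_m, m0 = 0.05, 1.0, np.zeros(n_slip)
# Whiten and stack: the truncated-Gaussian MAP solves a constrained
# least-squares problem min ||A x - b||^2 subject to x >= 0 (or bounds).
A = np.vstack([G / sd_d, np.eye(n_slip) / sd_m])
b = np.concatenate([d / sd_d, m0 / sd_m])

map_nn, _ = nnls(A, b)                          # single truncation: slip >= 0
map_bv = lsq_linear(A, b, bounds=(0.0, 2.0)).x  # double truncation: 0 <= slip <= 2
print(np.round(map_nn, 2))
```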
Inferring probabilistic stellar rotation periods using Gaussian processes
NASA Astrophysics Data System (ADS)
Angus, Ruth; Morton, Timothy; Aigrain, Suzanne; Foreman-Mackey, Daniel; Rajpaul, Vinesh
2018-02-01
Variability in the light curves of spotted, rotating stars is often non-sinusoidal and quasi-periodic: spots move on the stellar surface and have finite lifetimes, causing stellar flux variations to slowly shift in phase. A strictly periodic sinusoid therefore cannot accurately model a rotationally modulated stellar light curve. Physical models of stellar surfaces have many drawbacks preventing effective inference, such as highly degenerate or high-dimensional parameter spaces. In this work, we test an appropriate effective model: a Gaussian process with a quasi-periodic covariance kernel function. This highly flexible model allows sampling of the posterior probability density function of the period parameter, marginalizing over the other kernel hyperparameters using a Markov Chain Monte Carlo approach. To test the effectiveness of this method, we infer rotation periods from 333 simulated stellar light curves, demonstrating that the Gaussian process method produces periods that are more accurate than both a sine-fitting periodogram and an autocorrelation function method. We also demonstrate that it works well on real data by inferring rotation periods for 275 Kepler stars with previously measured periods. We provide a table of rotation periods for these and many more, altogether 1102 Kepler objects of interest, together with samples from their posterior probability density functions. Because this method delivers posterior probability density functions, it will enable hierarchical studies involving stellar rotation, particularly those involving population modelling, such as inferring stellar ages, obliquities in exoplanet systems, or characterizing star-planet interactions. The code used to implement this method is available online.
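A minimal numpy sketch of a quasi-periodic covariance kernel and the corresponding Gaussian-process log-likelihood follows. The hyperparameter names (amp, ell, gamma, period) and the demo values are illustrative assumptions; in practice the period posterior would be sampled with MCMC while marginalizing the other hyperparameters, as described above.

```python
import numpy as np

def quasi_periodic_kernel(t1, t2, amp, ell, gamma, period):
    """Squared-exponential envelope times a periodic term."""
    dt = t1[:, None] - t2[None, :]
    return amp**2 * np.exp(-dt**2 / (2 * ell**2)
                           - gamma * np.sin(np.pi * dt / period)**2)

def log_likelihood(t, y, yerr, amp, ell, gamma, period):
    K = quasi_periodic_kernel(t, t, amp, ell, gamma, period)
    K[np.diag_indices_from(K)] += yerr**2          # white-noise term
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha - np.log(np.diag(L)).sum()
            - 0.5 * len(t) * np.log(2 * np.pi))

t = np.linspace(0.0, 90.0, 200)                    # days
y = np.sin(2 * np.pi * t / 12.3)                   # toy 12.3-day rotator
print(log_likelihood(t, y, 0.1, amp=1.0, ell=30.0, gamma=1.0, period=12.3))
```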
A Bayesian Framework of Uncertainties Integration in 3D Geological Model
NASA Astrophysics Data System (ADS)
Liang, D.; Liu, X.
2017-12-01
A 3D geological model can describe complicated geological phenomena in an intuitive way, but its application may be limited by uncertain factors. Although great progress has been made over the years, many studies decompose the uncertainties of a geological model and analyze them item by item from each source, ignoring the comprehensive impact of multi-source uncertainties. To evaluate the synthetical uncertainty, we use probability distributions to quantify uncertainty and propose a Bayesian framework for uncertainty integration. With this framework, we integrate data errors, spatial randomness, and cognitive information into a posterior distribution that evaluates the synthetical uncertainty of the geological model. Uncertainties propagate and accumulate in the modeling process, so the gradual integration of multi-source uncertainty simulates this uncertainty propagation. Bayesian inference accomplishes the uncertainty updating in the modeling process. The maximum entropy principle is effective for estimating the prior probability distribution, ensuring that the prior is subject to the constraints supplied by the given information while introducing minimum prejudice. In the end, we obtain a posterior distribution that evaluates the synthetical uncertainty of the geological model, representing the combined impact of all the uncertain factors on its spatial structure. The framework provides a solution for evaluating the synthetical impact of multi-source uncertainties on a geological model and a way to study the uncertainty propagation mechanism in geological modeling.
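To make the maximum-entropy-prior-plus-Bayesian-updating pipeline concrete, here is a minimal 1-D sketch on a parameter grid. The grid, the mean constraint, and the Gaussian measurement model are illustrative assumptions, not the paper's actual geological model.

```python
import numpy as np
from scipy.optimize import brentq

x = np.linspace(0.0, 10.0, 201)          # grid for one geological quantity

def maxent_prior(mean_constraint):
    """Max-entropy pmf with a fixed mean is exponential-family: p ∝ exp(lam*x)."""
    def mean_gap(lam):
        w = np.exp(lam * (x - x.mean()))
        return (x * w).sum() / w.sum() - mean_constraint
    lam = brentq(mean_gap, -5.0, 5.0)
    p = np.exp(lam * (x - x.mean()))
    return p / p.sum()

prior = maxent_prior(mean_constraint=4.0)  # prior knowledge: mean value is 4
obs, sd = 5.2, 0.8                         # one noisy observation (assumed Gaussian)
posterior = prior * np.exp(-0.5 * ((x - obs) / sd) ** 2)
posterior /= posterior.sum()
print("posterior mean:", (x * posterior).sum())
```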
Shi, Haolun; Yin, Guosheng
2018-02-21
Simon's two-stage design is one of the most commonly used methods in phase II clinical trials with binary endpoints. The design tests the null hypothesis that the response rate is less than an uninteresting level, versus the alternative hypothesis that the response rate is greater than a desirable target level. From a Bayesian perspective, we compute the posterior probabilities of the null and alternative hypotheses given that a promising result is declared in Simon's design. Our study reveals that because the frequentist hypothesis testing framework places its focus on the null hypothesis, a potentially efficacious treatment identified by rejecting the null under Simon's design could have less than a 10% posterior probability of attaining the desirable target level. Due to the indifference region between the null and alternative, rejecting the null does not necessarily mean that the drug achieves the desirable response level. To clarify such ambiguity, we propose a Bayesian enhancement two-stage (BET) design, which guarantees a high posterior probability of the response rate reaching the target level, while allowing for early termination and sample size savings in case the drug's response rate is smaller than the clinically uninteresting level. Moreover, the BET design can be naturally adapted to accommodate survival endpoints. We conduct extensive simulation studies to examine the empirical performance of our design and present two trial examples as applications. © 2018, The International Biometric Society.
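The posterior computation described above can be sketched with a conjugate Beta-binomial model; the rates, prior, and counts below are illustrative, not those of a specific trial in the paper.

```python
from scipy.stats import beta

p0, p1 = 0.20, 0.40           # uninteresting and desirable response rates
a, b = 1.0, 1.0               # uniform Beta(1, 1) prior on the response rate
n, r = 43, 13                 # patients treated and responders at trial end

post = beta(a + r, b + n - r) # conjugate Beta posterior
print("P(p <= p0 | data) =", post.cdf(p0))   # posterior support for the null
print("P(p >= p1 | data) =", post.sf(p1))    # posterior support for the target
```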
Reversible posterior leucoencephalopathy syndrome associated with bone marrow transplantation.
Teive, H A; Brandi, I V; Camargo, C H; Bittencourt, M A; Bonfim, C M; Friedrich, M L; de Medeiros, C R; Werneck, L C; Pasquini, R
2001-09-01
Reversible posterior leucoencephalopathy syndrome (RPLS) has previously been described in patients who have renal insufficiency, eclampsia or hypertensive encephalopathy, and in patients receiving immunosuppressive therapy. The mechanism by which immunosuppressive agents can cause this syndrome is not clear, but it is probably related to cytotoxic effects of these agents on the vascular endothelium. We report eight patients who received cyclosporine A (CSA) after allogeneic bone marrow transplantation or as treatment for severe aplastic anemia (SAA) and who developed posterior leucoencephalopathy. The most common signs and symptoms were seizures and headache. Neurological dysfunction was preceded by, or occurred concomitantly with, high blood pressure and some degree of acute renal failure in six patients. Computerized tomography studies showed low-density white matter lesions involving the posterior areas of the cerebral hemispheres. Symptoms and neuroimaging abnormalities were reversible, and improvement occurred in all patients when lower doses of CSA were given or the drug was withdrawn. RPLS may be considered an expression of CSA neurotoxicity.
NASA Astrophysics Data System (ADS)
Ma, Yuan-Zhuo; Li, Hong-Shuang; Yao, Wei-Xing
2018-05-01
The evaluation of probabilistic constraints in reliability-based design optimization (RBDO) problems has always been significant and challenging work, and it strongly affects the performance of RBDO methods. This article deals with RBDO problems using a recently developed generalized subset simulation (GSS) method and a posterior approximation approach. The posterior approximation approach is used to transform all the probabilistic constraints into ordinary constraints, as in deterministic optimization. The assessment of the multiple failure probabilities required by the posterior approximation approach is achieved by GSS in a single run at all supporting points, which are selected by a proper experimental design scheme combining Sobol' sequences and Bucher's design. Subsequently, the transformed deterministic design optimization problem can be solved by optimization algorithms, for example, the sequential quadratic programming method. Three optimization problems are used to demonstrate the efficiency and accuracy of the proposed method.
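For intuition about what one probabilistic constraint evaluates, here is a plain Monte Carlo sketch of a failure-probability check. The limit-state function and input distributions are illustrative assumptions; the paper's GSS method is far more efficient than plain Monte Carlo when failure probabilities are small.

```python
import numpy as np

rng = np.random.default_rng(1)

def limit_state(x, d):
    """Illustrative performance function g(x; d); failure when g < 0."""
    return d[0] * x[:, 0] - x[:, 1] ** 2 + d[1]

d = np.array([2.0, 1.5])                       # current design variables
x = rng.normal(loc=[1.0, 0.5], scale=[0.2, 0.3], size=(100_000, 2))
pf = np.mean(limit_state(x, d) < 0.0)          # failure-probability estimate
print("P_f ≈", pf, "| constraint satisfied:", pf <= 1e-2)
```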
Pierce, Jordan E; McDowell, Jennifer E
2016-02-01
Cognitive control supports flexible behavior adapted to meet current goals and can be modeled through investigation of saccade tasks with varying cognitive demands. Basic prosaccades (rapid glances toward a newly appearing stimulus) are supported by neural circuitry including occipital and posterior parietal cortex, frontal and supplementary eye fields, and basal ganglia. These trials can be contrasted with complex antisaccades (glances toward the mirror image location of a stimulus), which are characterized by greater functional magnetic resonance imaging (fMRI) blood oxygenation level-dependent (BOLD) signal in the aforementioned regions and recruitment of additional regions such as dorsolateral prefrontal cortex. The current study manipulated the cognitive demands of these saccade tasks by presenting three rapid event-related runs of mixed saccades with a varying probability of antisaccade vs. prosaccade trials (25, 50, or 75%). Behavioral results showed an effect of trial-type probability on reaction time, with slower responses in runs with a high antisaccade probability. Imaging results exhibited an effect of probability in bilateral pre- and postcentral gyrus, bilateral superior temporal gyrus, and medial frontal gyrus. Additionally, the interaction between saccade trial type and probability revealed a strong probability effect for prosaccade trials, showing a linear increase in activation parallel to antisaccade probability in bilateral temporal/occipital, posterior parietal, medial frontal, and lateral prefrontal cortex. In contrast, antisaccade trials showed elevated activation across all runs. Overall, this study demonstrated that improbable performance of a typically simple prosaccade task led to augmented BOLD signal to support changing cognitive control demands, resulting in activation levels similar to the more complex antisaccade task. Copyright © 2016 the American Physiological Society.
NASA Astrophysics Data System (ADS)
Sheng, Zheng
2013-02-01
The estimation of lower atmospheric refractivity from radar sea clutter (RFC) is a complicated nonlinear optimization problem. This paper treats the RFC problem in a Bayesian framework. It uses the unbiased Markov Chain Monte Carlo (MCMC) sampling technique, which can provide accurate posterior probability distributions of the estimated refractivity parameters, with an electromagnetic split-step fast Fourier transform terrain parabolic equation propagation model inside the Bayesian inversion framework. In contrast to global optimization algorithms, the Bayesian-MCMC approach obtains not only approximate solutions but also the probability distributions of the solutions, that is, uncertainty analyses of the solutions. The Bayesian-MCMC algorithm is applied to both simulated and real radar sea-clutter data. Reference data are taken from the simulations and from refractivity profiles obtained with a helicopter. The inversion algorithm is assessed (i) by comparing the estimated refractivity profiles with the simulated and helicopter sounding data, and (ii) by examining the one-dimensional (1D) and two-dimensional (2D) posterior probability distributions of the solutions.
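A generic random-walk Metropolis sketch of the kind of MCMC sampling used here follows; the one-dimensional toy log-posterior is purely an assumption standing in for the full parabolic-equation forward model and clutter likelihood.

```python
import numpy as np

rng = np.random.default_rng(42)

def log_post(theta):
    # toy stand-in for the log-posterior of one refractivity parameter
    return -0.5 * ((theta - 1.3) / 0.2) ** 2

theta, samples = 0.0, []
for _ in range(20_000):
    prop = theta + 0.1 * rng.normal()                  # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop                                   # accept
    samples.append(theta)

samples = np.array(samples[5_000:])                    # discard burn-in
print("posterior mean", samples.mean(), "sd", samples.std())
```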
Cağlar, E; Şahin, G; Oğur, T; Aktaş, E
2014-11-01
To identify changes in knee joint cartilage transverse relaxation (T2) values depending on patient age and gender, and to investigate the relationship between knee joint pathologies and the transverse relaxation time. Knee MRI images of 107 symptomatic patients with various pathologic knee conditions were analyzed retrospectively. T2 values were measured at the patellar cartilage and at the posteromedial and posterolateral femoral cartilage adjacent to the central horn of the posterior meniscus. In total, 963 measurements were performed on the 107 knee MRIs. The relationships of T2 values with seven features, including subarticular bone marrow edema, subarticular cysts, marginal osteophytes, anterior-posterior cruciate and collateral ligament tears, posterior medial and posterior lateral meniscal tears, synovial thickening and effusion, were analyzed. T2 values in all three compartments were evaluated according to age and gender. An age-correlated increase in T2 values was present in all three compartments, both in the subgroup with no knee joint pathology and in the whole patient group. According to the ROC curve, a statistically significant increase was present in patients aged over 40 compared to those aged 40 and below in all patient groups. There was a statistically significant difference in T2 values between patients with and without subarticular cysts, marginal osteophytes, synovial thickening and effusion. T2 relaxation time showed a statistically significant increase in patients with a medial meniscus tear compared to those without a tear, whereas no statistically significant difference was found in the T2 relaxation times of patients with and without a posterior lateral meniscus tear. T2 cartilage mapping on MRI provides the opportunity to reveal biochemical and structural changes related to the cartilage extracellular matrix without using invasive diagnostic methods.
Santos, S A; Fermino, F; Moreira, B M T; Araujo, K F; Falco, J R P; Ruvolo-Takasusuki, M C C
2014-09-29
The sugarcane borer Diatraea saccharalis is widely known as the main pest of the sugarcane crop, causing extensive damage to entire fields. Measures to control this pest involve the use of chemicals and biological control with Cotesia flavipes wasps. In this study, we evaluated the insecticides fipronil (Frontline; 0.0025%), malathion (Malatol Bio Carb; 0.4%), cypermethrin (Galgotrin; 10%), and neem oil (Natuneem; 100%) and the herbicide nicosulfuron (Sanson 40 SC; 100%) in the posterior region of the silk glands of 3rd- and 5th-instar D. saccharalis by studying the variation in the critical electrolyte concentration (CEC). Observations of 3rd-instar larvae indicated that malathion, cypermethrin, and neem oil induced increased chromatin condensation that may consequently disable genes. Tests with fipronil showed no alteration in chromatin condensation. With the use of nicosulfuron, there was decompaction of chromatin and, probably, of genes. In the 5th-instar larvae, the CEC values indicated that malathion and neem oil induced increased chromatin condensation. The CEC values for 5th-instar larvae exposed to cypermethrin, fipronil, and nicosulfuron indicated chromatin unpacking. These observations led us to conclude that pesticide doses that do not kill these pests can still change the conformation of the DNA, RNA, and protein complexes in the posterior region of the silk gland cells of D. saccharalis, activating or repressing the expression of genes related to the defense mechanism of the insect and contributing to the selection and survival of resistant individuals.
Hu, Zhiyong; Liebens, Johan; Rao, K Ranga
2008-01-01
Background Relatively few studies have examined the association between air pollution and stroke mortality. Inconsistent and inconclusive results from existing studies on air pollution and stroke justify the need to continue to investigate the linkage between stroke and air pollution. No studies have been done to investigate the association between stroke and greenness. The objective of this study was to examine whether stroke is associated with air pollution, income and greenness in northwest Florida. Results Our study used an ecological geographical approach and a dasymetric mapping technique. We adopted a Bayesian hierarchical model with a convolution prior, considering five census tract-specific covariates. A 95% credible set, which defines an interval having a 0.95 posterior probability of containing the parameter for each covariate, was calculated from Markov Chain Monte Carlo simulations. The 95% credible sets are (-0.286, -0.097) for household income, (0.034, 0.144) for the traffic air pollution effect, (0.419, 1.495) for the emission density of monitored point source polluters, (0.413, 1.522) for the simple point density of point source polluters without emission data, and (-0.289, -0.031) for greenness. Household income and greenness show negative effects (the posterior densities primarily cover negative values). Air pollution covariates have positive effects (the 95% credible sets cover positive values). Conclusion High risk of stroke mortality was found in areas with low income levels, high air pollution levels, and low exposure to green space. PMID:18452609
Downregulation of the posterior medial frontal cortex prevents social conformity.
Klucharev, Vasily; Munneke, Moniek A M; Smidts, Ale; Fernández, Guillén
2011-08-17
We often change our behavior to conform to real or imagined group pressure. Social influence on our behavior has been extensively studied in social psychology, but its neural mechanisms have remained largely unknown. Here we demonstrate that the transient downregulation of the posterior medial frontal cortex by theta-burst transcranial magnetic stimulation reduces conformity, as indicated by reduced conformal adjustments in line with group opinion. Both the extent and probability of conformal behavioral adjustments decreased significantly relative to a sham and a control stimulation over another brain area. The posterior part of the medial frontal cortex has previously been implicated in behavioral and attitudinal adjustments. Here, we provide the first interventional evidence of its critical role in social influence on human behavior.
Spectral likelihood expansions for Bayesian inference
NASA Astrophysics Data System (ADS)
Nagel, Joseph B.; Sudret, Bruno
2016-03-01
A spectral approach to Bayesian inference is presented. It pursues the emulation of the posterior probability density. The starting point is a series expansion of the likelihood function in terms of orthogonal polynomials. From this spectral likelihood expansion all statistical quantities of interest can be calculated semi-analytically. The posterior is formally represented as the product of a reference density and a linear combination of polynomial basis functions. Both the model evidence and the posterior moments are related to the expansion coefficients. This formulation avoids Markov chain Monte Carlo simulation and allows one to make use of linear least squares instead. The pros and cons of spectral Bayesian inference are discussed and demonstrated on the basis of simple applications from classical statistics and inverse modeling.
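A minimal 1-D sketch of a spectral likelihood expansion follows, using probabilists' Hermite polynomials, which are orthogonal under a standard-normal reference density; the toy likelihood is an assumption. Because all higher-order basis functions have zero mean under the reference density, the zeroth expansion coefficient approximates the model evidence, illustrating how the evidence falls out of the coefficients semi-analytically.

```python
import numpy as np
from numpy.polynomial import hermite_e as He

rng = np.random.default_rng(0)

def likelihood(xi):
    # toy 1-D likelihood centered off the prior mean (an assumption)
    return np.exp(-0.5 * ((xi - 0.8) / 0.5) ** 2)

xi = rng.standard_normal(20_000)               # samples from the reference density
coef = He.hermefit(xi, likelihood(xi), deg=10) # least-squares spectral expansion

# Probabilists' Hermite polynomials He_k have zero mean under the standard
# normal for k >= 1, so the evidence Z = E_ref[L] is the zeroth coefficient.
print("evidence ≈", coef[0], "| Monte Carlo check:", likelihood(xi).mean())
```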
Phan, Thanh G; Fong, Ashley C; Donnan, Geoffrey; Reutens, David C
2007-06-01
Knowledge of the extent and distribution of infarcts of the posterior cerebral artery (PCA) may give insight into the limits of the arterial territory and infarct mechanism. We describe the creation of a digital atlas of PCA infarcts associated with PCA branch and trunk occlusion by magnetic resonance imaging techniques. Infarcts were manually segmented on T2-weighted magnetic resonance images obtained >24 hours after stroke onset. The images were linearly registered into a common stereotaxic coordinate space. The segmented images were averaged to yield the probability of involvement by infarction at each voxel. Comparisons were made with existing maps of the PCA territory. Thirty patients with a median age of 61 years (range, 22 to 86 years) were studied. In the digital atlas of the PCA, the highest frequency of infarction was within the medial temporal lobe and lingual gyrus (probability = 0.60 to 0.70). The mean and maximal PCA infarct volumes were 55.1 and 128.9 cm³, respectively. Comparison with published maps showed greater agreement in the anterior and medial boundaries of the PCA territory compared with its posterior and lateral boundaries. We have created a probabilistic digital atlas of the PCA based on subacute magnetic resonance scans. This approach is useful for establishing the spatial distribution of strokes in a given cerebral arterial territory and determining the regions within the arterial territory that are at greatest risk of infarction.
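After registration, the atlas construction reduces to voxel-wise averaging of binary infarct masks. A minimal sketch with stand-in random masks (the grid size and masks are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
n_patients, shape = 30, (64, 64, 40)            # illustrative voxel grid

# Stand-in binary infarct masks, one per registered patient scan.
masks = rng.random((n_patients, *shape)) < 0.1
atlas = masks.mean(axis=0)                      # P(infarct) at each voxel
print("maximum voxel probability:", atlas.max())
```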
Kim, Kyung Hwan; Park, Min Jung; Lim, Joon Seok; Kim, Nam Kyu; Min, Byung Soh; Ahn, Joong Bae; Kim, Tae Il; Kim, Ho Geun; Koom, Woong Sub
2016-04-01
To identify patients who are at a higher risk of pathologic circumferential resection margin involvement using preoperative magnetic resonance imaging. Between October 2008 and November 2012, 165 patients with locally advanced rectal cancer (cT4 or cT3 with <2 mm distance from tumour to mesorectal fascia) who received preoperative chemoradiotherapy were analysed. The morphologic patterns on post-chemoradiotherapy magnetic resonance imaging were categorized into five patterns from Pattern A (most-likely negative pathologic circumferential resection margin) to Pattern E (most-likely positive pathologic circumferential resection margin). In addition, the location of mesorectal fascia involvement was classified as lateral, posterior and anterior. The diagnostic accuracy of the morphologic criteria was calculated using receiver operating characteristic curve analysis. Pathologic circumferential resection margin involvement was identified in 17 patients (10.3%). The diagnostic accuracy of predicting pathologic circumferential resection margin involvement was 0.73 using the five-scale magnetic resonance imaging pattern. The sensitivity, specificity, positive predictive value and negative predictive value for predicting pathologic circumferential resection margin involvement were 76.5, 65.5, 20.3 and 96.0%, respectively, when cut-off was set between Patterns C and D. On multivariate logistic regression, the magnetic resonance imaging patterns D and E (P= 0.005) and posterior or lateral mesorectal fascia involvement (P= 0.017) were independently associated with increased probability of pathologic circumferential resection margin involvement. The rate of pathologic circumferential resection margin involvement was 30.0% when the patient had Pattern D or E with posterior or lateral mesorectal fascia involvement. Patients who are at a higher risk of pathologic circumferential resection margin involvement can be identified using preoperative magnetic resonance imaging although the predictability is moderate. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Responsiveness of the VISA-P scale for patellar tendinopathy in athletes.
Hernandez-Sanchez, Sergio; Hidalgo, Ma Dolores; Gomez, Antonia
2014-03-01
Patient-reported outcome measures are increasingly used in sports medicine to assess results after treatment, but the interpretability of change for many instruments remains unclear. To define the minimum clinically important difference (MCID) for the Victorian Institute of Sport Assessment scale (VISA-P) in athletes with patellar tendinopathy (PT) who underwent conservative treatment. Ninety-eight athletes with PT were enrolled in the study. Each participant completed the VISA-P at admission, after 1 week, and at the final visit. Athletes also assessed their clinical change at discharge on a 15-point Likert scale. We equated important change with a score of ≥3 (somewhat better). Receiver-operating characteristic (ROC) curve analysis and the mean change score were used to determine the MCID. The minimal detectable change was calculated. The effects of baseline scores on the MCID, and of the different criteria used to define important change, were investigated. A Bayesian analysis was used to establish the posterior probability of reporting clinical changes related to the MCID value. Athletes with PT who showed an absolute change greater than 13 points in the VISA-P score, or a relative change of 15.4-27%, achieved a minimal important change in their clinical status. This value depended on baseline scores. The probability of a clinical change in a patient was 98% when this threshold was achieved and 45% when the MCID was not achieved. Definition of the MCID will enhance the interpretability of changes in the VISA-P score in athletes with PT, but caution is required when these values are used.
Schubert, Michael; Yu, Jr-Kai; Holland, Nicholas D; Escriva, Hector; Laudet, Vincent; Holland, Linda Z
2005-01-01
In the invertebrate chordate amphioxus, as in vertebrates, retinoic acid (RA) specifies position along the anterior/posterior axis with elevated RA signaling in the middle third of the endoderm setting the posterior limit of the pharynx. Here we show that AmphiHox1 is also expressed in the middle third of the developing amphioxus endoderm and is activated by RA signaling. Knockdown of AmphiHox1 function with an antisense morpholino oligonucleotide shows that AmphiHox1 mediates the role of RA signaling in setting the posterior limit of the pharynx by repressing expression of pharyngeal markers in the posterior foregut/midgut endoderm. The spatiotemporal expression of these endodermal genes in embryos treated with RA or the RA antagonist BMS009 indicates that Pax1/9, Pitx and Notch are probably more upstream than Otx and Nodal in the hierarchy of genes repressed by RA signaling. This work highlights the potential of amphioxus, a genomically simple, vertebrate-like invertebrate chordate, as a paradigm for understanding gene hierarchies similar to the more complex ones of vertebrates.
Turgut, Burak; Türkçüoğlu, Peykan; Deniz, Nurettin; Catak, Onur
2008-12-01
To report annular and central heavy pigment deposition on the posterior lens capsule in a case of pigment dispersion syndrome. Case report. A 36-year-old female with bilateral pigment dispersion syndrome presented with a progressive decrease in visual acuity in the right eye over the past 1-2 years. Clinical examination revealed the typical findings of pigment dispersion syndrome, including bilateral Krukenberg spindles, iris transillumination defects, and dense trabecular meshwork pigmentation. Remarkably, annular and central dense pigmentation of the posterior lens capsule was noted in the right eye. Annular pigment deposition on the posterior lens capsule may be a rare finding associated with pigment dispersion syndrome. Such a finding suggests that there may be aqueous flow into the retrolental space in some patients with this condition. Central pigmentation probably results from aqueous entering Berger's space. In our case, it is probable that spontaneous detachment of the anterior hyaloid membrane facilitated this entrance.
NASA Astrophysics Data System (ADS)
Huang, Maoyi; Ray, Jaideep; Hou, Zhangshuan; Ren, Huiying; Liu, Ying; Swiler, Laura
2016-07-01
The Community Land Model (CLM) has been widely used in climate and Earth system modeling. Accurate estimation of model parameters is needed for reliable model simulations and predictions under current and future conditions, respectively. In our previous work, a subset of hydrological parameters has been identified to have significant impact on surface energy fluxes at selected flux tower sites based on parameter screening and sensitivity analysis, which indicate that the parameters could potentially be estimated from surface flux observations at the towers. To date, such estimates do not exist. In this paper, we assess the feasibility of applying a Bayesian model calibration technique to estimate CLM parameters at selected flux tower sites under various site conditions. The parameters are estimated as a joint probability density function (PDF) that provides estimates of uncertainty of the parameters being inverted, conditional on climatologically average latent heat fluxes derived from observations. We find that the simulated mean latent heat fluxes from CLM using the calibrated parameters are generally improved at all sites when compared to those obtained with CLM simulations using default parameter sets. Further, our calibration method also results in credibility bounds around the simulated mean fluxes which bracket the measured data. The modes (or maximum a posteriori values) and 95% credibility intervals of the site-specific posterior PDFs are tabulated as suggested parameter values for each site. Analysis of relationships between the posterior PDFs and site conditions suggests that the parameter values are likely correlated with the plant functional type, which needs to be confirmed in future studies by extending the approach to more sites.
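A minimal sketch of the posterior summaries reported above (the mode, i.e. maximum a posteriori value, and a 95% credibility interval) computed from a stand-in MCMC chain; the Gamma-shaped chain is an assumption replacing an actual chain for one CLM parameter.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(7)
chain = rng.gamma(shape=4.0, scale=0.25, size=10_000)  # stand-in posterior chain

kde = gaussian_kde(chain)
grid = np.linspace(chain.min(), chain.max(), 1_000)
mode = grid[np.argmax(kde(grid))]                      # maximum a posteriori value
lo, hi = np.percentile(chain, [2.5, 97.5])             # 95% credibility interval
print(f"mode {mode:.3f}, 95% interval [{lo:.3f}, {hi:.3f}]")
```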
Joint Segmentation and Deformable Registration of Brain Scans Guided by a Tumor Growth Model
Gooya, Ali; Pohl, Kilian M.; Bilello, Michel; Biros, George; Davatzikos, Christos
2011-01-01
This paper presents an approach for joint segmentation and deformable registration of brain scans of glioma patients to a normal atlas. The proposed method is based on the Expectation Maximization (EM) algorithm that incorporates a glioma growth model for atlas seeding, a process which modifies the normal atlas into one with a tumor and edema. The modified atlas is registered into the patient space and utilized for the posterior probability estimation of various tissue labels. EM iteratively refines the estimates of the registration parameters, the posterior probabilities of tissue labels and the tumor growth model parameters. We have applied this approach to 10 glioma scans acquired with four Magnetic Resonance (MR) modalities (T1, T1-CE, T2 and FLAIR) and validated the result by comparing them to manual segmentations by clinical experts. The resulting segmentations look promising and quantitatively match well with the expert provided ground truth. PMID:21995070
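The E-step of an EM algorithm of this kind computes, per voxel, posterior tissue-label probabilities from an atlas prior and intensity likelihoods. A minimal single-modality sketch in which the labels, Gaussian intensity models, and prior values are all illustrative assumptions:

```python
import numpy as np
from scipy.stats import norm

labels = ["WM", "GM", "CSF", "tumor"]
mu = np.array([0.75, 0.55, 0.15, 0.90])   # per-label mean intensity (assumed)
sd = np.array([0.05, 0.05, 0.05, 0.10])

def label_posteriors(intensity, atlas_prior):
    """E-step for one voxel: atlas prior times Gaussian intensity likelihood."""
    lik = norm.pdf(intensity, loc=mu, scale=sd)
    post = atlas_prior * lik
    return post / post.sum()

post = label_posteriors(0.85, atlas_prior=np.array([0.3, 0.3, 0.1, 0.3]))
for lab, p in zip(labels, post):
    print(lab, round(float(p), 3))
```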
Empty sella syndrome secondary to intrasellar cyst in adolescence.
Raiti, S; Albrink, M J; Maclaren, N K; Chadduck, W M; Gabriele, O F; Chou, S M
1976-09-01
A 15-year-old boy had growth failure and failure of sexual development. The probable onset was at age 10. Endocrine studies showed hypopituitarism with deficiency of growth hormone and follicle-stimulating hormone, an abnormal response to metyrapone, and deficiency of thyroid function. The luteinizing hormone level was in the low-normal range. Posterior pituitary function was normal. A roentgenogram showed a large sella with some destruction of the posterior clinoids. Transsphenoidal exploration was carried out. The sella was empty except for a whitish membrane; no pituitary tissue was seen. The sella was packed with muscle. Recovery was uneventful, and the patient was given replacement therapy. On histologic examination, the cyst wall showed low pseudostratified cuboidal epithelium and occasional squamous metaplasia. Hemosiderin-filled phagocytes and acinar structures were also seen. The diagnosis was probable rupture of an intrasellar epithelial cyst, leading to empty sella syndrome.
Log-Linear Models for Gene Association
Hu, Jianhua; Joshi, Adarsh; Johnson, Valen E.
2009-01-01
We describe a class of log-linear models for the detection of interactions in high-dimensional genomic data. This class of models leads to a Bayesian model selection algorithm that can be applied to data that have been reduced to contingency tables using ranks of observations within subjects, and discretization of these ranks within gene/network components. Many normalization issues associated with the analysis of genomic data are thereby avoided. A prior density based on Ewens’ sampling distribution is used to restrict the number of interacting components assigned high posterior probability, and the calculation of posterior model probabilities is expedited by approximations based on the likelihood ratio statistic. Simulation studies are used to evaluate the efficiency of the resulting algorithm for known interaction structures. Finally, the algorithm is validated in a microarray study for which it was possible to obtain biological confirmation of detected interactions. PMID:19655032
NASA Astrophysics Data System (ADS)
De Lannoy, G. J.; Reichle, R. H.; Vrugt, J. A.
2012-12-01
Simulated L-band (1.4 GHz) brightness temperatures are very sensitive to the values of the parameters in the radiative transfer model (RTM). We assess the optimum RTM parameter values and their (posterior) uncertainty in the Goddard Earth Observing System (GEOS-5) land surface model using observations of multi-angular brightness temperature over North America from the Soil Moisture Ocean Salinity (SMOS) mission. Two different parameter estimation methods are being compared: (i) a particle swarm optimization (PSO) approach, and (ii) an MCMC simulation procedure using the differential evolution adaptive Metropolis (DREAM) algorithm. Our results demonstrate that both methods provide similar "optimal" parameter values. Yet, DREAM exhibits better convergence properties, resulting in a reduced spread of the posterior ensemble. The posterior parameter distributions derived with both methods are used for predictive uncertainty estimation of brightness temperature. This presentation will highlight our model-data synthesis framework and summarize our initial findings.
Semiparametric Bayesian classification with longitudinal markers
De la Cruz-Mesía, Rolando; Quintana, Fernando A.; Müller, Peter
2013-01-01
Summary We analyse data from a study involving 173 pregnant women. The data are observed values of the β human chorionic gonadotropin hormone measured during the first 80 days of gestational age, including from one up to six longitudinal responses for each woman. The main objective in this study is to predict normal versus abnormal pregnancy outcomes from data that are available at the early stages of pregnancy. We achieve the desired classification with a semiparametric hierarchical model. Specifically, we consider a Dirichlet process mixture prior for the distribution of the random effects in each group. The unknown random-effects distributions are allowed to vary across groups but are made dependent by using a design vector to select different features of a single underlying random probability measure. The resulting model is an extension of the dependent Dirichlet process model, with an additional probability model for group classification. The model is shown to perform better than an alternative model which is based on independent Dirichlet processes for the groups. Relevant posterior distributions are summarized by using Markov chain Monte Carlo methods. PMID:24368871
Articular Eminence Inclination in Medieval and Contemporary Croatian Population
Kranjčić, Josip; Šlaus, Mario; Vodanović, Marin; Peršić, Sanja; Vojvodić, Denis
2016-12-01
Articular eminence inclination (AEI) of the temporomandibular joint guides the mandible in its movements. The aim of the present study was therefore to determine AEI values in a medieval (MP) and a recent (RP) Croatian population. The study was carried out on two groups of specimens: the first group comprised 30 MP human dry skulls, while the control group consisted of 137 dry skulls. The AEI was measured on lateral digital skull images as the angle between the best fit line drawn along the posterior wall of the articular eminence and the Frankfurt horizontal plane. No statistically significant (p>0.05) differences between the left and right side AEI were found in either the MP or the RP skulls. The mean MP AEI was 45.5˚, with a range of 20.9˚-64˚. The mean RP AEI was steeper (61.99˚), with a range of 30˚-94˚. The difference between the mean MP and RP AEI values was statistically significant (p<0.05). AEI values vary widely. The nonsignificant differences between the left and right side AEI are consistent with natural left-right asymmetry. The AEI values differ between the RP and MP groups, most probably due to the different types of food consumed in medieval times and, consequently, different masticatory loads and forces.
Lovelock, Paul K; Spurdle, Amanda B; Mok, Myth TS; Farrugia, Daniel J; Lakhani, Sunil R; Healey, Sue; Arnold, Stephen; Buchanan, Daniel; Investigators, kConFab; Couch, Fergus J; Henderson, Beric R; Goldgar, David E; Tavtigian, Sean V; Chenevix-Trench, Georgia; Brown, Melissa A
2007-01-01
Introduction Many of the DNA sequence variants identified in the breast cancer susceptibility gene BRCA1 remain unclassified in terms of their potential pathogenicity. Both multifactorial likelihood analysis and functional approaches have been proposed as a means to elucidate likely clinical significance of such variants, but analysis of the comparative value of these methods for classifying all sequence variants has been limited. Methods We have compared the results from multifactorial likelihood analysis with those from several functional analyses for the four BRCA1 sequence variants A1708E, G1738R, R1699Q, and A1708V. Results Our results show that multifactorial likelihood analysis, which incorporates sequence conservation, co-inheritance, segregation, and tumour immunohistochemical analysis, may improve classification of variants. For A1708E, previously shown to be functionally compromised, analysis of oestrogen receptor, cytokeratin 5/6, and cytokeratin 14 tumour expression data significantly strengthened the prediction of pathogenicity, giving a posterior probability of pathogenicity of 99%. For G1738R, shown to be functionally defective in this study, immunohistochemistry analysis confirmed previous findings of inconsistent 'BRCA1-like' phenotypes for the two tumours studied, and the posterior probability for this variant was 96%. The posterior probabilities of R1699Q and A1708V were 54% and 69%, respectively, only moderately suggestive of increased risk. Interestingly, results from functional analyses suggest that both of these variants have only partial functional activity. R1699Q was defective in foci formation in response to DNA damage and displayed intermediate transcriptional transactivation activity but showed no evidence for centrosome amplification. In contrast, A1708V displayed an intermediate transcriptional transactivation activity and a normal foci formation response in response to DNA damage but induced centrosome amplification. Conclusion These data highlight the need for a range of functional studies to be performed in order to identify variants with partially compromised function. The results also raise the possibility that A1708V and R1699Q may be associated with a low or moderate risk of cancer. While data pooling strategies may provide more information for multifactorial analysis to improve the interpretation of the clinical significance of these variants, it is likely that the development of current multifactorial likelihood approaches and the consideration of alternative statistical approaches will be needed to determine whether these individually rare variants do confer a low or moderate risk of breast cancer. PMID:18036263
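Multifactorial likelihood analysis of this kind combines independent likelihood ratios with a prior probability via Bayes' theorem on the odds scale. A minimal sketch with illustrative numbers (not the paper's component values):

```python
import numpy as np

def posterior_prob(prior, likelihood_ratios):
    """Combine a prior with independent likelihood ratios on the odds scale."""
    odds = prior / (1.0 - prior) * np.prod(likelihood_ratios)
    return odds / (1.0 + odds)

# e.g. a variant with a modest prior and three supportive components
# (conservation, segregation, tumour pathology); numbers are illustrative.
print(posterior_prob(prior=0.1, likelihood_ratios=[4.0, 6.0, 8.0]))
```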
Suzuki, Teppei; Tani, Yuji; Ogasawara, Katsuhiko
2016-07-25
Consistent with the "attention, interest, desire, memory, action" (AIDMA) model of consumer behavior, patients collect information about available medical institutions using the Internet to select information for their particular needs. Studies of consumer behavior may be found in areas other than medical institution websites. Such research uses Web access logs for visitor search behavior. At this time, research applying the patient searching behavior model to medical institution website visitors is lacking. We have developed a hospital website search behavior model using a Bayesian approach to clarify the behavior of medical institution website visitors and determine the probability of their visits, classified by search keyword. We used the website data access log of a clinic of internal medicine and gastroenterology in the Sapporo suburbs, collecting data from January 1 through June 31, 2011. The contents of the 6 website pages included the following: home, news, content introduction for medical examinations, mammography screening, holiday person-on-duty information, and other. The search keywords we identified as best expressing website visitor needs were listed as the top 4 headings from the access log: clinic name, clinic name + regional name, clinic name + medical examination, and mammography screening. Using the search keywords as the explaining variable, we built a binomial probit model that allows inspection of the contents of each purpose variable. Using this model, we determined a beta value and generated a posterior distribution. We performed the simulation using Markov Chain Monte Carlo methods with a noninformation prior distribution for this model and determined the visit probability classified by keyword for each category. In the case of the keyword "clinic name," the visit probability to the website, repeated visit to the website, and contents page for medical examination was positive. In the case of the keyword "clinic name and regional name," the probability for a repeated visit to the website and the mammography screening page was negative. In the case of the keyword "clinic name + medical examination," the visit probability to the website was positive, and the visit probability to the information page was negative. When visitors referred to the keywords "mammography screening," the visit probability to the mammography screening page was positive (95% highest posterior density interval = 3.38-26.66). Further analysis for not only the clinic website but also various other medical institution websites is necessary to build a general inspection model for medical institution websites; we want to consider this in future research. Additionally, we hope to use the results obtained in this study as a prior distribution for future work to conduct higher-precision analysis.
Significance testing - are we ready yet to abandon its use?
The, Bertram
2011-11-01
Understanding of the damaging effects of significance testing has steadily grown. Reporting p values without dichotomizing the result as significant or not is not the solution. Confidence intervals are better, but they suffer from a non-intuitive interpretation and are often misused just to see whether the null value lies within the interval. Bayesian statistics provide an alternative which solves most of these problems. Although criticized for relying on subjective models, the interpretation of a Bayesian posterior probability is more intuitive than that of a p value and seems closest to intuitive patterns of human decision making. Another alternative could be the use of confidence interval functions (or p value functions) to display a continuum of intervals at different levels of confidence around a point estimate. Thus, better alternatives to significance testing exist. The reluctance to abandon this practice may reflect both an attachment to old habits and unfamiliarity with better methods. Authors may question whether using less commonly exercised, though superior, techniques will be well received by editors, reviewers and the readership. A joint effort will be needed to abandon significance testing in clinical research in the future.
Fisher, Charles K; Mehta, Pankaj
2015-06-01
Feature selection, identifying a subset of variables that are relevant for predicting a response, is an important and challenging component of many methods in statistics and machine learning. Feature selection is especially difficult and computationally intensive when the number of variables approaches or exceeds the number of samples, as is often the case for many genomic datasets. Here, we introduce a new approach, the Bayesian Ising Approximation (BIA), to rapidly calculate posterior probabilities for feature relevance in L2 penalized linear regression. In the regime where the regression problem is strongly regularized by the prior, we show that computing the marginal posterior probabilities for features is equivalent to computing the magnetizations of an Ising model with weak couplings. Using a mean field approximation, we show it is possible to rapidly compute the feature selection path described by the posterior probabilities as a function of the L2 penalty. We present simulations and analytical results illustrating the accuracy of the BIA on some simple regression problems. Finally, we demonstrate the applicability of the BIA to high-dimensional regression by analyzing a gene expression dataset with nearly 30,000 features. These results also highlight the impact of correlations between features on Bayesian feature selection. An implementation of the BIA in C++, along with data for reproducing our gene expression analyses, is freely available at http://physics.bu.edu/∼pankajm/BIACode. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
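The mean-field step can be illustrated in a few lines. This is a generic damped fixed-point iteration for Ising magnetizations under weak couplings; the fields h and couplings J below are random placeholders, whereas the BIA derives them from the regression quantities and the L2 penalty (see the paper for that mapping).

```python
import numpy as np

def mean_field_magnetizations(h, J, n_iter=200, damping=0.5):
    """Damped mean-field iteration m_i = tanh(h_i + sum_j J_ij m_j).

    h : (p,) external fields; J : (p, p) weak couplings, zero diagonal.
    Returns magnetizations in [-1, 1]; the marginal posterior probability
    that feature i is relevant is then (1 + m_i) / 2.
    """
    m = np.zeros_like(h)
    for _ in range(n_iter):
        m_new = np.tanh(h + J @ m)
        m = damping * m + (1 - damping) * m_new
    return m

# Hypothetical fields and couplings for ten candidate features.
rng = np.random.default_rng(1)
p = 10
h = rng.normal(0, 0.5, p)
J = rng.normal(0, 0.05, (p, p))
J = (J + J.T) / 2
np.fill_diagonal(J, 0.0)

m = mean_field_magnetizations(h, J)
print("P(feature relevant):", np.round((1 + m) / 2, 3))
```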
McLachlan, G J; Bean, R W; Jones, L Ben-Tovim
2006-07-01
An important problem in microarray experiments is the detection of genes that are differentially expressed in a given number of classes. We provide a straightforward and easily implemented method for estimating the posterior probability that an individual gene is null. The problem can be expressed in a two-component mixture framework, using an empirical Bayes approach. Current methods of implementing this approach either have limitations due to the minimal assumptions made or, with more specific assumptions, are computationally intensive. By converting the value of the test statistic used to test the significance of each gene to a z-score, we propose a simple two-component normal mixture that adequately models the distribution of this score. The usefulness of our approach is demonstrated on three real datasets.
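A hedged sketch of the idea: an EM fit of a two-component normal mixture to z-scores, with the null component fixed at N(0,1), a common empirical-Bayes choice that may differ in detail from the authors' exact parameterization. The simulated z-scores are illustrative.

```python
import numpy as np
from scipy.stats import norm

def fit_null_posterior(z, n_iter=200):
    """EM for f(z) = pi0*N(0,1) + (1-pi0)*N(mu1, sigma1^2).
    Returns the posterior probability that each gene is null."""
    pi0, mu1, sig1 = 0.9, 2.0, 1.0          # starting values
    for _ in range(n_iter):
        f0 = pi0 * norm.pdf(z, 0.0, 1.0)
        f1 = (1 - pi0) * norm.pdf(z, mu1, sig1)
        tau0 = f0 / (f0 + f1)               # E-step: P(null | z)
        pi0 = tau0.mean()                   # M-step updates
        w = 1 - tau0
        mu1 = np.sum(w * z) / np.sum(w)
        sig1 = np.sqrt(np.sum(w * (z - mu1) ** 2) / np.sum(w))
    return tau0

# Simulated z-scores: 90% null genes, 10% differentially expressed.
rng = np.random.default_rng(2)
z = np.concatenate([rng.normal(0, 1, 900), rng.normal(3, 1, 100)])
tau0 = fit_null_posterior(z)
print("estimated pi0:", round(tau0.mean(), 3))
print("genes with P(null) < 0.2:", int(np.sum(tau0 < 0.2)))
```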
Bayesian functional integral method for inferring continuous data from discrete measurements.
Heuett, William J; Miller, Bernard V; Racette, Susan B; Holloszy, John O; Chow, Carson C; Periwal, Vipul
2012-02-08
Inference of the insulin secretion rate (ISR) from C-peptide measurements as a quantification of pancreatic β-cell function is clinically important in diseases related to reduced insulin sensitivity and insulin action. ISR derived from C-peptide concentration is an example of nonparametric Bayesian model selection where a proposed ISR time-course is considered to be a "model". Inferring the value of inaccessible continuous variables from discrete observable data is often problematic in biology and medicine, because it is a priori unclear how robust the inference is to the deletion of data points and, as a closely related question, how much smoothness or continuity the data actually support. Predictions weighted by the posterior distribution can be cast as functional integrals as used in statistical field theory. Functional integrals are generally difficult to evaluate, especially for nonanalytic constraints such as positivity of the estimated parameters. We propose a computationally tractable method that uses the exact solution of an associated likelihood function as a prior probability distribution for a Markov-chain Monte Carlo evaluation of the posterior for the full model. As a concrete application of our method, we calculate the ISR from actual clinical C-peptide measurements in human subjects with varying degrees of insulin sensitivity. Our method demonstrates the feasibility of functional integral Bayesian model selection as a practical method for such data-driven inference, allowing the data to determine the smoothing timescale and the width of the prior probability distribution on the space of models. In particular, our model comparison method determines the discrete time-step for interpolation of the unobservable continuous variable that is supported by the data. Attempts to go to finer discrete time-steps lead to less likely models. Copyright © 2012 Biophysical Society. Published by Elsevier Inc. All rights reserved.
Sea Ice Concentration Estimation Using Active and Passive Remote Sensing Data Fusion
NASA Astrophysics Data System (ADS)
Zhang, Y.; Li, F.; Zhang, S.; Zhu, T.
2017-12-01
In this abstract, a decision-level fusion method utilizing SAR and passive microwave remote sensing data for sea ice concentration estimation is investigated. Sea ice concentration products from passive microwave retrieval methods have large uncertainty within the thin ice zone. Passive microwave data, including SSM/I, AMSR-E, and AMSR-2, provide daily, long time series observations covering the whole polar sea ice scene, and SAR images provide rich sea ice details at high spatial resolution, including deformation and polarimetric features. The proposed method draws on the merits of both passive microwave data and SAR data. Sea ice concentration products from ASI and sea ice category labels derived from a CRF framework in SAR imagery are calibrated under a least-distance protocol. For SAR imagery, the incidence angle and azimuth angle were used to correct backscattering values from slant range to ground range in order to improve geocoding accuracy. The posterior probability distribution between the category label from SAR imagery and the passive microwave sea ice concentration product is modeled and integrated in a Bayesian network, where a Gaussian statistical distribution from ASI sea ice concentration products serves as the prior term, representing the uncertainty of sea ice concentration. An empirical-model-based likelihood term is constructed under Bernoulli theory, which meets the non-negativity and monotonically increasing conditions. In the posterior probability estimation procedure, the final sea ice concentration is obtained using the MAP criterion, which is equivalent to minimizing the cost function and can be computed with a nonlinear iterative method. The proposed algorithm is tested on multiple satellite SAR data sets including GF-3, Sentinel-1A, RADARSAT-2 and Envisat ASAR. Results show that the proposed algorithm can improve the accuracy of ASI sea ice concentration products and reduce the uncertainty along the ice edge.
Identification of transmissivity fields using a Bayesian strategy and perturbative approach
NASA Astrophysics Data System (ADS)
Zanini, Andrea; Tanda, Maria Giovanna; Woodbury, Allan D.
2017-10-01
The paper deals with the crucial problem of groundwater parameter estimation, which is the basis for efficient modeling and reclamation activities. A hierarchical Bayesian approach is developed: it uses Akaike's Bayesian Information Criterion in order to estimate the hyperparameters (related to the covariance model chosen) and to quantify the unknown noise variance. The transmissivity identification proceeds in two steps: the first, called empirical Bayesian interpolation, uses Y* (Y = lnT) observations to interpolate Y values on a specified grid; the second, called empirical Bayesian update, improves the previous Y estimate through the addition of hydraulic head observations. The relationship between the head and lnT has been linearized through a perturbative solution of the flow equation. In order to test the proposed approach, synthetic aquifers from the literature have been considered. The aquifers in question contain a variety of boundary conditions (both Dirichlet and Neumann type) and scales of heterogeneity (σY2 = 1.0 and σY2 = 5.3). The estimated transmissivity fields were compared to the true ones. The joint use of Y* and head measurements improves the estimation of Y for both degrees of heterogeneity. Even if the variance of the strongly heterogeneous transmissivity field can be considered high for the application of the perturbative approach, the results show the same order of approximation as the non-linear methods proposed in the literature. The procedure allows computing the posterior probability distribution of the target quantities and quantifying the uncertainty in the model prediction. Bayesian updating has advantages related to both Monte Carlo (MC) and non-MC approaches: like MC methods, it yields the posterior probability distribution of the target quantities directly, and like non-MC methods it has computational times on the order of seconds.
Menon, Mani; Dalela, Deepansh; Jamil, Marcus; Diaz, Mireya; Tallman, Christopher; Abdollah, Firas; Sood, Akshay; Lehtola, Linda; Miller, David; Jeong, Wooju
2018-05-01
We report a 1-year update of functional urinary and sexual recovery, oncologic outcomes and postoperative complications in patients who completed a randomized controlled trial comparing posterior (Retzius sparing) with anterior robot-assisted radical prostatectomy. A total of 120 patients with clinically low-intermediate risk prostate cancer were randomized to undergo robot-assisted radical prostatectomy via the posterior and anterior approach in 60 each. Surgery was performed by a single surgical team at an academic institution. An independent third party ascertained urinary and sexual function outcomes preoperatively, and 3, 6 and 12 months after surgery. Oncologic outcomes consisted of positive surgical margins and biochemical recurrence-free survival. Biochemical recurrence was defined as 2 postoperative prostate specific antigen values of 0.2 ng/ml or greater. Median age of the cohort was 61 years and median followup was 12 months. At 12 months in the anterior vs posterior prostatectomy groups there were no statistically significant differences in the urinary continence rate (0 to 1 security pad per day in 93.3% vs 98.3%, p = 0.09), 24-hour pad weight (median 12 vs 7.5 gm, p = 0.3), erection sufficient for intercourse (69.2% vs 86.5%) or postoperative Sexual Health Inventory for Men score 17 or greater (44.6% vs 44.1%). In the posterior vs anterior prostatectomy groups a nonfocal positive surgical margin was found in 11.7% vs 8.3%, biochemical recurrence-free survival probability was 0.84 vs 0.93 and postoperative complications developed in 18.3% vs 11.7%. Among patients with clinically low-intermediate risk prostate cancer randomized to anterior (Menon) or posterior (Bocciardi) approach robot-assisted radical prostatectomy the differences in urinary continence seen at 3 months were muted at the 12-month followup. Sexual function recovery, postoperative complication and biochemical recurrence rates were comparable 1 year postoperatively. Copyright © 2018 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
Håberg, Asta Kristine; Skandsen, Toril; Finnanger, Torun Gangaune; Vik, Anne
2014-01-01
Abstract The objective of this study was to explore the evolution of apparent diffusion coefficient (ADC) values in magnetic resonance imaging (MRI) in normal-appearing tissue of the corpus callosum during the 1st year after traumatic brain injury (TBI), and to relate the findings to outcome. Fifty-seven patients (mean age 34 [range 11–63] years) with moderate to severe TBI were examined with diffusion-weighted MRI at three time points (median 7 days, 3 and 12 months), and a sex- and age-matched control group of 47 healthy individuals was examined once. The corpus callosum was subdivided, and the mean ADC values were computed blinded in 10 regions of interest without any visible lesions in the ADC map. Outcome measures were the Glasgow Outcome Scale Extended (GOSE) and neuropsychological domain scores at 12 months. We found a gradual increase of the mean ADC values during the 12-month follow-up, most evident in the posterior truncus (r=0.19, p<0.001). Compared with the healthy control group, we found higher mean ADC values in the posterior truncus both at 3 months (p=0.021) and 12 months (p=0.003) post-injury. Patients with fluid-attenuated inversion recovery (FLAIR) lesions in the corpus callosum in the early MRI, and patients with disability (GOSE score ≤6), showed evidence of increased mean ADC values in the genu and posterior truncus at 12 months. Mean ADC values in posterior parts of the corpus callosum at 3 months predicted the sensory-motor function domain score (p=0.010–0.028). During the 1st year after moderate and severe TBI, we demonstrated a slowly evolving disruption of the microstructure in the normal-appearing corpus callosum in the ADC map, most evident in the posterior truncus. The mean ADC values were associated with both outcome and the ability to perform speeded, complex sensory-motor actions. PMID:23837731
Stochastic static fault slip inversion from geodetic data with non-negativity and bound constraints
NASA Astrophysics Data System (ADS)
Nocquet, J.-M.
2018-07-01
Although surface displacements observed by geodesy are linear combinations of slip on faults in an elastic medium, determining the spatial distribution of fault slip remains an ill-posed inverse problem. A widely used approach to circumvent the ill-posedness of the inversion is to add regularization constraints in terms of smoothing and/or damping so that the linear system becomes invertible. However, the choice of regularization parameters is often arbitrary, and sometimes leads to significantly different results. Furthermore, the resolution analysis is usually empirical and cannot be made independently of the regularization. The stochastic approach to inverse problems provides a rigorous framework where the a priori information about the sought parameters is combined with the observations in order to derive posterior probabilities of the unknown parameters. Here, I investigate an approach where the prior probability density function (pdf) is a multivariate Gaussian function, with single truncation to impose positivity of slip or double truncation to impose positivity and upper bounds on slip for interseismic modelling. I show that the joint posterior pdf is similar to the linear untruncated Gaussian case and can be expressed as a truncated multivariate normal (TMVN) distribution. The TMVN form can then be used to obtain semi-analytical formulae for the single, 2-D or n-D marginal pdf. The semi-analytical formula involves the product of a Gaussian by an integral term that can be evaluated using recent developments in TMVN probability calculations. Posterior mean and covariance can also be efficiently derived. I show that the maximum a posteriori (MAP) solution can be obtained using a non-negative least-squares algorithm for the single truncated case or using the bounded-variable least-squares algorithm for the double truncated case. I show that the case of independent uniform priors can be approximated using TMVN. The numerical equivalence to Bayesian inversions using Markov chain Monte Carlo (MCMC) sampling is shown for a synthetic example and a real case for interseismic modelling in Central Peru. The TMVN method overcomes several limitations of the Bayesian approach using MCMC sampling. First, the need for computing power is largely reduced. Second, unlike Bayesian MCMC-based approaches, marginal pdfs, means, variances or covariances are obtained independently of one another. Third, the probability and cumulative density functions can be obtained with any density of points. Finally, determining the MAP is extremely fast.
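The single-truncation MAP computation mentioned above can be sketched as follows: the Gaussian data and prior misfit terms are stacked into one least-squares system and passed to a non-negative least-squares solver. The forward matrix, noise level, and prior below are synthetic assumptions, not a real fault geometry.

```python
import numpy as np
from scipy.linalg import cholesky
from scipy.optimize import nnls

rng = np.random.default_rng(3)

# Synthetic linear forward model d = G m + noise, with true slip >= 0.
n_obs, n_slip = 40, 20
G = rng.normal(size=(n_obs, n_slip))
m_true = np.maximum(rng.normal(0.5, 0.5, n_slip), 0.0)
sigma_d = 0.05
d = G @ m_true + sigma_d * rng.standard_normal(n_obs)

mu_prior = np.zeros(n_slip)             # prior mean slip
C_prior = 0.25 * np.eye(n_slip)         # prior slip covariance

# MAP of the positivity-truncated Gaussian posterior: minimize
#   ||(G m - d) / sigma_d||^2 + (m - mu)' C^-1 (m - mu)   s.t. m >= 0.
# Both terms stack into a single NNLS problem  A m ~= b.
L = cholesky(np.linalg.inv(C_prior), lower=True)   # C^-1 = L L'
A = np.vstack([G / sigma_d, L.T])
b = np.concatenate([d / sigma_d, L.T @ mu_prior])
m_map, _ = nnls(A, b)

print("true max slip:", m_true.max().round(3), " MAP max slip:", m_map.max().round(3))
```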
Yoshihara, Hiroyuki
2014-07-01
Numerous surgical procedures and instrumentation techniques for lumbosacral fusion (LSF) have been developed, probably because of its high mechanical demand and unique anatomy. Surgical options include anterior column support (ACS) and posterior stabilization procedures. Biomechanical studies have been performed to verify the stability of those options. Each option has its own advantages and disadvantages. This review article reports the surgical options for lumbosacral fusion, their biomechanical stability, their advantages/disadvantages, and the factors affecting option selection. Review of the literature. LSF has many options for both ACS and posterior stabilization procedures. A combination of posterior stabilization procedures is an option; combinations of ACS and posterior stabilization procedures are further options. It is difficult to make a recommendation or treatment algorithm for LSF from the current literature. However, it is important to know all aspects of the options, and decision-making regarding surgical options for LSF needs to be tailored for each patient, considering factors such as biomechanical stress and osteoporosis.
WE-H-207A-06: Hypoxia Quantification in Static PET Images: The Signal in the Noise
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keller, H; Yeung, I; Milosevic, M
2016-06-15
Purpose: Quantification of hypoxia from PET images is of considerable clinical interest. In the absence of dynamic PET imaging the hypoxic fraction (HF) of a tumor has to be estimated from voxel values of activity concentration of a radioactive hypoxia tracer. This work is part of an effort to standardize quantification of tumor hypoxic fraction from PET images. Methods: A simple hypoxia imaging model in the tumor was developed. The distribution of the tracer activity was described as the sum of two different probability distributions, one for the normoxic (and necrotic), the other for the hypoxic voxels. The widths of the distributions arise due to variability of the transport, tumor tissue inhomogeneity, tracer binding kinetics, and due to PET image noise. Quantification of HF was performed for various levels of variability using two different methodologies: a) classification thresholds between normoxic and hypoxic voxels based on a non-hypoxic surrogate (muscle), and b) estimation of the (posterior) probability distributions based on maximizing likelihood optimization that does not require a surrogate. Data from the hypoxia imaging model and from 27 cervical cancer patients enrolled in a FAZA PET study were analyzed. Results: In the model, where the true value of HF is known, thresholds usually underestimate the value for large variability. For the patients, a significant uncertainty of the HF values (an average intra-patient range of 17%) was caused by spatial non-uniformity of image noise which is a hallmark of all PET images. Maximum likelihood estimation (MLE) is able to directly optimize for the weights of both distributions, however, may suffer from poor optimization convergence. For some patients, MLE-based HF values showed significant differences to threshold-based HF values. Conclusion: HF values depend critically on the magnitude of the different sources of tracer uptake variability. A measure of confidence should also be reported.
Posterior error probability in the Mu-2 Sequential Ranging System
NASA Technical Reports Server (NTRS)
Coyle, C. W.
1981-01-01
An expression is derived for the posterior error probability in the Mu-2 Sequential Ranging System. An algorithm is developed which closely bounds the exact answer and can be implemented in the machine software. A computer simulation is provided to illustrate the improved level of confidence in a ranging acquisition using this figure of merit as compared to that using only the prior probabilities. In a simulation of 20,000 acquisitions with an experimentally determined threshold setting, the algorithm detected 90% of the actual errors and falsely indicated errors on 0.2% of the acquisitions.
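As a generic illustration of the underlying figure of merit (not the Mu-2 algorithm itself), Bayes' theorem converts a prior error rate and the likelihoods of the observed ranging statistic into a posterior error probability. The numbers below are hypothetical.

```python
def posterior_error_prob(prior_err, like_err, like_ok):
    """Bayes' theorem: P(error | statistic) from the prior error rate
    and the likelihood of the statistic under error / no-error."""
    num = prior_err * like_err
    return num / (num + (1 - prior_err) * like_ok)

# Hypothetical numbers: a 1% prior error rate, and an observed
# correlation statistic 20x more likely under a ranging error than
# under a correct acquisition.
p = posterior_error_prob(prior_err=0.01, like_err=0.8, like_ok=0.04)
print(f"posterior error probability: {p:.3f}")   # ~0.168 versus the 0.01 prior
```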
Al-Ali, Firas; Barrow, Tom; Duan, Li; Jefferson, Anne; Louis, Susan; Luke, Kim; Major, Kevin; Smoker, Sandy; Walker, Sarah; Yacobozzi, Margaret
2011-09-01
Although atherosclerotic plaque in the carotid and coronary arteries is accepted as a cause of ischemia, vertebral artery ostium (VAO) atherosclerotic plaque is not widely recognized as a source of ischemic stroke. We seek to demonstrate its implication in some posterior circulation ischemia. This is a nonrandomized, prospective, single-center registry on consecutive patients presenting with posterior circulation ischemia who underwent VAO stenting for significant atherosclerotic stenosis. Diagnostic evaluation and imaging studies determined the likelihood of this lesion as the symptom source (highly likely, probable, or highly unlikely). Patients were divided into 4 groups in decreasing order of severity of clinical presentation (ischemic stroke, TIA then stroke, TIA, asymptomatic), which were compared with the morphological and hemodynamic characteristics of the VAO plaque. Clinical follow-up 1 year after stenting assessed symptom recurrence. One hundred fourteen patients underwent stenting of 127 lesions; 35% of the lesions were highly likely the source of symptoms, 53% were probable, and 12% were highly unlikely. Clinical presentation correlated directly with plaque irregularity and presence of clot at the VAO, as did bilateral lesions and presence of tandem lesions. Symptom recurrence at 1 year was 2%. Thirty-five percent of the lesions were highly likely the source of the symptoms. A direct relationship between some morphological/hemodynamic characteristics and the severity of clinical presentation was also found. Finally, patients had a very low rate of symptom recurrence after treatment. These 3 observations point strongly to VAO plaque as a potential source of some posterior circulation stroke.
Effect of posterior cruciate ligament rupture on the radial displacement of lateral meniscus.
Lei, Pengfei; Sun, Rongxin; Hu, Yihe; Li, Kanghua; Liao, Zhan
2015-06-01
The relationship between lateral meniscus tears and posterior cruciate ligament injury is not well understood. The present study aims to investigate and assess the effect of posterior cruciate ligament rupture on lateral meniscus radial displacement at different flexion angles under static loading conditions. Twelve fresh human cadaveric knee specimens were divided into four groups (posterior cruciate ligament intact, anterolateral band rupture, posteromedial band rupture, and posterior cruciate ligament complete rupture), according to the purpose and order of testing. Radial displacement of the lateral meniscus was measured under different loads (200-1000N) at 0°, 30°, 60°, and 90° of knee flexion. Compared with the posterior cruciate ligament intact group, the displacement values of the lateral meniscus in the anterolateral band rupture group increased at 0° flexion with 600N, 800N, and 1000N and at 30°, 60° and 90° flexion under all loading conditions. The posteromedial band rupture group exhibited higher displacement at 0° flexion under all loading conditions, at 30° and 60° flexion with 600N, 800N and 1000N, and at 90° flexion with 400N, 600N, 800N, and 1000N than the posterior cruciate ligament intact group. The posterior cruciate ligament complete rupture group had a higher displacement value of the lateral meniscus at 0°, 30°, 60° and 90° flexion under all loading conditions, as compared to the posterior cruciate ligament intact group. The study concludes that partial and complete rupture of the posterior cruciate ligament can trigger an increase in radial displacement of the lateral meniscus. Copyright © 2015 Elsevier Ltd. All rights reserved.
A Bayesian Approach to Person Fit Analysis in Item Response Theory Models. Research Report.
ERIC Educational Resources Information Center
Glas, Cees A. W.; Meijer, Rob R.
A Bayesian approach to the evaluation of person fit in item response theory (IRT) models is presented. In a posterior predictive check, the observed value on a discrepancy variable is positioned in its posterior distribution. In a Bayesian framework, a Markov Chain Monte Carlo procedure can be used to generate samples of the posterior distribution…
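A minimal sketch of a posterior predictive check in a conjugate normal model, with the sample standard deviation as the discrepancy variable; the data and the deliberately misspecified model are illustrative, not the person-fit statistics of the report.

```python
import numpy as np

rng = np.random.default_rng(4)
y = rng.normal(1.0, 1.5, size=50)   # observed data (illustrative)
n = len(y)

# Model under check: y_i ~ N(theta, 1) with a flat prior on theta,
# giving the posterior theta | y ~ N(ybar, 1/n).  The model's unit
# variance is deliberately wrong for these data.
theta_draws = rng.normal(y.mean(), np.sqrt(1.0 / n), size=5000)

# Discrepancy variable: the sample standard deviation.
T_obs = y.std(ddof=1)
T_rep = np.array([rng.normal(t, 1.0, n).std(ddof=1) for t in theta_draws])

# The posterior predictive p-value positions the observed discrepancy
# within its posterior predictive distribution.
ppp = np.mean(T_rep >= T_obs)
print(f"posterior predictive p-value: {ppp:.3f}")  # near 0 flags misfit
```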
Shetty, Gautam M; Mullaji, Arun; Bhayde, Sagar
2012-10-01
This prospective study aimed to evaluate radiographically, change in joint line and femoral condylar offset with the optimized gap balancing technique in computer-assisted, primary, cruciate-substituting total knee arthroplasties (TKAs). One hundred and twenty-nine consecutive computer-assisted TKAs were evaluated radiographically using pre- and postoperative full-length standing hip-to-ankle, antero-posterior and lateral radiographs to assess change in knee deformity, joint line height and posterior condylar offset. In 49% of knees, there was a net decrease (mean 2.2mm, range 0.2-8.4mm) in joint line height postoperatively whereas 46.5% of knees had a net increase in joint line height (mean 2.5mm, range 0.2-11.2mm). In 93% of the knees, joint line was restored to within ± 5 mm of preoperative values. In 53% of knees, there was a net increase (mean 2.9 mm, range 0.2-12 mm) in posterior offset postoperatively whereas 40% of knees had a net decrease in posterior offset (mean 4.2mm, range 0.6-20mm). In 82% of knees, the posterior offset was restored within ± 5 mm of preoperative values. Based on radiographic evaluation in extension and at 30° flexion, the current study clearly demonstrates that joint line and posterior femoral condylar offset can be restored in the majority of computer-assisted, cruciate-substituting TKAs to within 5mm of their preoperative value. The optimized gap balancing feature of the computer software allows the surgeon to simulate the effect of simultaneously adjusting femoral component size, position and distal femoral resection level on joint line and posterior femoral offset. Copyright © 2011 Elsevier B.V. All rights reserved.
Bayesian Parameter Inference and Model Selection by Population Annealing in Systems Biology
Murakami, Yohei
2014-01-01
Parameter inference and model selection are very important for mathematical modeling in systems biology. Bayesian statistics can be used to conduct both parameter inference and model selection. In particular, the framework named approximate Bayesian computation is often used for parameter inference and model selection in systems biology. However, Monte Carlo methods need to be used to compute Bayesian posterior distributions. In addition, the posterior distributions of parameters are sometimes almost uniform or very similar to their prior distributions. In such cases, it is difficult to choose one specific value of a parameter with high credibility as the representative value of the distribution. To overcome these problems, we introduced one of the population Monte Carlo algorithms, population annealing. Although population annealing is usually used in statistical mechanics, we showed that population annealing can be used to compute Bayesian posterior distributions in the approximate Bayesian computation framework. To deal with the non-identifiability of the representative values of parameters, we proposed running the simulations with the parameter ensemble sampled from the posterior distribution, named the “posterior parameter ensemble”. We showed that population annealing is an efficient and convenient algorithm for generating a posterior parameter ensemble. We also showed that simulations with the posterior parameter ensemble can not only reproduce the data used for parameter inference, but also capture and predict data that were not used for parameter inference. Lastly, we introduced the marginal likelihood in the approximate Bayesian computation framework for Bayesian model selection. We showed that population annealing enables us to compute the marginal likelihood in the approximate Bayesian computation framework and conduct model selection depending on the Bayes factor. PMID:25089832
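For orientation, here is a plain rejection-ABC sketch on a toy model; population annealing replaces this brute-force rejection with an annealed population of walkers, but the accepted draws play the same role as the posterior parameter ensemble described above. All model details below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

def simulate(theta, n=100):
    """Toy stand-in for a systems-biology model: data ~ N(theta, 1)."""
    return rng.normal(theta, 1.0, n)

data = simulate(2.5)                       # "experimental" data
summary = lambda x: np.array([x.mean(), x.std(ddof=1)])
s_obs = summary(data)

# Rejection ABC: draw from the prior, keep draws whose simulated
# summaries land within eps of the observed summaries.
eps = 0.15
prior_draws = rng.uniform(0.0, 5.0, 50000)
ensemble = np.array([t for t in prior_draws
                     if np.linalg.norm(summary(simulate(t)) - s_obs) < eps])

# The accepted draws form a posterior parameter ensemble: simulations
# are rerun with all of them rather than one representative value.
print(f"accepted {ensemble.size} draws; "
      f"posterior mean {ensemble.mean():.2f} +/- {ensemble.std():.2f}")
```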
Marzo, John M; Gurske-DePerio, Jennifer
2009-01-01
Avulsion of the posterior horn attachment of the medial meniscus can compromise load-bearing ability, produce meniscus extrusion, and result in tibiofemoral joint-space narrowing, articular cartilage damage, and osteoarthritis. Avulsion of the posterior horn of the medial meniscus will increase peak contact pressure and decrease contact area in the medial compartment of the knee, and posterior horn repair will restore contact area and peak contact pressures to values of the control knee. Controlled laboratory study. Eight fresh-frozen human cadaveric knees had tibiofemoral peak contact pressures and contact area measured in the control state. The posterior horn of the medial meniscus was avulsed from its insertion and knees were retested. The meniscal avulsion was repaired by suture through a transosseous tunnel and the knees were tested a third time. Avulsion of the posterior horn attachment of the medial meniscus resulted in a significant increase in medial joint peak contact pressure (from 3841 kPa to 5084 kPa) and a significant decrease in contact area (from 594 mm² to 474 mm²). Repair of the avulsion resulted in restoration of the loading profiles to values equal to the control knee, with values of 3551 kPa for peak pressure and 592 mm² for contact area. Posterior horn medial meniscal root avulsion leads to deleterious alteration of the loading profiles of the medial joint compartment and results in loss of hoop stress resistance, meniscus extrusion, abnormal loading of the joint, and early knee medial-compartment degenerative changes. The repair technique described restores the ability of the medial meniscus to absorb hoop stress and eliminate joint-space narrowing, possibly decreasing the risk of degenerative disease.
Altered states of consciousness in epilepsy: a DTI study of the brain.
Xie, Fangfang; Xing, Wu; Wang, Xiaoyi; Liao, Weihua; Shi, Wei
2017-08-01
A disturbance in the level of consciousness is a classical clinical sign of several seizure types. Recent studies have shown that altered states of consciousness in seizures are associated with structural and functional changes of several brain regions. Prominent among these are the thalamus, the brain stem and the default mode network, which is part of the consciousness system. Our study used diffusion tensor imaging (DTI) to evaluate these brain regions in patients with three different types of epilepsies that are associated with altered consciousness: complex partial seizures (CPS), primary generalized tonic-clonic seizures (PGTCS) or secondary generalized tonic-clonic seizures (SGTCS). Additionally, this study further explores the probable mechanisms underlying impairment of consciousness in seizures. Conventional MRI and DTI scanning were performed in 51 patients with epilepsy and 51 healthy volunteers. The epilepsy group was in turn subdivided into three subgroups (CPS, PGTCS and SGTCS), each comprising 17 patients. Each subject involved in the study underwent a DTI evaluation of the brain to measure the apparent diffusion coefficient (ADC) and fractional anisotropy (FA) values of nine regions of interest: the postero-superior portion of the midbrain, the bilateral dorsal thalamus, the bilateral precuneus/posterior cingulate, the bilateral medial pre-frontal gyri and the bilateral supramarginal gyri. The statistical significance of the measured ADC and FA values between the experimental and control groups was analysed using the paired t-test, and one-way analysis of variance was performed for a comparative analysis between the three subgroups. Statistically significantly higher ADC values (p < 0.01) were observed in the bilateral dorsal thalamus and the postero-superior aspect of the midbrain in the three patient subgroups than in the control group. There were no significant changes in the ADC values (p > 0.05) in the bilateral precuneus/posterior cingulate, bilateral medial pre-frontal gyri or bilateral supramarginal gyri in the experimental group. Across the three patient subgroups, there were no statistically significant differences in the ADC values of the corresponding brain regions. Statistically significantly lower FA values (p < 0.05) were observed in the bilateral dorsal thalamus of the patients in the three subgroups than in the control group. Significantly lower FA values in the postero-superior aspect of the midbrain (p < 0.01) were observed in patients with PGTCS compared with the control group. There were no significant changes in the FA values (p > 0.05) in the bilateral precuneus/posterior cingulate, bilateral medial frontal gyri or bilateral supramarginal gyri in the experimental group. Across the three patient subgroups, there were no statistically significant differences in the FA values of the corresponding brain regions. In epileptic patients with CPS, PGTCS or SGTCS, there seems to be a long-lasting neuronal dysfunction of the bilateral dorsal thalamus and the postero-superior aspect of the midbrain. The thalamus and upper brain stem are likely to play a key role in epileptic patients with impaired consciousness.
Eom, Youngsub; Ryu, Dongok; Kim, Dae Wook; Yang, Seul Ki; Song, Jong Suk; Kim, Sug-Whan; Kim, Hyo Myung
2016-10-01
To evaluate the toric intraocular lens (IOL) calculation considering posterior corneal astigmatism, incision-induced posterior corneal astigmatism, and effective lens position (ELP). Two thousand samples of corneal parameters with keratometric astigmatism ≥ 1.0 D were obtained using bootstrap methods. The probability distributions for incision-induced keratometric and posterior corneal astigmatisms, as well as ELP, were estimated from the literature review. The predicted residual astigmatism error using method D with an IOL add power calculator (IAPC) was compared with those derived using methods A, B, and C through Monte Carlo simulation. Method A considered keratometric astigmatism and incision-induced keratometric astigmatism; method B considered posterior corneal astigmatism in addition to method A; method C considered incision-induced posterior corneal astigmatism in addition to method B; and method D considered ELP in addition to method C. To verify the IAPC used in this study, the predicted toric IOL cylinder power and its axis derived with the IAPC were compared with ray-tracing simulation results. The median magnitude of the predicted residual astigmatism error using method D (0.25 diopters [D]) was smaller than that derived using methods A (0.42 D), B (0.38 D), and C (0.28 D), respectively. Linear regression analysis indicated that the predicted toric IOL cylinder power and its axis showed excellent goodness-of-fit between the IAPC and the ray-tracing simulation. The IAPC is a simple but accurate method for predicting the toric IOL cylinder power and its axis considering posterior corneal astigmatism, incision-induced posterior corneal astigmatism, and ELP.
Influence of Averaging Preprocessing on Image Analysis with a Markov Random Field Model
NASA Astrophysics Data System (ADS)
Sakamoto, Hirotaka; Nakanishi-Ohno, Yoshinori; Okada, Masato
2018-02-01
This paper describes our investigations into the influence of averaging preprocessing on the performance of image analysis. Averaging preprocessing involves a trade-off: image averaging is often undertaken to reduce noise while the number of image data available for image analysis is decreased. We formulated a process of generating image data by using a Markov random field (MRF) model to achieve image analysis tasks such as image restoration and hyper-parameter estimation by a Bayesian approach. According to the notions of Bayesian inference, posterior distributions were analyzed to evaluate the influence of averaging. There are three main results. First, we found that the performance of image restoration with a predetermined value for hyper-parameters is invariant regardless of whether averaging is conducted. We then found that the performance of hyper-parameter estimation deteriorates due to averaging. Our analysis of the negative logarithm of the posterior probability, which is called the free energy based on an analogy with statistical mechanics, indicated that the confidence of hyper-parameter estimation remains higher without averaging. Finally, we found that when the hyper-parameters are estimated from the data, the performance of image restoration worsens as averaging is undertaken. We conclude that averaging adversely influences the performance of image analysis through hyper-parameter estimation.
Exploring the Energy Landscapes of Protein Folding Simulations with Bayesian Computation
Burkoff, Nikolas S.; Várnai, Csilla; Wells, Stephen A.; Wild, David L.
2012-01-01
Nested sampling is a Bayesian sampling technique developed to explore probability distributions localized in an exponentially small area of the parameter space. The algorithm provides both posterior samples and an estimate of the evidence (marginal likelihood) of the model. The nested sampling algorithm also provides an efficient way to calculate free energies and the expectation value of thermodynamic observables at any temperature, through a simple post processing of the output. Previous applications of the algorithm have yielded large efficiency gains over other sampling techniques, including parallel tempering. In this article, we describe a parallel implementation of the nested sampling algorithm and its application to the problem of protein folding in a Gō-like force field of empirical potentials that were designed to stabilize secondary structure elements in room-temperature simulations. We demonstrate the method by conducting folding simulations on a number of small proteins that are commonly used for testing protein-folding procedures. A topological analysis of the posterior samples is performed to produce energy landscape charts, which give a high-level description of the potential energy surface for the protein folding simulations. These charts provide qualitative insights into both the folding process and the nature of the model and force field used. PMID:22385859
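A minimal nested sampling sketch on a toy two-dimensional problem, using naive rejection to draw from the likelihood-constrained prior (production implementations, including the one described above, use MCMC moves seeded from the live points). The final live-point contribution to the evidence is ignored for brevity; all problem details are assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
sigma = 0.2

def log_like(x):
    """Toy 2-D Gaussian likelihood centered in the unit square."""
    return -0.5 * np.sum((x - 0.5) ** 2) / sigma ** 2

def sample_prior():
    return rng.uniform(0.0, 1.0, 2)        # uniform prior on [0, 1]^2

n_live, n_iter = 100, 500
live = np.array([sample_prior() for _ in range(n_live)])
logL = np.array([log_like(x) for x in live])

log_Z = -np.inf                            # evidence accumulator
for i in range(n_iter):
    worst = np.argmin(logL)
    # Prior volume shrinks as X_i ~ exp(-i / n_live); the discarded
    # point carries weight w_i = X_{i-1} - X_i.
    log_w = np.log(np.exp(-i / n_live) - np.exp(-(i + 1) / n_live))
    log_Z = np.logaddexp(log_Z, logL[worst] + log_w)
    # Replace the worst live point by a prior draw obeying the hard
    # constraint L > L_worst (naive rejection; MCMC moves in practice).
    while True:
        x = sample_prior()
        if log_like(x) > logL[worst]:
            live[worst], logL[worst] = x, log_like(x)
            break

print(f"log-evidence estimate: {log_Z:.2f}")   # analytic value ~ log(2*pi*sigma^2)
```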
Deep convolutional networks for automated detection of posterior-element fractures on spine CT
NASA Astrophysics Data System (ADS)
Roth, Holger R.; Wang, Yinong; Yao, Jianhua; Lu, Le; Burns, Joseph E.; Summers, Ronald M.
2016-03-01
Injuries of the spine, and its posterior elements in particular, are a common occurrence in trauma patients, with potentially devastating consequences. Computer-aided detection (CADe) could assist in the detection and classification of spine fractures. Furthermore, CADe could help assess the stability and chronicity of fractures, as well as facilitate research into optimization of treatment paradigms. In this work, we apply deep convolutional networks (ConvNets) for the automated detection of posterior-element fractures of the spine. First, the vertebral bodies of the spine with their posterior elements are segmented in spine CT using multi-atlas label fusion. Then, edge maps of the posterior elements are computed. These edge maps serve as candidate regions for predicting a set of probabilities for fractures along the image edges using ConvNets in a 2.5D fashion (three orthogonal patches in the axial, coronal and sagittal planes). We explore three different methods for training the ConvNet using 2.5D patches along the edge maps of 'positive' (i.e., fractured) and 'negative' (i.e., non-fractured) posterior elements. An experienced radiologist retrospectively marked the locations of 55 displaced posterior-element fractures in 18 trauma patients. We randomly split the data into training and testing cases. In testing, we achieve an area under the curve of 0.857. This corresponds to 71% or 81% sensitivity at 5 or 10 false positives per patient, respectively. Analysis of our set of trauma patients demonstrates the feasibility of detecting posterior-element fractures in spine CT images using computer vision techniques such as deep convolutional networks.
NASA Astrophysics Data System (ADS)
Sun, Jianbao; Shen, Zheng-Kang; Bürgmann, Roland; Wang, Min; Chen, Lichun; Xu, Xiwei
2013-08-01
We develop a three-step maximum a posteriori probability method for coseismic rupture inversion, which aims at maximizing the a posteriori probability density function (PDF) of elastic deformation solutions of earthquake rupture. The method originates from the fully Bayesian inversion and mixed linear-nonlinear Bayesian inversion methods and shares the same posterior PDF with them, while overcoming difficulties with convergence when large numbers of low-quality data are used and greatly improving the convergence rate using optimization procedures. A highly efficient global optimization algorithm, adaptive simulated annealing, is used to search for the maximum of the posterior PDF ("mode" in statistics) in the first step. The second step approaches the "true" solution further using the Monte Carlo inversion technique with positivity constraints, with all parameters obtained from the first step as the initial solution. Then slip artifacts are eliminated from the slip models in the third step using the same procedure as the second step, with fixed fault geometry parameters. We first design a fault model with 45° dip angle and oblique slip, and produce corresponding synthetic interferometric synthetic aperture radar (InSAR) data sets to validate the reliability and efficiency of the new method. We then apply this method to InSAR data inversion for the coseismic slip distribution of the 14 April 2010 Mw 6.9 Yushu, China earthquake. Our preferred slip model is composed of three segments with most of the slip occurring within 15 km depth and the maximum slip reaching 1.38 m at the surface. The seismic moment released is estimated to be 2.32e+19 Nm, consistent with the seismic estimate of 2.50e+19 Nm.
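A hedged sketch of the first step: a global annealing search for the posterior mode, here using scipy's dual_annealing as a stand-in for the adaptive simulated annealing the authors employ, on a synthetic linear rupture model with positivity imposed through the search bounds.

```python
import numpy as np
from scipy.optimize import dual_annealing

rng = np.random.default_rng(7)

# Synthetic linear "rupture" model: d = G m + noise, slip m >= 0
# enforced through the search bounds rather than a truncated prior.
G = rng.normal(size=(30, 5))
m_true = np.array([0.9, 1.38, 0.4, 0.1, 0.7])
d = G @ m_true + 0.05 * rng.standard_normal(30)

def neg_log_posterior(m):
    """Negative log posterior (flat prior inside the bounds)."""
    r = G @ m - d
    return 0.5 * np.sum(r ** 2) / 0.05 ** 2

# Step 1: global search for the posterior mode by simulated annealing.
bounds = [(0.0, 2.0)] * 5
result = dual_annealing(neg_log_posterior, bounds, seed=1, maxiter=300)
print("posterior mode:", np.round(result.x, 2))
# Steps 2 and 3 of the paper refine the mode by Monte Carlo sampling
# with positivity constraints; they are omitted in this sketch.
```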
HELP: XID+, the probabilistic de-blender for Herschel SPIRE maps
NASA Astrophysics Data System (ADS)
Hurley, P. D.; Oliver, S.; Betancourt, M.; Clarke, C.; Cowley, W. I.; Duivenvoorden, S.; Farrah, D.; Griffin, M.; Lacey, C.; Le Floc'h, E.; Papadopoulos, A.; Sargent, M.; Scudder, J. M.; Vaccari, M.; Valtchanov, I.; Wang, L.
2017-01-01
We have developed a new prior-based source extraction tool, XID+, to carry out photometry in the Herschel SPIRE (Spectral and Photometric Imaging Receiver) maps at the positions of known sources. XID+ is developed using a probabilistic Bayesian framework that provides a natural framework in which to include prior information, and uses the Bayesian inference tool Stan to obtain the full posterior probability distribution on flux estimates. In this paper, we discuss the details of XID+ and demonstrate its basic capabilities and performance by running it on simulated SPIRE maps resembling the COSMOS field, and comparing it to the current prior-based source extraction tool DESPHOT. Not only do we show that XID+ performs better on metrics such as flux accuracy and flux uncertainty accuracy, but we also illustrate how obtaining the posterior probability distribution can help overcome some of the issues inherent in maximum-likelihood-based source extraction routines. We run XID+ on the COSMOS SPIRE maps from the Herschel Multi-Tiered Extragalactic Survey, using a 24-μm catalogue as a positional prior and a uniform flux prior ranging from 0.01 to 1000 mJy. We show the marginalized SPIRE colour-colour plot and the marginalized contribution to the cosmic infrared background at the SPIRE wavelengths. XID+ is a core tool arising from the Herschel Extragalactic Legacy Project (HELP), and we discuss how additional work within HELP providing prior information on fluxes can and will be utilized. The software is available at https://github.com/H-E-L-P/XID_plus. We also provide the data product for COSMOS. We believe this is the first time that the full posterior probability of galaxy photometry has been provided as a data product.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Passos de Figueiredo, Leandro, E-mail: leandrop.fgr@gmail.com; Grana, Dario; Santos, Marcio
We propose a Bayesian approach for seismic inversion to estimate acoustic impedance, porosity and lithofacies within the reservoir conditioned to post-stack seismic and well data. The link between elastic and petrophysical properties is given by a joint prior distribution for the logarithm of impedance and porosity, based on a rock-physics model. The well conditioning is performed through a background model obtained by well log interpolation. Two different approaches are presented: in the first approach, the prior is defined by a single Gaussian distribution, whereas in the second approach it is defined by a Gaussian mixture to represent the multimodal distribution of the well data and link the Gaussian components to different geological lithofacies. The forward model is based on a linearized convolutional model. For the single Gaussian case, we obtain an analytical expression for the posterior distribution, resulting in a fast algorithm to compute the solution of the inverse problem, i.e. the posterior distribution of acoustic impedance and porosity as well as the facies probability given the observed data. For the Gaussian mixture prior, it is not possible to obtain the distributions analytically, hence we propose a Gibbs algorithm to perform the posterior sampling and obtain several reservoir model realizations, allowing an uncertainty analysis of the estimated properties and lithofacies. Both methodologies are applied to a real seismic dataset with three wells to obtain 3D models of acoustic impedance, porosity and lithofacies. The methodologies are validated through a blind well test and compared to a standard Bayesian inversion approach. Using the probability of the reservoir lithofacies, we also compute a 3D isosurface probability model of the main oil reservoir in the studied field.
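The single-Gaussian case admits the closed-form posterior the authors exploit. Below is a minimal sketch of that conjugate linear-Gaussian update, with a random stand-in for the linearized convolutional operator (not a true wavelet convolution) and assumed prior and noise covariances.

```python
import numpy as np

rng = np.random.default_rng(8)

# Linear forward model d = G m + noise, where m stacks log-impedance
# and porosity; G is a random stand-in for the convolutional operator.
n_d, n_m = 60, 40
G = rng.normal(size=(n_d, n_m)) / np.sqrt(n_m)
mu_prior = np.zeros(n_m)
idx = np.arange(n_m)
C_prior = np.exp(-np.abs(idx[:, None] - idx[None, :]) / 5.0)  # smooth prior
C_noise = 0.01 * np.eye(n_d)

m_true = np.linalg.cholesky(C_prior) @ rng.standard_normal(n_m)
d = G @ m_true + rng.multivariate_normal(np.zeros(n_d), C_noise)

# Analytical Gaussian posterior:
#   C_post  = (G' Cn^-1 G + Cp^-1)^-1
#   mu_post = C_post (G' Cn^-1 d + Cp^-1 mu_prior)
Cn_inv = np.linalg.inv(C_noise)
Cp_inv = np.linalg.inv(C_prior)
C_post = np.linalg.inv(G.T @ Cn_inv @ G + Cp_inv)
mu_post = C_post @ (G.T @ Cn_inv @ d + Cp_inv @ mu_prior)

print("posterior rms error :", np.sqrt(np.mean((mu_post - m_true) ** 2)).round(3))
print("mean posterior stdev:", np.sqrt(np.diag(C_post)).mean().round(3))
```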
NASA Astrophysics Data System (ADS)
Qi, Wei; Liu, Junguo; Yang, Hong; Sweetapple, Chris
2018-03-01
Global precipitation products are very important datasets in flow simulations, especially in poorly gauged regions. Uncertainties resulting from precipitation products, hydrological models and their combinations vary with time and data magnitude, and undermine their application to flow simulations. However, previous studies have not quantified these uncertainties individually and explicitly. This study developed an ensemble-based dynamic Bayesian averaging approach (e-Bay) for deterministic discharge simulations using multiple global precipitation products and hydrological models. In this approach, the joint probability of precipitation products and hydrological models being correct is quantified based on uncertainties in maximum and mean estimation, posterior probability is quantified as functions of the magnitude and timing of discharges, and the law of total probability is implemented to calculate expected discharges. Six global fine-resolution precipitation products and two hydrological models of different complexities are included in an illustrative application. e-Bay can effectively quantify uncertainties and therefore generate better deterministic discharges than traditional approaches (weighted average methods with equal and varying weights and maximum likelihood approach). The mean Nash-Sutcliffe Efficiency values of e-Bay are up to 0.97 and 0.85 in training and validation periods respectively, which are at least 0.06 and 0.13 higher than traditional approaches. In addition, with increased training data, assessment criteria values of e-Bay show smaller fluctuations than traditional approaches and its performance becomes outstanding. The proposed e-Bay approach bridges the gap between global precipitation products and their pragmatic applications to discharge simulations, and is beneficial to water resources management in ungauged or poorly gauged regions across the world.
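A generic Bayesian-model-averaging step conveys the flavor of combining product-model pairs via the law of total probability; the actual e-Bay approach additionally conditions the weights on discharge magnitude and timing. The data below are simulated placeholders.

```python
import numpy as np

rng = np.random.default_rng(9)

# Simulated discharges from four hypothetical product-model pairs
# (rows) over 30 time steps, plus observations for training.
q_sim = np.array([rng.normal(10.0 + k, 1.0, 30) for k in range(4)])
q_obs = rng.normal(11.0, 1.0, 30)

# Posterior weight of each pair: uniform prior times a Gaussian
# likelihood of the training residuals.
log_like = -0.5 * np.sum((q_sim - q_obs) ** 2, axis=1)
w = np.exp(log_like - log_like.max())
w /= w.sum()

# Law of total probability: the expected discharge is the
# weight-averaged simulation across all pairs.
q_expected = w @ q_sim
print("weights:", np.round(w, 3))
print("expected discharge, first 5 steps:", np.round(q_expected[:5], 2))
```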
A shift in anterior–posterior positional information underlies the fin-to-limb evolution
Onimaru, Koh; Kuraku, Shigehiro; Takagi, Wataru; Hyodo, Susumu; Sharpe, James; Tanaka, Mikiko
2015-01-01
The pectoral fins of ancestral fishes had multiple proximal elements connected to their pectoral girdles. During the fin-to-limb transition, anterior proximal elements were lost and only the most posterior one remained as the humerus. Thus, we hypothesised that an evolutionary alteration occurred in the anterior–posterior (AP) patterning system of limb buds. In this study, we examined the pectoral fin development of catshark (Scyliorhinus canicula) and revealed that the AP positional values in fin buds are shifted more posteriorly than mouse limb buds. Furthermore, examination of Gli3 function and regulation shows that catshark fins lack a specific AP patterning mechanism, which restricts its expression to an anterior domain in tetrapods. Finally, experimental perturbation of AP patterning in catshark fin buds results in an expansion of posterior values and loss of anterior skeletal elements. Together, these results suggest that a key genetic event of the fin-to-limb transformation was alteration of the AP patterning network. DOI: http://dx.doi.org/10.7554/eLife.07048.001 PMID:26283004
Scheimpflug imaged corneal changes on anterior and posterior surfaces after collagen cross-linking
Hassan, Ziad; Modis, Laszlo; Szalai, Eszter; Berta, Andras; Nemeth, Gabor
2014-01-01
AIM To compare the anterior and posterior corneal parameters before and after collagen cross-linking therapy for keratoconus. METHODS Collagen cross-linking was performed in 31 eyes of 31 keratoconus patients (mean age 30.6±8.9y). Prior to treatment and an average of 7mo after therapy, Scheimpflug analysis was performed using the Pentacam HR. In addition to corneal thickness assessments, corneal radius, elevation, and aberrometric measurements were performed on both the anterior and posterior corneal surfaces. Data obtained before and after surgery were statistically analyzed. RESULTS In terms of horizontal and vertical corneal radii and central corneal thickness, no deviations were observed an average of 7mo after the operation. Corneal higher-order aberrations showed no difference on either the anterior or the posterior corneal surface. During the follow-up period, no significant deviation was detected in the elevation values measured in mm units within the 3.0-8.0 mm zones. CONCLUSION Corneal stabilization was observed in terms of anterior and posterior corneal surfaces, elevation, and higher-order aberration values 7mo after collagen cross-linking therapy for keratoconus. PMID:24790876
NASA Astrophysics Data System (ADS)
Liu, Y.; Pau, G. S. H.; Finsterle, S.
2015-12-01
Parameter inversion involves inferring model parameter values from sparse observations of some observables. To infer the posterior probability distributions of the parameters, Markov chain Monte Carlo (MCMC) methods are typically used. However, the large number of forward simulations needed and limited computational resources limit the complexity of the hydrological model we can use in these methods. In view of this, we studied the implicit sampling (IS) method, an efficient importance sampling technique that generates samples in the high-probability region of the posterior distribution and thus reduces the number of forward simulations that we need to run. For a pilot-point inversion of a heterogeneous permeability field based on a synthetic ponded infiltration experiment simulated with TOUGH2 (a subsurface modeling code), we showed that IS with a linear map provides an accurate Bayesian description of the parameterized permeability field at the pilot points with just approximately 500 forward simulations. We further studied the use of surrogate models to improve the computational efficiency of parameter inversion. We implemented two reduced-order models (ROMs) for the TOUGH2 forward model. One is based on polynomial chaos expansion (PCE), whose coefficients are obtained using the sparse Bayesian learning technique to mitigate the "curse of dimensionality" of the PCE terms. The other model is Gaussian process regression (GPR), for which different covariance, likelihood and inference models are considered. Preliminary results indicate that ROMs constructed based on the prior parameter space perform poorly. It is thus impractical to replace the hydrological model by a ROM directly in an MCMC method. However, the IS method can work with a ROM constructed for parameters in the close vicinity of the maximum a posteriori probability (MAP) estimate. We will discuss the accuracy and computational efficiency of using ROMs in the implicit sampling procedure for the hydrological problem considered. This work was supported, in part, by the U.S. Dept. of Energy under Contract No. DE-AC02-05CH11231
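A one-parameter illustration of implicit sampling with a linear map, under assumed toy definitions: locate the MAP of F = -log(posterior), sample the local Gaussian defined by the Hessian at the MAP, and attach importance weights that correct for the mismatch between F and its quadratic expansion.

```python
import numpy as np

rng = np.random.default_rng(10)

# Toy non-Gaussian negative log posterior F; its minimizer is the MAP.
F = lambda th: 0.5 * (th - 1.0) ** 2 + 0.1 * th ** 4

# Locate the MAP on a grid and get the Hessian by finite differences.
grid = np.linspace(-3.0, 3.0, 20001)
th_map = grid[np.argmin(F(grid))]
h = 1e-4
H = (F(th_map + h) - 2.0 * F(th_map) + F(th_map - h)) / h ** 2

# Linear map: theta = MAP + xi / sqrt(H) with xi ~ N(0, 1), i.e. the
# samples come from the local Gaussian approximation of the posterior.
xi = rng.standard_normal(20000)
theta = th_map + xi / np.sqrt(H)

# Importance weights correct for the difference between F and its
# quadratic expansion, so the weighted samples target the posterior.
log_w = -(F(theta) - F(th_map)) + 0.5 * xi ** 2
w = np.exp(log_w - log_w.max())
w /= w.sum()
print("posterior mean estimate:", np.sum(w * theta).round(3))
```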
Value of Information spreadsheet
Trainor-Guitton, Whitney
2014-05-12
This spreadsheet represents the information posteriors derived from synthetic magnetotelluric (MT) data. These were used to calculate the value of information of MT for geothermal exploration. Information posteriors describe how well MT was able to locate the "throat" of clay caps, which are indicative of hidden geothermal resources. These data are fully explained in the peer-reviewed publication: Trainor-Guitton, W., Hoversten, G. M., Ramirez, A., Roberts, J., Júlíusson, E., Key, K., Mellors, R. (Sept-Oct. 2014) The value of spatial information for determining well placement: a geothermal example, Geophysics.
The effect of business improvement districts on the incidence of violent crimes.
MacDonald, John; Golinelli, Daniela; Stokes, Robert J; Bluthenthal, Ricky
2010-10-01
To examine whether business improvement districts (BID) contributed to greater than expected declines in the incidence of violent crimes in affected neighbourhoods. A Bayesian hierarchical model was used to assess the changes in the incidence of violent crimes between 1994 and 2005 and the implementation of 30 BID in Los Angeles neighbourhoods. The implementation of BID was associated with a 12% reduction in the incidence of robbery (95% posterior probability interval -2 to 24) and an 8% reduction in the total incidence of violent crimes (95% posterior probability interval -5 to 21). The strength of the effect of BID on robbery crimes varied by location. These findings indicate that the implementation of BID can reduce the incidence of violent crimes likely to result in injury to individuals. The findings also indicate that the establishment of a BID by itself is not a panacea, and highlight the importance of targeting BID efforts to crime prevention interventions that reduce violence exposure associated with criminal behaviours.
Evolution of antero‐posterior patterning of the limb: Insights from the chick
2017-01-01
The developing limbs of chicken embryos have served as pioneering models for understanding pattern formation for over a century. The ease with which chick wing and leg buds can be experimentally manipulated, while the embryo is still in the egg, has resulted in the discovery of important developmental organisers, and subsequently, the signals that they produce. Sonic hedgehog (Shh) is produced by mesenchyme cells of the polarizing region at the posterior margin of the limb bud and specifies positional values across the antero‐posterior axis (the axis running from the thumb to the little finger). Detailed experimental embryology has revealed the fundamental parameters required to specify antero‐posterior positional values in response to Shh signaling in chick wing and leg buds. In this review, the evolution of the avian wing and leg will be discussed in the broad context of tetrapod paleontology, and more specifically, ancestral theropod dinosaur paleontology. How the parameters that dictate antero‐posterior patterning could have been modulated to produce the avian wing and leg digit patterns will be considered. Finally, broader speculations will be made regarding what the antero‐posterior patterning of chick limbs can tell us about the evolution of other digit patterns, including those that were found in the limbs of the earliest tetrapods. PMID:28734068
Multinomial mixture model with heterogeneous classification probabilities
Holland, M.D.; Gray, B.R.
2011-01-01
Royle and Link (Ecology 86(9):2505-2512, 2005) proposed an analytical method that allowed estimation of multinomial distribution parameters and classification probabilities from categorical data measured with error. While useful, we demonstrate algebraically and by simulations that this method yields biased multinomial parameter estimates when the probabilities of correct category classifications vary among sampling units. We address this shortcoming by treating these probabilities as logit-normal random variables within a Bayesian framework. We use Markov chain Monte Carlo to compute Bayes estimates from a simulated sample from the posterior distribution. Based on simulations, this elaborated Royle-Link model yields nearly unbiased estimates of multinomial parameters and correct classification probabilities when the classification probabilities are allowed to vary according to the normal distribution on the logit scale or according to the Beta distribution. The method is illustrated using categorical submersed aquatic vegetation data. © 2010 Springer Science+Business Media, LLC.
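A toy simulation in the spirit of the bias the authors analyze (not their algebra; all numbers below are illustrative) shows how heterogeneity hurts a naive correction: when unit-specific correct-classification probabilities are logit-normal, plugging the logit-scale mean into the standard misclassification correction gives a biased estimate of the true proportion.

```python
import numpy as np

rng = np.random.default_rng(42)

def expit(x):
    return 1.0 / (1.0 + np.exp(-x))

true_p = 0.3            # true proportion of category A (two categories, illustrative)
mu, sigma = 1.5, 1.0    # logit-scale mean/sd of the correct-classification probability
n_units, n_per_unit = 200, 50

observed = []
for _ in range(n_units):
    theta = expit(rng.normal(mu, sigma))    # unit-specific P(correct classification)
    truth = rng.random(n_per_unit) < true_p
    correct = rng.random(n_per_unit) < theta
    obs = np.where(correct, truth, ~truth)  # misclassified observations flip category
    observed.append(obs.mean())

# Naive correction that ignores the variance of theta across units:
theta_bar = expit(mu)
p_naive = (np.mean(observed) + theta_bar - 1) / (2 * theta_bar - 1)
print(f"true p = {true_p:.3f}, naive estimate = {p_naive:.3f}")  # visibly biased
```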
NASA Astrophysics Data System (ADS)
Wang, C.; Rubin, Y.
2014-12-01
The spatial distribution of the compression modulus Es, an important geotechnical parameter, contributes considerably to understanding the underlying geological processes and to adequately assessing the mechanical effects of Es on the differential settlement of large continuous structure foundations. Such analyses should be derived using an assimilation approach that combines in-situ static cone penetration tests (CPT) with borehole experiments. To achieve this, the Es distribution of a silty clay stratum in region A of the China Expo Center (Shanghai) is studied using the Bayesian-maximum entropy method. This method rigorously and efficiently integrates geotechnical investigations of different precisions and multiple sources of uncertainty. Individual CPT soundings were modeled as rational probability density curves by maximum entropy theory. A spatial prior multivariate probability density function (PDF) and a likelihood PDF for the CPT positions were built from the borehole experiments and the potential value of the prediction point; then, by numerical integration over the CPT probability density curves, the posterior probability density curve of the prediction point was calculated within the Bayesian reverse interpolation framework. The results were compared between Gaussian sequential stochastic simulation and the Bayesian method. The differences between single CPT soundings modeled with a normal distribution and with simulated probability density curves based on maximum entropy theory were also discussed. It is shown that the study of Es spatial distributions can be improved by properly incorporating CPT sampling variation into the interpolation process, and that more informative estimates are generated by accounting for CPT uncertainty at the estimation points. The calculations illustrate the significance of stochastic Es characterization in a stratum and identify limitations of inadequate geostatistical interpolation techniques. These characterization results provide a multi-precision information assimilation method for other geotechnical parameters.
Ensemble Kalman filtering in presence of inequality constraints
NASA Astrophysics Data System (ADS)
van Leeuwen, P. J.
2009-04-01
Kalman filtering in the presence of constraints is an active area of research. Given the Gaussian assumption for the probability-density functions, it looks hard to bring extra constraints into the formalism. On the other hand, in geophysical systems we often encounter constraints related to, e.g., the underlying physics or chemistry, which are violated by the Gaussian assumption. For instance, concentrations are always non-negative, model layers have non-negative thickness, and sea-ice concentration is between 0 and 1. Several methods to bring inequality constraints into the Kalman-filter formalism have been proposed. One of them is probability density function (pdf) truncation, in which the Gaussian mass from the non-allowed part of the variable range is simply distributed equally over the allowed part, as proposed by Shimada et al. 1998. A problem with this method, however, is that the probability that, e.g., the sea-ice concentration is exactly zero is itself zero. The new method proposed here does not have this drawback. It assumes that the probability-density function is a truncated Gaussian, but the truncated mass is not distributed equally over all allowed values of the variables; instead it is put into a delta distribution at the truncation point. This delta distribution can easily be handled within Bayes' theorem, leading to posterior probability density functions that are also truncated Gaussians with delta distributions at the truncation location. In this way a much better representation of the system is obtained, while still keeping most of the benefits of the Kalman-filter formalism. The full Kalman filter is prohibitively expensive in large-scale systems, but efficient implementation is possible in ensemble variants of the Kalman filter. Applications to low-dimensional systems and large-scale systems will be discussed.
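A minimal one-dimensional sketch of this truncation-plus-delta update, on a grid rather than with an ensemble, might look as follows; the forecast and observation numbers are made up for illustration.

```python
import numpy as np
from math import erf, sqrt

# Gaussian prior mass below zero is collapsed into a delta at zero, then
# Bayes' rule is applied with a Gaussian observation likelihood.
x = np.linspace(0.0, 5.0, 2001)
dx = x[1] - x[0]

def gauss(z, mu, sig):
    return np.exp(-0.5 * ((z - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))

mu_f, sig_f = 0.5, 1.0                       # Gaussian forecast violating x >= 0
prior_cont = gauss(x, mu_f, sig_f)           # continuous part on x >= 0
prior_delta = 0.5 * (1 + erf((0 - mu_f) / (sig_f * sqrt(2))))  # mass put at x = 0

y_obs, sig_o = 0.2, 0.3                      # observation and its error
like = gauss(x, y_obs, sig_o)

post_cont = prior_cont * like                # Bayes' rule, pointwise
post_delta = prior_delta * like[0]           # delta at 0 weighted by likelihood there
norm = post_cont.sum() * dx + post_delta
post_cont /= norm
post_delta /= norm

print(f"posterior P(x = 0) = {post_delta:.3f}")   # nonzero, unlike pdf truncation
print(f"posterior mean     = {(x * post_cont).sum() * dx:.3f}")
```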
McClelland, James L.
2013-01-01
This article seeks to establish a rapprochement between explicitly Bayesian models of contextual effects in perception and neural network models of such effects, particularly the connectionist interactive activation (IA) model of perception. The article is in part an historical review and in part a tutorial, reviewing the probabilistic Bayesian approach to understanding perception and how it may be shaped by context, and also reviewing ideas about how such probabilistic computations may be carried out in neural networks, focusing on the role of context in interactive neural networks, in which both bottom-up and top-down signals affect the interpretation of sensory inputs. It is pointed out that connectionist units that use the logistic or softmax activation functions can exactly compute Bayesian posterior probabilities when the bias terms and connection weights affecting such units are set to the logarithms of appropriate probabilistic quantities. Bayesian concepts such as the prior, likelihood, (joint and marginal) posterior, probability matching and maximizing, and calculating vs. sampling from the posterior are all reviewed and linked to neural network computations. Probabilistic and neural network models are explicitly linked to the concept of a probabilistic generative model that describes the relationship between the underlying target of perception (e.g., the word intended by a speaker or other source of sensory stimuli) and the sensory input that reaches the perceiver for use in inferring the underlying target. It is shown how a new version of the IA model called the multinomial interactive activation (MIA) model can sample correctly from the joint posterior of a proposed generative model for perception of letters in words, indicating that interactive processing is fully consistent with principled probabilistic computation. Ways in which these computations might be realized in real neural systems are also considered. PMID:23970868
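The claim that logistic/softmax units can compute exact posteriors is easy to verify numerically. In the sketch below (probabilities invented for illustration), a softmax over log-prior biases plus log-likelihood inputs reproduces Bayes' rule exactly.

```python
import numpy as np

prior = np.array([0.7, 0.2, 0.1])          # P(hypothesis)
like = np.array([0.05, 0.40, 0.30])        # P(data | hypothesis)

# Direct Bayes' rule:
posterior = prior * like / np.sum(prior * like)

# Softmax unit: bias = log prior, input = log likelihood.
logits = np.log(prior) + np.log(like)
softmax = np.exp(logits) / np.sum(np.exp(logits))

print(posterior)   # identical to...
print(softmax)     # ...the softmax output
```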
Progression of Brain Network Alterations in Cerebral Amyloid Angiopathy.
Reijmer, Yael D; Fotiadis, Panagiotis; Riley, Grace A; Xiong, Li; Charidimou, Andreas; Boulouis, Gregoire; Ayres, Alison M; Schwab, Kristin; Rosand, Jonathan; Gurol, M Edip; Viswanathan, Anand; Greenberg, Steven M
2016-10-01
We recently showed that cerebral amyloid angiopathy (CAA) is associated with functionally relevant brain network impairments, in particular affecting posterior white matter connections. Here we examined how these brain network impairments progress over time. Thirty-three patients with probable CAA underwent multimodal brain magnetic resonance imaging at 2 time points (mean follow-up time: 1.3±0.4 years). Brain networks of the hemisphere free of intracerebral hemorrhages were reconstructed using fiber tractography and graph theory. The global efficiency of the network and mean fractional anisotropies of posterior-posterior, frontal-frontal, and posterior-frontal network connections were calculated. Patients with moderate versus severe CAA were defined based on microbleed count, dichotomized at the median (median=35). Global efficiency of the intracerebral hemorrhage-free hemispheric network declined from baseline to follow-up (-0.008±0.003; P=0.029). The decline in global efficiency was most pronounced for patients with severe CAA (group×time interaction P=0.03). The decline in global network efficiency was associated with worse executive functioning (β=0.46; P=0.03). Examination of subgroups of network connections revealed a decline in fractional anisotropies of posterior-posterior connections at both levels of CAA severity (-0.006±0.002; P=0.017; group×time interaction P=0.16). The fractional anisotropies of posterior-frontal and frontal-frontal connections declined in patients with severe but not moderate CAA (group×time interaction P=0.007 and P=0.005). Associations were independent of change in white matter hyperintensity volume. Brain network impairment in patients with CAA worsens measurably over just 1.3 years of follow-up and seems to progress from posterior to frontal connections with increasing disease severity. © 2016 American Heart Association, Inc.
Flores-Gutiérrez, Enrique O; Díaz, José-Luis; Barrios, Fernando A; Guevara, Miguel Angel; Del Río-Portilla, Yolanda; Corsi-Cabrera, María
2009-01-01
Potential sex differences in EEG coherent activity during pleasant and unpleasant musical emotions were investigated. Musical excerpts by Mahler, Bach, and Prodromidès were played to seven men and seven women and their subjective emotions were evaluated in relation to alpha band intracortical coherence. Different brain links in specific frequencies were associated to pleasant and unpleasant emotions. Pleasant emotions (Mahler, Bach) increased upper alpha couplings linking left anterior and posterior regions. Unpleasant emotions (Prodromidès) were sustained by posterior midline coherence exclusively in the right hemisphere in men and bilateral in women. Combined music induced bilateral oscillations among posterior sensory and predominantly left association areas in women. Consistent with their greater positive attributions to music, the coherent network is larger in women, both for musical emotion and for unspecific musical effects. Musical emotion entails specific coupling among cortical regions and involves coherent upper alpha activity between posterior association areas and frontal regions probably mediating emotional and perceptual integration. Linked regions by combined music suggest more working memory contribution in women and attention in men.
[Diagnostic value of MRI for posterior root tear of medial and lateral meniscus].
Qian, Yue-Nan; Liu, Fang; Dong, Yi-Long; Cai, Chun-Yuan
2018-03-25
To explore the diagnostic value of MRI for posterior root tears of the medial and lateral meniscus. From January 2012 to January 2016, the clinical data of 43 patients with meniscal posterior root tear confirmed by arthroscopy were retrospectively analyzed, including 25 males and 18 females, aged from 27 to 69 years old with an average age of (42.5±8.3) years; 27 cases were on the right side and 16 on the left side. The MRI examinations of the 43 patients with posterior meniscus root tears confirmed by knee arthroscopy were retrospectively reviewed. MRI images were scored retrospectively and independently by two imaging physicians in a double-blinded fashion. The sensitivity, specificity and accuracy of MRI diagnosis of lateral and medial meniscus posterior root tears were calculated, and knee ligament injury and meniscal dislocation were recorded. Forty-three of 143 patients were diagnosed with meniscus posterior root tears by arthroscopy, including 19 patients with lateral tears and 24 patients with medial tears. The sensitivity, specificity and accuracy in the diagnosis of posterior medial meniscus root tears were 91.67%, 86.6% and 83.9% for doctor A, and 87.5%, 87.4% and 87.4% for doctor B; 19 patients had medial meniscal protrusion and 2 patients had an anterior cruciate ligament tear. The sensitivity, specificity and accuracy in the diagnosis of posterior lateral meniscus root tears were 73.7%, 79.9% and 79% for doctor A, and 78.9%, 82.3% and 82.5% for doctor B; 4 patients had lateral meniscus herniation and 16 patients had a cruciate ligament tear. Kappa statistics for posterior medial and posterior lateral meniscus root tears were 0.84 and 0.72. MRI can effectively demonstrate the imaging features of medial and lateral meniscal root tears and their accompanying signs. It can provide a basis for clinicians' preoperative diagnosis and is worthy of wider use. Copyright© 2018 by the China Journal of Orthopaedics and Traumatology Press.
Keratoconus: The ABCD Grading System.
Belin, M W; Duncan, J K
2016-06-01
To propose a new keratoconus classification/staging system that utilises current tomographic data and better reflects the anatomical and functional changes seen in keratoconus. A previously published normative database was reanalysed to generate both anterior and posterior average radii of curvature (ARC and PRC) taken from a 3.0 mm optical zone centred on the thinnest point of the cornea. Mean and standard deviations were recorded and anterior data were compared to the existing Amsler-Krumeich (AK) Classification. ARC, PRC, thinnest pachymetry and distance visual acuity were then used to construct a keratoconus classification. 672 eyes of 336 patients were analysed. Anterior and posterior values were 7.65 ± 0.236 mm and 6.26 ± 0.214 mm, respectively, and thinnest pachymetry values were 534.2 ± 30.36 µm. The ARC values were 2.63, 5.47 and 6.44 standard deviations from the mean values of stages 1-3 in the AK classification, respectively. PRC staging uses the same standard deviation gates. The pachymetric values differed by 4.42 and 7.72 standard deviations for stages 2 and 3, respectively. A new keratoconus staging incorporates anterior and posterior curvature, thinnest pachymetric values, and distance visual acuity and consists of stages 0-4 (5 stages). The proposed system closely matches the existing AK classification stages 1-4 on anterior curvature. As it incorporates posterior curvature and thickness measurements based on the thinnest point, rather than apical measurements, the new staging system better reflects the anatomical changes seen in keratoconus. Georg Thieme Verlag KG Stuttgart · New York.
VizieR Online Data Catalog: Wide binaries in Tycho-Gaia: search method (Andrews+, 2017)
NASA Astrophysics Data System (ADS)
Andrews, J. J.; Chaname, J.; Agueros, M. A.
2017-11-01
Our catalogue of wide binaries identified in the Tycho-Gaia Astrometric Solution catalogue. The Gaia source IDs, Tycho IDs, astrometry, posterior probabilities for both the log-flat prior and power-law prior models, and angular separation are presented. (1 data file).
Stotts, Steven A; Koch, Robert A
2017-08-01
In this paper an approach is presented to estimate the constraint required to apply maximum entropy (ME) for statistical inference with underwater acoustic data from a single track segment. Previous algorithms for estimating the ME constraint require multiple source track segments to determine the constraint. The approach is relevant for addressing model mismatch effects, i.e., inaccuracies in parameter values determined from inversions because the propagation model does not account for all acoustic processes that contribute to the measured data. One effect of model mismatch is that the lowest cost inversion solution may be well outside a relatively well-known parameter value's uncertainty interval (prior), e.g., source speed from track reconstruction or towed source levels. The approach requires, for some particular parameter value, the ME constraint to produce an inferred uncertainty interval that encompasses the prior. Motivating this approach is the hypothesis that the proposed constraint determination procedure would produce a posterior probability density that accounts for the effect of model mismatch on inferred values of other inversion parameters for which the priors might be quite broad. Applications to both measured and simulated data are presented for model mismatch that produces minimum cost solutions either inside or outside some priors.
Kwon, Taejoon; Choi, Hyungwon; Vogel, Christine; Nesvizhskii, Alexey I; Marcotte, Edward M
2011-07-01
Shotgun proteomics using mass spectrometry is a powerful method for protein identification but suffers limited sensitivity in complex samples. Integrating peptide identifications from multiple database search engines is a promising strategy to increase the number of peptide identifications and reduce the volume of unassigned tandem mass spectra. Existing methods pool statistical significance scores such as p-values or posterior probabilities of peptide-spectrum matches (PSMs) from multiple search engines after high scoring peptides have been assigned to spectra, but these methods lack reliable control of identification error rates as data are integrated from different search engines. We developed a statistically coherent method for integrative analysis, termed MSblender. MSblender converts raw search scores from search engines into a probability score for every possible PSM and properly accounts for the correlation between search scores. The method reliably estimates false discovery rates and identifies more PSMs than any single search engine at the same false discovery rate. Increased identifications increment spectral counts for most proteins and allow quantification of proteins that would not have been quantified by individual search engines. We also demonstrate that enhanced quantification contributes to improve sensitivity in differential expression analyses.
Kwon, Taejoon; Choi, Hyungwon; Vogel, Christine; Nesvizhskii, Alexey I.; Marcotte, Edward M.
2011-01-01
Shotgun proteomics using mass spectrometry is a powerful method for protein identification but suffers limited sensitivity in complex samples. Integrating peptide identifications from multiple database search engines is a promising strategy to increase the number of peptide identifications and reduce the volume of unassigned tandem mass spectra. Existing methods pool statistical significance scores such as p-values or posterior probabilities of peptide-spectrum matches (PSMs) from multiple search engines after high scoring peptides have been assigned to spectra, but these methods lack reliable control of identification error rates as data are integrated from different search engines. We developed a statistically coherent method for integrative analysis, termed MSblender. MSblender converts raw search scores from search engines into a probability score for all possible PSMs and properly accounts for the correlation between search scores. The method reliably estimates false discovery rates and identifies more PSMs than any single search engine at the same false discovery rate. Increased identifications increment spectral counts for all detected proteins and allow quantification of proteins that would not have been quantified by individual search engines. We also demonstrate that enhanced quantification contributes to improve sensitivity in differential expression analyses. PMID:21488652
Bayesian adaptive phase II screening design for combination trials.
Cai, Chunyan; Yuan, Ying; Johnson, Valen E
2013-01-01
Trials of combination therapies for the treatment of cancer are playing an increasingly important role in the battle against this disease. To more efficiently handle the large number of combination therapies that must be tested, we propose a novel Bayesian phase II adaptive screening design to simultaneously select among possible treatment combinations involving multiple agents. Our design is based on formulating the selection procedure as a Bayesian hypothesis testing problem in which the superiority of each treatment combination is equated to a single hypothesis. During trial conduct, we use the current values of the posterior probabilities of all hypotheses to adaptively allocate patients to treatment combinations. Simulation studies show that the proposed design substantially outperforms the conventional multiarm balanced factorial trial design, yielding a significantly higher probability of selecting the best treatment while allocating substantially more patients to efficacious treatments. The proposed design is most appropriate for trials that combine multiple agents and screen for the efficacious combination to be investigated further. In our simulations, the design allocated more patients to better treatments while providing higher power to identify the best treatment at the end of the trial.
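A stripped-down sketch of this kind of posterior-probability-driven allocation is shown below, using a Beta-Bernoulli response model per combination and simulated response rates; the paper's hypothesis-testing formulation is more elaborate, and all numbers here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

true_rates = [0.20, 0.35, 0.50]            # unknown response rates (simulated)
succ = np.zeros(3)
fail = np.zeros(3)

for patient in range(120):
    # Monte Carlo estimate of P(arm k has the highest response rate | data)
    draws = rng.beta(1 + succ, 1 + fail, size=(2000, 3))
    p_best = np.bincount(draws.argmax(axis=1), minlength=3) / 2000
    arm = rng.choice(3, p=p_best)          # adaptive allocation
    response = rng.random() < true_rates[arm]
    succ[arm] += response
    fail[arm] += 1 - response

print("patients per arm:", (succ + fail).astype(int))   # skews toward better arms
print("posterior P(best):", np.round(p_best, 2))
```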
Confidence level estimation in multi-target classification problems
NASA Astrophysics Data System (ADS)
Chang, Shi; Isaacs, Jason; Fu, Bo; Shin, Jaejeong; Zhu, Pingping; Ferrari, Silvia
2018-04-01
This paper presents an approach for estimating the confidence level in automatic multi-target classification performed by an imaging sensor on an unmanned vehicle. An automatic target recognition algorithm comprised of a deep convolutional neural network in series with a support vector machine classifier detects and classifies targets based on the image matrix. The joint posterior probability mass function of target class, features, and classification estimates is learned from labeled data, and recursively updated as additional images become available. Based on the learned joint probability mass function, the approach presented in this paper predicts the expected confidence level of future target classifications, prior to obtaining new images. The proposed approach is tested with a set of simulated sonar image data. The numerical results show that the estimated confidence level provides a close approximation to the actual confidence level value determined a posteriori, i.e. after the new image is obtained by the on-board sensor. Therefore, the expected confidence level function presented in this paper can be used to adaptively plan the path of the unmanned vehicle so as to optimize the expected confidence levels and ensure that all targets are classified with satisfactory confidence after the path is executed.
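The expected-confidence computation described above can be illustrated with a small discrete example: learn a joint pmf of (true class, classifier output) from labeled counts (hypothetical numbers below), then average the maximum posterior probability over outputs that have not yet been observed.

```python
import numpy as np

# Hypothetical labeled counts: rows = true class, cols = classifier output.
counts = np.array([[80, 15,  5],
                   [10, 70, 20],
                   [ 5, 10, 85]], dtype=float)
joint = counts / counts.sum()                 # learned joint pmf
p_output = joint.sum(axis=0)                  # marginal over classifier outputs
p_true_given_output = joint / p_output        # posterior per output column

# Confidence of a classification = max posterior probability given the output;
# expected confidence averages this over outputs before new images arrive.
conf_per_output = p_true_given_output.max(axis=0)
expected_confidence = p_output @ conf_per_output
print(f"expected confidence of the next classification: {expected_confidence:.3f}")
```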
Takahashi, Kenji; Hashimoto, Sanshiro; Nakamura, Hiroshi; Mori, Atsushi; Sato, Akiko; Majima, Tokifumi; Takai, Shinro
2015-06-01
This study aimed to identify factors on routine pulse sequence MRI associated with cartilage degeneration observed on T1ρ relaxation mapping. This study included 137 subjects with knee pain. T1ρ values were measured in the regions of interest on the surface layer of the cartilage on mid-coronal images of the femorotibial joint. Assessment of cartilage, subchondral bone, meniscus and ligaments was performed using routine pulse sequence MRI. Radiographic evaluation for osteoarthritis was also performed. Multiple regression analysis revealed posterior root/horn tears to be independent factors increasing the T1ρ values of the cartilage in the medial compartment of the femorotibial joint. Even when adjusted for radiographically defined early-stage osteoarthritis, medial posterior meniscal radial tears significantly increased the T1ρ values. This study showed that posterior root/horn radial tears in the medial meniscus are particularly important MRI findings associated with cartilage degeneration observed on T1ρ relaxation mapping. Morphological factors of the medial meniscus on MRI provide findings useful for screening early-stage osteoarthritis. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Chen, Qiuhong; Zheng, Yu; Li, Ye; Zeng, Ying; Kuang, Jianchao; Hou, Shixiang; Li, Xiaohui
2012-05-01
The aim of the present work was to evaluate the effect of deacetylated gellan gum on delivering a hydrophilic drug to the posterior segment of the eye. An aesculin-containing in situ gel based on deacetylated gellan gum (AG) was prepared and characterized. In vitro corneal permeation of aesculin across isolated rabbit cornea was compared between AG and an aesculin solution (AS). The results showed that deacetylated gellan gum promotes corneal penetration of aesculin. Pharmacokinetics and ocular tissue distribution of aesculin after topical administration in the rabbit eye showed that AG greatly improved aesculin accumulation in posterior segments relative to AS, which was probably attributable to the conjunctival/scleral pathway. The areas-under-the-curve (AUC) for AG in aqueous humor, choroid-retina, sclera and iris-ciliary body were significantly larger than those of AS. AG can be used as a potential carrier for broadening the application of aesculin.
CKS knee prosthesis: biomechanics and clinical results in 42 cases.
Martucci, E; Verni, E; Del Prete, G; Stulberg, S D
1996-01-01
From 1991 to 1993 a total of 42 CKS prostheses were implanted for the following reasons: osteoarthrosis (34 cases), rheumatoid arthritis (7 cases) and tibial necrosis (1 case). At follow-up after 17 to 41 months the results were excellent or good in 41 cases; the only poor result was probably related to excessive tension of the posterior cruciate ligament. 94% of the patients reported complete regression of pain, and 85% were capable of going up and down stairs without support. Mean joint flexion was 105 degrees. Radiologically, the anatomical axis of the knee had a mean valgus of 6 degrees. The prosthetic components were always cemented. The posterior cruciate ligament was removed in 7 knees, in which the "posterior stability" version of the prosthesis was used. The patella was never prosthetized. One patient complained of peri-patellar pain two months after surgery, which then regressed completely.
NASA Astrophysics Data System (ADS)
Krumholz, Mark R.; Adamo, Angela; Fumagalli, Michele; Wofford, Aida; Calzetti, Daniela; Lee, Janice C.; Whitmore, Bradley C.; Bright, Stacey N.; Grasha, Kathryn; Gouliermis, Dimitrios A.; Kim, Hwihyun; Nair, Preethi; Ryon, Jenna E.; Smith, Linda J.; Thilker, David; Ubeda, Leonardo; Zackrisson, Erik
2015-10-01
We investigate a novel Bayesian analysis method, based on the Stochastically Lighting Up Galaxies (slug) code, to derive the masses, ages, and extinctions of star clusters from integrated light photometry. Unlike many analysis methods, slug correctly accounts for incomplete initial mass function (IMF) sampling, and returns full posterior probability distributions rather than simply probability maxima. We apply our technique to 621 visually confirmed clusters in two nearby galaxies, NGC 628 and NGC 7793, that are part of the Legacy Extragalactic UV Survey (LEGUS). LEGUS provides Hubble Space Telescope photometry in the NUV, U, B, V, and I bands. We analyze the sensitivity of the derived cluster properties to choices of prior probability distribution, evolutionary tracks, IMF, metallicity, treatment of nebular emission, and extinction curve. We find that slug's results for individual clusters are insensitive to most of these choices, but that the posterior probability distributions we derive are often quite broad, and sometimes multi-peaked and quite sensitive to the choice of priors. In contrast, the properties of the cluster population as a whole are relatively robust against all of these choices. We also compare our results from slug to those derived with a conventional non-stochastic fitting code, Yggdrasil. We show that slug's stochastic models are generally a better fit to the observations than the deterministic ones used by Yggdrasil. However, the overall properties of the cluster populations recovered by both codes are qualitatively similar.
Serfling, Robert; Ogola, Gerald
2016-02-10
Among men, prostate cancer (CaP) is the most common newly diagnosed cancer and the second leading cause of death from cancer. A major issue of very large scale is avoiding both over-treatment and under-treatment of CaP cases. The central challenge is deciding clinical significance or insignificance when the CaP biopsy results are positive but only marginally so. A related concern is deciding how to increase the number of biopsy cores for larger prostates. As a foundation for improved choice of number of cores and improved interpretation of biopsy results, we develop a probability model for the number of positive cores found in a biopsy, given the total number of cores, the volumes of the tumor nodules, and - very importantly - the prostate volume. Also, three applications are carried out: guidelines for the number of cores as a function of prostate volume, decision rules for insignificant versus significant CaP using number of positive cores, and, using prior distributions on total tumor size, Bayesian posterior probabilities for insignificant CaP and posterior median CaP. The model-based results have generality of application, take prostate volume into account, and provide attractive tradeoffs of specificity versus sensitivity. Copyright © 2015 John Wiley & Sons, Ltd.
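A drastically simplified version of such a model is sketched below: each core independently hits tumor with a probability tied to the tumor-to-prostate volume ratio. The `hit_factor` geometry constant and all volumes are hypothetical; the published model treats nodule geometry more carefully.

```python
import numpy as np
from scipy.stats import binom

def p_positive_core(tumor_vol_cc, prostate_vol_cc, hit_factor=3.0):
    # hit_factor is a hypothetical constant absorbing core length vs nodule size.
    return min(1.0, hit_factor * tumor_vol_cc / prostate_vol_cc)

n_cores = 12
p = p_positive_core(tumor_vol_cc=0.5, prostate_vol_cc=40.0)
k = np.arange(n_cores + 1)
pmf = binom.pmf(k, n_cores, p)                 # distribution of positive-core count
print(f"P(core positive)       = {p:.3f}")
print(f"P(>= 2 positive cores) = {pmf[2:].sum():.3f}")
```

The same structure makes the role of prostate volume explicit: for a fixed tumor size, a larger prostate lowers the per-core hit probability, which is why core number and cutoffs should scale with volume.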
Robust Bayesian Experimental Design for Conceptual Model Discrimination
NASA Astrophysics Data System (ADS)
Pham, H. V.; Tsai, F. T. C.
2015-12-01
A robust Bayesian optimal experimental design under uncertainty is presented to provide firm information for model discrimination with the fewest pumping and observation wells. Firm information is the maximum information about a system that can be guaranteed from an experimental design. The design is based on the Box-Hill expected entropy decrease (EED) between model probabilities before and after the experiment, within the Bayesian model averaging (BMA) framework. A max-min program is introduced to choose the robust design that maximizes the minimal Box-Hill EED, subject to the constraint that the highest expected posterior model probability satisfies a desired probability threshold. The EED is calculated by Gauss-Hermite quadrature. The BMA method is used to predict future observations and to quantify future observation uncertainty arising from conceptual and parametric uncertainties when calculating the EED. A Monte Carlo approach is adopted to quantify the uncertainty in the posterior model probabilities. The optimal experimental design is tested on a synthetic 5-layer anisotropic confined aquifer. Nine conceptual groundwater models are constructed to reflect uncertain geological architecture and boundary conditions. High-performance computing is used to enumerate all possible design solutions in order to identify the most plausible groundwater model. Results highlight the impacts of scedasticity in future observation data, as well as of uncertainty sources, on potential pumping and observation locations.
Monteiro, Emiliano C; Tamaki, Fábio K; Terra, Walter R; Ribeiro, Alberto F
2014-03-01
This work presents a detailed morphofunctional study of the digestive system of a phasmid representative, Cladomorphus phyllinus. Cells from the anterior midgut exhibit merocrine secretion, whereas posterior midgut cells show microapocrine secretion. A complex system of midgut tubules is observed in the posterior midgut, which is probably related to the luminal alkalization of this region. Injection of amaranth dye into the haemolymph and oral feeding of dye to the insects indicated that the anterior midgut is water-absorbing, whereas the Malpighian tubules are the main site of water secretion. Thus, a putative counter-current flux of fluid from the posterior to the anterior midgut may propel digestive enzyme recycling, consistent with the observed low rate of enzyme excretion. The foregut and anterior midgut present an acidic pH (5.3 and 5.6, respectively), whereas the posterior midgut is highly alkaline (9.1), which may be related to the digestion of hemicelluloses. Most amylase, trypsin and chymotrypsin activities occur in the foregut and anterior midgut. Maltase is found along the midgut associated with the microvillar glycocalyx, while aminopeptidase occurs in the middle and posterior midgut in membrane-bound forms. Both amylase and trypsin are secreted mainly by the anterior midgut through an exocytic process, as revealed by immunocytochemical data. Copyright © 2013 Elsevier Ltd. All rights reserved.
Spalled, aerodynamically modified moldavite from Slavice, Moravia, Czechoslovakia
Chao, E.C.T.
1964-01-01
A Czechoslovakian tektite, or moldavite, shows clear, indirect evidence of aerodynamic ablation. This large tektite has the shape of a teardrop, with a strongly convex, deeply corroded, but clearly identifiable front and a planoconvex, relatively smooth posterior surface. In spite of much erosion and corrosion, the demarcation between the posterior and anterior parts of the specimen (the keel) is clearly preserved locally. This specimen provides the first tangible evidence that moldavites entered the atmosphere cold, probably at a velocity exceeding 5 kilometers per second; the result was selective heating of the anterior face and perhaps ablation during the second melting. This provides evidence of the extraterrestrial origin of moldavites.
Some Simple Formulas for Posterior Convergence Rates
2014-01-01
We derive some simple relations that demonstrate how the posterior convergence rate is related to two driving factors: a “penalized divergence” of the prior, which measures the ability of the prior distribution to propose a nonnegligible set of working models to approximate the true model and a “norm complexity” of the prior, which measures the complexity of the prior support, weighted by the prior probability masses. These formulas are explicit and involve no essential assumptions and are easy to apply. We apply this approach to the case with model averaging and derive some useful oracle inequalities that can optimize the performance adaptively without knowing the true model. PMID:27379278
A new prior for bayesian anomaly detection: application to biosurveillance.
Shen, Y; Cooper, G F
2010-01-01
Bayesian anomaly detection computes posterior probabilities of anomalous events by combining prior beliefs and evidence from data. However, the specification of prior probabilities can be challenging. This paper describes a Bayesian prior in the context of disease outbreak detection. The goal is to provide a meaningful, easy-to-use prior that yields a posterior probability of an outbreak that performs at least as well as a standard frequentist approach. If this goal is achieved, the resulting posterior could be usefully incorporated into a decision analysis about how to act in light of a possible disease outbreak. This paper describes a Bayesian method for anomaly detection that combines learning from data with a semi-informative prior probability over patterns of anomalous events. A univariate version of the algorithm is presented here for ease of illustration of the essential ideas. The paper describes the algorithm in the context of disease-outbreak detection, but it is general and can be used in other anomaly detection applications. For this application, the semi-informative prior specifies that an increased count over baseline is expected for the variable being monitored, such as the number of respiratory chief complaints per day at a given emergency department. The semi-informative prior is derived from the baseline prior, which is estimated using historical data. The evaluation reported here used semi-synthetic data to assess the detection performance of the proposed Bayesian method and a control chart method, which is a standard frequentist algorithm that is closest to the Bayesian method in terms of the type of data it uses. The disease-outbreak detection performance of the Bayesian method was statistically significantly better than that of the control chart method when proper baseline periods were used to estimate the baseline behavior to avoid seasonal effects. When using longer baseline periods, the Bayesian method performed as well as the control chart method. The time complexity of the Bayesian algorithm is linear in the number of the observed events being monitored, due to a novel, closed-form derivation that is introduced in the paper. This paper introduces a novel prior probability for Bayesian outbreak detection that is expressive, easy to apply, computationally efficient, and performs as well as or better than a standard frequentist method.
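For intuition, the sketch below implements the simplest Bayesian version of such monitoring: a Gamma posterior for the baseline rate of daily counts and a negative-binomial posterior predictive tail probability for today's count. The counts and the flat Gamma(1, 1) prior are illustrative; the paper's semi-informative prior over outbreak patterns is richer than this.

```python
import numpy as np
from scipy.stats import nbinom

baseline = np.array([12, 9, 14, 11, 10, 13, 12, 8, 11, 10])   # historical daily counts
a = 1.0 + baseline.sum()            # Gamma(shape a, rate b) posterior for the rate
b = 1.0 + len(baseline)

# Posterior predictive for a new day's count is negative binomial
# with n = a and p = b / (b + 1).
today = 22
tail = nbinom.sf(today - 1, a, b / (b + 1.0))
print(f"P(count >= {today} | baseline) = {tail:.4f}")   # small value => anomalous day
```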
Sonic Hedgehog Signaling in Limb Development
Tickle, Cheryll; Towers, Matthew
2017-01-01
The gene encoding the secreted protein Sonic hedgehog (Shh) is expressed in the polarizing region (or zone of polarizing activity), a small group of mesenchyme cells at the posterior margin of the vertebrate limb bud. Detailed analyses have revealed that Shh has the properties of the long sought after polarizing region morphogen that specifies positional values across the antero-posterior axis (e.g., thumb to little finger axis) of the limb. Shh has also been shown to control the width of the limb bud by stimulating mesenchyme cell proliferation and by regulating the antero-posterior length of the apical ectodermal ridge, the signaling region required for limb bud outgrowth and the laying down of structures along the proximo-distal axis (e.g., shoulder to digits axis) of the limb. It has been shown that Shh signaling can specify antero-posterior positional values in limb buds in both a concentration- (paracrine) and time-dependent (autocrine) fashion. Currently there are several models for how Shh specifies positional values over time in the limb buds of chick and mouse embryos and how this is integrated with growth. Extensive work has elucidated downstream transcriptional targets of Shh signaling. Nevertheless, it remains unclear how antero-posterior positional values are encoded and then interpreted to give the particular structure appropriate to that position, for example, the type of digit. A distant cis-regulatory enhancer controls limb-bud-specific expression of Shh and the discovery of increasing numbers of interacting transcription factors indicate complex spatiotemporal regulation. Altered Shh signaling is implicated in clinical conditions with congenital limb defects and in the evolution of the morphological diversity of vertebrate limbs. PMID:28293554
Bayesian Estimation of the DINA Model with Gibbs Sampling
ERIC Educational Resources Information Center
Culpepper, Steven Andrew
2015-01-01
A Bayesian model formulation of the deterministic inputs, noisy "and" gate (DINA) model is presented. Gibbs sampling is employed to simulate from the joint posterior distribution of item guessing and slipping parameters, subject attribute parameters, and latent class probabilities. The procedure extends concepts in Béguin and Glas,…
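As a hedged illustration of the conjugate structure such a Gibbs sampler exploits, the sketch below draws the guessing and slipping parameters from their Beta full conditionals, treating the latent mastery indicators as known for brevity; the full sampler of course also updates attributes and latent class probabilities, and all numbers here are simulated.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate one item: eta indicates mastery; correct responses arise with
# probability 1 - s for masters and g for non-masters.
eta = rng.random(500) < 0.4
g_true, s_true = 0.2, 0.1
y = np.where(eta, rng.random(500) > s_true, rng.random(500) < g_true)

g_draws, s_draws = [], []
for it in range(2000):
    # Beta full conditionals given eta (uniform Beta(1, 1) priors assumed):
    g = rng.beta(1 + y[~eta].sum(), 1 + (~y[~eta]).sum())   # correct w/o mastery
    s = rng.beta(1 + (~y[eta]).sum(), 1 + y[eta].sum())     # incorrect w/ mastery
    g_draws.append(g)
    s_draws.append(s)

print(f"posterior mean g = {np.mean(g_draws[500:]):.3f} (true {g_true})")
print(f"posterior mean s = {np.mean(s_draws[500:]):.3f} (true {s_true})")
```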
BAT - The Bayesian analysis toolkit
NASA Astrophysics Data System (ADS)
Caldwell, Allen; Kollár, Daniel; Kröninger, Kevin
2009-11-01
We describe the development of a new toolkit for data analysis. The analysis package is based on Bayes' Theorem, and is realized with the use of Markov Chain Monte Carlo. This gives access to the full posterior probability distribution. Parameter estimation, limit setting and uncertainty propagation are implemented in a straightforward manner.
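The core of such a toolkit is a Markov chain Monte Carlo sampler applied to a user-supplied log-posterior. The sketch below is a generic random-walk Metropolis implementation of that idea, not BAT's actual API.

```python
import numpy as np

def metropolis(log_post, x0, n_steps=20000, step=0.5, rng=None):
    rng = np.random.default_rng(rng)
    x, lp = np.asarray(x0, float), log_post(x0)
    chain = np.empty((n_steps, len(x0)))
    for i in range(n_steps):
        prop = x + step * rng.standard_normal(len(x))
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept/reject
            x, lp = prop, lp_prop
        chain[i] = x
    return chain

# Example: posterior of a mean with a standard-normal prior and Gaussian data.
data = np.array([1.2, 0.8, 1.5, 1.1])
log_post = lambda m: -0.5 * m[0] ** 2 - 0.5 * np.sum((data - m[0]) ** 2 / 0.25)
chain = metropolis(log_post, x0=[0.0])
print(f"posterior mean = {chain[5000:].mean():.3f}")   # full posterior is in `chain`
```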
Bayesian Posterior Odds Ratios: Statistical Tools for Collaborative Evaluations
ERIC Educational Resources Information Center
Hicks, Tyler; Rodríguez-Campos, Liliana; Choi, Jeong Hoon
2018-01-01
To begin statistical analysis, Bayesians quantify their confidence in modeling hypotheses with priors. A prior describes the probability of a certain modeling hypothesis apart from the data. Bayesians should be able to defend their choice of prior to a skeptical audience. Collaboration between evaluators and stakeholders could make their choices…
NASA Astrophysics Data System (ADS)
de Wit, Ralph W. L.; Valentine, Andrew P.; Trampert, Jeannot
2013-10-01
How do body-wave traveltimes constrain the Earth's radial (1-D) seismic structure? Existing 1-D seismological models underpin 3-D seismic tomography and earthquake location algorithms. It is therefore crucial to assess the quality of such 1-D models, yet quantifying uncertainties in seismological models is challenging and thus often ignored. Ideally, quality assessment should be an integral part of the inverse method. Our aim in this study is twofold: (i) we show how to solve a general Bayesian non-linear inverse problem and quantify model uncertainties, and (ii) we investigate the constraint on spherically symmetric P-wave velocity (VP) structure provided by body-wave traveltimes from the EHB bulletin (phases Pn, P, PP and PKP). Our approach is based on artificial neural networks, which are very common in pattern recognition problems and can be used to approximate an arbitrary function. We use a Mixture Density Network to obtain 1-D marginal posterior probability density functions (pdfs), which provide a quantitative description of our knowledge on the individual Earth parameters. No linearization or model damping is required, which allows us to infer a model which is constrained purely by the data. We present 1-D marginal posterior pdfs for the 22 VP parameters and seven discontinuity depths in our model. P-wave velocities in the inner core, outer core and lower mantle are resolved well, with standard deviations of ˜0.2 to 1 per cent with respect to the mean of the posterior pdfs. The maximum likelihoods of VP are in general similar to the corresponding ak135 values, which lie within one or two standard deviations from the posterior means, thus providing an independent validation of ak135 in this part of the radial model. Conversely, the data contain little or no information on P-wave velocity in the D'' layer, the upper mantle and the homogeneous crustal layers. Further, the data do not constrain the depth of the discontinuities in our model. Using additional phases available in the ISC bulletin, such as PcP, PKKP and the converted phases SP and ScP, may enhance the resolvability of these parameters. Finally, we show how the method can be extended to obtain a posterior pdf for a multidimensional model space. This enables us to investigate correlations between model parameters.
Pala, Kanşad; Tekçe, Neslihan; Tuncer, Safa; Serim, Merve Efe; Demirci, Mustafa
2016-01-01
The objectives of this study were to evaluate the mechanical and physical properties of resin composites. The materials evaluated were the Clearfil Majesty Posterior, Filtek Z550 and G-aenial Posterior composites. A total of 189 specimens were fabricated for microhardness, roughness, gloss and color tests. The specimens were divided into three finishing and polishing systems: Enhance, OneGloss and Sof-Lex Spiral. Microhardness, roughness, gloss and color were measured after 24 h and after 10,000 thermocycles. Two samples from each group were evaluated using SEM and AFM. G-aenial Posterior exhibited the lowest microhardness values. The mean roughness ranged from 0.37 to 0.61 µm. The smoothest surfaces were obtained with Sof-Lex Spiral for each material. G-aenial Posterior with Enhance was determined to be the glossiest surfaces. All of the materials exhibited similar ΔE values ranging between 1.69 and 2.75. Sof-Lex Spiral discs could be used successfully to polish composites.
Target intersection probabilities for parallel-line and continuous-grid types of search
McCammon, R.B.
1977-01-01
The expressions for calculating the probability of intersection of hidden targets of different sizes and shapes for parallel-line and continuous-grid types of search can be formulated by using the concept of conditional probability. When the prior probability of the orientation of a hidden target is represented by a uniform distribution, the calculated posterior probabilities are identical with the results obtained by the classic methods of probability. For hidden targets of different sizes and shapes, the following generalizations about the probability of intersection can be made: (1) to a first approximation, the probability of intersection of a hidden target is proportional to the ratio of the greatest dimension of the target (viewed in plane projection) to the minimum line spacing of the search pattern; (2) the shape of the hidden target does not greatly affect the probability of intersection when the largest dimension of the target is small relative to the minimum spacing of the search pattern; (3) the probability of intersecting a target twice for a particular type of search can be used as a lower bound if there is an element of uncertainty of detection for a particular type of tool; (4) the geometry of the search pattern becomes more critical when the largest dimension of the target equals or exceeds the minimum spacing of the search pattern; (5) for elongate targets, the probability of intersection is greater for parallel-line search than for an equivalent continuous square-grid search when the largest dimension of the target is less than the minimum spacing of the search pattern, whereas the opposite is true when the largest dimension exceeds the minimum spacing; (6) the probability of intersection for nonorthogonal continuous-grid search patterns is not greatly different from that for the equivalent orthogonal continuous-grid pattern when the orientation of the target is unknown. The probability of intersection for an elliptically shaped target can be approximated by treating the ellipse as intermediate between a circle and a line. A search conducted along a continuous rectangular grid can be represented as intermediate between a search along parallel lines and one along a continuous square grid. On this basis, upper and lower bounds for the probability of intersection of an elliptically shaped target on a continuous rectangular grid can be calculated. Charts have been constructed that permit the values of these probabilities to be obtained graphically. The use of conditional probability allows the explorationist greater flexibility in considering alternate search strategies for locating hidden targets. © 1977 Plenum Publishing Corp.
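Generalization (1) is easy to check by simulation. The sketch below estimates, by Monte Carlo, the chance that parallel search lines intersect a randomly placed and oriented elliptical target, and compares it with the greatest-dimension-to-spacing ratio; the dimensions are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(7)

def p_intersect(a, b, d, n=200_000):
    # A convex target is hit iff its projected width across the lines
    # exceeds its random offset within one line spacing.
    theta = rng.uniform(0, np.pi, n)                         # random orientation
    width = 2 * np.sqrt((a * np.sin(theta)) ** 2 + (b * np.cos(theta)) ** 2)
    offset = rng.uniform(0, d, n)                            # random position
    return np.mean(offset < np.minimum(width, d))

a, b, d = 1.0, 0.25, 4.0        # ellipse semi-axes and line spacing
print(f"P(intersect)                 ~= {p_intersect(a, b, d):.3f}")
print(f"greatest dimension / spacing  = {2 * a / d:.3f}")   # first-order proxy
```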
Paleogeodesy of the Southern Santa Cruz Mountains Frontal Thrusts, Silicon Valley, CA
NASA Astrophysics Data System (ADS)
Aron, F.; Johnstone, S. A.; Mavrommatis, A. P.; Sare, R.; Hilley, G. E.
2015-12-01
We present a method to infer long-term fault slip rate distributions using topography by coupling a three-dimensional elastic boundary element model with a geomorphic incision rule. In particular, we used a 10-m-resolution digital elevation model (DEM) to calculate channel steepness (ksn) throughout the actively deforming southern Santa Cruz Mountains in Central California. We then used these values with a power-law incision rule and the Poly3D code to estimate slip rates over seismogenic, kilometer-scale thrust faults accommodating differential uplift of the relief throughout geologic time. Implicit in such an analysis is the assumption that the topographic surface remains unchanged over time as rock is uplifted by slip on the underlying structures. The fault geometries within the area are defined based on surface mapping, as well as active and passive geophysical imaging. Fault elements are assumed to be traction-free in shear (i.e., frictionless), while opening along them is prohibited. The free parameters in the inversion include the components of the remote strain-rate tensor (ɛij) and the bedrock resistance to channel incision (K), which is allowed to vary according to the mapped distribution of geologic units exposed at the surface. The nonlinear components of the geomorphic model required the use of a Markov chain Monte Carlo method, which simulated the posterior density of the components of the remote strain-rate tensor and values of K for the different mapped geologic units. Interestingly, posterior probability distributions of ɛij and K fall well within the broad range of reported values, suggesting that the joint use of elastic boundary element and geomorphic models may have utility in estimating long-term fault slip-rate distributions. Given an adequate DEM, geologic mapping, and fault models, the proposed paleogeodetic method could be applied to other crustal faults with geological and morphological expressions of long-term uplift.
Messner, Alina; Stelzeneder, David; Trattnig, Stefan; Welsch, Götz H; Schinhan, Martina; Apprich, Sebastian; Brix, Martin; Windhager, Reinhard; Trattnig, Siegfried
2017-03-01
To indicate lumbar disc herniation via magnetic resonance imaging (MRI) T2 mapping in the posterior annulus fibrosus (AF). Sagittal T2 maps of 313 lumbar discs of 64 patients with low back pain were acquired at 3.0 Tesla (3T). The discs were rated according to disc herniation and bulging. Region of interest (ROI) analysis was performed on median sagittal T2 maps. T2 values of the AF, in the most posterior 10% (PAF-10) and 20% (PAF-20) of the disc, were compared. A significant increase in the T2 values of discs with herniations affecting the imaged area, compared to bulging discs and discs with lateral herniation, was shown in the PAF-10, where no association with the nucleus pulposus (NP) was apparent. The PAF-20 exhibited a moderate correlation with the NP. High T2 values in the PAF-10 suggest the presence of disc herniation (DH). The results indicate that T2 values in the PAF-20 correspond more to changes in the NP.
Development of a Random Field Model for Gas Plume Detection in Multiple LWIR Images.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heasler, Patrick G.
This report develops a random field model that describes gas plumes in LWIR remote sensing images. The random field model serves as a prior distribution that can be combined with LWIR data to produce a posterior that determines the probability that a gas plume exists in the scene and also maps the most probable location of any plume. The random field model is intended to work with a single pixel regression estimator--a regression model that estimates gas concentration on an individual pixel basis.
Ockham's razor and Bayesian analysis. [statistical theory for systems evaluation
NASA Technical Reports Server (NTRS)
Jefferys, William H.; Berger, James O.
1992-01-01
'Ockham's razor', the ad hoc principle enjoining the greatest possible simplicity in theoretical explanations, is presently shown to be justifiable as a consequence of Bayesian inference; Bayesian analysis can, moreover, clarify the nature of the 'simplest' hypothesis consistent with the given data. By choosing the prior probabilities of hypotheses, it becomes possible to quantify the scientific judgment that simpler hypotheses are more likely to be correct. Bayesian analysis also shows that a hypothesis with fewer adjustable parameters intrinsically possesses an enhanced posterior probability, due to the clarity of its predictions.
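The mechanism behind this enhanced posterior probability can be made concrete with the standard textbook sketch of the Occam factor (a generic derivation, not necessarily the authors' exact argument):

```latex
% Marginal likelihood of a model M with one adjustable parameter \theta:
\begin{align*}
p(D \mid M) &= \int p(D \mid \theta, M)\, p(\theta \mid M)\, \mathrm{d}\theta \\
            &\approx \underbrace{p(D \mid \hat\theta, M)}_{\text{best fit}}
               \times
               \underbrace{\frac{\sigma_{\theta \mid D}}{\sigma_{\theta}}}_{\text{Occam factor}\; <\; 1}
\end{align*}
% Each extra adjustable parameter contributes another factor
% \sigma_{\theta \mid D} / \sigma_{\theta} < 1, so a simpler hypothesis that
% fits the data comparably well receives a larger marginal likelihood and
% hence a larger posterior probability.
```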
NASA Astrophysics Data System (ADS)
Yavuz, Hande; Bai, Jinbo
2018-06-01
This paper deals with dielectric barrier discharge assisted continuous plasma polypyrrole deposition on CNT-grafted carbon fibers for conductive composite applications. The simultaneous effects of three controllable factors on the electrical resistivity (ER) of these two material systems have been studied using a multivariate experimental design methodology. Posterior probabilities based on the Benjamini-Hochberg (BH) false discovery rate were explored as multiple-testing corrections of the t-test p-values. A BH significance threshold of 0.05 produced truly statistically significant coefficients to describe the ER of the two material systems. A group of plasma-modified samples was used for composite manufacturing to assess interlaminar shear properties under static loading. Transversal and longitudinal electrical resistivities (DC, ω = 0) of the composite samples were studied to compare the effects of CNT grafting and plasma modification on the ER of the resultant composites.
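For reference, the BH step-up procedure mentioned above takes only a few lines; the p-values below are made up for illustration.

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    # Step-up rule: find the largest k with p_(k) <= (k/m) * q and
    # reject the k smallest p-values.
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    thresh = q * np.arange(1, m + 1) / m
    below = p[order] <= thresh
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205]
print(benjamini_hochberg(pvals, q=0.05))   # rejects the two smallest p-values
```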
Sayers, Adrian; Ben-Shlomo, Yoav; Blom, Ashley W; Steele, Fiona
2016-01-01
Studies involving the use of probabilistic record linkage are becoming increasingly common. However, the methods underpinning probabilistic record linkage are not widely taught or understood, and therefore these studies can appear to be a ‘black box’ research tool. In this article, we aim to describe the process of probabilistic record linkage through a simple exemplar. We first introduce the concept of deterministic linkage and contrast this with probabilistic linkage. We illustrate each step of the process using a simple exemplar and describe the data structure required to perform a probabilistic linkage. We describe the process of calculating and interpreting match weights and how to convert match weights into posterior probabilities of a match using Bayes' theorem. We conclude this article with a brief discussion of some of the computational demands of record linkage, how you might assess the quality of your linkage algorithm, and how epidemiologists can maximize the value of their record-linked research using robust record linkage methods. PMID:26686842
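The weight-to-probability conversion the authors describe can be written in a few lines. This is a hedged sketch of the standard arithmetic (prior odds multiplied by 2 raised to the total log2 match weight), not code from the article; the prior probability value is an invented example.

```python
# Converting a total match weight into a posterior probability of a true match.
def posterior_match_probability(total_weight, prior_prob):
    """total_weight: sum of log2(m/u) agreement weights for a record pair;
    prior_prob: prior probability that a randomly chosen pair is a match."""
    prior_odds = prior_prob / (1.0 - prior_prob)
    posterior_odds = prior_odds * 2.0 ** total_weight  # Bayes' theorem in odds form
    return posterior_odds / (1.0 + posterior_odds)

# Example: total weight of 12 with 1 expected match per 10,000 candidate pairs.
print(posterior_match_probability(12.0, 1e-4))  # ~0.29
```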
Two new species of Lactarius associated with Alnus acuminata subsp. arguta in Mexico.
Montoya, Leticia; Bandala, Victor M; Garay, Edith
2014-01-01
In pure stands of Alnus acuminata subsp. arguta trees from Sierra Norte de Puebla (central Mexico), two undescribed ectomycorrhizal species of Lactarius were discovered. Distinction of the two new species is based on morphological characters and supported with phylogenetic analyses of the nuclear ribosomal DNA ITS region and part of the gene that encodes the second largest subunit of RNA polymerase II (rpb2). The inferred phylogenies recovered the two species in different clades strongly supported by posterior probabilities and bootstrap values. The new Lactarius species are recognized as part of the assemblage of ectomycorrhizal fungi associated with Alnus acuminata. Information about these taxa includes the morphological variation observed over 16 monitoring visits during 2010-2013. Descriptions are provided, accompanied by photographs, including SEM photomicrographs of basidiospores, and by information on differences between these species and other related taxa from Europe and the United States. © 2014 by The Mycological Society of America.
On the statistical properties of viral misinformation in online social media
NASA Astrophysics Data System (ADS)
Bessi, Alessandro
2017-03-01
The massive diffusion of online social media allows for the rapid and uncontrolled spreading of conspiracy theories, hoaxes, unsubstantiated claims, and false news. Such an impressive amount of misinformation can influence policy preferences and encourage behaviors strongly divergent from recommended practices. In this paper, we study the statistical properties of viral misinformation in online social media. By means of methods belonging to Extreme Value Theory, we show that the number of extremely viral posts over time follows a homogeneous Poisson process, and that the interarrival times between such posts are independent and identically distributed, following an exponential distribution. Moreover, we characterize the uncertainty around the rate parameter of the Poisson process through Bayesian methods. Finally, we are able to derive the predictive posterior probability distribution of the number of posts exceeding a certain threshold of shares over a finite interval of time.
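The Bayesian treatment of the Poisson rate sketched above is conjugate, so it fits in a few lines. The sketch below assumes a Gamma prior (the paper's actual prior choice is not stated here) and uses the resulting negative binomial posterior predictive; the counts are invented.

```python
# Gamma-Poisson posterior for the rate of extremely viral posts.
import numpy as np
from scipy import stats

events_per_week = np.array([3, 1, 4, 2, 0, 3, 5])   # hypothetical viral-post counts
a0, b0 = 1.0, 1.0                                   # Gamma(shape, rate) prior

a_post = a0 + events_per_week.sum()
b_post = b0 + len(events_per_week)                  # posterior: Gamma(a_post, b_post)
print("posterior mean rate:", a_post / b_post)

# Posterior predictive for next week's count: NegBin(a_post, p) with
# p = b_post / (b_post + 1) in scipy's parameterization.
p = b_post / (b_post + 1.0)
print("P(next week >= 5 posts) =", stats.nbinom.sf(4, a_post, p))
```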
Simulating large-scale crop yield by using perturbed-parameter ensemble method
NASA Astrophysics Data System (ADS)
Iizumi, T.; Yokozawa, M.; Sakurai, G.; Nishimori, M.
2010-12-01
One pressing issue for food security under a changing climate is predicting the inter-annual variation of crop production induced by climate extremes and climate variability. To secure the food supply for a growing world population, a methodology that can accurately predict crop yield on a large scale is needed. However, in developing a process-based crop model at the scale of general circulation models (GCMs), about 100 km in latitude and longitude, researchers encounter difficulties with the spatial heterogeneity of available information on crop production, such as cultivated varieties and management. This study proposed an ensemble-based simulation method that uses a process-based crop model and a systematic parameter perturbation procedure, taking maize in the U.S., China, and Brazil as examples. The crop model was developed by modifying the fundamental structure of the Soil and Water Assessment Tool (SWAT) to incorporate the effect of heat stress on yield. We called the new model PRYSBI: the Process-based Regional-scale Yield Simulator with Bayesian Inference. The posterior probability density function (PDF) of 17 parameters, which represents crop- and grid-specific features and their uncertainty given the data, was estimated by Bayesian inversion analysis. We then took 1500 ensemble members of simulated yield values, based on parameter sets sampled from the posterior PDF, to describe yearly changes in yield: the perturbed-parameter ensemble method. The ensemble median for 27 years (1980-2006) was compared with data aggregated from county yields. On a country scale, the ensemble median of the simulated yield showed good correspondence with the reported yield: the Pearson correlation coefficient is over 0.6 for all countries. On a grid scale, the correspondence remained high in most grids regardless of country. However, the model showed comparatively low reproducibility in sloping areas, such as around the Rocky Mountains in South Dakota, the Great Xing'anling Mountains in Heilongjiang, and the Brazilian Plateau. As local climate conditions vary widely in such complex terrain, the GCM grid-scale weather inputs are likely a major source of error there. The results of this study highlight the benefits of the perturbed-parameter ensemble method for simulating crop yield on a GCM grid scale: (1) the posterior PDF quantifies the uncertainty in the crop model's parameter values associated with local crop production; (2) the method explicitly accounts for this parameter uncertainty in the crop model simulations; (3) the method achieves a Monte Carlo approximation of the probability of sub-grid-scale yield, accounting for the nonlinear response of crop yield to weather and management; and (4) the method is therefore appropriate for aggregating simulated sub-grid-scale yields to a grid-scale yield, which may explain the model's high performance in capturing the inter-annual variation of yield.
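A stripped-down sketch of the perturbed-parameter ensemble idea follows; the toy crop model, parameter names, and distributions are placeholders, not PRYSBI or its actual posterior.

```python
# Perturbed-parameter ensemble: run the model under posterior parameter draws.
import numpy as np

rng = np.random.default_rng(1)

def toy_yield_model(t_mean, params):
    """Stand-in crop model: yield responds nonlinearly to mean temperature."""
    opt_t, sens = params
    return np.maximum(0.0, 10.0 - sens * (t_mean - opt_t) ** 2)

# Pretend these 1500 parameter sets were sampled from a Bayesian posterior PDF.
posterior_samples = np.column_stack([
    rng.normal(22.0, 1.0, 1500),    # optimal temperature
    rng.lognormal(-1.5, 0.3, 1500)  # heat-stress sensitivity
])

t_series = rng.normal(23.0, 1.5, size=27)  # 27 years of grid-scale weather
ensemble = np.array([toy_yield_model(t_series, p) for p in posterior_samples])
yearly_median = np.median(ensemble, axis=0)  # compared against reported yields
print(yearly_median[:5])
```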
Measurement of Posterior Tibial Slope Using Magnetic Resonance Imaging.
Karimi, Elham; Norouzian, Mohsen; Birjandinejad, Ali; Zandi, Reza; Makhmalbaf, Hadi
2017-11-01
Posterior tibial slope (PTS) is an important factor in knee joint biomechanics and one of the bone features affecting knee joint stability. Posterior tibial slope affects the flexion gap, knee joint stability, and posterior femoral rollback, which are related to the range of knee motion. During high tibial osteotomy and total knee arthroplasty (TKA) surgery, properly retaining the mechanical and anatomical axes is important. The aim of this study was to evaluate the posterior tibial slope in the medial and lateral compartments of the tibial plateau and to assess the relationship between the slope and age, gender, and other variables of the tibial plateau surface. This descriptive study was conducted on 132 healthy knees (80 males and 52 females) with a mean age of 38.26±11.45 years (range 20-60 years) at Imam Reza hospital in Mashhad, Iran. All patients selected and enrolled for MRI in this study had been admitted for knee pain with an uncertain clinical history; according to initial physical knee examinations, the study subjects were reported healthy. The mean posterior tibial slope was 7.78±2.48 degrees in the medial compartment and 6.85±2.24 degrees in the lateral compartment. No significant correlation was found between posterior tibial slope and age or gender (P≥0.05), but there were significant relationships between PTS and the mediolateral width, plateau area, and medial plateau. Comparison with other studies revealed that the PTS values in our study differ from those in other populations, which may be associated with genetic and racial factors. The results of our study are useful for PTS reconstruction in surgery.
NASA Astrophysics Data System (ADS)
Post, Hanna; Vrugt, Jasper A.; Fox, Andrew; Vereecken, Harry; Hendricks Franssen, Harrie-Jan
2017-03-01
The Community Land Model (CLM) contains many parameters whose values are uncertain and thus require careful estimation for model application at individual sites. Here we used Bayesian inference with the DiffeRential Evolution Adaptive Metropolis (DREAM(zs)) algorithm to estimate eight CLM v.4.5 ecosystem parameters using 1 year records of half-hourly net ecosystem CO2 exchange (NEE) observations at four central European sites with different plant functional types (PFTs). The posterior CLM parameter distributions of each site were estimated per individual season and on a yearly basis. These estimates were then evaluated using NEE data from an independent evaluation period and data from "nearby" FLUXNET sites at 600 km distance from the original sites. Latent variables (multipliers) were used to treat explicitly the uncertainty in the initial carbon-nitrogen pools. The posterior parameter estimates were superior to the default values in their ability to track and explain the measured NEE data of each site. The seasonal parameter values reduced the bias in the simulated NEE values by more than 50% (averaged over all sites). The most consistent performance of CLM during the evaluation period was found for the posterior parameter values of the forest PFTs, and, contrary to the C3-grass and C3-crop sites, the latent variables of the initial pools further enhanced the quality of fit. The carbon sink function of the forest PFTs significantly increased with the posterior parameter estimates. We thus conclude that land surface model predictions of carbon stocks and fluxes require careful consideration of uncertain ecological parameters and initial states.
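DREAM(zs) itself is an adaptive multi-chain sampler; the sketch below uses a plain random-walk Metropolis step instead, just to show how a parameter posterior is sampled from NEE-like observations. The one-parameter forward model, prior bounds, and noise level are all invented.

```python
# A plain random-walk Metropolis sampler (far simpler than DREAM(zs)).
import numpy as np

rng = np.random.default_rng(2)
obs = rng.normal(2.0, 0.5, size=200)         # stand-in half-hourly NEE data
model = lambda theta: theta                  # trivial forward model

def log_post(theta):
    if not (0.0 < theta < 10.0):             # uniform prior bounds
        return -np.inf
    resid = obs - model(theta)
    return -0.5 * np.sum(resid ** 2) / 0.5 ** 2

chain, theta, lp = [], 5.0, -np.inf
for _ in range(5000):
    prop = theta + rng.normal(0.0, 0.1)      # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis acceptance rule
        theta, lp = prop, lp_prop
    chain.append(theta)
print("posterior mean:", np.mean(chain[1000:]))  # discard burn-in
```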
Wermker, Kai; Lünenbürger, Henning; Joos, Ulrich; Kleinheinz, Johannes; Jung, Susanne
2014-07-01
Velopharyngeal insufficiency (VPI) can be caused by a variety of disorders. The most common cause of VPI is its association with cleft palate. The aim of this study was to evaluate the effectiveness of different surgical techniques for cleft palate patients with VPI: (1) velopharyngoplasty with an inferiorly based posterior pharyngeal flap (VPP posterior, Schönborn-Rosenthal), and (2) the combination of VPP posterior and a push-back operation (Dorrance). 41 subjects (26 females, 15 males) with VPI were analysed. Hypernasality was judged subjectively, and nasalance data were assessed objectively using the NasalView system preoperatively and 6 months postoperatively. Subjective analysis showed improved speech results regarding hypernasality for both techniques, with good results for VPP posterior and VPP posterior combined with push-back, with success rates of 94.4% and 87.7%, respectively. Objective analysis showed a statistically significant reduction of nasalance for both VPP posterior and VPP posterior combined with push-back (p < 0.01). However, there were no statistically significant differences in measured postoperative nasalance values between VPP posterior and VPP posterior combined with push-back. Based on our findings, both VPP posterior and VPP posterior combined with push-back showed good results in correcting hypernasality in cleft patients with velopharyngeal insufficiency. Copyright © 2013 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
QTL fine mapping with Bayes C(π): a simulation study.
van den Berg, Irene; Fritz, Sébastien; Boichard, Didier
2013-06-19
Accurate QTL mapping is a prerequisite in the search for causative mutations. Bayesian genomic selection models that analyse many markers simultaneously should provide more accurate QTL detection results than single-marker models. Our objectives were to (a) evaluate by simulation the influence of heritability, number of QTL and number of records on the accuracy of QTL mapping with Bayes Cπ and Bayes C; and (b) estimate the QTL status (homozygous vs. heterozygous) of the individuals analysed. This study focussed on the ten largest detected QTL, assuming they are candidates for further characterization. Our simulations were based on a real dairy cattle population genotyped for 38,277 phased markers. Some of these markers were considered biallelic QTL and used to generate corresponding phenotypes. Different numbers of records (4387 and 1500), heritability values (0.1, 0.4 and 0.7) and numbers of QTL (10, 100 and 1000) were studied. QTL detection was based on the posterior inclusion probability for individual markers, or on the sum of the posterior inclusion probabilities for consecutive markers, estimated using Bayes C or Bayes Cπ. The QTL status of the individuals was derived from the contrast between the sums of the SNP allelic effects of their chromosomal segments. The proportion of markers with null effect (π) frequently did not reach convergence, leading to poor results for Bayes Cπ in QTL detection. Fixing π led to better results. Detection of the largest QTL was most accurate for medium to high heritability, for low to moderate numbers of QTL, and with a large number of records. The QTL status was accurately inferred when the distribution of the contrast between chromosomal segment effects was bimodal. QTL detection is feasible with Bayes C. For QTL detection, it is recommended to use a large dataset and to focus on highly heritable traits and on the largest QTL. QTL statuses were inferred based on the distribution of the contrast between chromosomal segment effects.
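The windowed detection criterion (summing posterior inclusion probabilities over consecutive markers) can be sketched as follows; the MCMC indicator draws are simulated placeholders rather than output from Bayes C.

```python
# Per-marker posterior inclusion probabilities (PIPs) summed over windows.
import numpy as np

rng = np.random.default_rng(3)
n_iter, n_markers = 2000, 500
# indicator[i, j] = True if marker j had a nonzero effect at MCMC iteration i
indicator = rng.random((n_iter, n_markers)) < 0.01
indicator[:, 250] |= rng.random(n_iter) < 0.6   # plant one strong QTL

pip = indicator.mean(axis=0)                     # per-marker PIPs
window = 10                                      # consecutive-marker window
window_pip = np.convolve(pip, np.ones(window), mode="valid")  # sliding sums
print("best window starts at marker", int(np.argmax(window_pip)))
```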
Graves, Stephen; Sedrakyan, Art; Baste, Valborg; Gioe, Terence J; Namba, Robert; Martínez Cruz, Olga; Stea, Susanna; Paxton, Elizabeth; Banerjee, Samprit; Isaacs, Abby J; Robertsson, Otto
2014-12-17
Posterior-stabilized total knee prostheses were introduced to address instability secondary to loss of posterior cruciate ligament function, and they have either fixed or mobile bearings. Mobile bearings were developed to improve the function and longevity of total knee prostheses. In this study, the International Consortium of Orthopaedic Registries used a distributed health data network to study a large cohort of posterior-stabilized prostheses to determine if the outcome of a posterior-stabilized total knee prosthesis differs depending on whether it has a fixed or mobile-bearing design. Aggregated registry data were collected with a distributed health data network that was developed by the International Consortium of Orthopaedic Registries to reduce barriers to participation (e.g., security, proprietary, legal, and privacy issues) that have the potential to occur with the alternate centralized data warehouse approach. A distributed health data network is a decentralized model that allows secure storage and analysis of data from different registries. Each registry provided data on mobile and fixed-bearing posterior-stabilized prostheses implanted between 2001 and 2010. Only prostheses associated with primary total knee arthroplasties performed for the treatment of osteoarthritis were included. Prostheses with all types of fixation were included except for those with the rarely used reverse hybrid (cementless tibial and cemented femoral components) fixation. The use of patellar resurfacing was reported. The outcome of interest was time to first revision (for any reason). Multivariate meta-analysis was performed with linear mixed models with survival probability as the unit of analysis. This study includes 137,616 posterior-stabilized knee prostheses; 62% were in female patients, and 17.6% had a mobile bearing. The results of the fixed-effects model indicate that in the first year the mobile-bearing posterior-stabilized prostheses had a significantly higher hazard ratio (1.86) than did the fixed-bearing posterior-stabilized prostheses (95% confidence interval, 1.28 to 2.7; p = 0.001). For all other time intervals, the mobile-bearing posterior-stabilized prostheses had higher hazard ratios; however, these differences were not significant. Mobile-bearing posterior-stabilized prostheses had an increased rate of revision compared with fixed-bearing posterior-stabilized prostheses. This difference was evident in the first year. Copyright © 2014 by The Journal of Bone and Joint Surgery, Incorporated.
Augusto, Kathiane Lustosa; Bezerra, Leonardo Robson Pinheiro Sobreira; Murad-Regadas, Sthela Maria; Vasconcelos Neto, José Ananias; Vasconcelos, Camila Teixeira Moreira; Karbage, Sara Arcanjo Lino; Bilhar, Andreisa Paiva Monteiro; Regadas, Francisco Sérgio Pinheiro
2017-07-01
Pelvic floor dysfunction is a complex condition that may be asymptomatic or may involve a wide range of symptoms. This study evaluates defecatory dysfunction, fecal incontinence, and quality of life in relation to the presence of posterior vaginal prolapse. 265 patients were divided into two groups according to posterior POP-Q stage: posterior POP-Q stage ≥2 and posterior POP-Q stage <2. The two groups were compared regarding demographic and clinical data; overall POP-Q stage, percentage of patients with defecatory dysfunction, percentage of patients with fecal incontinence, pelvic floor muscle strength, and quality of life scores. The correlation between severity of the prolapse and severity of constipation was calculated using Spearman's ρ (rho). Women with Bp stage ≥2 were significantly older and had significantly higher BMI, numbers of pregnancies and births, and overall POP-Q stage than women with stage <2. No significant differences between the groups were observed regarding the proportion of patients with defecatory dysfunction or incontinence, pelvic floor muscle strength, quality of life (ICIQ-SF), or sexual impact (PISQ-12). POP-Q stage did not correlate with severity of constipation and incontinence. General quality of life perception on the SF-36 was significantly worse in patients with POP-Q stage ≥2 than in those with POP-Q stage <2. The lack of a clinically important association between the presence of posterior vaginal prolapse and symptoms of constipation or anal incontinence leads us to agree with the conclusion that posterior vaginal prolapse is probably not an independent cause of defecatory dysfunction or fecal incontinence. Copyright © 2017 Elsevier B.V. All rights reserved.
Bayesian Retrieval of Complete Posterior PDFs of Oceanic Rain Rate From Microwave Observations
NASA Technical Reports Server (NTRS)
Chiu, J. Christine; Petty, Grant W.
2005-01-01
This paper presents a new Bayesian algorithm for retrieving surface rain rate from the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI) over the ocean, along with validations against estimates from the TRMM Precipitation Radar (PR). The Bayesian approach offers a rigorous basis for optimally combining multichannel observations with prior knowledge. While other rain rate algorithms have been published that are based at least partly on Bayesian reasoning, this is believed to be the first self-contained algorithm that fully exploits Bayes' theorem to yield not just a single rain rate, but rather a continuous posterior probability distribution of rain rate. To advance our understanding of the theoretical benefits of the Bayesian approach, we have conducted sensitivity analyses based on two synthetic datasets for which the true conditional and prior distributions are known. Results demonstrate that even when the prior and conditional likelihoods are specified perfectly, biased retrievals may occur at high rain rates. This bias is not the result of a defect of the Bayesian formalism but rather represents the expected outcome when the physical constraint imposed by the radiometric observations is weak, due to saturation effects. It is also suggested that the choice of the estimators and the prior information are both crucial to the retrieval. In addition, the performance of our Bayesian algorithm is found to be comparable to that of other benchmark algorithms in real-world applications, while having the additional advantage of providing a complete continuous posterior probability distribution of surface rain rate.
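A one-channel toy version of the retrieval idea is sketched below: a discretized prior times a Gaussian observation likelihood yields the full posterior PDF of rain rate, and the saturating forward model shows why the constraint weakens at high rain rates. All numbers and the forward model are assumptions, not the algorithm's actual physics.

```python
# Full posterior PDF of rain rate from one simulated radiometer channel.
import numpy as np
from scipy import stats

rain = np.linspace(0.0, 50.0, 501)               # rain-rate grid (mm/h)
prior = stats.expon.pdf(rain, scale=5.0)          # light rain is a priori likely

def forward(r):                                   # brightness temperature, saturating
    return 180.0 + 100.0 * (1.0 - np.exp(-r / 10.0))

tb_obs, sigma = 240.0, 5.0                        # observed Tb and sensor noise
likelihood = stats.norm.pdf(tb_obs, loc=forward(rain), scale=sigma)

posterior = prior * likelihood
posterior /= np.trapz(posterior, rain)            # normalize to a proper PDF
print("posterior mean rain rate:", np.trapz(rain * posterior, rain))
```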
ERIC Educational Resources Information Center
Zwick, Rebecca; Lenaburg, Lubella
2009-01-01
In certain data analyses (e.g., multiple discriminant analysis and multinomial log-linear modeling), classification decisions are made based on the estimated posterior probabilities that individuals belong to each of several distinct categories. In the Bayesian network literature, this type of classification is often accomplished by assigning…
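A minimal sketch of classification by maximum posterior probability, with assumed Gaussian class-conditional likelihoods and made-up priors (illustrative only, not the reference's procedure):

```python
# Assign each observation to the category with the highest posterior probability.
import numpy as np
from scipy import stats

priors = {"A": 0.5, "B": 0.3, "C": 0.2}
means = {"A": 0.0, "B": 2.0, "C": 4.0}           # class-conditional means, sd = 1

def classify(x):
    post = {c: priors[c] * stats.norm.pdf(x, means[c], 1.0) for c in priors}
    z = sum(post.values())
    post = {c: p / z for c, p in post.items()}   # normalized posteriors
    return max(post, key=post.get), post

label, post = classify(1.2)
print(label, post)
```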
New KF-PP-SVM classification method for EEG in brain-computer interfaces.
Yang, Banghua; Han, Zhijun; Zan, Peng; Wang, Qian
2014-01-01
Classification methods are a crucial direction in the current study of brain-computer interfaces (BCIs). To improve the classification accuracy for electroencephalogram (EEG) signals, a novel KF-PP-SVM (kernel Fisher, posterior probability, and support vector machine) classification method is developed. Its detailed process entails the use of common spatial patterns to obtain features, based on which the within-class scatter is calculated. The scatter is then added into a radial basis function kernel to construct a new kernel function, and this new kernel is integrated into the SVM to obtain a new classification model. Finally, the output of the SVM is calculated based on posterior probability and the final recognition result is obtained. To evaluate the effectiveness of the proposed KF-PP-SVM method, EEG data collected in the laboratory were processed with four different classification schemes (KF-PP-SVM, KF-SVM, PP-SVM, and SVM). The results showed that the overall average improvements arising from the use of the KF-PP-SVM scheme as opposed to the KF-SVM, PP-SVM and SVM schemes are 2.49%, 5.83% and 6.49%, respectively.
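The paper's kernel-Fisher modification is not reproduced here; the sketch below only shows the "PP" ingredient, i.e. an RBF-SVM whose decision values are mapped to posterior probabilities (Platt scaling, as implemented in scikit-learn). The feature data are random stand-ins for EEG features.

```python
# RBF-SVM with posterior probability outputs via Platt scaling.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 1, (50, 4)),
               rng.normal(1.5, 1, (50, 4))])   # stand-in EEG features, 2 classes
y = np.array([0] * 50 + [1] * 50)

clf = SVC(kernel="rbf", probability=True).fit(X, y)
proba = clf.predict_proba(X[:3])               # posterior probability per class
print(proba, clf.predict(X[:3]))
```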
NASA Astrophysics Data System (ADS)
Du, Zhanwei; Yang, Yongjian; Bai, Yuan; Wang, Lijun; Su, Le; Chen, Yong; Li, Xianchang; Zhou, Xiaodong; Jia, Jun; Shen, Aiguo; Hu, Jiming
2013-03-01
The existing methods for early and differential diagnosis of oral cancer are limited by inconspicuous early symptoms and imperfect imaging examination methods. In this paper, classification models of oral adenocarcinoma, carcinoma tissues, and a control group, using just four features, are established with the hybrid Gaussian process (HGP) classification algorithm, which introduces noise-reduction and posterior probability mechanisms. HGP shows much better performance in the experimental results. During the experimental process, oral tissues were divided into three groups, adenocarcinoma (n = 87), carcinoma (n = 100) and the control group (n = 134), and the spectral data for these groups were collected. The prospective application of the proposed HGP classification method improved the diagnostic sensitivity to 56.35% and the specificity to about 70.00%, and resulted in a Matthews correlation coefficient (MCC) of 0.36. The results show that utilizing HGP in LRS detection analysis for the diagnosis of oral cancer gives accurate results, and the prospects for application are satisfactory.
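The hybrid Gaussian process itself is not public; a standard Gaussian process classifier with probabilistic outputs, shown below on made-up four-feature data, sketches the same posterior-probability idea.

```python
# Standard GP classifier returning class posterior probabilities.
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(m, 1, (40, 4)) for m in (0.0, 1.0, 2.0)])
y = np.repeat([0, 1, 2], 40)           # adenocarcinoma / carcinoma / control

gpc = GaussianProcessClassifier(kernel=1.0 * RBF(1.0)).fit(X, y)
print(gpc.predict_proba(X[:2]))        # posterior probability per class
```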
Safari, Parviz; Danyali, Syyedeh Fatemeh; Rahimi, Mehdi
2018-06-02
Drought is the main abiotic stress seriously influencing wheat production. Information about the inheritance of drought tolerance is necessary to determine the most appropriate strategy for developing tolerant cultivars and populations. In this study, generation means analysis for identifying the genetic effects controlling grain yield inheritance under water-deficit and normal conditions was treated as a model selection problem in a Bayesian framework. Stochastic search variable selection (SSVS) was applied to identify the most important genetic effects and the best-fitted models, using different generations obtained from two crosses under two water regimes in two growing seasons. SSVS evaluates the effect of each variable on the dependent variable via posterior variable inclusion probabilities, and the model with the highest posterior probability is selected as the best model. In this study, grain yield was controlled by main effects (additive and non-additive) and epistatic interactions. The results demonstrate that breeding methods such as recurrent selection followed by the pedigree method, as well as hybrid production, can be useful for improving grain yield.
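SSVS explores the model space stochastically; with only a handful of candidate effects the same posterior quantities can be computed by exhaustive enumeration, which the hedged sketch below does with a BIC-based approximation on simulated data (not the wheat dataset or the authors' sampler).

```python
# Posterior model and inclusion probabilities by enumerating all submodels.
import itertools
import numpy as np

rng = np.random.default_rng(6)
n, p = 80, 4
X = rng.normal(size=(n, p))            # candidate genetic effects
y = 2.0 * X[:, 0] + 1.0 * X[:, 2] + rng.normal(size=n)

def bic(cols):
    if cols:
        Xs = X[:, cols]
        beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        rss = np.sum((y - Xs @ beta) ** 2)
    else:
        rss = np.sum(y ** 2)
    return n * np.log(rss / n) + len(cols) * np.log(n)

models = [list(c) for k in range(p + 1)
          for c in itertools.combinations(range(p), k)]
w = np.array([np.exp(-0.5 * bic(m)) for m in models])
w /= w.sum()                           # approximate posterior model probabilities
print("best model:", models[int(np.argmax(w))])
incl = [sum(w[i] for i, m in enumerate(models) if j in m) for j in range(p)]
print("inclusion probabilities:", np.round(incl, 3))
```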
[Determination of wine original regions using information fusion of NIR and MIR spectroscopy].
Xiang, Ling-Li; Li, Meng-Hua; Li, Jing-Mingz; Li, Jun-Hui; Zhang, Lu-Da; Zhao, Long-Lian
2014-10-01
Geographical origin of wine grapes is a significant factor affecting wine quality and wine prices. Tasters' evaluation is a good method but has some limitations, so it is important to discriminate different wine origin regions quickly and accurately. The present paper proposes a method to determine wine origin regions based on Bayesian information fusion of near-infrared (NIR) transmission spectra and mid-infrared (MIR) ATR spectra of wines. This method improves the determination results by expanding the sources of analysis information. NIR and MIR spectra of 153 wine samples from four different grape-growing regions were collected with near-infrared and mid-infrared Fourier transform spectrometers, respectively. The four regions are Huailai, Yantai, Gansu and Changli, which are all typical geographical origins for Chinese wines. NIR and MIR discriminant models for wine regions were established using partial least squares discriminant analysis (PLS-DA) based on the NIR spectra and MIR spectra separately. In PLS-DA, the regions of the wine samples are represented by groups of binary codes; with four wine regions, four output nodes stand for the categorical variables. The output node values for each sample in the NIR and MIR models were first normalized; these values represent the probabilities of each sample belonging to each category. They served as prior probabilities in the Bayesian discriminant formula, and substituting them into Bayes' formula gave posterior probabilities, by which the class membership of the samples was judged. Considering the stability of the PLS-DA models, all wine samples were randomly divided into calibration and validation sets ten times. The results of the NIR and MIR discriminant models for the four wine regions were as follows: the average accuracy rates of the calibration sets were 78.21% (NIR) and 82.57% (MIR), and the average accuracy rates of the validation sets were 82.50% (NIR) and 81.98% (MIR). With the method proposed in this paper, the accuracy rates of calibration and validation rose to 87.11% and 90.87%, respectively, better than either spectroscopy alone. These results suggest that Bayesian information fusion of NIR and MIR spectra is feasible for fast identification of wine origin regions.
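The fusion step reduces to a small amount of arithmetic: treat one model's normalized scores as prior probabilities and the other's as likelihoods in Bayes' formula. The scores below are illustrative, not the paper's PLS-DA outputs.

```python
# Bayesian fusion of two classifiers' normalized per-region scores.
import numpy as np

p_nir = np.array([0.55, 0.25, 0.12, 0.08])   # per-region scores, NIR model
p_mir = np.array([0.40, 0.45, 0.10, 0.05])   # per-region scores, MIR model

posterior = p_nir * p_mir                     # NIR as prior, MIR as likelihood
posterior /= posterior.sum()                  # renormalize (Bayes' rule)
regions = ["Huailai", "Yantai", "Gansu", "Changli"]
print(regions[int(np.argmax(posterior))], np.round(posterior, 3))
```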
Improving Conceptual Models Using AEM Data and Probability Distributions
NASA Astrophysics Data System (ADS)
Davis, A. C.; Munday, T. J.; Christensen, N. B.
2012-12-01
With emphasis being placed on uncertainty in groundwater modelling and prediction, coupled with questions concerning the value of geophysical methods in hydrogeology, it is important to ask meaningful questions of hydrogeophysical data and inversion results. For example, to characterise aquifers using electromagnetic (EM) data, we ask questions such as "Given that the electrical conductivity of aquifer 'A' is less than x, where is that aquifer elsewhere in the survey area?" The answer may be given by examining inversion models, selecting locations and layers that satisfy the condition 'conductivity <= x', and labelling them as aquifer 'A'. One difficulty with this approach is that the inversion result is often considered to be the only model for the data. In reality it is just one image of the subsurface that, given the method and the regularisation imposed in the inversion, agrees with the measured data within a given error bound. We have no idea whether the final model realised by the inversion attains the global minimum error, or whether it is simply in a local minimum. There is a distribution of inversion models that satisfy the error tolerance condition: the final model is not the only one, nor is it necessarily the correct one. AEM inversions are often linearised in the calculation of the parameter sensitivity: we rely on the second derivatives in the Taylor expansion, so the minimum model has all layer parameters distributed about their mean values with well-defined variances. We investigate the validity of the minimum model, and its uncertainty, by examining the full posterior covariance matrix. We ask questions of the minimum model, and answer them probabilistically. The simplest question we can pose is "What is the probability that all layer resistivity values are <= a cut-off value?" This can be calculated using the erf or erfc functions; the covariance values of the inversion become marginalised in the integration, and only the main diagonal is used. Complications arise when we ask more specific questions, such as "What is the probability that the resistivity of layer 2 <= x, given that layer 1 <= y?" The probability then becomes conditional, the calculation includes covariance terms, the integration is taken over many dimensions, and the cross-correlation of parameters becomes important. To illustrate, we examine the inversion results of a Tempest AEM survey over the Uley Basin aquifers in the Eyre Peninsula, South Australia. Key aquifers include the unconfined Bridgewater Formation, which overlies the Uley and Wanilla Formations, containing Tertiary clays and Tertiary sandstone. These formations overlie weathered basement, which defines the lower bound of the Uley Basin aquifer systems. By correlating the conductivity of the sub-surface formation types, we pose questions such as: "What is the probability-depth of the Bridgewater Formation in the Uley South Basin?", "What is the thickness of the Uley Formation?" and "What is the most probable depth to basement?" We use these questions to generate improved conceptual hydrogeological models of the Uley Basin in order to develop better estimates of aquifer extent and the available groundwater resource.
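Both questions can be answered from the posterior mean vector and covariance matrix. The sketch below uses invented values: the marginal question needs only the main diagonal, while the conditional question brings in the covariance term via the multivariate normal CDF.

```python
# Marginal and conditional probability questions on a Gaussian posterior.
import numpy as np
from scipy import stats

# Posterior means and covariance of two layer resistivities (invented values):
mean = np.array([30.0, 80.0])
cov = np.array([[25.0, 15.0],
                [15.0, 100.0]])
cutoff = np.array([40.0, 100.0])

# Q1: P(all layers <= cutoff) using only the marginal variances (main diagonal).
p_marginal = np.prod(stats.norm.cdf(cutoff, loc=mean, scale=np.sqrt(np.diag(cov))))

# Q2: the conditional question needs the covariance terms:
# P(layer 1 <= 40 | layer 2 <= 100) = P(both) / P(layer 2 <= 100).
p_joint = stats.multivariate_normal.cdf(cutoff, mean=mean, cov=cov)
p_cond = p_joint / stats.norm.cdf(cutoff[1], mean[1], np.sqrt(cov[1, 1]))
print(p_marginal, p_joint, p_cond)
```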
Quantitative Analyses of Pediatric Cervical Spine Ossification Patterns Using Computed Tomography
Yoganandan, Narayan; Pintar, Frank A.; Lew, Sean M.; Rao, Raj D.; Rangarajan, Nagarajan
2011-01-01
The objective of the present study was to quantify ossification processes of the human pediatric cervical spine. Computed tomography images were obtained from a high-resolution scanner according to clinical protocols. Bone window images were used to identify the presence of the primary synchondroses of the atlas, axis, and C3 vertebrae in 101 children. Principles of logistic regression were used to determine probability distributions as a function of subject age for each synchondrosis of each vertebra. The mean and the 95% upper and lower confidence intervals are given for each dataset, delineating the probability curves. Posterior ossifications preceded bilateral anterior closures of the synchondroses in all vertebrae; however, ossifications occurred at different ages. Logistic regression results for closures of the different synchondroses indicated p-values of <0.001 for the atlas, ranging from 0.002 to <0.001 for the axis, and 0.021 to 0.005 for the C3 vertebra. Fifty percent probability of three, two, and one synchondroses occurred at 2.53, 6.97, and 7.57 years of age for the atlas; 3.59, 4.74, and 5.7 years of age for the axis; and 1.28, 2.22, and 3.17 years of age for the third cervical vertebra, respectively. Ossifications occurring at different ages indicate non-uniform maturation of bone growth/strength. They provide an anatomical rationale to reexamine dummies, scaling processes, and injury metrics for improved understanding of pediatric neck injuries. PMID:22105393
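The logistic-regression step reduces to fitting closure status against age and solving for the 50% point at -intercept/slope; the sketch below does this on simulated ages (the study's CT data are not reproduced).

```python
# Age at 50% closure probability from a fitted logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
age = rng.uniform(0.0, 10.0, 300)
p_true = 1.0 / (1.0 + np.exp(-(age - 4.0) * 1.5))     # true 50% point at 4 years
closed = (rng.random(300) < p_true).astype(int)        # observed closure status

lr = LogisticRegression(C=1e6).fit(age.reshape(-1, 1), closed)  # near-unpenalized
b0, b1 = lr.intercept_[0], lr.coef_[0, 0]
print("age at 50% closure probability:", -b0 / b1)
```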
Xue, Zhe; Chen, Jia-Xu; Zhao, Yue; Medvar, Barbara; Knepper, Mark A
2017-03-01
A major challenge in physiology is to exploit the many large-scale data sets available from "-omic" studies to seek answers to key physiological questions. In previous studies, Bayes' theorem has been used for this purpose. This approach requires a means to map continuously distributed experimental data to probabilities (likelihood values) to derive posterior probabilities from the combination of prior probabilities and new data. Here, we introduce the use of minimum Bayes' factors for this purpose and illustrate the approach by addressing a physiological question, "Which deubiquitylating enzymes (DUBs) encoded by mammalian genomes are most likely to regulate plasma membrane transport processes in renal cortical collecting duct principal cells?" To do this, we have created a comprehensive online database of 110 DUBs present in the mammalian genome (https://hpcwebapps.cit.nih.gov/ESBL/Database/DUBs/). We used Bayes' theorem to integrate available information from large-scale data sets derived from proteomic and transcriptomic studies of renal collecting duct cells to rank the 110 known DUBs with regard to likelihood of interacting with and regulating transport processes. The top-ranked DUBs were OTUB1, USP14, PSMD7, PSMD14, USP7, USP9X, OTUD4, USP10, and UCHL5. Among these USP7, USP9X, OTUD4, and USP10 are known to be involved in endosomal trafficking and have potential roles in endosomal recycling of plasma membrane proteins in the mammalian cortical collecting duct. Copyright © 2017 the American Physiological Society.
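The ranking arithmetic is a chain of Bayes' rule updates in odds form, with each data set contributing one (minimum) Bayes factor. A hedged sketch with invented numbers, not the paper's values:

```python
# Combining minimum Bayes factors from successive data sets in odds form.
def update_posterior(prior_prob, bayes_factors):
    odds = prior_prob / (1.0 - prior_prob)   # prior odds
    for bf in bayes_factors:                 # each data set contributes one factor
        odds *= bf
    return odds / (1.0 + odds)               # back to a posterior probability

# e.g. a flat 0.5 prior, then evidence from proteomic and transcriptomic data:
print(update_posterior(0.5, [3.2, 1.8]))     # ~0.85
```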
Declining functional connectivity and changing hub locations in Alzheimer's disease: an EEG study.
Engels, Marjolein M A; Stam, Cornelis J; van der Flier, Wiesje M; Scheltens, Philip; de Waal, Hanneke; van Straaten, Elisabeth C W
2015-08-20
EEG studies have shown that patients with Alzheimer's disease (AD) have weaker functional connectivity than controls, especially in higher frequency bands. Furthermore, active regions seem more prone to AD pathology. How functional connectivity is affected in AD subgroups of disease severity and how network hubs (highly connected brain areas) change is not known. We compared AD patients with different disease severity and controls in terms of functional connections, hub strength and hub location. We studied routine 21-channel resting-state electroencephalography (EEG) of 318 AD patients (divided into tertiles based on disease severity: mild, moderate and severe AD) and 133 age-matched controls. Functional connectivity between EEG channels was estimated with the Phase Lag Index (PLI). From the PLI-based connectivity matrix, the minimum spanning tree (MST) was derived. For each node (EEG channel) in the MST, the betweenness centrality (BC) was computed, a measure to quantify the relative importance of a node within the network. Then we derived color-coded head plots based on BC values and calculated the center of mass (the exact middle had x and y values of 0). A shifting of the hub locations was defined as a shift of the center of mass on the y-axis across groups. Multivariate general linear models with PLI or BC values as dependent variables and the groups as continuous variables were used in the five conventional frequency bands. We found that functional connectivity decreases with increasing disease severity in the alpha band. All, except for posterior, regions showed increasing BC values with increasing disease severity. The center of mass shifted from posterior to more anterior regions with increasing disease severity in the higher frequency bands, indicating a loss of relative functional importance of the posterior brain regions. In conclusion, we observed decreasing functional connectivity in the posterior regions, together with a shifted hub location from posterior to central regions with increasing AD severity. Relative hub strength decreases in posterior regions while other regions show a relative rise with increasing AD severity, which is in accordance with the activity-dependent degeneration theory. Our results indicate that hubs are disproportionally affected in AD.
Data analysis in emission tomography using emission-count posteriors
NASA Astrophysics Data System (ADS)
Sitek, Arkadiusz
2012-11-01
A novel approach to the analysis of emission tomography data using the posterior probability of the number of emissions per voxel (emission count) conditioned on acquired tomographic data is explored. The posterior is derived from the prior and the Poisson likelihood of the emission-count data by marginalizing voxel activities. Based on emission-count posteriors, examples of Bayesian analysis including estimation and classification tasks in emission tomography are provided. The application of the method to computer simulations of 2D tomography is demonstrated. In particular, the minimum-mean-square-error point estimator of the emission count is demonstrated. The process of finding this estimator can be considered as a tomographic image reconstruction technique since the estimates of the number of emissions per voxel divided by voxel sensitivities and acquisition time are the estimates of the voxel activities. As an example of a classification task, a hypothesis stating that some region of interest (ROI) emitted at least or at most r-times the number of events in some other ROI is tested. The ROIs are specified by the user. The analysis described in this work provides new quantitative statistical measures that can be used in decision making in diagnostic imaging using emission tomography.
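A one-voxel toy version of the emission-count posterior follows: a prior over the number of emissions times the likelihood of the detected counts, here simplified to binomial thinning with an assumed sensitivity, gives a posterior whose mean is the MMSE estimate. The prior, sensitivity, and counts are invented, and the full 2D tomographic likelihood is not reproduced.

```python
# Emission-count posterior and its MMSE point estimate for a single voxel.
import numpy as np
from scipy import stats

n = np.arange(0, 200)                        # possible emissions in the voxel
prior = stats.poisson.pmf(n, mu=50.0)        # prior on the emission count
sensitivity = 0.3                            # detection probability per emission
detected = 20                                # observed counts

# Likelihood of the detected counts given n emissions (binomial thinning):
like = stats.binom.pmf(detected, n, sensitivity)
post = prior * like
post /= post.sum()                           # posterior over the emission count
print("MMSE emission-count estimate:", float(np.sum(n * post)))
```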
Functional compartmentalization of the human superficial masseter muscle.
Guzmán-Venegas, Rodrigo A; Biotti Picand, Jorge L; de la Rosa, Francisco J Berral
2015-01-01
Some muscles have demonstrated a differential recruitment of their motor units in relation to their location and the nature of the motor task performed; this involves functional compartmentalization. There is little evidence that demonstrates the presence of a compartmentalization of the superficial masseter muscle during biting. The aim of this study was to describe the topographic distribution of the activity of the superficial masseter (SM) muscle's motor units using high-density surface electromyography (EMGs) at different bite force levels. Twenty healthy natural dentate participants (men: 4; women: 16; age 20±2 years; mass: 60±12 kg; height: 163±7 cm) were selected from 316 volunteers and included in this study. Using a gnathodynamometer, bites from 20 to 100% maximum voluntary bite force (MVBF) were randomly requested. Using a two-dimensional grid (four columns, six electrodes) located on the dominant SM, EMGs in the anterior, middle-anterior, middle-posterior and posterior portions were simultaneously recorded. In bite ranges from 20 to 60% MVBF, the EMG activity was higher in the anterior than in the posterior portion (p-value = 0.001). The center of mass of the EMG activity was displaced towards the posterior part when bite force increased (p-value = 0.001). The topographic distribution of EMGs was more homogeneous at high levels of MVBF (p-value = 0.001). The results of this study show that the superficial masseter is organized into three functional compartments: an anterior, a middle and a posterior compartment. However, this compartmentalization is only seen at low levels of bite force (20-60% MVBF).
General Metropolis-Hastings jump diffusions for automatic target recognition in infrared scenes
NASA Astrophysics Data System (ADS)
Lanterman, Aaron D.; Miller, Michael I.; Snyder, Donald L.
1997-04-01
To locate and recognize ground-based targets in forward-looking IR (FLIR) images, 3D faceted models with associated pose parameters are formulated to accommodate the variability found in FLIR imagery. Taking a Bayesian approach, scenes are simulated from the emissive characteristics of the CAD models and compared with the collected data by a likelihood function based on sensor statistics. This likelihood is combined with a prior distribution defined over the set of possible scenes to form a posterior distribution. To accommodate scenes with variable numbers of targets, the posterior distribution is defined over parameter vectors of varying dimension. An inference algorithm based on Metropolis-Hastings jump-diffusion processes empirically samples from the posterior distribution, generating configurations of templates and transformations that match the collected sensor data with high probability. The jumps accommodate the addition and deletion of targets and the estimation of target identities; diffusions refine the hypotheses by drifting along the gradient of the posterior distribution with respect to the orientation and position parameters. Previous results on jump strategies analogous to the Metropolis acceptance/rejection algorithm, with proposals drawn from the prior and accepted based on the likelihood, are extended to encompass general Metropolis-Hastings proposal densities. In particular, the algorithm proposes moves by drawing from the posterior distribution over computationally tractable subsets of the parameter space. The algorithm is illustrated by an implementation on a Silicon Graphics Onyx/Reality Engine.
Reidenbach, M M
1995-01-01
The posterior cricothyroid ligament and its topographic relation to the inferior laryngeal nerve were studied in 54 human adult male and female larynges. Fourteen specimens were impregnated with curable polymers and cut into 600-800 micron sections along different planes. Forty formalin-fixed hemi-larynges were dissected and various measurements were made. The posterior cricothyroid ligament provides a dorsal strengthening of the joint capsule of the cricothyroid joint. Its fibers spread in a fan-like manner from a small area of origin at the cricoid cartilage to a more extended area of attachment at the inferior thyroid cornu. The ligament consists of one (7.5%) to four (12.5%), in most cases of three (45.0%) or two (35.0%), individual parts oriented from mediocranial to latero-caudal. The inferior laryngeal nerve courses immediately dorsal to the ligament. In 60% it is covered by fibers of the posterior cricoarytenoid muscle; in the remaining 40% it is not. In this latter topographic situation there is almost no soft tissue interposed between the nerve and the hypopharynx. The nerve may therefore be exposed to pressure forces exerted from the dorsal side: it may be pushed against the unyielding posterior cricothyroid ligament and suffer functional or structural impairment. This mechanism probably explains some of the laryngeal nerve lesions described in the literature after insertion of gastric tubes.
Intraoperative CT in the assessment of posterior wall acetabular fracture stability.
Cunningham, Brian; Jackson, Kelly; Ortega, Gil
2014-04-01
Posterior wall acetabular fractures that involve 10% to 40% of the posterior wall may or may not require open reduction and internal fixation. Dynamic stress examination of the acetabular fracture under fluoroscopy has been used as an intraoperative method to assess joint stability. The aim of this study was to demonstrate the value of intraoperative CT examination using the Siemens ISO-C imaging system (Siemens Corp, Malvern, Pennsylvania) in the assessment of posterior wall acetabular fracture stability during stress examination under anesthesia. In 5 posterior wall acetabular fractures, standard fluoroscopic images (including anteroposterior pelvis and Judet radiographs) with dynamic stress examinations were compared with the ISO-C CT imaging system to assess posterior wall fracture stability during stress examination. On review of the standard intraoperative fluoroscopic images under dynamic stress examination, all 5 cases appeared to demonstrate posterior wall stability; however, the intraoperative images from the ISO-C CT imaging system demonstrated fracture instability of the posterior wall segment during stress examination in 1 case, and open reduction and internal fixation was performed. The use of intraoperative ISO-C CT imaging has shown an initial improvement in the surgeon's ability to assess the intraoperative stability of posterior wall acetabular fractures during stress examination when compared with standard fluoroscopic images. Copyright 2014, SLACK Incorporated.
Estimated Probability of a Cervical Spine Injury During an ISS Mission
NASA Technical Reports Server (NTRS)
Brooker, John E.; Weaver, Aaron S.; Myers, Jerry G.
2013-01-01
Introduction: The Integrated Medical Model (IMM) utilizes historical data, cohort data, and external simulations as input factors to provide estimates of crew health, resource utilization and mission outcomes. The Cervical Spine Injury Module (CSIM) is an external simulation designed to provide the IMM with parameter estimates for 1) a probability distribution function (PDF) of the incidence rate, 2) the mean incidence rate, and 3) the standard deviation associated with the mean resulting from injury/trauma of the neck. Methods: An injury mechanism based on an idealized low-velocity blunt impact to the superior posterior thorax of an ISS crewmember was used as the simulated mission environment. As a result of this impact, the cervical spine is inertially loaded from the mass of the head, producing an extension-flexion motion deforming the soft tissues of the neck. A multibody biomechanical model was developed to estimate the kinematic and dynamic response of the head-neck system from a prescribed acceleration profile. Logistic regression was performed on a dataset containing AIS1 soft tissue neck injuries from rear-end automobile collisions with published Neck Injury Criterion values, producing an injury transfer function (ITF). An injury event scenario (IES) was constructed in which crew 1, moving through a primary or standard translation path while transferring large-volume equipment, impacts stationary crew 2. The incidence rate for this IES was estimated from in-flight data and used to calculate the probability of occurrence. The uncertainty in the model input factors was estimated from representative datasets and expressed in terms of probability distributions. A Monte Carlo method utilizing simple random sampling was employed to propagate both aleatory and epistemic uncertain factors. Scatterplots and partial correlation coefficients (PCC) were generated to determine input factor sensitivity. CSIM was developed in the SimMechanics/Simulink environment with a Monte Carlo wrapper (MATLAB) used to integrate the components of the module. Results: The probability of generating an AIS1 soft tissue neck injury from the extension/flexion motion induced by a low-velocity blunt impact to the superior posterior thorax was fitted with a lognormal PDF with mean 0.26409, standard deviation 0.11353, standard error of the mean 0.00114, and 95% confidence interval [0.26186, 0.26631]. Combining the probability of an AIS1 injury with the probability of IES occurrence was fitted with a Johnson SI PDF with mean 0.02772, standard deviation 0.02012, standard error of the mean 0.00020, and 95% confidence interval [0.02733, 0.02812]. The input factor sensitivity, in descending order, was IES incidence rate, ITF regression coefficient 1, impactor initial velocity, and ITF regression coefficient 2, with all others (equipment mass, crew 1 body mass, crew 2 body mass) insignificant. Verification and Validation (V&V): The IMM V&V, based upon NASA STD 7009, was implemented, which included an assessment of the data sets used to build CSIM. The documentation maintained includes source code comments and a technical report. The software code and documentation are under Subversion configuration management. Kinematic validation was performed by comparing the biomechanical model output to established corridors.
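The Monte Carlo wrapper logic reduces to sampling the uncertain input factors, pushing them through the injury transfer function, and summarizing the output distribution. The sketch below (Python rather than MATLAB/Simulink) uses invented distributions and coefficients, not CSIM's actual inputs.

```python
# Monte Carlo propagation of uncertain input factors through a logistic ITF.
import numpy as np

rng = np.random.default_rng(8)
n = 100_000
velocity = rng.lognormal(mean=0.0, sigma=0.3, size=n)   # impact velocity factor
b0 = rng.normal(-2.0, 0.2, size=n)                      # ITF regression coefficient 1
b1 = rng.normal(1.5, 0.3, size=n)                       # ITF regression coefficient 2

p_injury = 1.0 / (1.0 + np.exp(-(b0 + b1 * velocity)))  # logistic injury transfer function
incidence = rng.beta(2.0, 50.0, size=n)                 # uncertain IES occurrence rate
p_total = p_injury * incidence                          # combined probability

print("mean:", p_total.mean(), "95% interval:",
      np.percentile(p_total, [2.5, 97.5]))
```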
Rosentritt, Martin; Heidtkamp, Felix; Hösl, Helmut; Hahnel, Sebastian; Preis, Verena
2016-03-01
Removable dentures with different denture teeth may provide different performance and resistance in implant and gingival situations, or in anterior and posterior applications. Two situations of removable dentures were investigated: gingiva (flexible) and implant (rigid) bearing. For simulating the gingiva/jaw situation, the dentures were supported with flexible lining material. For the implant situation, implants (d = 4.1 mm) were screwed into polymethylmethacrylate (PMMA) resin. Two commercial (Vita-Physiodens MRP, SR Vivodent/Orthotyp DCL) and two experimental materials (EXP1, EXP2) were investigated in anterior (A) and posterior (P) tooth locations. Chewing simulation was performed, and failures were analyzed (microscopy, SEM). The fracture strength of surviving dentures was determined. Only EXP1 revealed failures during chewing simulation. Failures varied between anterior and posterior locations, and between implant (P: 4x; A: 7x) and gingiva (P: 1x; A: 2x) situations. The Kaplan-Meier log-rank test revealed significant differences for implant situations (p < 0.002), but not for gingiva bearing (p > 0.093). Fracture testing in the implant situation gave the significantly highest values for EXP2 (1476.4 ± 532.2 N) in the posterior location, and for DCL (1575.4 ± 264.4 N) and EXP2 (1797.0 ± 604.2 N) in the anterior location. For gingival bearing, the significantly highest values were found for DCL/P (2148.3 ± 836.3 N), and the significantly lowest for EXP1/A (308.2 ± 115.6 N). For EXP1 + EXP2 + Vita/P and for EXP1/A no significant differences were found between implant- and gingiva-supported situations. Anterior and posterior teeth showed different material-dependent in vitro performance, further influenced by implant/gingiva bearing. While an implant in anterior application increased the fracture strength of two materials, it decreased the fracture values of 3/4 of the materials in posterior application. Survival of denture teeth may be influenced by material, oral position, and bearing situation.
Modeling stream fish distributions using interval-censored detection times.
Ferreira, Mário; Filipe, Ana Filipa; Bardos, David C; Magalhães, Maria Filomena; Beja, Pedro
2016-08-01
Controlling for imperfect detection is important for developing species distribution models (SDMs). Occupancy-detection models based on the time needed to detect a species can be used to address this problem, but this is hindered when times to detection are not known precisely. Here, we extend the time-to-detection model to deal with detections recorded in time intervals and illustrate the method using a case study on stream fish distribution modeling. We collected electrofishing samples of six fish species across a Mediterranean watershed in Northeast Portugal. Based on a Bayesian hierarchical framework, we modeled the probability of water presence in stream channels, and the probability of species occupancy conditional on water presence, in relation to environmental and spatial variables. We also modeled time-to-first detection conditional on occupancy in relation to local factors, using modified interval-censored exponential survival models. Posterior distributions of occupancy probabilities derived from the models were used to produce species distribution maps. Simulations indicated that the modified time-to-detection model provided unbiased parameter estimates despite interval-censoring. There was a tendency for spatial variation in detection rates to be primarily influenced by depth and, to a lesser extent, stream width. Species occupancies were consistently affected by stream order, elevation, and annual precipitation. Bayesian P-values and AUCs indicated that all models had adequate fit and high discrimination ability, respectively. Mapping of predicted occupancy probabilities showed widespread distribution by most species, but uncertainty was generally higher in tributaries and upper reaches. The interval-censored time-to-detection model provides a practical solution to model occupancy-detection when detections are recorded in time intervals. This modeling framework is useful for developing SDMs while controlling for variation in detection rates, as it uses simple data that can be readily collected by field ecologists.
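The interval-censoring modification changes only the likelihood: a detection known to lie in [a, b] contributes S(a) - S(b) under the exponential survival function, and a survey with no detection contributes S(T). A minimal maximum-likelihood sketch on simulated data follows (the full model in the paper is hierarchical and Bayesian; this shows just the censored likelihood):

```python
# MLE of an exponential detection rate from interval-censored detection times.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(9)
T = 10.0                                       # survey length (minutes)
t = rng.exponential(scale=4.0, size=200)       # true detection times
intervals = np.minimum(np.floor(t), T)         # only the 1-min interval is recorded

def neg_log_lik(lam):
    ll = 0.0
    for a in intervals:
        if a >= T:                             # no detection within the survey
            ll += -lam * T                     # log S(T)
        else:                                  # detection in [a, a+1)
            ll += np.log(np.exp(-lam * a) - np.exp(-lam * (a + 1.0)))
    return -ll

fit = minimize_scalar(neg_log_lik, bounds=(1e-3, 5.0), method="bounded")
print("detection rate per minute:", fit.x)     # true value is 1/4
```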
Contrast statistics for foveated visual systems: fixation selection by minimizing contrast entropy
NASA Astrophysics Data System (ADS)
Raj, Raghu; Geisler, Wilson S.; Frazor, Robert A.; Bovik, Alan C.
2005-10-01
The human visual system combines a wide field of view with a high-resolution fovea and uses eye, head, and body movements to direct the fovea to potentially relevant locations in the visual scene. This strategy is sensible for a visual system with limited neural resources. However, for this strategy to be effective, the visual system needs sophisticated central mechanisms that efficiently exploit the varying spatial resolution of the retina. To gain insight into some of the design requirements of these central mechanisms, we have analyzed the effects of variable spatial resolution on local contrast in 300 calibrated natural images. Specifically, for each retinal eccentricity (which produces a certain effective level of blur), and for each value of local contrast observed at that eccentricity, we measured the probability distribution of the local contrast in the unblurred image. These conditional probability distributions can be regarded as posterior probability distributions for the ``true'' unblurred contrast, given an observed contrast at a given eccentricity. We find that these conditional probability distributions are adequately described by a few simple formulas. To explore how these statistics might be exploited by central perceptual mechanisms, we consider the task of selecting successive fixation points, where the goal on each fixation is to maximize total contrast information gained about the image (i.e., minimize total contrast uncertainty). We derive an entropy minimization algorithm and find that it performs optimally at reducing total contrast uncertainty and that it also works well at reducing the mean squared error between the original image and the image reconstructed from the multiple fixations. Our results show that measurements of local contrast alone could efficiently drive the scan paths of the eye when the goal is to gain as much information about the spatial structure of a scene as possible.
Wu, Shih-Wei; Delgado, Mauricio R.; Maloney, Laurence T.
2011-01-01
In decision under risk, people choose between lotteries that contain a list of potential outcomes paired with their probabilities of occurrence. We previously developed a method for translating such lotteries to mathematically equivalent motor lotteries. The probability of each outcome in a motor lottery is determined by the subject’s noise in executing a movement. In this study, we used functional magnetic resonance imaging in humans to compare the neural correlates of monetary outcome and probability in classical lottery tasks where information about probability was explicitly communicated to the subjects and in mathematically equivalent motor lottery tasks where probability was implicit in the subjects’ own motor noise. We found that activity in the medial prefrontal cortex (mPFC) and the posterior cingulate cortex (PCC) quantitatively represent the subjective utility of monetary outcome in both tasks. For probability, we found that the mPFC significantly tracked the distortion of such information in both tasks. Specifically, activity in mPFC represents probability information but not the physical properties of the stimuli correlated with this information. Together, the results demonstrate that mPFC represents probability from two distinct forms of decision under risk. PMID:21677166
Wu, Shih-Wei; Delgado, Mauricio R; Maloney, Laurence T
2011-06-15
In decision under risk, people choose between lotteries that contain a list of potential outcomes paired with their probabilities of occurrence. We previously developed a method for translating such lotteries to mathematically equivalent "motor lotteries." The probability of each outcome in a motor lottery is determined by the subject's noise in executing a movement. In this study, we used functional magnetic resonance imaging in humans to compare the neural correlates of monetary outcome and probability in classical lottery tasks in which information about probability was explicitly communicated to the subjects and in mathematically equivalent motor lottery tasks in which probability was implicit in the subjects' own motor noise. We found that activity in the medial prefrontal cortex (mPFC) and the posterior cingulate cortex quantitatively represent the subjective utility of monetary outcome in both tasks. For probability, we found that the mPFC significantly tracked the distortion of such information in both tasks. Specifically, activity in mPFC represents probability information but not the physical properties of the stimuli correlated with this information. Together, the results demonstrate that mPFC represents probability from two distinct forms of decision under risk.
CMB-galaxy correlation in Unified Dark Matter scalar field cosmologies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bertacca, Daniele; Bartolo, Nicola; Matarrese, Sabino
We present an analysis of the cross-correlation between the CMB and the large-scale structure (LSS) of the Universe in Unified Dark Matter (UDM) scalar field cosmologies. We work out the predicted cross-correlation function in UDM models, which depends on the speed of sound of the unified component, and compare it with observations from six galaxy catalogues (NVSS, HEAO, 2MASS, and SDSS main galaxies, luminous red galaxies, and quasars). We sample the value of the speed of sound and perform a likelihood analysis, finding that the UDM model is as likely as the ΛCDM, and is compatible with observations for a range of values of c∞ (the value of the sound speed at late times) on which structure formation depends. In particular, we obtain an upper bound of c∞² ≤ 0.009 at 95% confidence level, meaning that the ΛCDM model, for which c∞² = 0, is a good fit to the data, while the posterior probability distribution peaks at the value c∞² = 10⁻⁴. Finally, we study the time dependence of the deviation from ΛCDM via a tomographic analysis using a mock redshift distribution and we find that the largest deviation is for low-redshift sources, suggesting that future low-z surveys will be best suited to constrain UDM models.
Bayesian seismic tomography by parallel interacting Markov chains
NASA Astrophysics Data System (ADS)
Gesret, Alexandrine; Bottero, Alexis; Romary, Thomas; Noble, Mark; Desassis, Nicolas
2014-05-01
The velocity field estimated by first-arrival traveltime tomography is commonly used as a starting point for further seismological, mineralogical, tectonic or similar analysis. To interpret the results quantitatively, the tomography uncertainty values as well as their spatial distribution are required. The estimated velocity model is obtained through inverse modeling by minimizing an objective function that compares observed and computed traveltimes. This step is often performed by gradient-based optimization algorithms. The major drawback of such local optimization schemes, beyond the possibility of being trapped in a local minimum, is that they do not account for the multiple possible solutions of the inverse problem. They are therefore unable to assess the uncertainties linked to the solution. Within a Bayesian (probabilistic) framework, solving the tomography inverse problem amounts to estimating the posterior probability density function of the velocity model using a global sampling algorithm. Markov chain Monte Carlo (MCMC) methods are known to produce samples of virtually any distribution. In such a Bayesian inversion, the total number of simulations we can afford is strongly tied to the computational cost of the forward model. Although fast algorithms have recently been developed for computing first-arrival traveltimes of seismic waves, a complete exploration of the posterior distribution of the velocity model is rarely feasible, especially when it is high dimensional and/or multimodal. In the latter case, the chain may even stay stuck in one of the modes. In order to improve the mixing properties of a classical single MCMC chain, we propose to make several Markov chains at different temperatures interact. This method can make efficient use of large CPU clusters without increasing the global computational cost with respect to classical MCMC and is therefore particularly suited for Bayesian inversion. The exchanges between the chains allow precise sampling of the high-probability zones of the model space while preventing the chains from getting stuck in a single probability maximum. This approach thus supplies a robust way to analyze the tomography imaging uncertainties. The interacting MCMC approach is illustrated on two synthetic examples of tomography of calibration shots such as encountered in induced microseismic studies. In the second application, a wavelet-based model parameterization is presented that significantly reduces the dimension of the problem, making the algorithm efficient even for a complex velocity model.
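A toy sketch of the interacting-chains idea (parallel tempering with a temperature ladder and adjacent-pair swaps) on a deliberately bimodal one-dimensional target; the ladder, proposal scale, and target are invented for illustration and stand in for the full tomographic posterior.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_post(x):
    # Bimodal toy posterior standing in for a multimodal tomography target.
    return np.logaddexp(-0.5 * ((x - 3) / 0.5) ** 2, -0.5 * ((x + 3) / 0.5) ** 2)

temps = [1.0, 2.0, 4.0, 8.0]          # temperature ladder
chains = [0.0] * len(temps)
samples = []

for it in range(20000):
    # Within-chain Metropolis step at each temperature (target pi^(1/T)).
    for i, T in enumerate(temps):
        prop = chains[i] + rng.normal(0.0, 1.0)
        if np.log(rng.random()) < (log_post(prop) - log_post(chains[i])) / T:
            chains[i] = prop
    # Propose a swap between a random adjacent pair of temperatures.
    i = rng.integers(len(temps) - 1)
    a = (1.0 / temps[i] - 1.0 / temps[i + 1]) * (log_post(chains[i + 1]) - log_post(chains[i]))
    if np.log(rng.random()) < a:
        chains[i], chains[i + 1] = chains[i + 1], chains[i]
    samples.append(chains[0])          # keep only the T = 1 chain

print(np.mean(np.array(samples) > 0))  # both modes visited -> close to 0.5 when mixing
```

A single chain at T = 1 with the same proposal would typically stay in one mode; the hot chains cross the valley and feed mode-to-mode moves down the ladder through swaps.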
An Empirical Bayes Approach to Spatial Analysis
NASA Technical Reports Server (NTRS)
Morris, C. N.; Kostal, H.
1983-01-01
Multi-channel LANDSAT data are collected in several passes over agricultural areas during the growing season. How empirical Bayes modeling can be used to develop crop identification and discrimination techniques that account for spatial correlation in such data is considered. The approach models the unobservable parameters and the data separately, hoping to take advantage of the fact that the bulk of spatial correlation lies in the parameter process. The problem is then framed in terms of estimating posterior probabilities of crop types for each spatial area. Some empirical Bayes spatial estimation methods are used to estimate the logits of these probabilities.
A new approach to counting measurements: Addressing the problems with ISO-11929
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klumpp, John Allan; Poudel, Deepesh; Miller, Guthrie
We present an alternative approach to making counting measurements of radioactivity which offers probabilistic interpretations of the measurements. Unlike the approach in the current international standard (ISO-11929), our approach, which uses an assumed prior probability distribution of the true amount in the sample, is able to answer the question of interest for most users of the standard: “what is the probability distribution of the true amount in the sample, given the data?” The final interpretation of the measurement requires information not necessarily available at the measurement stage. However, we provide an analytical formula for what we term the “measurement strength” that depends only on measurement-stage count quantities. Here, we show that, when the sources are rare, the posterior odds that the sample true value exceeds ε are the measurement strength times the prior odds, independently of ε, the prior odds, and the distribution of the calibration coefficient. We recommend that the measurement lab immediately follow up on unusually high samples using an “action threshold” on the measurement strength which is similar to the decision threshold recommended by the current standard. Finally, we further recommend that the measurement lab perform large background studies in order to characterize non-constancy of background, including possible time correlation of background.
A new approach to counting measurements: Addressing the problems with ISO-11929
Klumpp, John Allan; Poudel, Deepesh; Miller, Guthrie
2017-12-23
We present an alternative approach to making counting measurements of radioactivity which offers probabilistic interpretations of the measurements. Unlike the approach in the current international standard (ISO-11929), our approach, which uses an assumed prior probability distribution of the true amount in the sample, is able to answer the question of interest for most users of the standard: “what is the probability distribution of the true amount in the sample, given the data?” The final interpretation of the measurement requires information not necessarily available at the measurement stage. However, we provide an analytical formula for what we term the “measurement strength” that depends only on measurement-stage count quantities. Here, we show that, when the sources are rare, the posterior odds that the sample true value exceeds ε are the measurement strength times the prior odds, independently of ε, the prior odds, and the distribution of the calibration coefficient. We recommend that the measurement lab immediately follow up on unusually high samples using an “action threshold” on the measurement strength which is similar to the decision threshold recommended by the current standard. Finally, we further recommend that the measurement lab perform large background studies in order to characterize non-constancy of background, including possible time correlation of background.
Bayesian adaptive phase II screening design for combination trials
Cai, Chunyan; Yuan, Ying; Johnson, Valen E
2013-01-01
Background Trials of combination therapies for the treatment of cancer are playing an increasingly important role in the battle against this disease. To more efficiently handle the large number of combination therapies that must be tested, we propose a novel Bayesian phase II adaptive screening design to simultaneously select among possible treatment combinations involving multiple agents. Methods Our design is based on formulating the selection procedure as a Bayesian hypothesis testing problem in which the superiority of each treatment combination is equated to a single hypothesis. During the trial conduct, we use the current values of the posterior probabilities of all hypotheses to adaptively allocate patients to treatment combinations. Results Simulation studies show that the proposed design substantially outperforms the conventional multiarm balanced factorial trial design. The proposed design yields a significantly higher probability for selecting the best treatment while allocating substantially more patients to efficacious treatments. Limitations The proposed design is most appropriate for the trials combining multiple agents and screening out the efficacious combination to be further investigated. Conclusions The proposed Bayesian adaptive phase II screening design substantially outperformed the conventional complete factorial design. Our design allocates more patients to better treatments while providing higher power to identify the best treatment at the end of the trial. PMID:23359875
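The abstract's allocation rule can be illustrated with a simplified sketch: beta-binomial posteriors per arm and allocation probabilities proportional to the posterior probability that each arm is best. This is a Thompson-style stand-in, not the authors' exact hypothesis-based design, and all response rates are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n_arms = 4
successes = np.zeros(n_arms)
failures = np.zeros(n_arms)
true_p = np.array([0.2, 0.3, 0.35, 0.5])    # unknown response rates (invented)

for patient in range(200):
    # Monte Carlo estimate of P(arm k is best | data) under Beta(1,1) priors.
    draws = rng.beta(successes + 1, failures + 1, size=(2000, n_arms))
    p_best = np.bincount(np.argmax(draws, axis=1), minlength=n_arms) / 2000
    arm = rng.choice(n_arms, p=p_best)       # allocate by posterior probability
    y = rng.random() < true_p[arm]
    successes[arm] += y
    failures[arm] += 1 - y

print(p_best.round(2), successes + failures)  # allocation drifts toward arm 3
```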
A new approach to counting measurements: Addressing the problems with ISO-11929
NASA Astrophysics Data System (ADS)
Klumpp, John; Miller, Guthrie; Poudel, Deepesh
2018-06-01
We present an alternative approach to making counting measurements of radioactivity which offers probabilistic interpretations of the measurements. Unlike the approach in the current international standard (ISO-11929), our approach, which uses an assumed prior probability distribution of the true amount in the sample, is able to answer the question of interest for most users of the standard: "what is the probability distribution of the true amount in the sample, given the data?" The final interpretation of the measurement requires information not necessarily available at the measurement stage. However, we provide an analytical formula for what we term the "measurement strength" that depends only on measurement-stage count quantities. We show that, when the sources are rare, the posterior odds that the sample true value exceeds ε are the measurement strength times the prior odds, independently of ε, the prior odds, and the distribution of the calibration coefficient. We recommend that the measurement lab immediately follow up on unusually high samples using an "action threshold" on the measurement strength which is similar to the decision threshold recommended by the current standard. We further recommend that the measurement lab perform large background studies in order to characterize non-constancy of background, including possible time correlation of background.
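The quoted odds relation is easy to evaluate numerically. A worked example with invented numbers (not from the paper):

```python
# Posterior odds that the sample true value exceeds epsilon, following the
# odds form of Bayes' rule quoted above. The numbers are illustrative only.
prior_prob = 0.01                      # lab's prior that a sample is "hot"
prior_odds = prior_prob / (1 - prior_prob)
measurement_strength = 50.0            # from measurement-stage count quantities
posterior_odds = measurement_strength * prior_odds
posterior_prob = posterior_odds / (1 + posterior_odds)
print(f"posterior probability = {posterior_prob:.3f}")   # about 0.336
```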
XID+: Next generation XID development
NASA Astrophysics Data System (ADS)
Hurley, Peter
2017-04-01
XID+ is a prior-based source extraction tool which carries out photometry in the Herschel SPIRE (Spectral and Photometric Imaging Receiver) maps at the positions of known sources. It uses a probabilistic Bayesian framework that provides a natural way to include prior information, and uses the Bayesian inference tool Stan to obtain the full posterior probability distribution on flux estimates.
Evaluating the influential priority of the factors on insurance loss of public transit
Su, Yongmin; Chen, Xinqiang
2018-01-01
Understanding the correlation between influential factors and insurance losses is beneficial for insurers to accurately price and modify the bonus-malus system. Although there have been a certain number of achievements in insurance losses and claims modeling, limited efforts focus on exploring the relative role of accident characteristics in insurance losses. The primary objective of this study is to evaluate the influential priority of transit accident attributes, such as the time, location and type of accidents. Based on the dataset from the Washington State Transit Insurance Pool (WSTIP) in the USA, we implement several key algorithms to achieve the objectives. First, the K-means algorithm is used to cluster the insurance loss data into 6 intervals; second, a Grey Relational Analysis (GRA) model is applied to calculate grey relational grades of the influential factors in each interval; in addition, we implement a Naive Bayes model to compute the posterior probability of factor values falling in each interval. The results show that the time, location and type of accidents significantly influence the insurance loss in the first five intervals, but their grey relational grades show no significant difference. In the last interval, which represents the highest insurance loss, the grey relational grade of the time is significantly higher than that of the location and type of accidents. For each value of the time and location, the insurance loss most likely falls in the first and second intervals, which correspond to lower losses. However, for accidents between buses and non-motorized road users, the probability of the insurance loss falling in interval 6 tends to be highest. PMID:29298337
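A minimal single-feature Naive Bayes sketch of the posterior-probability step: given counts of accident types per K-means loss interval, Bayes' rule yields the posterior over intervals for each accident type. The counts are fabricated for illustration and do not reproduce the WSTIP results.

```python
import numpy as np

# Hypothetical counts: rows = accident type (bus-bus, bus-car, bus-nonmotorized),
# columns = insurance-loss interval 1..6 from the K-means step.
counts = np.array([[40, 25, 10, 5, 3, 1],
                   [60, 30, 12, 6, 2, 1],
                   [2,  3,  4,  5, 6, 30]])

prior = counts.sum(axis=0) / counts.sum()          # P(interval)
like = (counts + 1) / (counts + 1).sum(axis=0)     # P(type | interval), Laplace-smoothed

def posterior(accident_type):
    unnorm = like[accident_type] * prior           # Bayes' rule, single feature
    return unnorm / unnorm.sum()

print(posterior(2).round(3))  # bus vs non-motorized: mass concentrates on interval 6
```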
Propagules of arbuscular mycorrhizal fungi in a secondary dry forest of Oaxaca, Mexico.
Guadarrama, Patricia; Castillo-Argüero, Silvia; Ramos-Zapata, José A; Camargo-Ricalde, Sara L; Alvarez-Sánchez, Javier
2008-03-01
Plant cover loss due to changes in land use promotes a decrease in spore diversity of arbuscular mycorrhizal fungi (AMF), viable mycelium and, therefore, AMF colonization; this influences community diversity and, as a consequence, its recovery. To evaluate different AMF propagules, nine plots in a tropical dry forest with secondary vegetation were selected: 0, 1, 7, 10, 14, 18, 22, 25, and 27 years after abandonment in Nizanda, Oaxaca, Mexico. The secondary vegetation with different stages of development is a consequence of slash and burn agriculture, and posterior abandonment. Soil samples (six per plot) were collected, and the percentage of AMF field colonization, extraradical mycelium, viable spore density, infectivity and most probable number (MPN) of AMF propagules were quantified through a bioassay. Means for field colonization ranged between 40% and 70%, and mean total mycelium length was 15.7 ± 1.88 m g⁻¹ dry soil, with significant differences between plots; however, more than 40% of the extracted mycelium was not viable. Between 60 and 456 spores per 100 g of dry soil were recorded, but more than 64% showed some kind of damage. Infectivity values fluctuated between 20% and 50%, while MPN showed a mean value of 85.42 ± 44.17 propagules per 100 g dry soil. We conclude that secondary communities generated by elimination of vegetation for agricultural purposes in a dry forest in Nizanda do not show elimination of propagules, probably as a consequence of the low-input agriculture practices in this area, which may encourage natural regeneration.
Evaluating the influential priority of the factors on insurance loss of public transit.
Zhang, Wenhui; Su, Yongmin; Ke, Ruimin; Chen, Xinqiang
2018-01-01
Understanding the correlation between influential factors and insurance losses is beneficial for insurers to accurately price and modify the bonus-malus system. Although there have been a certain number of achievements in insurance losses and claims modeling, limited efforts focus on exploring the relative role of accident characteristics in insurance losses. The primary objective of this study is to evaluate the influential priority of transit accident attributes, such as the time, location and type of accidents. Based on the dataset from the Washington State Transit Insurance Pool (WSTIP) in the USA, we implement several key algorithms to achieve the objectives. First, the K-means algorithm is used to cluster the insurance loss data into 6 intervals; second, a Grey Relational Analysis (GRA) model is applied to calculate grey relational grades of the influential factors in each interval; in addition, we implement a Naive Bayes model to compute the posterior probability of factor values falling in each interval. The results show that the time, location and type of accidents significantly influence the insurance loss in the first five intervals, but their grey relational grades show no significant difference. In the last interval, which represents the highest insurance loss, the grey relational grade of the time is significantly higher than that of the location and type of accidents. For each value of the time and location, the insurance loss most likely falls in the first and second intervals, which correspond to lower losses. However, for accidents between buses and non-motorized road users, the probability of the insurance loss falling in interval 6 tends to be highest.
Posterior capsule opacification after implantation of a hydrogel intraocular lens
Hayashi, K; Hayashi, H
2004-01-01
Aim: To compare the degree of posterior capsule opacification (PCO) in eyes with a hydrophilic hydrogel intraocular lens (IOL) with that in eyes with a hydrophobic acrylic IOL. Methods: Ninety five patients underwent a hydrogel IOL implantation in one eye and an acrylic IOL implantation in the opposite eye. The PCO value of these patients was measured using the Scheimpflug videophotography system at 1, 6, 12, 18, and 24 months postoperatively. The rate of neodymium:YAG (Nd:YAG) laser posterior capsulotomy and visual acuity were also evaluated. Results: The mean PCO value in the hydrogel group increased significantly (p<0.0001), while that in the acrylic group did not show significant change. The PCO value in the hydrogel group was significantly greater than that in the acrylic group throughout the follow up period. Kaplan-Meier survival analysis determined that the Nd:YAG capsulotomy rate in the hydrogel group was significantly higher than that in the acrylic group (p<0.0001). Mean visual acuity in the hydrogel group decreased significantly with time (p<0.0001), and became significantly worse than that in the acrylic group at 18 and 24 months postoperatively. Conclusion: Posterior capsule opacification in eyes with a hydrophilic hydrogel IOL is significantly more extensive than that in eyes with a hydrophobic acrylic IOL, and results in a significant impairment of visual acuity. PMID:14736768
NASA Astrophysics Data System (ADS)
Butler, M. L.; Rainford, L.; Last, J.; Brennan, P. C.
2009-02-01
Introduction The American Association of Physicists in Medicine is currently standardizing the exposure index (EI) value. Recent studies have questioned whether the EI value offered by manufacturers is optimal. This work establishes optimum EIs for the antero-posterior (AP) projections of the pelvis and knee on a Carestream Health (Kodak) CR system and compares these with the manufacturer's recommended EI values from a patient dose and image quality perspective. Methodology Human cadavers were used to produce images of clinically relevant standards. Several exposures were taken to achieve various EI values, and the corresponding entrance surface doses (ESD) were measured using thermoluminescent dosimeters. Image quality was assessed by 5 experienced clinicians using anatomical criteria judged against a reference image. Visualization of image-specific common abnormalities was also analyzed to establish diagnostic efficacy. Results A rise in ESD for both examinations, consistent with increasing EI, was shown. Anatomic image quality was deemed to be acceptable at an EI of 1560 for the AP pelvis and 1590 for the AP knee. Relative to the manufacturer's recommended values, a significant reduction in ESD (p=0.02) of 38% and 33% for the pelvis and knee, respectively, was noted. Initial pathological analysis suggests that diagnostic efficacy at lower EI values may be projection-specific. Conclusion The data in this study emphasize the need for clinical centres to consider establishing their own EI guidelines, and not necessarily to rely on manufacturers' recommendations. Normal and abnormal images must be used in this process.
Okazaki, Ken; Takayama, Yukihisa; Osaki, Kanji; Matsuo, Yoshio; Mizu-Uchi, Hideki; Hamai, Satoshi; Honda, Hiroshi; Iwamoto, Yukihide
2015-10-01
Prediction of the risk of osteoarthritis in asymptomatic active patients with an isolated injury of the posterior cruciate ligament (PCL) is difficult. T1ρ magnetic resonance imaging (MRI) enables the quantification of the proteoglycan content in the articular cartilage. The purpose of this study was to evaluate subclinical cartilage degeneration in asymptomatic young athletes with chronic PCL deficiency using T1ρ MRI. Six athletes with chronic PCL deficiency (median age 17, range 14-36 years) and six subjects without any history of knee injury (median age 31.5, range 24-33 years) were recruited. Regions of interest were placed on the articular cartilage of the tibia and the distal and posterior areas of the femoral condyle, and T1ρ values were calculated. On stress radiographs, the mean side-to-side difference in posterior laxity was 9.8 mm. The T1ρ values at the posterior area of the lateral femoral condyle and the superficial layer of the distal area of the medial and lateral femoral condyle of the patients were significantly increased compared with those of the normal controls (p < 0.05). At the tibial plateau, the T1ρ values in both the medial and lateral compartments were significantly higher in patients compared with those in the normal controls (p < 0.05). T1ρ MRI detected unexpected cartilage degeneration in the well-functioning PCL-deficient knees of young athletes. One should be alert to the possibility of subclinical cartilage degeneration even in asymptomatic patients who show no degenerative changes on plain radiographs or conventional MRI. Level of evidence: IV.
Testing Small Variance Priors Using Prior-Posterior Predictive p Values.
Hoijtink, Herbert; van de Schoot, Rens
2017-04-03
Muthén and Asparouhov (2012) propose to evaluate model fit in structural equation models based on approximate (using small variance priors) instead of exact equality of (combinations of) parameters to zero. This is an important development that adequately addresses Cohen's (1994) The Earth is Round (p < .05), which stresses that point null-hypotheses are so precise that small and irrelevant differences from the null-hypothesis may lead to their rejection. It is tempting to evaluate small variance priors using readily available approaches like the posterior predictive p value and the DIC. However, as will be shown, both are not suited for the evaluation of models based on small variance priors. In this article, a well behaving alternative, the prior-posterior predictive p value, will be introduced. It will be shown that it is consistent, the distributions under the null and alternative hypotheses will be elaborated, and it will be applied to testing whether the difference between 2 means and the size of a correlation are relevantly different from zero. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
[Isolated severe neurologic disorders in post-partum: posterior reversible encephalopathy syndrome].
Wernet, A; Benayoun, L; Yver, C; Bruno, O; Mantz, J
2007-01-01
Immediately after Caesarean section for twin pregnancy and feto-pelvic disproportion, a woman presented with severe headaches and arterial hypertension, then blurred vision, then generalised seizures. There was no oedematous syndrome, proteinuria was negative, ASAT was 1.5 N and the platelet count was 120,000/mm(3). Cerebral CT scan was normal. Posterior reversible encephalopathy syndrome (PRES) was diagnosed on MRI. A second MRI performed at day 9 showed complete regression of the cerebral lesions, while the patient was taking anti-hypertensive and antiepileptic drugs. PRES has to be evoked in cases of post-partum central neurological symptoms, even in the absence of classical signs of pre-eclampsia, such as proteinuria. PRES and eclampsia probably share common pathophysiological pathways. Their management and prognosis seem identical.
Prosthetic restoration in the single-tooth gap: patient preferences and analysis of the WTP index.
Augusti, Davide; Augusti, Gabriele; Re, Dino
2014-11-01
The objective of this study was to evaluate the preference of a patient population, according to the index of willingness to pay (WTP), between two treatments to restore a single-tooth gap: the implant-supported crown (ISC) and the 3-unit fixed partial denture prosthesis (FPDP) on natural teeth. Willingness-to-pay values were recorded for 107 subjects by asking the WTP from a starting bid of €2000, modifiable through monetary increases or decreases (€100). Data were collected through an individually delivered questionnaire. We report the characteristics of the population and the choices made, the median WTP values and their associations with socio-demographic parameters (Mann-Whitney and Kruskal-Wallis tests), correlations between variables (chi-square test in contingency tables), and the significant predictors of WTP values obtained in a multiple linear regression model. Overall, 64% of patients expressed a preference for the ISC, while the remaining 36% of the population chose the FPDP. The current therapeutic choice and those made in the past were generally in agreement (>70% of cases, P = 0.0001); patients generally chose the same method of rehabilitation for the anterior and posterior areas (101 of 107 cases, 94.4%). The median WTP values for the ISC were €3000 and €2500 in the anterior and posterior areas, respectively. The smallest amount of money was allocated for the FPDP in the posterior region (median of €1500). The "importance of oral care" for the patient was a significant predictor, in the regression model analysis, for the estimation of both anterior (P = 0.0003) and posterior (P < 0.0001) WTP values. The "previous therapy" variable reached significance in the anterior analysis (P = 0.0367) and was just short of significance in the posterior analysis (P = 0.0511). Within the limitations of this study, most of the population surveyed (64%) indicated the ISC as the therapeutic solution for the replacement of a single missing tooth, showing a higher WTP index in the anterior area. Among the investigated socio-demographic variables, the importance assigned by the patient to oral care appeared to influence WTP values of the rehabilitation, regardless of the location of the single gap in the mouth. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Posterior Corneal Characteristics of Cataract Patients with High Myopia
Jing, Qinghe; Tang, Yating; Qian, Dongjin; Lu, Yi; Jiang, Yongxiang
2016-01-01
Purpose To evaluate the characteristics of the posterior corneal surface in patients with high myopia before cataract surgery. Methods We performed a cross-sectional study at the Eye and ENT Hospital of Fudan University, Shanghai, China. Corneal astigmatism and axial length were measured with a rotating Scheimpflug camera (Pentacam) and partial coherence interferometry (IOLMaster) in a high-myopia study group of 167 eyes (axial length ≥ 26 mm) and a control group of 150 eyes (axial length > 20 mm and < 25 mm). Results Total corneal astigmatism and anterior corneal astigmatism values were higher in the high-myopia group than in the control group. There was no significant difference in posterior corneal astigmatism between the high-myopia study group and the control group. In the study group, the mean posterior corneal astigmatism (range 0 – −0.9 diopters) was –0.29 diopters (D) ± 0.17 standard deviations (SD). The steep corneal meridian was aligned vertically (60°–120°) in 87.43% of eyes for the posterior corneal surface, and did not change with increasing age. There was a significant correlation (r = 0.235, p = 0.002) between posterior corneal astigmatism and anterior corneal astigmatism, especially when the anterior corneal surface showed with-the-rule (WTR) astigmatism (r = 0.452, p = 0.000). There was a weak negative correlation between posterior corneal astigmatism and age (r = –0.15, p = 0.053) in the high-myopia group. Compared with total corneal astigmatism values, the anterior corneal measurements alone overestimated WTR astigmatism by a mean of 0.27 ± 0.18 D in 68.75% of eyes, underestimated against-the-rule (ATR) astigmatism by a mean of 0.41 ± 0.28 D in 88.89% of eyes, and underestimated oblique astigmatism by a mean of 0.24 ± 0.13 D in 63.64% of eyes. Conclusions Posterior corneal astigmatism decreased with age and remained as ATR astigmatism in most cases of high myopia. There was a significant correlation between posterior corneal astigmatism and anterior corneal astigmatism when anterior corneal astigmatism was WTR. If posterior corneal astigmatism is not accounted for when selecting toric intraocular lenses for high-myopia patients, the use of anterior corneal astigmatism measurements alone will lead to overestimation of WTR astigmatism and underestimation of ATR and oblique astigmatism. PMID:27603713
Posterior Corneal Characteristics of Cataract Patients with High Myopia.
Jing, Qinghe; Tang, Yating; Qian, Dongjin; Lu, Yi; Jiang, Yongxiang
2016-01-01
To evaluate the characteristics of the posterior corneal surface in patients with high myopia before cataract surgery. We performed a cross-sectional study at the Eye and ENT Hospital of Fudan University, Shanghai, China. Corneal astigmatism and axial length were measured with a rotating Scheimpflug camera (Pentacam) and partial coherence interferometry (IOLMaster) in a high-myopia study group of 167 eyes (axial length ≥ 26 mm) and a control group of 150 eyes (axial length > 20 mm and < 25 mm). Total corneal astigmatism and anterior corneal astigmatism values were higher in the high-myopia group than in the control group. There was no significant difference in posterior corneal astigmatism between the high-myopia study group and the control group. In the study group, the mean posterior corneal astigmatism (range 0 - -0.9 diopters) was -0.29 diopters (D) ± 0.17 standard deviations (SD). The steep corneal meridian was aligned vertically (60°-120°) in 87.43% of eyes for the posterior corneal surface, and did not change with increasing age. There was a significant correlation (r = 0.235, p = 0.002) between posterior corneal astigmatism and anterior corneal astigmatism, especially when the anterior corneal surface showed with-the-rule (WTR) astigmatism (r = 0.452, p = 0.000). There was a weak negative correlation between posterior corneal astigmatism and age (r = -0.15, p = 0.053) in the high-myopia group. Compared with total corneal astigmatism values, the anterior corneal measurements alone overestimated WTR astigmatism by a mean of 0.27 ± 0.18 D in 68.75% of eyes, underestimated against-the-rule (ATR) astigmatism by a mean of 0.41 ± 0.28 D in 88.89% of eyes, and underestimated oblique astigmatism by a mean of 0.24 ± 0.13 D in 63.64% of eyes. Posterior corneal astigmatism decreased with age and remained as ATR astigmatism in most cases of high myopia. There was a significant correlation between posterior corneal astigmatism and anterior corneal astigmatism when anterior corneal astigmatism was WTR. If posterior corneal astigmatism is not accounted for when selecting toric intraocular lenses for high-myopia patients, the use of anterior corneal astigmatism measurements alone will lead to overestimation of WTR astigmatism and underestimation of ATR and oblique astigmatism.
The DNA database search controversy revisited: bridging the Bayesian-frequentist gap.
Storvik, Geir; Egeland, Thore
2007-09-01
Two different quantities have been suggested for quantification of evidence in cases where a suspect is found by a search through a database of DNA profiles. The likelihood ratio, typically motivated from a Bayesian setting, is preferred by most experts in the field. The so-called np rule has been motivated through frequentist arguments and has been advocated by the American National Research Council and Stockmarr (1999, Biometrics 55, 671-677). The two quantities differ substantially and have given rise to the DNA database search controversy. Although several authors have criticized the different approaches, a full explanation of why these differences appear is still lacking. In this article we show that a P-value in a frequentist hypothesis setting is approximately equal to the result of the np rule. We argue, however, that a more reasonable procedure in this case is to use conditional testing, in which case a P-value directly related to posterior probabilities and the likelihood ratio is obtained. This way of viewing the problem bridges the gap between the Bayesian and frequentist approaches. At the same time it indicates that the np rule should not be used to quantify evidence.
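A toy numerical contrast between the two quantities, with invented figures (not the article's):

```python
# Toy contrast between the np rule and the Bayesian route discussed above.
# All numbers are illustrative, not taken from the article.
p = 1e-9                       # random-match probability of the DNA profile
n = 1_000_000                  # size of the searched database
print("np rule:", n * p)       # frequentist figure of merit: 1e-3

# Bayesian route: posterior odds = likelihood ratio x prior odds.
lr = 1 / p                     # likelihood ratio for a matching profile
prior_odds = 1 / 5_000_000     # suspect drawn from a notional population
posterior_odds = lr * prior_odds
print("posterior probability:", posterior_odds / (1 + posterior_odds))  # ~0.995
```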
Cole, Shelley A; Voruganti, V Saroja; Cai, Guowen; Haack, Karin; Kent, Jack W; Blangero, John; Comuzzie, Anthony G; McPherson, John D; Gibbs, Richard A
2010-01-01
Background: Melanocortin-4-receptor (MC4R) haploinsufficiency is the most common form of monogenic obesity; however, the frequency of MC4R variants and their functional effects in general populations remain uncertain. Objective: The aim was to identify and characterize the effects of MC4R variants in Hispanic children. Design: MC4R was resequenced in 376 parents, and the identified single nucleotide polymorphisms (SNPs) were genotyped in 613 parents and 1016 children from the Viva la Familia cohort. Measured genotype analysis (MGA) tested associations between SNPs and phenotypes. Bayesian quantitative trait nucleotide (BQTN) analysis was used to infer the most likely functional polymorphisms influencing obesity-related traits. Results: Seven rare SNPs in coding and 18 SNPs in flanking regions of MC4R were identified. MGA showed suggestive associations between MC4R variants and body size, adiposity, glucose, insulin, leptin, ghrelin, energy expenditure, physical activity, and food intake. BQTN analysis identified SNP 1704 in a predicted micro-RNA target sequence in the downstream flanking region of MC4R as a strong, probable functional variant influencing total, sedentary, and moderate activities with posterior probabilities of 1.0. SNP 2132 was identified as a variant with a high probability (1.0) of exerting a functional effect on total energy expenditure and sleeping metabolic rate. SNP rs34114122 was selected as having likely functional effects on the appetite hormone ghrelin, with a posterior probability of 0.81. Conclusion: This comprehensive investigation provides strong evidence that MC4R genetic variants are likely to play a functional role in the regulation of weight, not only through energy intake but through energy expenditure. PMID:19889825
Onai, Takayuki; Lin, Hsiu-Chin; Schubert, Michael; Koop, Demian; Osborne, Peter W; Alvarez, Susana; Alvarez, Rosana; Holland, Nicholas D; Holland, Linda Z
2009-08-15
A role for Wnt/beta-catenin signaling in axial patterning has been demonstrated in animals as basal as cnidarians, while roles in axial patterning for retinoic acid (RA) probably evolved in the deuterostomes and may be chordate-specific. In vertebrates, these two pathways interact both directly and indirectly. To investigate the evolutionary origins of interactions between these two pathways, we manipulated Wnt/beta-catenin and RA signaling in the basal chordate amphioxus during the gastrula stage, which is the RA-sensitive period for anterior/posterior (A/P) patterning. The results show that Wnt/beta-catenin and RA signaling have distinctly different roles in patterning the A/P axis of the amphioxus gastrula. Wnt/beta-catenin specifies the identity of the ends of the embryo (high Wnt = posterior; low Wnt = anterior) but not intervening positions. Thus, upregulation of Wnt/beta-catenin signaling induces ectopic expression of posterior markers at the anterior tip of the embryo. In contrast, RA specifies position along the A/P axis, but not the identity of the ends of the embryo: increased RA signaling strongly affects the domains of Hox expression along the A/P axis but has little or no effect on the expression of either anterior or posterior markers. Although the two pathways may both influence such things as specification of neuronal identity, interactions between them in A/P patterning appear to be minimal.
O'Reilly, Joseph E; Donoghue, Philip C J
2018-03-01
Consensus trees are required to summarize trees obtained through MCMC sampling of a posterior distribution, providing an overview of the distribution of estimated parameters such as topology, branch lengths, and divergence times. Numerous consensus tree construction methods are available, each presenting a different interpretation of the tree sample. The rise of morphological clock and sampled-ancestor methods of divergence time estimation, in which times and topology are coestimated, has increased the popularity of the maximum clade credibility (MCC) consensus tree method. The MCC method assumes that the sampled, fully resolved topology with the highest clade credibility is an adequate summary of the most probable clades, with parameter estimates from compatible sampled trees used to obtain the marginal distributions of parameters such as clade ages and branch lengths. Using both simulated and empirical data, we demonstrate that MCC trees, and trees constructed using the similar maximum a posteriori (MAP) method, often include poorly supported and incorrect clades when summarizing diffuse posterior samples of trees. We demonstrate that the paucity of information in morphological data sets contributes to the inability of MCC and MAP trees to accurately summarize the posterior distribution. Conversely, majority-rule consensus (MRC) trees represent a lower proportion of incorrect nodes when summarizing the same posterior samples of trees. Thus, we advocate the use of MRC trees, in place of MCC or MAP trees, in attempts to summarize the results of Bayesian phylogenetic analyses of morphological data.
O’Reilly, Joseph E; Donoghue, Philip C J
2018-01-01
Consensus trees are required to summarize trees obtained through MCMC sampling of a posterior distribution, providing an overview of the distribution of estimated parameters such as topology, branch lengths, and divergence times. Numerous consensus tree construction methods are available, each presenting a different interpretation of the tree sample. The rise of morphological clock and sampled-ancestor methods of divergence time estimation, in which times and topology are coestimated, has increased the popularity of the maximum clade credibility (MCC) consensus tree method. The MCC method assumes that the sampled, fully resolved topology with the highest clade credibility is an adequate summary of the most probable clades, with parameter estimates from compatible sampled trees used to obtain the marginal distributions of parameters such as clade ages and branch lengths. Using both simulated and empirical data, we demonstrate that MCC trees, and trees constructed using the similar maximum a posteriori (MAP) method, often include poorly supported and incorrect clades when summarizing diffuse posterior samples of trees. We demonstrate that the paucity of information in morphological data sets contributes to the inability of MCC and MAP trees to accurately summarize the posterior distribution. Conversely, majority-rule consensus (MRC) trees represent a lower proportion of incorrect nodes when summarizing the same posterior samples of trees. Thus, we advocate the use of MRC trees, in place of MCC or MAP trees, in attempts to summarize the results of Bayesian phylogenetic analyses of morphological data. PMID:29106675
NASA Astrophysics Data System (ADS)
Petra, N.; Alexanderian, A.; Stadler, G.; Ghattas, O.
2015-12-01
We address the problem of optimal experimental design (OED) for Bayesian nonlinear inverse problems governed by partial differential equations (PDEs). The inverse problem seeks to infer a parameter field (e.g., the log permeability field in a porous medium flow model problem) from synthetic observations at a set of sensor locations and from the governing PDEs. The goal of the OED problem is to find an optimal placement of sensors so as to minimize the uncertainty in the inferred parameter field. We formulate the OED objective function by generalizing the classical A-optimal experimental design criterion using the expected value of the trace of the posterior covariance. This expected value is computed through sample averaging over the set of likely experimental data. Due to the infinite-dimensional character of the parameter field, we seek an optimization method that solves the OED problem at a cost (measured in the number of forward PDE solves) that is independent of both the parameter and the sensor dimension. To facilitate this goal, we construct a Gaussian approximation to the posterior at the maximum a posteriori probability (MAP) point, and use the resulting covariance operator to define the OED objective function. We use randomized trace estimation to compute the trace of this covariance operator. The resulting OED problem includes as constraints the system of PDEs characterizing the MAP point, and the PDEs describing the action of the covariance (of the Gaussian approximation to the posterior) to vectors. We control the sparsity of the sensor configurations using sparsifying penalty functions, and solve the resulting penalized bilevel optimization problem via an interior-point quasi-Newton method, where gradient information is computed via adjoints. We elaborate our OED method for the problem of determining the optimal sensor configuration to best infer the log permeability field in a porous medium flow problem. Numerical results show that the number of PDE solves required for the evaluation of the OED objective function and its gradient is essentially independent of both the parameter dimension and the sensor dimension (i.e., the number of candidate sensor locations). The number of quasi-Newton iterations for computing an OED also exhibits the same dimension invariance properties.
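The randomized trace estimation step can be sketched with Hutchinson's estimator. Here the covariance operator is a small explicit SPD matrix purely for illustration; in the OED setting described above, its action on a vector would involve PDE solves and the matrix would never be formed.

```python
import numpy as np

rng = np.random.default_rng(0)

def hutchinson_trace(apply_A, dim, n_probe=100):
    """Randomized trace estimate tr(A) ~ mean_i z_i^T A z_i, Rademacher z_i."""
    est = 0.0
    for _ in range(n_probe):
        z = rng.choice([-1.0, 1.0], size=dim)
        est += z @ apply_A(z)
    return est / n_probe

# Stand-in for the posterior covariance operator: an explicit SPD matrix here.
dim = 200
B = rng.normal(size=(dim, dim))
A = B @ B.T / dim + np.eye(dim)
print(hutchinson_trace(lambda z: A @ z, dim), np.trace(A))  # estimate vs truth
```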
A probabilistic model framework for evaluating year-to-year variation in crop productivity
NASA Astrophysics Data System (ADS)
Yokozawa, M.; Iizumi, T.; Tao, F.
2008-12-01
Most models describing the relation between crop productivity and weather conditions have so far focused on mean changes of crop yield. For keeping a stable food supply against abnormal weather as well as climate change, evaluating the year-to-year variations in crop productivity rather than the mean changes is more essential. We here propose a new framework of probabilistic model based on Bayesian inference and Monte Carlo simulation. As an example, we first introduce a model of paddy rice production in Japan, called PRYSBI (Process-based Regional rice Yield Simulator with Bayesian Inference; Iizumi et al., 2008). The model structure is the same as that of SIMRIW, which was developed and is used widely in Japan. The model includes three sub-models describing phenological development, biomass accumulation and maturing of the rice crop. These processes are formulated to include the response of the rice plant to weather conditions. The model was originally developed to predict rice growth and yield at the plot scale. We applied it to evaluate large-scale rice production while keeping the same model structure. Alternatively, we treated the parameters as stochastic variables. In order to let the model reproduce actual yields at the larger scale, model parameters were determined based on agricultural statistical data of each prefecture of Japan together with weather data averaged over the region. The posterior probability distribution functions (PDFs) of the parameters included in the model were obtained using Bayesian inference. An MCMC (Markov Chain Monte Carlo) algorithm was used to numerically solve Bayes' theorem. For evaluating the year-to-year changes in rice growth/yield under this framework, we first iterate simulations with sets of parameter values sampled from the estimated posterior PDF of each parameter and then take the ensemble mean weighted with the posterior PDFs. We will also present another example for maize productivity in China. The framework proposed here also provides information on uncertainties, as well as on the possibilities and limitations of future improvements in crop models.
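A minimal sketch of the inference loop: random-walk Metropolis over one crop-model parameter, followed by an ensemble prediction averaged over posterior draws. The "crop model", data, and prior below are invented stand-ins, far simpler than PRYSBI.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a crop model: yield as a function of mean temperature
# with one uncertain parameter theta (all values hypothetical).
temps = np.array([24.0, 25.5, 23.2, 26.1, 24.8])
yields = np.array([5.1, 5.6, 4.8, 5.9, 5.3])           # observed yields, t/ha

def model(theta, t):
    return theta * np.log(t)

def log_post(theta):
    if not (0.0 < theta < 5.0):                        # flat prior on (0, 5)
        return -np.inf
    resid = yields - model(theta, temps)
    return -0.5 * np.sum((resid / 0.2) ** 2)           # Gaussian likelihood

# Random-walk Metropolis: the numerical route to the posterior PDF.
theta, chain = 1.5, []
for it in range(20000):
    prop = theta + rng.normal(0, 0.05)
    if np.log(rng.random()) < log_post(prop) - log_post(theta):
        theta = prop
    chain.append(theta)
post = np.array(chain[5000:])                          # discard burn-in

# Ensemble prediction for a new year: mean over posterior draws.
print(np.mean([model(th, 25.0) for th in post]))
```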
NASA Astrophysics Data System (ADS)
Pasyanos, Michael E.; Franz, Gregory A.; Ramirez, Abelardo L.
2006-03-01
In an effort to build seismic models that are the most consistent with multiple data sets we have applied a new probabilistic inverse technique. This method uses a Markov chain Monte Carlo (MCMC) algorithm to sample models from a prior distribution and test them against multiple data types to generate a posterior distribution. While computationally expensive, this approach has several advantages over deterministic models, notably the seamless reconciliation of different data types that constrain the model, the proper handling of both data and model uncertainties, and the ability to easily incorporate a variety of prior information, all in a straightforward, natural fashion. A real advantage of the technique is that it provides a more complete picture of the solution space. By mapping out the posterior probability density function, we can avoid simplistic assumptions about the model space and allow alternative solutions to be identified, compared, and ranked. Here we use this method to determine the crust and upper mantle structure of the Yellow Sea and Korean Peninsula region. The model is parameterized as a series of seven layers in a regular latitude-longitude grid, each of which is characterized by thickness and seismic parameters (Vp, Vs, and density). We use surface wave dispersion and body wave traveltime data to drive the model. We find that when properly tuned (i.e., the Markov chains have had adequate time to fully sample the model space and the inversion has converged), the technique behaves as expected. The posterior model reflects the prior information at the edge of the model where there is little or no data to constrain adjustments, but the range of acceptable models is significantly reduced in data-rich regions, producing values of sediment thickness, crustal thickness, and upper mantle velocities consistent with expectations based on knowledge of the regional tectonic setting.
NASA Astrophysics Data System (ADS)
Linde, N.; Vrugt, J. A.
2009-04-01
Geophysical models are increasingly used in hydrological simulations and inversions, where they are typically treated as an artificial data source with known uncorrelated "data errors". The model appraisal problem in classical deterministic linear and non-linear inversion approaches based on linearization is often addressed by calculating model resolution and model covariance matrices. These measures offer only a limited potential to assign a more appropriate "data covariance matrix" for future hydrological applications, simply because the regularization operators used to construct a stable inverse solution bear a strong imprint on such estimates and because the non-linearity of the geophysical inverse problem is not explored. We present a parallelized Markov Chain Monte Carlo (MCMC) scheme to efficiently derive the posterior spatially distributed radar slowness and water content between boreholes given first-arrival traveltimes. This method is called DiffeRential Evolution Adaptive Metropolis (DREAM_ZS) with snooker updater and sampling from past states. Our inverse scheme does not impose any smoothness on the final solution, and uses uniform prior ranges of the parameters. The posterior distribution of radar slowness is converted into spatially distributed soil moisture values using a petrophysical relationship. To benchmark the performance of DREAM_ZS, we first apply our inverse method to a synthetic two-dimensional infiltration experiment using 9421 traveltimes contaminated with Gaussian errors and 80 different model parameters, corresponding to a model discretization of 0.3 m × 0.3 m. After this, the method is applied to field data acquired in the vadose zone during snowmelt. This work demonstrates that fully non-linear stochastic inversion can be applied with few limiting assumptions to a range of common two-dimensional tomographic geophysical problems. The main advantage of DREAM_ZS is that it provides a full view of the posterior distribution of spatially distributed soil moisture, which is key to appropriately treat geophysical parameter uncertainty and infer hydrologic models.
Bayesian inference based on stationary Fokker-Planck sampling.
Berrones, Arturo
2010-06-01
A novel formalism for Bayesian learning in the context of complex inference models is proposed. The method is based on the use of the stationary Fokker-Planck (SFP) approach to sample from the posterior density. Stationary Fokker-Planck sampling generalizes the Gibbs sampler algorithm for arbitrary and unknown conditional densities. By the SFP procedure, approximate analytical expressions for the conditionals and marginals of the posterior can be constructed. At each stage of SFP, the approximate conditionals are used to define a Gibbs sampling process, which is convergent to the full joint posterior. Through the analytical marginals, efficient learning methods in the context of artificial neural networks are outlined. Offline and incremental Bayesian inference and maximum likelihood estimation from the posterior are performed in classification and regression examples. A comparison of SFP with other Monte Carlo strategies in the general problem of sampling from arbitrary densities is also presented. It is shown that SFP is able to jump large low-probability regions without the need of a careful tuning of any step-size parameter. In fact, the SFP method requires only a small set of meaningful parameters that can be selected following clear, problem-independent guidelines. The computation cost of SFP, measured in terms of loss function evaluations, grows linearly with the given model's dimension.
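For reference, here is the Gibbs sampler that SFP generalizes, on a bivariate Gaussian where the full conditionals happen to be known in closed form; SFP's contribution is precisely to construct approximate conditionals when they are not available. A minimal sketch, not the SFP algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(0)
rho = 0.8                     # target: zero-mean bivariate Gaussian, corr rho
x, y = 0.0, 0.0
samples = []

for it in range(50000):
    # Full conditionals are Gaussian here; SFP builds approximate
    # conditionals when they are *not* known in closed form.
    x = rng.normal(rho * y, np.sqrt(1 - rho ** 2))
    y = rng.normal(rho * x, np.sqrt(1 - rho ** 2))
    samples.append((x, y))

s = np.array(samples[1000:])
print(np.corrcoef(s.T)[0, 1])   # close to 0.8
```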
Forecasts of non-Gaussian parameter spaces using Box-Cox transformations
NASA Astrophysics Data System (ADS)
Joachimi, B.; Taylor, A. N.
2011-09-01
Forecasts of statistical constraints on model parameters using the Fisher matrix abound in many fields of astrophysics. The Fisher matrix formalism involves the assumption of Gaussianity in parameter space and hence fails to predict complex features of posterior probability distributions. Combining the standard Fisher matrix with Box-Cox transformations, we propose a novel method that accurately predicts arbitrary posterior shapes. The Box-Cox transformations are applied to parameter space to render it approximately multivariate Gaussian, performing the Fisher matrix calculation on the transformed parameters. We demonstrate that, after the Box-Cox parameters have been determined from an initial likelihood evaluation, the method correctly predicts changes in the posterior when varying various parameters of the experimental setup and the data analysis, with marginally higher computational cost than a standard Fisher matrix calculation. We apply the Box-Cox-Fisher formalism to forecast cosmological parameter constraints by future weak gravitational lensing surveys. The characteristic non-linear degeneracy between matter density parameter and normalization of matter density fluctuations is reproduced for several cases, and the capabilities of breaking this degeneracy by weak-lensing three-point statistics is investigated. Possible applications of Box-Cox transformations of posterior distributions are discussed, including the prospects for performing statistical data analysis steps in the transformed Gaussianized parameter space.
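The Gaussianization step can be illustrated directly with SciPy's Box-Cox transform on a skewed one-dimensional sample; the sample is synthetic and the example only shows the transform-then-Gaussian-summary idea, not the full Fisher-matrix machinery of the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Skewed 1-D "posterior" sample standing in for a non-Gaussian parameter.
x = rng.gamma(shape=2.0, scale=1.0, size=100000)

# Box-Cox: find the lambda that makes the transformed variable most Gaussian.
xt, lam = stats.boxcox(x)
print(lam, stats.skew(x), stats.skew(xt))   # skewness drops toward 0

# A Gaussian (Fisher-matrix-like) description is now adequate in the
# transformed variable; intervals map back through the inverse transform.
mu, sig = xt.mean(), xt.std()
lo, hi = mu - 2 * sig, mu + 2 * sig
inv = lambda y: (lam * y + 1) ** (1 / lam) if lam != 0 else np.exp(y)
print(inv(lo), inv(hi))                     # asymmetric interval in x-space
```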
Mollet, Pierre; Kery, Marc; Gardner, Beth; Pasinelli, Gilberto; Royle, Andy
2015-01-01
We conducted a survey of an endangered and cryptic forest grouse, the capercaillie Tetrao urogallus, based on droppings collected on two sampling occasions in eight forest fragments in central Switzerland in early spring 2009. We used genetic analyses to sex and individually identify birds. We estimated sex-dependent detection probabilities and population size using a modern spatial capture-recapture (SCR) model for the data from pooled surveys. A total of 127 capercaillie genotypes were identified (77 males, 46 females, and 4 of unknown sex). The SCR model yielded a total population size estimate (posterior mean) of 137.3 capercaillies (posterior sd 4.2, 95% CRI 130–147). The observed sex ratio was skewed towards males (0.63). The posterior mean of the sex ratio under the SCR model was 0.58 (posterior sd 0.02, 95% CRI 0.54–0.61), suggesting a male-biased sex ratio in our study area. A subsampling simulation study indicated that a reduced sampling effort representing 75% of the actual detections would still yield practically acceptable estimates of total size and sex ratio in our population. Hence, field work and financial effort could be reduced without compromising accuracy when the SCR model is used to estimate key population parameters of cryptic species.
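For comparison with the SCR estimate, the raw genotype counts give a simple conjugate posterior for the sex ratio. This naive binomial treatment ignores sex-dependent detection, which is why it sits near the observed ratio (0.63) rather than the model-based 0.58.

```python
from scipy import stats

# Posterior for the sex ratio (proportion of males) from the 77 male and
# 46 female genotypes reported above, under a flat Beta(1, 1) prior.
males, females = 77, 46
post = stats.beta(1 + males, 1 + females)
print(post.mean())                  # about 0.624
print(post.ppf([0.025, 0.975]))     # central 95% interval, roughly (0.54, 0.71)
```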
Dimension-independent likelihood-informed MCMC
Cui, Tiangang; Law, Kody J. H.; Marzouk, Youssef M.
2015-10-08
Many Bayesian inference problems require exploring the posterior distribution of high-dimensional parameters that represent the discretization of an underlying function. Our work introduces a family of Markov chain Monte Carlo (MCMC) samplers that can adapt to the particular structure of a posterior distribution over functions. There are two distinct lines of research that intersect in the methods we develop here. First, we introduce a general class of operator-weighted proposal distributions that are well defined on function space, such that the performance of the resulting MCMC samplers is independent of the discretization of the function. Second, by exploiting local Hessian information and any associated low-dimensional structure in the change from prior to posterior distributions, we develop an inhomogeneous discretization scheme for the Langevin stochastic differential equation that yields operator-weighted proposals adapted to the non-Gaussian structure of the posterior. The resulting dimension-independent and likelihood-informed (DILI) MCMC samplers may be useful for a large class of high-dimensional problems where the target probability measure has a density with respect to a Gaussian reference measure. Finally, we use two nonlinear inverse problems in order to demonstrate the efficiency of these DILI samplers: an elliptic PDE coefficient inverse problem and path reconstruction in a conditioned diffusion.
Bayesian Travel Time Inversion adopting Gaussian Process Regression
NASA Astrophysics Data System (ADS)
Mauerberger, S.; Holschneider, M.
2017-12-01
A major application in seismology is the determination of seismic velocity models. Travel time measurements place an integral constraint on the velocity between source and receiver. We provide insight into travel time inversion from a correlation-based Bayesian point of view, adopting the concept of Gaussian process regression to estimate a velocity model. The non-linear travel time integral is approximated by a first-order Taylor expansion. A heuristic covariance describes correlations among the observations and the a priori model. This approach enables us to assess a proxy of the Bayesian posterior distribution at ordinary computational cost: neither multi-dimensional numerical integration nor extensive sampling is necessary. Instead of stacking the data, we suggest building the posterior distribution progressively; incorporating only a single observation at a time compensates for the shortcomings of the linearization. As a result, the most probable model is given by the posterior mean, whereas uncertainties are described by the posterior covariance. As a proof of concept, a purely synthetic 1-D model is addressed: a single source accompanied by multiple receivers is considered on top of a model comprising a discontinuity. We consider travel times of both phases - the direct and the reflected wave - corrupted by noise. The regions left and right of the interface are assumed independent, and the squared exponential kernel serves as covariance.
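A hedged sketch of the progressive conditioning described above: a squared exponential GP prior over a 1-D slowness model is updated one linearized travel time at a time. The grid, ray geometry, and noise level are invented for illustration, not taken from the study.

```python
# Sequential GP update for linearized travel times t ~ g.s, where s is the
# slowness in each cell and g holds the ray length per cell. Each datum is
# assimilated on its own, progressively building the posterior.
import numpy as np

x = np.linspace(0.0, 10.0, 100)                       # cell centres, 1-D model
K = np.exp(-0.5 * (x[:, None] - x[None, :])**2 / 1.0) # squared exponential kernel
m = np.full_like(x, 0.25)                             # prior mean slowness (s/km)
sigma = 0.05                                          # travel time noise std

def assimilate(m, K, g, t_obs, sigma):
    """Condition the GP on a single travel time observation."""
    s = g @ K @ g + sigma**2          # scalar predictive variance
    gain = K @ g / s                  # Kalman-style gain vector
    m = m + gain * (t_obs - g @ m)
    K = K - np.outer(gain, g @ K)
    return m, K

# One ray crossing the first half of the model, observed travel time 1.4 s.
g = np.where(x < 5.0, 10.0 / len(x), 0.0)
m, K = assimilate(m, K, g, 1.4, sigma)
print(m.max(), K.diagonal().min())    # posterior mean rose, variance shrank
```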
Bobb, Jennifer F; Dominici, Francesca; Peng, Roger D
2011-12-01
Estimating the risks heat waves pose to human health is a critical part of assessing the future impact of climate change. In this article, we propose a flexible class of time series models to estimate the relative risk of mortality associated with heat waves and conduct Bayesian model averaging (BMA) to account for the multiplicity of potential models. Applying these methods to data from 105 U.S. cities for the period 1987-2005, we identify those cities having a high posterior probability of increased mortality risk during heat waves, examine the heterogeneity of the posterior distributions of mortality risk across cities, assess sensitivity of the results to the selection of prior distributions, and compare our BMA results to a model selection approach. Our results show that no single model best predicts risk across the majority of cities, and that for some cities heat-wave risk estimation is sensitive to model choice. Although model averaging leads to posterior distributions with increased variance as compared to statistical inference conditional on a model obtained through model selection, we find that the posterior mean of heat wave mortality risk is robust to accounting for model uncertainty over a broad class of models. © 2011, The International Biometric Society.
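The model-averaging step can be illustrated with a small sketch, assuming BIC-based approximations to posterior model probabilities (a common shortcut; the paper's BMA is fully Bayesian): candidate models' risk estimates are combined with weights derived from their fit. The models and numbers below are placeholders, not the paper's fitted models.

```python
# BIC-weight Bayesian model averaging of a city's heat-wave relative risk.
import numpy as np

# (log-likelihood, n_parameters, risk_estimate) for each candidate model
fits = [(-1021.3, 4, 1.08), (-1019.8, 6, 1.11), (-1020.5, 5, 1.05)]
n_obs = 6935

bics = np.array([-2 * ll + k * np.log(n_obs) for ll, k, _ in fits])
w = np.exp(-0.5 * (bics - bics.min()))
w /= w.sum()                          # approximate posterior model probabilities

risks = np.array([r for _, _, r in fits])
bma_risk = w @ risks                  # model-averaged relative risk
bma_var = w @ (risks - bma_risk)**2   # between-model variance component
print(bma_risk, bma_var)
```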
Characteristics of Chinese-English bilingual dyslexia in right occipito-temporal lesion.
Ting, Simon Kang Seng; Chia, Pei Shi; Chan, Yiong Huak; Kwek, Kevin Jun Hong; Tan, Wilnard; Hameed, Shahul; Tan, Eng-King
2017-11-01
Current literature suggests that right hemisphere lesions produce predominant spatial-related dyslexic error in English speakers. However, little is known regarding such lesions in Chinese speakers. In this paper, we describe the dyslexic characteristics of a Chinese-English bilingual patient with a right posterior cortical lesion. He was found to have profound spatial-related errors during his English word reading, in both real and non-words. During Chinese word reading, there was significantly less error compared to English, probably due to the ideographic nature of the Chinese language. He was also found to commit phonological-like visual errors in English, characterized by error responses that were visually similar to the actual word. There was no significant difference in visual errors during English word reading compared with Chinese. In general, our patient's performance in both languages appears to be consistent with the current literature on right posterior hemisphere lesions. Additionally, his performance also likely suggests that the right posterior cortical region participates in the visual analysis of orthographical word representation, both in ideographical and alphabetic languages, at least from a bilingual perspective. Future studies should further examine the role of the right posterior region in initial visual analysis of both languages. Copyright © 2017 Elsevier Ltd. All rights reserved.
Silicone intraocular lens surface calcification in a patient with asteroid hyalosis.
Matsumura, Kazuhiro; Takano, Masahiko; Shimizu, Kimiya; Nemoto, Noriko
2012-07-01
To confirm a substance presence on the posterior intraocular lens (IOL) surface in a patient with asteroid hyalosis. An 80-year-old man had IOLs for approximately 12 years. Opacities and neodymium-doped yttrium aluminum garnet pits were observed on the posterior surface of the right IOL. Asteroid hyalosis and an epiretinal membrane were observed OD. An IOL exchange was performed on 24 March 2008, and the explanted IOL was analyzed using a light microscope and a transmission electron microscope with a scanning electron micrograph and an energy-dispersive X-ray spectrometer for elemental analysis. To confirm asteroid hyalosis, asteroid bodies were examined with the ionic liquid (EtMeIm+ BF4-) method using a field emission scanning electron microscope (FE-SEM) with digital beam control RGB mapping. X-ray spectrometry of the deposits revealed high calcium and phosphorus peaks. Spectrometry revealed that the posterior IOL surface opacity was due to a calcium-phosphorus compound. Examination of the asteroid bodies using FE-SEM with digital beam control RGB mapping confirmed calcium and phosphorus as the main components. Calcium hydrogen phosphate dihydrate deposits were probably responsible for the posterior IOL surface opacity. Furthermore, analysis of the asteroid bodies demonstrated that calcium and phosphorus were its main components.
A Bayesian analysis for identifying DNA copy number variations using a compound Poisson process.
Chen, Jie; Yiğiter, Ayten; Wang, Yu-Ping; Deng, Hong-Wen
2010-01-01
To study chromosomal aberrations that may lead to cancer formation or genetic diseases, the array-based Comparative Genomic Hybridization (aCGH) technique is often used for detecting DNA copy number variants (CNVs). Various methods have been developed for extracting CNV information from aCGH data. However, most of these methods use only the log-intensity ratios in aCGH data, without taking advantage of other information such as the DNA probe (e.g., biomarker) positions/distances contained in the data. Motivated by these specific features of aCGH data, we developed a novel method that estimates the change point, or locus, of a CNV in aCGH data together with its associated biomarker position on the chromosome, using a compound Poisson process. We used a Bayesian approach to derive the posterior probability for the estimation of the CNV locus. To detect the loci of multiple CNVs in the data, a sliding-window process combined with the derived Bayesian posterior probability is proposed. To evaluate the performance of the method in estimating the CNV locus, we first performed simulation studies. Finally, we applied our approach to real data from aCGH experiments, demonstrating its applicability.
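The posterior-over-loci idea can be shown in miniature. The sketch below uses a deliberately simplified model, a single Gaussian mean shift with known parameters rather than the paper's compound Poisson process over probe positions, to show how a posterior probability is assigned to every candidate change point.

```python
# Toy Bayesian change-point posterior: Gaussian log-ratios with one mean
# shift; a flat prior over loci makes the posterior proportional to the
# likelihood of splitting the sequence at each candidate position.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
y = np.concatenate([rng.normal(0.0, 0.3, 60), rng.normal(0.8, 0.3, 40)])
mu0, mu1, sd = 0.0, 0.8, 0.3          # assumed known segment parameters

n = len(y)
loglik = np.array([
    norm.logpdf(y[:k], mu0, sd).sum() + norm.logpdf(y[k:], mu1, sd).sum()
    for k in range(1, n)
])
post = np.exp(loglik - loglik.max())
post /= post.sum()                    # posterior probability of each locus
print(np.argmax(post) + 1)            # most probable CNV boundary (~60)
```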
Bayesian methods for outliers detection in GNSS time series
NASA Astrophysics Data System (ADS)
Qianqian, Zhang; Qingming, Gui
2013-07-01
This article is concerned with the problem of detecting outliers in GNSS time series based on Bayesian statistical theory. First, a new model is proposed to simultaneously detect different types of outliers by introducing a classification variable for each outlier type; the problem of outlier detection is thereby converted into the computation of the corresponding posterior probabilities, and an algorithm for computing these posterior probabilities based on a standard Gibbs sampler is designed. Second, we analyze in detail the causes of masking and swamping when detecting patches of additive outliers, and propose an unmasking Bayesian method for detecting additive outlier patches based on an adaptive Gibbs sampler. Third, the correctness of the proposed theory and methods is illustrated with simulated data and then with real GNSS observations, such as cycle-slip detection in carrier phase data. The examples show that the proposed Bayesian methods for outlier detection in GNSS time series are capable of detecting not only isolated outliers but also patches of additive outliers, and that they can be successfully used to handle cycle slips in phase data, which solves the problem of small cycle slips.
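A minimal sketch of the classification-variable construction, under strong simplifying assumptions (one outlier type, known variances, a fixed outlier prior): each epoch's indicator is sampled from its conditional posterior, and the average of the sampled indicators estimates the posterior outlier probability.

```python
# Gibbs sampler for additive-outlier indicators: each observation is either
# N(mu, sigma^2) or, with prior p_out, N(mu, (kappa*sigma)^2). Posterior
# outlier probabilities are the averages of the sampled indicators.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
y = rng.normal(0.0, 1.0, 200)
y[[50, 51, 120]] += 8.0                      # an outlier patch plus a lone outlier

sigma, kappa, p_out = 1.0, 8.0, 0.01
mu, draws = 0.0, []
for it in range(2000):
    # delta_i | mu : Bernoulli with odds from the two candidate densities
    l1 = p_out * norm.pdf(y, mu, kappa * sigma)
    l0 = (1 - p_out) * norm.pdf(y, mu, sigma)
    delta = (rng.uniform(size=len(y)) < l1 / (l0 + l1)).astype(int)
    # mu | delta : precision-weighted Gaussian (flat prior on mu)
    w = 1.0 / np.where(delta == 1, (kappa * sigma)**2, sigma**2)
    mu = rng.normal((w @ y) / w.sum(), 1.0 / np.sqrt(w.sum()))
    if it >= 500:
        draws.append(delta)
post_prob = np.mean(draws, axis=0)           # posterior outlier probabilities
print(np.where(post_prob > 0.5)[0])          # flags indices 50, 51, 120
```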
Probabilistic graphs as a conceptual and computational tool in hydrology and water management
NASA Astrophysics Data System (ADS)
Schoups, Gerrit
2014-05-01
Originally developed in the fields of machine learning and artificial intelligence, probabilistic graphs constitute a general framework for modeling complex systems in the presence of uncertainty. The framework consists of three components: 1. Representation of the model as a graph (or network), with nodes depicting random variables in the model (e.g. parameters, states, etc), which are joined together by factors. Factors are local probabilistic or deterministic relations between subsets of variables, which, when multiplied together, yield the joint distribution over all variables. 2. Consistent use of probability theory for quantifying uncertainty, relying on basic rules of probability for assimilating data into the model and expressing unknown variables as a function of observations (via the posterior distribution). 3. Efficient, distributed approximation of the posterior distribution using general-purpose algorithms that exploit model structure encoded in the graph. These attributes make probabilistic graphs potentially useful as a conceptual and computational tool in hydrology and water management (and beyond). Conceptually, they can provide a common framework for existing and new probabilistic modeling approaches (e.g. by drawing inspiration from other fields of application), while computationally they can make probabilistic inference feasible in larger hydrological models. The presentation explores, via examples, some of these benefits.
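A tiny worked example of the three components, with invented numbers: local factors are multiplied into a joint distribution over a toy rain/soil/streamflow chain, and an observation is assimilated by summing out the remaining variables.

```python
# Discrete factor graph in miniature: factors p(rain), p(wet | rain),
# p(high_flow | wet) multiply to the joint; enumeration plays the role of
# the general-purpose inference algorithms mentioned above.
import itertools

p_rain = {0: 0.7, 1: 0.3}
p_wet = {(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.8}    # (rain, wet)
p_flow = {(0, 0): 0.95, (0, 1): 0.05, (1, 0): 0.3, (1, 1): 0.7} # (wet, flow)

# Posterior p(rain | high_flow = 1) by summing the joint over wet-soil states.
weights = {r: 0.0 for r in (0, 1)}
for r, w in itertools.product((0, 1), repeat=2):
    weights[r] += p_rain[r] * p_wet[(r, w)] * p_flow[(w, 1)]
z = sum(weights.values())
print({r: weights[r] / z for r in weights})   # observation assimilated
```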
Nonlinear detection for a high rate extended binary phase shift keying system.
Chen, Xian-Qing; Wu, Le-Nan
2013-03-28
The algorithm and results of a nonlinear detector using a machine learning technique called the support vector machine (SVM), applied to an efficient modulation system with high data rate and low energy consumption, are presented in this paper. Simulation results show that the performance achieved by the SVM detector is comparable to that of a conventional threshold decision (TD) detector. Both detectors operate on the received signals together with the special impacting filter (SIF), which improves energy utilization efficiency. Unlike the TD detector, however, the SVM detector aims not only to reduce the BER but also to provide accurate posterior probability estimates (PPEs), which can be used as soft inputs to the LDPC decoder. The complexity of this detector is kept in check by using four features and simplifying the decision function. In addition, bandwidth-efficient transmission is analyzed with both the SVM and TD detectors; the SVM detector is more robust to the sampling rate than the TD detector. We find that the SVM is suitable for extended binary phase shift keying (EBPSK) signal detection and can provide accurate posterior probabilities for LDPC decoding.
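The posterior-probability output can be sketched with scikit-learn's SVC, whose probability=True option calibrates the SVM margins into class posteriors via Platt scaling; this is an illustrative stand-in, not the paper's detector, and the Gaussian "channel" below is invented rather than EBPSK with the SIF.

```python
# SVM detector with posterior probability estimates usable as soft decoder
# inputs: predict_proba gives calibrated class posteriors, converted here
# into log-likelihood ratios for a soft-decision decoder.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)
bits = rng.integers(0, 2, 1000)
X = bits[:, None] * 2.0 - 1.0 + rng.normal(0, 0.8, (1000, 1))  # noisy channel

clf = SVC(kernel="rbf", probability=True).fit(X[:800], bits[:800])
post = clf.predict_proba(X[800:])          # posterior probability estimates
llr = np.log(post[:, 1] / post[:, 0])      # soft inputs for an LDPC-style decoder
hard = (llr > 0).astype(int)
print((hard == bits[800:]).mean())         # BER-style sanity check
```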
Relevance Vector Machine Learning for Neonate Pain Intensity Assessment Using Digital Imaging
Gholami, Behnood; Tannenbaum, Allen R.
2011-01-01
Pain assessment in patients who are unable to verbally communicate is a challenging problem. The fundamental limitations in pain assessment in neonates stem from subjective assessment criteria, rather than quantifiable and measurable data. This often results in poor quality and inconsistent treatment of patient pain management. Recent advancements in pattern recognition techniques using relevance vector machine (RVM) learning techniques can assist medical staff in assessing pain by constantly monitoring the patient and providing the clinician with quantifiable data for pain management. The RVM classification technique is a Bayesian extension of the support vector machine (SVM) algorithm, which achieves comparable performance to SVM while providing posterior probabilities for class memberships and a sparser model. If classes represent “pure” facial expressions (i.e., extreme expressions that an observer can identify with a high degree of confidence), then the posterior probability of the membership of some intermediate facial expression to a class can provide an estimate of the intensity of such an expression. In this paper, we use the RVM classification technique to distinguish pain from nonpain in neonates as well as assess their pain intensity levels. We also correlate our results with the pain intensity assessed by expert and nonexpert human examiners. PMID:20172803
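The intensity-from-posterior idea in miniature, with logistic regression standing in for the RVM and synthetic features standing in for facial-expression descriptors: a probabilistic classifier trained on the two "pure" classes assigns intermediate samples a graded posterior probability that can be read as an intensity estimate.

```python
# Posterior probability of the "extreme expression" class, used as a graded
# intensity score for samples that belong to neither pure class.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(10)
X = np.vstack([rng.normal(0, 1, (200, 5)), rng.normal(3, 1, (200, 5))])
y = np.array([0] * 200 + [1] * 200)          # 0 = no pain, 1 = extreme pain

clf = LogisticRegression().fit(X, y)
intermediate = rng.normal(1.5, 0.3, (3, 5))  # neither pure class
print(clf.predict_proba(intermediate)[:, 1]) # graded intensity estimates
```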
NASA Astrophysics Data System (ADS)
Pipień, M.
2008-09-01
We present the results of an application of Bayesian inference to testing the relation between risk and return on financial instruments. On the basis of the Intertemporal Capital Asset Pricing Model proposed by Merton, we build a general sampling distribution suitable for analysing this relationship. The most important feature of our assumptions is that the skewness of the conditional distribution of returns is used as an alternative source of the relation between risk and return. This general specification relates to the Skewed Generalized Autoregressive Conditionally Heteroscedastic-in-Mean model. In order to make the conditional distribution of financial returns skewed, we considered a unified approach based on the inverse probability integral transformation. In particular, we applied the hidden truncation mechanism, inverse scale factors, the order statistics concept, Beta and Bernstein distribution transformations, and also a constructive method. Based on the daily excess returns on the Warsaw Stock Exchange Index, we checked the empirical importance of the conditional skewness assumption for the relation between risk and return on the Warsaw Stock Market. We present posterior probabilities of all competing specifications, as well as a posterior analysis of the positive sign of the tested relationship.
The hippocampal longitudinal axis-relevance for underlying tau and TDP-43 pathology.
Lladó, Albert; Tort-Merino, Adrià; Sánchez-Valle, Raquel; Falgàs, Neus; Balasa, Mircea; Bosch, Beatriz; Castellví, Magda; Olives, Jaume; Antonell, Anna; Hornberger, Michael
2018-06-01
Recent studies suggest that the hippocampus has different cortical connectivity and functionality along its longitudinal axis. We sought to elucidate possible differences in the pattern of atrophy along the longitudinal axis of the hippocampus between Amyloid/Tau pathology and TDP-43-pathies. Seventy-three presenile subjects were included: an Amyloid/Tau group (33 Alzheimer's disease with confirmed cerebrospinal fluid [CSF] biomarkers), a probable TDP-43 group (7 semantic variant primary progressive aphasia, 5 GRN and 2 C9orf72 mutation carriers) and 26 healthy controls. We conducted a region-of-interest voxel-based morphometry analysis on the hippocampal longitudinal axis, contrasting the groups, covarying with CSF biomarkers (Aβ42, total tau, p-tau) and covarying with episodic memory scores. Amyloid/Tau pathology affected mainly the posterior hippocampus, while the anterior left hippocampus was more atrophied in probable TDP-43-pathies. We also observed a significant correlation of posterior hippocampal atrophy with Alzheimer's disease CSF biomarkers and visual memory scores. Taken together, these data suggest a potential differentiation along the hippocampal longitudinal axis based on the underlying pathology, which could be used as a potential biomarker to identify the underlying pathology in different neurodegenerative diseases. Copyright © 2018 Elsevier Inc. All rights reserved.
Posterior Predictive Bayesian Phylogenetic Model Selection
Lewis, Paul O.; Xie, Wangang; Chen, Ming-Hui; Fan, Yu; Kuo, Lynn
2014-01-01
We present two distinctly different posterior predictive approaches to Bayesian phylogenetic model selection and illustrate these methods using examples from green algal protein-coding cpDNA sequences and flowering plant rDNA sequences. The Gelfand–Ghosh (GG) approach allows dissection of an overall measure of model fit into components due to posterior predictive variance (GGp) and goodness-of-fit (GGg), which distinguishes this method from the posterior predictive P-value approach. The conditional predictive ordinate (CPO) method provides a site-specific measure of model fit useful for exploratory analyses and can be combined over sites yielding the log pseudomarginal likelihood (LPML) which is useful as an overall measure of model fit. CPO provides a useful cross-validation approach that is computationally efficient, requiring only a sample from the posterior distribution (no additional simulation is required). Both GG and CPO add new perspectives to Bayesian phylogenetic model selection based on the predictive abilities of models and complement the perspective provided by the marginal likelihood (including Bayes Factor comparisons) based solely on the fit of competing models to observed data. [Bayesian; conditional predictive ordinate; CPO; L-measure; LPML; model selection; phylogenetics; posterior predictive.] PMID:24193892
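The CPO/LPML computation is short enough to sketch directly: given per-site likelihoods evaluated at posterior samples, each site's CPO is the harmonic mean of its likelihoods, and LPML is the sum of the log CPOs. The likelihood matrix below is a random stand-in for the output of a phylogenetic MCMC run.

```python
# Conditional predictive ordinates from posterior samples alone - no extra
# simulation is required, matching the cross-validation property above.
import numpy as np

rng = np.random.default_rng(4)
# lik[s, i] = likelihood of site i under posterior sample s (toy values)
lik = np.exp(-rng.gamma(2.0, 1.0, size=(1000, 50)))

cpo = 1.0 / np.mean(1.0 / lik, axis=0)     # harmonic mean per site
lpml = np.sum(np.log(cpo))                 # overall model-fit measure
print(lpml)                                # higher LPML = better predictive fit
```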
Bayesian soft X-ray tomography using non-stationary Gaussian Processes
NASA Astrophysics Data System (ADS)
Li, Dong; Svensson, J.; Thomsen, H.; Medina, F.; Werner, A.; Wolf, R.
2013-08-01
In this study, a Bayesian non-stationary Gaussian Process (GP) method for the inference of soft X-ray emissivity distributions, along with their associated uncertainties, has been developed. For the investigation of equilibrium conditions and fast magnetohydrodynamic behaviors in nuclear fusion plasmas, it is important to infer, especially in the plasma center, spatially resolved soft X-ray profiles from a limited number of noisy line-integral measurements. For this ill-posed inversion problem, Bayesian probability theory can provide a posterior probability distribution over all possible solutions under given model assumptions. Specifically, the use of a non-stationary GP to model the emission allows the model to adapt to the varying length scales of the underlying diffusion process. In contrast to other conventional methods, the prior regularization is expressed in probabilistic form, which enhances the capability for uncertainty analysis; consequently, scientists concerned with the reliability of their results will benefit from it. Under the assumption of normally distributed noise, the posterior distribution evaluated at a discrete number of points becomes a multivariate normal distribution whose mean and covariance are analytically available, making inversions and the calculation of uncertainties fast. Additionally, the hyper-parameters embedded in the model assumptions can be optimized through a Bayesian Occam's Razor formalism, thereby automatically adjusting the model complexity. This method is shown to produce convincing reconstructions and good agreement with independently calculated results from the Maximum Entropy and Equilibrium-Based Iterative Tomography Algorithm methods.
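The analytically tractable core of the method can be sketched as follows: with a GP prior f ~ N(0, K) and linear line-integral data y = Af + noise, the posterior mean and covariance are available in closed form. A stationary kernel and a crude chord geometry are used here for brevity; the paper's non-stationary kernel would replace K.

```python
# Closed-form Gaussian posterior for linear tomography under a GP prior:
# mean = K A^T (A K A^T + s^2 I)^-1 y, cov = K - K A^T (...)^-1 A K.
import numpy as np

n = 80
x = np.linspace(-1.0, 1.0, n)                     # 1-D emissivity profile
K = np.exp(-0.5 * (x[:, None] - x[None, :])**2 / 0.1**2)

A = np.zeros((10, n))                             # 10 line-of-sight integrals
for i in range(10):
    A[i, i * 8:(i + 1) * 8] = 2.0 / n             # crude chord weights
truth = np.exp(-x**2 / 0.2**2)
rng = np.random.default_rng(5)
y = A @ truth + rng.normal(0, 0.01, 10)

sigma2 = 0.01**2
S = A @ K @ A.T + sigma2 * np.eye(10)
mean = K @ A.T @ np.linalg.solve(S, y)            # posterior mean emissivity
cov = K - K @ A.T @ np.linalg.solve(S, A @ K)     # posterior covariance
print(float(np.abs(mean - truth).max()), float(np.sqrt(cov.diagonal().max())))
```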
Exoplanet Biosignatures: A Framework for Their Assessment.
Catling, David C; Krissansen-Totton, Joshua; Kiang, Nancy Y; Crisp, David; Robinson, Tyler D; DasSarma, Shiladitya; Rushby, Andrew J; Del Genio, Anthony; Bains, William; Domagal-Goldman, Shawn
2018-04-20
Finding life on exoplanets from telescopic observations is an ultimate goal of exoplanet science. Life produces gases and other substances, such as pigments, which can have distinct spectral or photometric signatures. Whether or not life is found with future data must be expressed with probabilities, requiring a framework of biosignature assessment. We present a framework in which we advocate using biogeochemical "Exo-Earth System" models to simulate potential biosignatures in spectra or photometry. Given actual observations, simulations are used to find the Bayesian likelihoods of those data occurring for scenarios with and without life. The latter includes "false positives" wherein abiotic sources mimic biosignatures. Prior knowledge of factors influencing planetary inhabitation, including previous observations, is combined with the likelihoods to give the Bayesian posterior probability of life existing on a given exoplanet. Four components of observation and analysis are necessary. (1) Characterization of stellar (e.g., age and spectrum) and exoplanetary system properties, including "external" exoplanet parameters (e.g., mass and radius), to determine an exoplanet's suitability for life. (2) Characterization of "internal" exoplanet parameters (e.g., climate) to evaluate habitability. (3) Assessment of potential biosignatures within the environmental context (components 1-2), including corroborating evidence. (4) Exclusion of false positives. We propose that resulting posterior Bayesian probabilities of life's existence map to five confidence levels, ranging from "very likely" (90-100%) to "very unlikely" (<10%) inhabited. Key Words: Bayesian statistics-Biosignatures-Drake equation-Exoplanets-Habitability-Planetary science. Astrobiology 18, xxx-xxx.
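The final combination step reduces to Bayes' theorem, sketched below with invented numbers: the prior probability of inhabitation is updated by the likelihoods of the observed data under "life" and "no life" scenarios, the latter covering the false-positive mechanisms.

```python
# Bayes' theorem for the posterior probability that an exoplanet is inhabited.
def posterior_life(prior_life, lik_data_given_life, lik_data_given_no_life):
    """Combine the inhabitation prior with scenario likelihoods."""
    num = lik_data_given_life * prior_life
    den = num + lik_data_given_no_life * (1.0 - prior_life)
    return num / den

# E.g. a spectral feature 5x more likely with life, under a cautious 10% prior.
p = posterior_life(0.10, 0.05, 0.01)
print(p)  # ~0.36: well short of the "very likely inhabited" (90-100%) band
```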
Castien, René F; van der Windt, Daniëlle A W M; Blankenstein, Annette H; Heymans, Martijn W; Dekker, Joost
2012-04-01
The aims of this study were to describe the course of chronic tension-type headache (CTTH) in participants receiving manual therapy (MT), and to develop a prognostic model for predicting recovery in participants receiving MT. Outcomes in 145 adults with CTTH who received MT as participants in a previously published randomised clinical trial (n=41) or in a prospective cohort study (n=104) were evaluated. Assessments were made at baseline and at 8 and 26 weeks of follow-up. Recovery was defined as a 50% reduction in headache days in combination with a score of 'much improved' or 'very much improved' for global perceived improvement. Potential prognostic factors were analyzed by univariable and multivariable regression analysis. After 8 weeks 78% of the participants reported recovery after MT, and after 26 weeks the frequency of recovered participants was 73%. Prognostic factors related to recovery were co-existing migraine, absence of multiple-site pain, greater cervical range of motion and higher headache intensity. In participants classified as likely to recover, the posterior probability of recovery at 8 weeks was 92%, whereas for those classified as having a low probability of recovery it was 61%. It is concluded that the course of CTTH is favourable in primary care patients receiving MT. The prognostic models provide additional information to improve prediction of outcome. Copyright © 2012 International Association for the Study of Pain. Published by Elsevier B.V. All rights reserved.
Li, Longhai; Feng, Cindy X; Qiu, Shi
2017-06-30
An important statistical task in disease mapping problems is to identify divergent regions with unusually high or low risk of disease. Leave-one-out cross-validatory (LOOCV) model assessment is the gold standard for estimating predictive p-values that can flag such divergent regions. However, actual LOOCV is time-consuming because one needs to rerun a Markov chain Monte Carlo analysis for each posterior distribution in which an observation is held out as a test case. This paper introduces a new method, called integrated importance sampling (iIS), for estimating LOOCV predictive p-values with only Markov chain samples drawn from the posterior based on the full data set. The key step in iIS is that we integrate away the latent variables associated with the test observation, with respect to their conditional distribution, without reference to the actual observation. By following the general theory of importance sampling, the formula used by iIS can be proved to be equivalent to the LOOCV predictive p-value. We compare iIS with three existing methods in the literature on two disease mapping datasets. Our empirical results show that the predictive p-values estimated with iIS are almost identical to those estimated with actual LOOCV and outperform those given by the three existing methods, namely, posterior predictive checking, ordinary importance sampling, and the ghosting method of Marshall and Spiegelhalter (2003). Copyright © 2017 John Wiley & Sons, Ltd.
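The structure of the estimator can be sketched in its ordinary (non-integrated) importance-sampling form, one of the comparison methods above: full-data posterior samples are reweighted by 1/p(y_i | theta) to mimic the leave-one-out posterior, and the predictive p-value is a weighted tail probability. The Poisson toy model and all numbers are invented.

```python
# Importance-sampling estimate of a LOOCV predictive p-value for one region.
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(6)
theta = rng.gamma(50.0, 0.1, size=4000)   # stand-in posterior samples of a rate
y_i = 11                                  # held-out count for the test region

w = 1.0 / poisson.pmf(y_i, theta)         # weights ~ p(theta | y_{-i})
w /= w.sum()
tail = poisson.sf(y_i - 1, theta)         # P(y_rep >= y_i | theta)
p_loo = float(w @ tail)                   # LOOCV predictive p-value estimate
print(p_loo)                              # a small value flags a divergent region
```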
Marshall, Tom R; den Boer, Sebastiaan; Cools, Roshan; Jensen, Ole; Fallon, Sean James; Zumer, Johanna M
2018-01-01
Selective attention is reflected neurally in changes in the power of posterior neural oscillations in the alpha (8-12 Hz) and gamma (40-100 Hz) bands. Although a neural mechanism that allows relevant information to be selectively processed has its advantages, it may lead to lucrative or dangerous information going unnoticed. Neural systems are also in place for processing rewarding and punishing information. Here, we examine the interaction between selective attention (left vs. right) and a stimulus's learned value associations (neutral, punished, or rewarded), and how they compete for control of posterior neural oscillations. We found that both attention and stimulus-value associations influenced neural oscillations. Whereas selective attention had comparable effects on alpha and gamma oscillations, value associations had dissociable effects on these neural markers of attention. Salient targets (associated with positive and negative outcomes) hijacked changes in alpha power, increasing hemispheric alpha lateralization when salient targets were attended and decreasing it when they were ignored. In contrast, hemispheric gamma-band lateralization was specifically abolished by negative distractors. Source analysis indicated occipital generators of both attentional and value effects. Thus, posterior cortical oscillations support both the ability to selectively attend and, at the same time, the ability to remain sensitive to valuable features in the environment. Moreover, the versatility of our attentional system in responding separately to salient and to merely positively valued stimuli appears to be carried out by separate neural processes reflected in different frequency bands.
Xiong, Ying; Li, Jing; Wang, Ningli; Liu, Xue; Wang, Zhao; Tsai, Frank F; Wan, Xiuhua
2017-01-01
To determine the corneal Q value and its related factors in Chinese subjects older than 30 years. Cross-sectional study. 1,683 participants (1,683 eyes) from the Handan Eye Study were involved, including 955 females and 728 males with an average age of 53.64 years (range, 30 to 107 years). The corneal Q values of the anterior and posterior surfaces were measured at 3.0, 5.0 and 7.0 mm aperture diameters using the Bausch & Lomb Orbscan IIz (software version 3.12). Age, gender and refractive power were recorded. The average Q values of the anterior surface at 3.0, 5.0 and 7.0 mm aperture diameters were -0.28±0.18, -0.28±0.18, and -0.29±0.18, respectively. The average Q value of the anterior surface at the 5.0 mm aperture diameter was negatively correlated with age (B = -0.003, p<0.01) and refractive power (B = -0.013, p = 0.016). The average Q values of the posterior surface at 3.0, 5.0, and 7.0 mm were -0.26±0.216, -0.26±0.214, and -0.26±0.215, respectively. The average Q value of the posterior surface at the 5.0 mm aperture diameter was positively correlated with age (B = 0.002, p = 0.036) and refractive power (B = 0.016, p = 0.043). The corneal Q value of elderly Chinese subjects differs from that of previously reported European and American subjects, and the Q value appears to be correlated with age and refractive power.
Robust Bayesian Algorithm for Targeted Compound Screening in Forensic Toxicology.
Woldegebriel, Michael; Gonsalves, John; van Asten, Arian; Vivó-Truyols, Gabriel
2016-02-16
As part of forensic toxicological investigation of cases involving unexpected death of an individual, targeted or untargeted xenobiotic screening of post-mortem samples is normally conducted. To this end, liquid chromatography (LC) coupled to high-resolution mass spectrometry (MS) is typically employed. For data analysis, almost all commonly applied algorithms are threshold-based (frequentist). These algorithms examine the value of a certain measurement (e.g., peak height) to decide whether a certain xenobiotic of interest (XOI) is present/absent, yielding a binary output. Frequentist methods pose a problem when several sources of information [e.g., shape of the chromatographic peak, isotopic distribution, estimated mass-to-charge ratio (m/z), adduct, etc.] need to be combined, requiring the approach to make arbitrary decisions at substep levels of data analysis. We hereby introduce a novel Bayesian probabilistic algorithm for toxicological screening. The method tackles the problem with a different strategy. It is not aimed at reaching a final conclusion regarding the presence of the XOI, but it estimates its probability. The algorithm effectively and efficiently combines all possible pieces of evidence from the chromatogram and calculates the posterior probability of the presence/absence of XOI features. This way, the model can accommodate more information by updating the probability if extra evidence is acquired. The final probabilistic result assists the end user to make a final decision with respect to the presence/absence of the xenobiotic. The Bayesian method was validated and found to perform better (in terms of false positives and false negatives) than the vendor-supplied software package.
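The evidence-combination step can be illustrated with a hedged sketch that assumes independent evidence sources, each contributing a likelihood under "XOI present" and "XOI absent"; Bayes' theorem then yields a single posterior probability of presence. The per-feature likelihoods below are invented.

```python
# Combining per-feature evidence (peak shape, isotope pattern, mass accuracy,
# adduct) into one posterior probability, working in log-odds for stability.
import numpy as np

def posterior_presence(prior, lik_present, lik_absent):
    """P(XOI present | all evidence) under an independence assumption."""
    log_odds = (np.log(prior) - np.log1p(-prior)
                + np.sum(np.log(lik_present) - np.log(lik_absent)))
    return 1.0 / (1.0 + np.exp(-log_odds))

lik_present = [0.8, 0.7, 0.9, 0.6]   # p(evidence | present), per feature
lik_absent = [0.1, 0.3, 0.2, 0.5]    # p(evidence | absent), per feature
print(posterior_presence(0.05, lik_present, lik_absent))  # updated belief
```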
Rolle, Teresa; Manerba, Linda; Lanzafame, Pietro; Grignolo, Federico M
2016-05-01
To evaluate the diagnostic power of the Posterior Pole Asymmetry Analysis (PPAA) from the SPECTRALIS OCT in glaucoma diagnosis, and to define the correlation between visual field sensitivity (VFS) and macular retinal thickness (MRT). 90 consecutive open-angle glaucoma patients and 23 healthy subjects were enrolled. All subjects underwent visual field testing (Humphrey Field Analyzer, central 24-2 SITA-Standard) and SD-OCT volume scans (SPECTRALIS, Posterior Pole Asymmetry Analysis). Areas under the Receiver Operating Characteristic curve (AROC) were calculated to assess discriminating power for glaucoma, first considering total and hemisphere MRT values, and then quadrant MRT values obtained by averaging the 16 square cells of the 8 x 8 posterior pole retinal thickness map belonging to each quadrant. Structure-function correlation was performed for total values, hemisphere values and each quadrant compared with the matching central test points of the VF. The AROCs ranged from 0.70 to 0.82 (p < 0.0001), with no significant differences among them. The highest AROC was observed in the inferior nasal quadrant. The VFS showed a strong correlation with the corresponding MRT values only in the quadrant analysis: Superior Temporal (r = 0.33, p = 0.0013), Superior Nasal (r = 0.43, p < 0.0001), Inferior Temporal (r = 0.57, p < 0.0001) and Inferior Nasal (r = 0.55, p < 0.0001). The quadrant analysis showed statistically significant structure-function correlations and may provide additional data for the diagnostic performance of the SPECTRALIS OCT.
Yu, Huajie; He, Danqing; Qiu, Lixin
2017-12-01
Maturation of the grafted volume after lateral sinus elevation is crucial for the long-term survival of dental implants. The aim was to compare endo-sinus histomorphometric bone formation between the solo-window and two-window maxillary sinus augmentation techniques, with or without membrane coverage, for the rehabilitation of multiple missing posterior teeth. Patients with severely atrophic posterior maxillae were randomized to receive lateral sinus floor elevation via the solo-window technique with membrane coverage (Control Group) or the two-window technique without coverage (Test Group). Six months after surgery, bone core specimens harvested from the lateral aspect were histomorphometrically analyzed. Ten patients in each group underwent a total of 21 maxillary sinus augmentations. Histomorphometric analysis revealed mean newly formed bone values of 26.08 ± 16.23% and 27.14 ± 18.11%, mean connective tissue values of 59.34 ± 12.42% and 50.03 ± 17.13%, and mean residual graft material values of 14.6 ± 14.56% and 22.78 ± 10.83% in the Test and Control Groups, respectively, with no significant differences. The two-window technique achieved comparable maturation of the grafted volume even without membrane coverage, and is a viable alternative for the rehabilitation of severely atrophic posterior maxillae with multiple missing posterior teeth. © 2017 Wiley Periodicals, Inc.
Amato, Domenico; Lombardo, Marco; Oddone, Francesco; Nubile, Mario; Colabelli Gisoldi, Rossella A M; Villani, Carlo M; Yoo, Sonia; Parel, Jean-Marie; Pocobelli, Augusto
2011-04-01
To preliminarily evaluate the repeatability of central corneal thickness (CCT) measurements performed with Anterior Segment Optical Coherence Tomography (AS-OCT) on eye bank posterior corneal lenticules. Six donor lenticules were created with a 350 μm head microkeratome (Moria, Antony, France). All donor tissues were stored at 4°C in Eusol-C solution (Alchimia S.r.l, Ponte S. Nicolò, Italy), without the anterior corneal lamella. The CCT of each lenticule, maintained in the glass phial, was measured using a commercial AS-OCT instrument (Visante, Carl Zeiss Meditec, Dublin, California, USA) and a specially designed adaptor immediately and 4, 24 and 48 hours after dissection. Immediately after AS-OCT, CCT values were measured with the ultrasound pachymetry method used at the Eye Bank. The mean donor cornea central thickness was 647±36 μm and 660±38 μm (p=0.001) as measured by AS-OCT and ultrasound, respectively; immediately after dissection, CCT values of the posterior lenticules were 235±43 μm and 248±44 μm, respectively (p=0.001). No statistically significant changes in the CCT values of donor lenticules were observed over the 48-hour period with either method. There was a high level of agreement, evidenced by Bland-Altman analysis, between the two methods of pachymetry. AS-OCT, with the corneal tissue in the vial, proved to be a repeatable and reliable method for measuring posterior donor lenticule central thickness. Lenticule CCT values measured with the investigational AS-OCT method were on average 10 μm thinner than those measured with the established ultrasound method.
State-space modeling to support management of brucellosis in the Yellowstone bison population
Hobbs, N. Thompson; Geremia, Chris; Treanor, John; Wallen, Rick; White, P.J.; Hooten, Mevin B.; Rhyan, Jack C.
2015-01-01
The bison (Bison bison) of the Yellowstone ecosystem, USA, exemplify the difficulty of conserving large mammals that migrate across the boundaries of conservation areas. Bison are infected with brucellosis (Brucella abortus) and their seasonal movements can expose livestock to infection. Yellowstone National Park has embarked on a program of adaptive management of bison, which requires a model that assimilates data to support management decisions. We constructed a Bayesian state-space model to reveal the influence of brucellosis on the Yellowstone bison population. A frequency-dependent model of brucellosis transmission was superior to a density-dependent model in predicting out-of-sample observations of horizontal transmission probability. A mixture model including both transmission mechanisms converged on frequency dependence. Conditional on the frequency-dependent model, brucellosis median transmission rate was 1.87 yr−1. The median of the posterior distribution of the basic reproductive ratio (R0) was 1.75. Seroprevalence of adult females varied around 60% over two decades, but only 9.6 of 100 adult females were infectious. Brucellosis depressed recruitment; estimated population growth rate λ averaged 1.07 for an infected population and 1.11 for a healthy population. We used five-year forecasting to evaluate the ability of different actions to meet management goals relative to no action. Annually removing 200 seropositive female bison increased by 30-fold the probability of reducing seroprevalence below 40% and increased by a factor of 120 the probability of achieving a 50% reduction in transmission probability relative to no action. Annually vaccinating 200 seronegative animals increased the likelihood of a 50% reduction in transmission probability by fivefold over no action. However, including uncertainty in the ability to implement management by representing stochastic variation in the number of accessible bison dramatically reduced the probability of achieving goals using interventions relative to no action. Because the width of the posterior predictive distributions of future population states expands rapidly with increases in the forecast horizon, managers must accept high levels of uncertainty. These findings emphasize the necessity of iterative, adaptive management with relatively short-term commitment to action and frequent reevaluation in response to new data and model forecasts. We believe our approach has broad applications.
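The two competing transmission mechanisms reduce to one-line hazards, sketched below with the reported posterior median rate and invented herd numbers: frequency-dependent transmission scales with prevalence I/N, whereas density dependence would scale with the count I.

```python
# Frequency-dependent force of infection, the mechanism favored by the model.
import numpy as np

beta = 1.87                      # transmission rate per year (posterior median)
I, N = 350, 3650                 # infectious and total animals (toy values)

hazard = beta * I / N            # frequency-dependent force of infection
p_infect = 1.0 - np.exp(-hazard) # annual infection probability per susceptible
print(round(float(p_infect), 3)) # density dependence would use beta * I instead
```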
Quantitative evaluation of lumbar intervertebral disc degeneration by axial T2* mapping.
Huang, Leitao; Liu, Yuan; Ding, Yi; Wu, Xia; Zhang, Ning; Lai, Qi; Zeng, Xianjun; Wan, Zongmiao; Dai, Min; Zhang, Bin
2017-12-01
To quantitatively evaluate the clinical value and demonstrate the potential benefits of biochemical axial T2* mapping-based grading of early stages of degenerative disc disease (DDD) using 3.0-T magnetic resonance imaging (MRI) in a clinical setting. Fifty patients with low back pain and 20 healthy volunteers (control) underwent standard MRI protocols including axial T2* mapping. All the intervertebral discs (IVDs) were classified morphologically. Lumbar IVDs were graded using the Pfirrmann score (I to IV). The T2* values of the anterior annulus fibrosus (AF), posterior AF, and nucleus pulposus (NP) of each lumbar IVD were measured. The differences between groups were analyzed with regard to the specific T2* pattern at different regions of interest. The T2* values of the NP and posterior AF in the patient group were significantly lower than those in the control group (P < .01). The T2* value of the anterior AF was not significantly different between the patients and the controls (P > .05). The mean T2* values of the lumbar IVDs in the patient group were significantly lower, especially in the posterior AF, followed by the NP, and finally, the anterior AF. In the anterior AF, comparison of grade I with grade III and grade I with grade IV showed statistically significant differences (P = .07 and P = .08, respectively). Similarly, in the NP, comparison of grade I with grade III, grade I with grade IV, grade II with grade III, and grade II with grade IV showed statistically significant differences (P < .001). In the posterior AF, comparison of grade II with grade IV showed a statistically significant difference (P = .032). T2* values decreased linearly with increasing degeneration based on the Pfirrmann scoring system (ρ < -0.5, P < .001). Changes in the T2* value can signify early degenerative IVD disease. Hence, T2* mapping can be used as a diagnostic tool for quantitative assessment of IVD degeneration. Copyright © 2017 The Authors. Published by Wolters Kluwer Health, Inc. All rights reserved.
Disentangling neural representations of value and salience in the human brain
Kahnt, Thorsten; Park, Soyoung Q; Haynes, John-Dylan; Tobler, Philippe N.
2014-01-01
A large body of evidence has implicated the posterior parietal and orbitofrontal cortex in the processing of value. However, value correlates perfectly with salience when appetitive stimuli are investigated in isolation. Accordingly, considerable uncertainty has remained about the precise nature of the previously identified signals. In particular, recent evidence suggests that neurons in the primate parietal cortex signal salience instead of value. To investigate neural signatures of value and salience, here we apply multivariate (pattern-based) analyses to human functional MRI data acquired during a noninstrumental outcome-prediction task involving appetitive and aversive outcomes. Reaction time data indicated additive and independent effects of value and salience. Critically, we show that multivoxel ensemble activity in the posterior parietal cortex encodes predicted value and salience in superior and inferior compartments, respectively. These findings reinforce the earlier reports of parietal value signals and reconcile them with the recent salience report. Moreover, we find that multivoxel patterns in the orbitofrontal cortex correlate with value. Importantly, the patterns coding for the predicted value of appetitive and aversive outcomes are similar, indicating a common neural scale for appetite and aversive values in the orbitofrontal cortex. Thus orbitofrontal activity patterns satisfy a basic requirement for a neural value signal. PMID:24639493
Reid, S; Lu, C; Hardy, N; Casikar, I; Reid, G; Cario, G; Chou, D; Almashat, D; Condous, G
2014-12-01
To use office gel sonovaginography (SVG) to predict posterior deep infiltrating endometriosis (DIE) in women undergoing laparoscopy. This was a multicenter prospective observational study carried out between January 2009 and February 2013. All women were of reproductive age, had a history of chronic pelvic pain and underwent office gel SVG assessment for the prediction of posterior compartment DIE prior to laparoscopic endometriosis surgery. Gel SVG findings were compared with laparoscopic findings to determine the diagnostic accuracy of office gel SVG for the prediction of posterior compartment DIE. In total, 189 women underwent preoperative gel SVG and laparoscopy for endometriosis. At laparoscopy, 57 (30%) women had posterior DIE and 43 (23%) had rectosigmoid/anterior rectal DIE. For the prediction of rectosigmoid/anterior rectal (i.e. bowel) DIE, gel SVG had an accuracy of 92%, sensitivity of 88%, specificity of 93%, positive predictive value (PPV) of 79%, negative predictive value (NPV) of 97%, positive likelihood ratio (LR+) of 12.9 and negative likelihood ratio (LR-) of 0.12 (P = 3.98E-25); for posterior vaginal wall and rectovaginal septum (RVS) DIE, respectively, the accuracy was 95% and 95%, sensitivity was 18% and 18%, specificity was 99% and 100%, PPV was 67% and 100%, NPV was 95% and 95%, LR+ was 32.4 and infinity and LR- was 0.82 and 0.82 (P = 0.009 and P = 0.003). Office gel SVG appears to be an effective outpatient imaging technique for the prediction of bowel DIE, with a higher accuracy for the prediction of rectosigmoid compared with anterior rectal DIE. Although the sensitivity for vaginal and RVS DIE was limited, gel SVG had a high specificity and NPV for all forms of posterior DIE, indicating that a negative gel SVG examination is highly suggestive of the absence of DIE at laparoscopy. Copyright © 2014 ISUOG. Published by John Wiley & Sons Ltd.
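The quoted indices all follow from a 2x2 table; the helper below shows how sensitivity, specificity, predictive values, and likelihood ratios interrelate. The counts are reconstructed to roughly match the bowel DIE figures and are not taken from the study.

```python
# Diagnostic accuracy indices from true/false positive and negative counts.
def diagnostic_indices(tp, fp, fn, tn):
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
        "sensitivity": sens,
        "specificity": spec,
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
        "LR+": sens / (1 - spec),
        "LR-": (1 - sens) / spec,
    }

print(diagnostic_indices(tp=38, fp=10, fn=5, tn=136))  # bowel-DIE-like figures
```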
Profiles of muscularity in junior Olympic weight lifters.
Kanehisa, H; Funato, K; Abe, T; Fukunaga, T
2005-03-01
This study aimed to investigate the muscularity of strength-trained junior athletes. Muscle thickness (Mt) values at 10 sites (anterior forearm, anterior upper arm, posterior upper arm, chest, abdomen, back, anterior thigh, posterior thigh, anterior lower leg, and posterior lower leg) were determined in junior Olympic weight lifters (OWL, n=7, 15.1+/-0.3 y, mean+/-SD) and non-athletes (CON, n=13, 15.1+/-0.3 y) using brightness-mode ultrasonography. Skeletal age assessed with the Tanner-Whitehouse II method (20 hand-wrist bones) was similar in OWL (16.4+/-0.7 y) and CON (16.3+/-0.6 y). At 6 sites (anterior forearm, anterior upper arm, posterior upper arm, chest, back and anterior thigh), OWL showed significantly greater Mt values than CON, even in terms of Mt relative to the cube root of body mass (Mt x BM^(-1/3)). On the other hand, there were no significant differences between the 2 groups in the Mt ratios of the anterior to posterior sites in the upper arm, thigh and lower leg, or in those of the back to either the chest or abdomen in the trunk. For OWL only, skeletal age was significantly correlated with Mt x BM^(-1/3) at the abdomen (r=0.869, p<0.05) and anterior thigh (r=0.883, p<0.05). The findings indicate that 1) compared with adolescent non-athletes, junior Olympic weight lifters show greater muscularity in the upper body and anterior thigh, without predominant development of either the anterior or posterior sites within the same body segment, and 2) for junior Olympic weight lifters, the muscularity of the abdominal and knee extensor muscles is influenced by biological maturation.
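The size normalization used above is a one-line computation, shown here with invented values: muscle thickness is divided by the cube root of body mass to remove the geometric effect of body size.

```python
# Allometric normalization Mt . BM^(-1/3); 32 mm and 70 kg are example values.
mt_mm, body_mass_kg = 32.0, 70.0
mt_rel = mt_mm / body_mass_kg ** (1.0 / 3.0)
print(round(mt_rel, 2))
```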
Liu, Jie; Zhuang, Xiahai; Wu, Lianming; An, Dongaolei; Xu, Jianrong; Peters, Terry; Gu, Lixu
2017-11-01
Objective: In this paper, we propose a fully automatic framework for myocardium segmentation of delayed-enhancement (DE) MRI images without relying on prior patient-specific information. Methods: We employ a multicomponent Gaussian mixture model to deal with the intensity heterogeneity of the myocardium caused by infarcts. To differentiate the myocardium from other tissues with similar intensities, while at the same time maintaining spatial continuity, we introduce a coupled level set (CLS) to regularize the posterior probability. The CLS, as a spatial regularization, can be adapted to the image characteristics dynamically. We also introduce an image intensity gradient based term into the CLS, adding an extra force to the posterior probability based framework, to improve the accuracy of myocardium boundary delineation. Prebuilt atlases are propagated to the target image to initialize the framework. Results: The proposed method was tested on datasets of 22 clinical cases, and achieved Dice similarity coefficients of 87.43 ± 5.62% (endocardium), 90.53 ± 3.20% (epicardium) and 73.58 ± 5.58% (myocardium), outperforming three variants of classic segmentation methods. Conclusion: The results can provide a benchmark for myocardial segmentation in the literature. Significance: DE MRI provides an important tool to assess the viability of the myocardium. Accurate segmentation of the myocardium, which is a prerequisite for further quantitative analysis of the myocardial infarction (MI) region, can provide important support for the diagnosis and treatment management of MI patients.
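The first ingredient, the multicomponent Gaussian mixture over intensities, can be sketched with scikit-learn; the per-pixel posterior probabilities (responsibilities) it produces are what the coupled level set then regularizes spatially. The synthetic intensity classes below are invented stand-ins for DE-MRI tissue.

```python
# Multicomponent Gaussian mixture over image intensities; predict_proba
# returns the per-pixel posterior probability of each tissue class.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(9)
# toy classes: healthy myocardium, enhanced infarct, and blood pool
pixels = np.concatenate([rng.normal(100, 10, 3000),
                         rng.normal(180, 15, 800),
                         rng.normal(240, 12, 1500)])[:, None]

gmm = GaussianMixture(n_components=3, random_state=0).fit(pixels)
post = gmm.predict_proba(pixels)   # posterior probability per tissue class
print(gmm.means_.ravel().round(1), post[:5].round(2))
```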
Technical note: Bayesian calibration of dynamic ruminant nutrition models.
Reed, K F; Arhonditsis, G B; France, J; Kebreab, E
2016-08-01
Mechanistic models of ruminant digestion and metabolism have advanced our understanding of the processes underlying ruminant animal physiology. Deterministic modeling practices ignore the inherent variation within and among individual animals and thus have no way to assess how sources of error influence model outputs. We introduce Bayesian calibration of mathematical models to address the need for robust mechanistic modeling tools that can accommodate error analysis by remaining within the bounds of data-based parameter estimation. For the purpose of prediction, the Bayesian approach generates a posterior predictive distribution that represents the current estimate of the value of the response variable, taking into account both the uncertainty about the parameters and model residual variability. Predictions are expressed as probability distributions, thereby conveying significantly more information than point estimates in regard to uncertainty. Our study illustrates some of the technical advantages of Bayesian calibration and discusses the future perspectives in the context of animal nutrition modeling. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
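A minimal sketch of the calibration-to-prediction chain, with a one-parameter first-order decay model standing in for a full ruminant digestion model: Metropolis sampling yields the parameter posterior, and the posterior predictive distribution stacks residual variability on top of parameter uncertainty. All data and settings are invented.

```python
# Bayesian calibration by Metropolis sampling, then a posterior predictive
# distribution that carries both parameter and residual uncertainty.
import numpy as np

rng = np.random.default_rng(7)
t = np.array([2.0, 4.0, 8.0, 16.0, 24.0])
y = np.array([0.78, 0.62, 0.40, 0.17, 0.08])      # observed fraction remaining
sd_res = 0.03                                     # residual std (assumed known)

def loglik(k):
    return -0.5 * np.sum((y - np.exp(-k * t))**2) / sd_res**2

k, draws = 0.1, []
for _ in range(20000):
    prop = k + rng.normal(0, 0.01)
    if prop > 0 and np.log(rng.uniform()) < loglik(prop) - loglik(k):
        k = prop
    draws.append(k)
ks = np.array(draws[5000:])                       # posterior of the decay rate

# Posterior predictive at t = 12 h: parameter uncertainty + residual noise.
pred = np.exp(-ks * 12.0) + rng.normal(0, sd_res, len(ks))
print(np.percentile(pred, [2.5, 50, 97.5]))       # predictive interval
```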
Shavit Grievink, Liat; Penny, David; Holland, Barbara R.
2013-01-01
Phylogenetic studies based on molecular sequence alignments are expected to become more accurate as the number of sites in the alignments increases. With the advent of genomic-scale data, where alignments have very large numbers of sites, bootstrap values close to 100% and posterior probabilities close to 1 are the norm, suggesting that the number of sites is now seldom a limiting factor on phylogenetic accuracy. This provokes the question, should we be fussy about the sites we choose to include in a genomic-scale phylogenetic analysis? If some sites contain missing data, ambiguous character states, or gaps, then why not just throw them away before conducting the phylogenetic analysis? Indeed, this is exactly the approach taken in many phylogenetic studies. Here, we present an example where the decision on how to treat sites with missing data is of equal importance to decisions on taxon sampling and model choice, and we introduce a graphical method for illustrating this. PMID:23471508
Bayesian analyses of seasonal runoff forecasts
NASA Astrophysics Data System (ADS)
Krzysztofowicz, R.; Reese, S.
1991-12-01
Forecasts of seasonal snowmelt runoff volume provide indispensable information for rational decision making by water project operators, irrigation district managers, and farmers in the western United States. Bayesian statistical models and communication frames have been researched in order to enhance the forecast information disseminated to the users, and to characterize forecast skill from the decision maker's point of view. Four products are presented: (i) a Bayesian Processor of Forecasts, which provides a statistical filter for calibrating the forecasts, and a procedure for estimating the posterior probability distribution of the seasonal runoff; (ii) the Bayesian Correlation Score, a new measure of forecast skill, which is related monotonically to the ex ante economic value of forecasts for decision making; (iii) a statistical predictor of monthly cumulative runoffs within the snowmelt season, conditional on the total seasonal runoff forecast; and (iv) a framing of the forecast message that conveys the uncertainty associated with the forecast estimates to the users. All analyses are illustrated with numerical examples of forecasts for six gauging stations from the period 1971-1988.
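Product (i) in its simplest conjugate form can be sketched as follows, with all numbers invented: a climatological normal prior on seasonal runoff is updated by a forecast modeled as the true runoff plus Gaussian error, yielding the posterior distribution that the remaining products communicate and score.

```python
# Conjugate-normal Bayesian Processor of Forecasts: prior x forecast error
# model -> posterior distribution of seasonal runoff.
import numpy as np
from scipy.stats import norm

m0, s0 = 500.0, 120.0      # climatological prior mean/std (e.g., kaf)
f, sf = 430.0, 80.0        # issued forecast and its historical error std

w = s0**2 / (s0**2 + sf**2)            # weight given to the forecast
post_mean = (1 - w) * m0 + w * f
post_std = np.sqrt(1.0 / (1.0 / s0**2 + 1.0 / sf**2))
print(post_mean, post_std)             # calibrated posterior runoff distribution

# Probability the seasonal runoff falls short of a decision threshold:
print(norm.cdf(400.0, post_mean, post_std))
```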
Posterior corneal curvature changes following Refractive Small Incision Lenticule Extraction.
Ganesh, Sri; Patel, Utsav; Brar, Sheetal
2015-01-01
To compare the posterior corneal curvature changes, in terms of corneal power and asphericity, following the Refractive Small Incision Lenticule Extraction (ReLEx SMILE) procedure for low, moderate, and high myopia. This retrospective, non-randomized, comparative, interventional trial included 52 eyes of 26 patients, divided into three groups: low myopia (myopia ≤3 D [diopters] spherical equivalent [SE]), moderate myopia (myopia >3 D and <6 D SE), and high myopia (myopia ≥6 D SE). All patients were treated for myopia and myopic astigmatism using ReLEx SMILE. The eyes were examined pre-operatively and 3 months post-operatively using SCHWIND SIRIUS, a three-dimensional rotating Scheimpflug camera with a Placido disc topographer, to assess corneal changes with regard to keratometric power and asphericity of the cornea. A statistically significant increase in mean keratometric power in the 3, 5, and 7 mm zones of the posterior corneal surface compared with its pre-ReLEx SMILE value was detected after 3 months in the moderate myopia group (pre-operative [pre-op] -6.14±0.23, post-operative [post-op] -6.29±0.22, P<0.001) and high myopia group (pre-op -6.19±0.16, post-op -6.4±0.18, P<0.001), but there was no significant change in keratometric power of the posterior surface in the low myopia group (pre-op -5.87±0.17, post-op -6.06±0.29, P=0.143). Asphericity (Q-value) of the posterior surface changed significantly (P<0.001) after ReLEx SMILE in the moderate myopia group in the 3, 5, and 7 mm zones, and in the high myopia group in the 3 and 7 mm zones; but there was no significant change in the Q-value in the low myopia group in all three zones (pre-op 0.23±0.43, post-op -0.40±0.71, P=0.170), and in the high myopia group in the 5 mm zone (P=0.228). ReLEx SMILE causes significant changes in posterior corneal keratometric power and asphericity in moderate and high myopia, but the effect is subtle and insignificant in low myopia.
Kuragano, Masahiro; Murakami, Yota; Takahashi, Masayuki
2018-03-25
Nonmuscle myosin II (NMII) plays an essential role in directional cell migration. In this study, we investigated the roles of NMII isoforms (NMIIA and NMIIB) in the migration of human embryonic lung fibroblasts, which exhibit directionally persistent migration in an intrinsic manner. NMIIA-knockdown (KD) cells migrated unsteadily, but their direction of migration was approximately maintained. By contrast, NMIIB-KD cells occasionally reversed their direction of migration. Lamellipodium-like protrusions formed in the posterior region of NMIIB-KD cells prior to reversal of the migration direction. Moreover, NMIIB KD led to elongation of the posterior region in migrating cells, probably due to the lack of load-bearing stress fibers in this area. These results suggest that NMIIA plays a role in steering migration by maintaining stable protrusions in the anterior region, whereas NMIIB plays a role in maintenance of front-rear polarity by preventing aberrant protrusion formation in the posterior region. These distinct functions of NMIIA and NMIIB might promote intrinsic and directed migration of normal human fibroblasts. Copyright © 2018 Elsevier Inc. All rights reserved.
Inference with minimal Gibbs free energy in information field theory.
Ensslin, Torsten A; Weig, Cornelius
2010-11-01
Non-linear and non-gaussian signal inference problems are difficult to tackle. Renormalization techniques permit us to construct good estimators for the posterior signal mean within information field theory (IFT), but the approximations and assumptions made are not very obvious. Here we introduce the simple concept of minimal Gibbs free energy to IFT, and show that previous renormalization results emerge naturally. They can be understood as being the gaussian approximation to the full posterior probability, which has maximal cross information with it. We derive optimized estimators for three applications, to illustrate the usage of the framework: (i) reconstruction of a log-normal signal from poissonian data with background counts and point spread function, as it is needed for gamma ray astronomy and for cosmography using photometric galaxy redshifts, (ii) inference of a gaussian signal with unknown spectrum, and (iii) inference of a poissonian log-normal signal with unknown spectrum, the combination of (i) and (ii). Finally we explain how gaussian knowledge states constructed by the minimal Gibbs free energy principle at different temperatures can be combined into a more accurate surrogate of the non-gaussian posterior.
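A one-dimensional sketch of the minimal Gibbs free energy idea: fit a Gaussian q = N(m, s^2) to an unnormalized posterior by minimizing E_q[-log p] - H[q], which equals the Kullback-Leibler divergence from q to p up to a constant. The toy density and all numerical settings below are assumptions, not taken from the paper:

```python
import numpy as np
from scipy.optimize import minimize

def log_p(x):
    """Unnormalized log posterior: a skewed, non-gaussian toy density."""
    return -0.25 * x**4 - 0.5 * x**2 + x

# Gauss-Hermite nodes for expectations under a Gaussian q = N(m, s^2).
nodes, weights = np.polynomial.hermite.hermgauss(40)

def free_energy(params):
    """Gibbs free energy up to a constant: E_q[-log p] - entropy(q)."""
    m, log_s = params
    s = np.exp(log_s)
    x = m + np.sqrt(2.0) * s * nodes
    e_log_p = np.sum(weights * log_p(x)) / np.sqrt(np.pi)
    return -e_log_p - log_s          # -E_q[log p] - H[q] (+ const)

res = minimize(free_energy, x0=[0.0, 0.0], method="Nelder-Mead")
m_opt, s_opt = res.x[0], np.exp(res.x[1])
print(f"gaussian approximation: mean {m_opt:.3f}, sd {s_opt:.3f}")
```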
Corneal Anterior Power Calculation for an IOL in Post-PRK Patients.
De Bernardo, Maddalena; Iaccarino, Stefania; Cennamo, Michela; Caliendo, Luisa; Rosa, Nicola
2015-02-01
After corneal refractive surgery, there is an overestimation of the corneal power with the devices routinely used to measure it. Therefore, the objective of this study was to determine whether, in patients who underwent photorefractive keratectomy (PRK), it is possible to predict the preoperative anterior corneal power from the postoperative (PO) posterior corneal power. A comparison is made using a formula published by Saiki for laser in situ keratomileusis patients and a new one calculated specifically from PRK patients. The Saiki formula was tested in 98 eyes of 98 patients (47 women) who underwent PRK for myopia or myopic astigmatism. Moreover, anterior and posterior mean keratometry (Km) values from a Scheimpflug camera were measured to obtain a specific regression formula. The mean (±SD) preoperative Km was 43.50 (±1.39) diopters (D) (range, 39.25 to 47.05 D). The mean (±SD) Km value calculated with the Saiki formula using the 6 months PO posterior Km was 42.94 (±1.19) D (range, 40.34 to 45.98 D), with a statistically significant difference (p < 0.001). Six months after PRK in our patients, the posterior Km was correlated with the anterior preoperative one by the following regression formula: y = -4.9707x + 12.457 (R² = 0.7656), where x is PO posterior Km and y is preoperative anterior Km, similar to the one calculated by Saiki. Care should be taken in using the Saiki formula to calculate the preoperative Km in patients who underwent PRK.
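The reported regression is simple enough to apply directly. A small helper (the function name is mine) under the abstract's convention that posterior Km is a negative value in diopters:

```python
def preop_anterior_km(po_posterior_km):
    """Preoperative anterior Km (D) from the 6-month postoperative posterior
    Km (D, negative by convention), per the abstract's regression (R^2 = 0.7656)."""
    return -4.9707 * po_posterior_km + 12.457

# Example near the reported postoperative mean posterior Km of about -6.2 D:
print(round(preop_anterior_km(-6.2), 2))  # ~43.28 D, close to the 43.50 D mean
```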
A tale of two modes: neutrino free-streaming in the early universe
NASA Astrophysics Data System (ADS)
Lancaster, Lachlan; Cyr-Racine, Francis-Yan; Knox, Lloyd; Pan, Zhen
2017-07-01
We present updated constraints on the free-streaming nature of cosmological neutrinos from cosmic microwave background (CMB) temperature and polarization power spectra, baryonic acoustic oscillation data, and distance ladder measurements of the Hubble constant. Specifically, we consider a Fermi-like four-fermion interaction between massless neutrinos, characterized by an effective coupling constant Geff, and resulting in a neutrino opacity τ̇ν ∝ Geff^2 Tν^5. Using a conservative flat prior on the parameter log10(Geff MeV^2), we find a bimodal posterior distribution with two clearly separated regions of high probability. The first of these modes is consistent with the standard ΛCDM cosmology and corresponds to neutrinos decoupling at redshift zν,dec > 1.3×10^5, that is, before the Fourier modes probed by the CMB damping tail enter the causal horizon. The other mode of the posterior, dubbed the "interacting neutrino mode", corresponds to neutrino decoupling occurring within a narrow redshift window centered around zν,dec ~ 8300. This mode is characterized by a high value of the effective neutrino coupling constant, log10(Geff MeV^2) = -1.72 ± 0.10 (68% C.L.), together with a lower value of the scalar spectral index and amplitude of fluctuations, and a higher value of the Hubble parameter. Using both a maximum likelihood analysis and the ratio of the two modes' Bayesian evidences, we find the interacting neutrino mode to be statistically disfavored compared to the standard ΛCDM cosmology, and determine this result to be largely driven by the low-l CMB temperature data. Interestingly, the addition of CMB polarization and direct Hubble constant measurements significantly raises the statistical significance of this secondary mode, indicating that new physics in the neutrino sector could help explain the difference between local measurements of H0 and those inferred from CMB data. A robust consequence of our results is that neutrinos must be free streaming long before the epoch of matter-radiation equality in order to fit current cosmological data.
T2 values of articular cartilage in clinically relevant subregions of the asymptomatic knee.
Surowiec, Rachel K; Lucas, Erin P; Fitzcharles, Eric K; Petre, Benjamin M; Dornan, Grant J; Giphart, J Erik; LaPrade, Robert F; Ho, Charles P
2014-06-01
In order for T2 mapping to become more clinically applicable, reproducible subregions and standardized T2 parameters must be defined. This study sought to: (1) define clinically relevant subregions of knee cartilage using bone landmarks identifiable on both MR images and during arthroscopy and (2) determine healthy T2 values and T2 texture parameters within these subregions. Twenty-five asymptomatic volunteers (age 18-35) were evaluated with a sagittal T2 mapping sequence. Manual segmentation was performed by three raters, and cartilage was divided into twenty-one subregions modified from the International Cartilage Repair Society Articular Cartilage Mapping System. Mean T2 values and texture parameters (entropy, variance, contrast, homogeneity) were recorded for each subregion, and inter-rater and intra-rater reliability was assessed. The central regions of the condyles had significantly higher T2 values than the posterior regions (P < 0.05) and higher variance than the posterior region on the medial side (P < 0.001). The central trochlea had significantly greater T2 values than the anterior and posterior condyles. The central lateral plateau had lower T2 values, lower variance, higher homogeneity, and lower contrast than nearly all subregions in the tibia. The central patellar regions had higher entropy than the superior and inferior regions (each P ≤ 0.001). Repeatability was good to excellent for all subregions. Significant differences in mean T2 values and texture parameters were found between subregions in this carefully selected asymptomatic population, which suggest that there is normal variation of T2 values within the knee joint. The clinically relevant subregions were found to be robust as demonstrated by the overall high repeatability.
RadVel: The Radial Velocity Modeling Toolkit
NASA Astrophysics Data System (ADS)
Fulton, Benjamin J.; Petigura, Erik A.; Blunt, Sarah; Sinukoff, Evan
2018-04-01
RadVel is an open-source Python package for modeling Keplerian orbits in radial velocity (RV) timeseries. RadVel provides a convenient framework to fit RVs using maximum a posteriori optimization and to compute robust confidence intervals by sampling the posterior probability density via Markov Chain Monte Carlo (MCMC). RadVel allows users to float or fix parameters, impose priors, and perform Bayesian model comparison. We have implemented real-time MCMC convergence tests to ensure adequate sampling of the posterior. RadVel can output a number of publication-quality plots and tables. Users may interface with RadVel through a convenient command-line interface or directly from Python. The code is object-oriented and thus naturally extensible. We encourage contributions from the community. Documentation is available at http://radvel.readthedocs.io.
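The MAP-then-MCMC workflow that RadVel packages can be sketched generically. The code below is not RadVel's API; it is an illustrative circular-orbit fit on synthetic data, using scipy for the maximum a posteriori step and emcee for posterior sampling:

```python
import numpy as np
import emcee
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Synthetic RVs from a circular one-planet orbit (all values illustrative).
P_true, K_true, t0_true, g_true = 12.3, 8.0, 2.0, 1.0
t = np.sort(rng.uniform(0.0, 120.0, 60))
err = np.full_like(t, 1.5)
rv = (K_true * np.sin(2 * np.pi * (t - t0_true) / P_true) + g_true
      + rng.normal(0.0, err))

def log_post(theta):
    """Log posterior: Gaussian likelihood with simple uniform priors."""
    P, K, t0, gamma = theta
    if not (1.0 < P < 100.0 and 0.0 < K < 50.0):
        return -np.inf
    model = K * np.sin(2 * np.pi * (t - t0) / P) + gamma
    return -0.5 * np.sum(((rv - model) / err) ** 2)

# Maximum a posteriori fit, then MCMC started near it, mirroring the workflow.
map_fit = minimize(lambda th: -log_post(th), x0=[12.0, 7.0, 1.5, 0.0],
                   method="Nelder-Mead")
nwalkers, ndim = 32, 4
p0 = map_fit.x + 1e-3 * rng.normal(size=(nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_post)
sampler.run_mcmc(p0, 3000, progress=False)
chain = sampler.get_chain(discard=1000, flat=True)
print("P (16/50/84th pct):", np.percentile(chain[:, 0], [16, 50, 84]))
```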
Pippi — Painless parsing, post-processing and plotting of posterior and likelihood samples
NASA Astrophysics Data System (ADS)
Scott, Pat
2012-11-01
Interpreting samples from likelihood or posterior probability density functions is rarely as straightforward as it seems it should be. Producing publication-quality graphics of these distributions is often similarly painful. In this short note I describe pippi, a simple, publicly available package for parsing and post-processing such samples, as well as generating high-quality PDF graphics of the results. Pippi is easily and extensively configurable and customisable, both in its options for parsing and post-processing samples, and in the visual aspects of the figures it produces. I illustrate some of these using an existing supersymmetric global fit, performed in the context of a gamma-ray search for dark matter. Pippi can be downloaded and followed at http://github.com/patscott/pippi.
Determining X-ray source intensity and confidence bounds in crowded fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Primini, F. A.; Kashyap, V. L., E-mail: fap@head.cfa.harvard.edu
We present a rigorous description of the general problem of aperture photometry in high-energy astrophysics photon-count images, in which the statistical noise model is Poisson, not Gaussian. We compute the full posterior probability density function for the expected source intensity for various cases of interest, including the important cases in which both source and background apertures contain contributions from the source, and when multiple source apertures partially overlap. A Bayesian approach offers the advantages of allowing one to (1) include explicit prior information on source intensities, (2) propagate posterior distributions as priors for future observations, and (3) use Poisson likelihoods, making the treatment valid in the low-counts regime. Elements of this approach have been implemented in the Chandra Source Catalog.
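The simplest version of the source-plus-background aperture problem can be evaluated on a grid. In the sketch below, the counts, grids, and flat priors are illustrative assumptions, not the catalog's actual implementation; the background nuisance parameter is marginalized numerically:

```python
import numpy as np
from scipy.stats import poisson

# Illustrative counts: source aperture and a background aperture whose area
# is r times the source aperture's.
n_on, n_off, r = 25, 40, 10.0

s = np.linspace(0.0, 40.0, 401)      # source intensity grid (counts/aperture)
b = np.linspace(0.01, 15.0, 301)     # background intensity grid (per source aperture)
S, B = np.meshgrid(s, b, indexing="ij")

# Joint posterior on the grid with flat priors; Poisson likelihoods keep the
# treatment valid in the low-counts regime.
log_post = poisson.logpmf(n_on, S + B) + poisson.logpmf(n_off, r * B)
post = np.exp(log_post - log_post.max())

marg_s = post.sum(axis=1)            # marginalize the background nuisance
marg_s /= marg_s.sum()
print(f"posterior mean source intensity: {np.sum(s * marg_s):.1f} counts")
```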
Guggenberger, R; Winklhofer, S; Osterhoff, G; Wanner, G A; Fortunati, M; Andreisek, G; Alkadhi, H; Stolzmann, P
2012-11-01
To evaluate optimal monoenergetic dual-energy computed tomography (DECT) settings for artefact reduction of posterior spinal fusion implants of various vendors and spine levels. Posterior spinal fusion implants of five vendors for cervical, thoracic and lumbar spine were examined ex vivo with single-energy (SE) CT (120 kVp) and DECT (140/100 kVp). Extrapolated monoenergetic DECT images at 64, 69, 88, 105 keV and individually adjusted monoenergy for optimised image quality (OPTkeV) were generated. Two independent radiologists assessed quantitative and qualitative image parameters for each device and spine level. Inter-reader agreements of quantitative and qualitative parameters were high (ICC = 0.81-1.00, κ = 0.54-0.77). HU values of spinal fusion implants were significantly different among vendors (P < 0.001), spine levels (P < 0.01) and among SECT, monoenergetic DECT of 64, 69, 88, 105 keV and OPTkeV (P < 0.01). Image quality was significantly (P < 0.001) different between datasets and improved with higher monoenergies of DECT compared with SECT (V = 0.58, P < 0.001). Artefacts decreased significantly (V = 0.51, P < 0.001) at higher monoenergies. OPTkeV values ranged from 123-141 keV. OPTkeV according to vendor and spine level are presented herein. Monoenergetic DECT provides significantly better image quality and less metallic artefacts from implants than SECT. Use of individual keV values for vendor and spine level is recommended. • Artefacts pose problems for CT following posterior spinal fusion implants. • CT images are interpreted better with monoenergetic extrapolation using dual-energy (DE) CT. • DECT extrapolation improves image quality and reduces metallic artefacts over SECT. • There were considerable differences in monoenergy values among vendors and spine levels. • Use of individualised monoenergy values is indicated for different metallic hardware devices.
Lantsberg, L; Goldman, M
1991-04-01
The level of amputation continues to present a challenge for surgeons. In view of this, 24 patients who required an amputation of their ischaemic leg were studied prospectively using Laser Doppler flowmetry (LDF), TcpO2 measurements and Doppler ultrasound to assess the best level for amputation. In all patients gangrene of the leg and rest pain were the indication for an amputation. Skin oxygen tension (TcpO2) and skin blood flow (LDF) measurements were obtained the day before surgery on the proposed anterior and posterior skin flaps for below knee amputation, and the maximum Doppler systolic pressure was measured. The level of amputation was chosen at surgery by clinical judgement without reference to the measurements mentioned above. A below knee amputation was performed in 17 patients and an above knee in seven. All amputations healed by primary intention. Doppler pressures showed poor discrimination, with a median value of 10 mmHg (0-25) in AK patients and 35 mmHg (0-85) in the BK group (p > 0.05). In contrast, TcpO2 showed a trend. In the BK group the median value was 20 mmHg (4-50) on the anterior and 22 mmHg (2-60) on the posterior flap, compared to above knee amputees with median values of 6 mmHg (2-11) and 8 mmHg (3-38), respectively (p > 0.05). Laser Doppler seemed more useful. In BK patients the median LDF values were 36 mV (20-85) on the anterior and 34 mV (20-80) on the posterior flap, with median LDF values of 10 mV (10-18) on the anterior and 11 mV (8-38) on the posterior flap in the above knee group (p < 0.01). Laser Doppler flowmetry is a simple objective test, which is a better discriminator of skin flap perfusion than either TcpO2 or Doppler ankle pressures.
Polar Value Analysis of Low to Moderate Astigmatism with Wavefront-Guided Sub-Bowman Keratomileusis
Zhang, Yu
2017-01-01
Purpose To evaluate the astigmatic outcomes of wavefront-guided sub-Bowman keratomileusis (WFG-SBK) for low to moderate myopic astigmatism. Methods This study enrolled 100 right eyes from 100 patients who underwent WFG-SBK for the correction of myopia and astigmatism. The polar value method was performed with anterior and posterior corneal astigmatism measured with a Scheimpflug camera combined with Placido corneal topography (Sirius, CSO) and refractive astigmatism preoperatively and 1 month, 3 months, and 6 months postoperatively. Results Similar results were obtained for surgically induced astigmatism (SIA) and error of the procedure in both anterior corneal astigmatism (ACA) and total ocular astigmatism (TOA). There was a minor undercorrection of the cylinder in both ACA and TOA. Posterior corneal astigmatism (PCA) showed no significant change. Conclusions Wavefront-guided SBK could provide good astigmatic outcomes for the correction of low to moderate myopic astigmatism. The surgical effects were largely attributed to the astigmatic correction of the anterior corneal surface. Posterior corneal astigmatism remained unchanged even after WFG-SBK for myopic astigmatism. Polar value analysis can be used to guide adjustments to the treatment cylinder alongside a nomogram designed to optimize postoperative astigmatic outcomes in myopic WFG-SBK. PMID:28831306
1983-09-01
Supported by the Consejo Nacional de Ciencia y Tecnologia - Mexico, by ONR under Contract No. N00014-77-C-0675, and by ARO under Contract No. DAAG29-80-K-0042.
Failure analysis of various monolithic posterior aesthetic dental crowns using finite element method
NASA Astrophysics Data System (ADS)
Porojan, Liliana; Topală, Florin
2017-08-01
The aim of the study was to assess the effect of material stiffness and load on the biomechanical performance of monolithic full-coverage posterior aesthetic dental crowns using finite element analysis. Three restorative materials for monolithic dental crowns were selected for the study: zirconia, lithium disilicate glass-ceramic, and resin-based composite. Stresses were calculated in the crowns for all materials and in the teeth structures, under different load values. The experiments show that dental crowns made from all these new aesthetic materials processed by CAD/CAM technologies would be indicated as monolithic dental crowns for posterior areas.
Finite element model updating using the shadow hybrid Monte Carlo technique
NASA Astrophysics Data System (ADS)
Boulkaibet, I.; Mthembu, L.; Marwala, T.; Friswell, M. I.; Adhikari, S.
2015-02-01
Recent research in the field of finite element model (FEM) updating advocates the adoption of Bayesian analysis techniques for dealing with the uncertainties associated with these models. However, Bayesian formulations require the evaluation of the posterior distribution function, which may not be available in analytical form. This is the case in FEM updating. In such cases sampling methods can provide good approximations of the posterior distribution when implemented in the Bayesian context. Markov Chain Monte Carlo (MCMC) algorithms are the most popular sampling tools used to sample probability distributions. However, the efficiency of these algorithms is affected by the complexity of the systems (the size of the parameter space). The Hybrid Monte Carlo (HMC) method offers a very important MCMC approach to dealing with higher-dimensional complex problems. The HMC uses molecular dynamics (MD) steps as the global Monte Carlo (MC) moves to reach areas of high probability, where the gradient of the log-density of the posterior acts as a guide during the search process. However, the acceptance rate of HMC is sensitive to the system size as well as the time step used to evaluate the MD trajectory. To overcome this limitation we propose the use of the Shadow Hybrid Monte Carlo (SHMC) algorithm. The SHMC algorithm is a modified version of HMC designed to improve sampling for large system sizes and time steps. This is done by sampling from a modified Hamiltonian function instead of the normal Hamiltonian function. In this paper, the efficiency and accuracy of the SHMC method are tested on the updating of two real structures, an unsymmetrical H-shaped beam structure and a GARTEUR SM-AG19 structure, and compared to the application of the HMC algorithm on the same structures.
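A minimal HMC transition, the baseline that SHMC modifies, fits in a few lines. The sketch below uses a correlated Gaussian as a stand-in for the FEM-updating posterior (step size and trajectory length are arbitrary choices) and shows the momentum refresh, the leapfrog MD trajectory, and the Metropolis accept/reject on the total Hamiltonian:

```python
import numpy as np

rng = np.random.default_rng(1)

# Correlated Gaussian stand-in for the FEM-updating posterior.
prec = np.array([[2.0, 0.9], [0.9, 1.0]])   # precision matrix

def U(q):                                    # potential energy = -log posterior
    return 0.5 * q @ prec @ q

def grad_U(q):
    return prec @ q

def hmc_step(q, eps=0.15, n_leap=20):
    """One HMC transition: refresh momentum, run a leapfrog MD trajectory,
    then Metropolis accept/reject on the total Hamiltonian."""
    p = rng.normal(size=q.size)
    q_new, p_new = q.copy(), p.copy()
    for _ in range(n_leap):
        p_new = p_new - 0.5 * eps * grad_U(q_new)
        q_new = q_new + eps * p_new
        p_new = p_new - 0.5 * eps * grad_U(q_new)
    dH = (U(q_new) + 0.5 * p_new @ p_new) - (U(q) + 0.5 * p @ p)
    if np.log(rng.random()) < -dH:           # accept with probability min(1, e^-dH)
        return q_new, 1
    return q, 0

q, samples, accepted = np.zeros(2), [], 0
for _ in range(5000):
    q, ok = hmc_step(q)
    accepted += ok
    samples.append(q)
samples = np.asarray(samples)
print("acceptance rate:", accepted / 5000)
print("sample covariance:\n", np.cov(samples.T))
```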
Sihota, Ramanjit; Goyal, Amita; Kaur, Jasbir; Gupta, Viney; Nag, Tapas C
2012-01-01
To study ultrastructural changes of the trabecular meshwork in acute and chronic primary angle closure glaucoma (PACG) and primary open angle glaucoma (POAG) eyes by scanning electron microscopy. Twenty-one trabecular meshwork surgical specimens from consecutive glaucomatous eyes after a trabeculectomy and five postmortem corneoscleral specimens were fixed immediately in Karnovsky solution. The tissues were washed in 0.1 M phosphate buffer saline, post-fixed in 1% osmium tetroxide, dehydrated in an acetone series (30-100%), dried and mounted. Normal trabecular tissue showed well-defined, thin, cylindrical uveal trabecular beams with many large spaces, overlying flatter corneoscleral beams and numerous smaller spaces. In acute PACG eyes, the trabecular meshwork showed grossly swollen, irregular trabecular endothelial cells with intercellular and occasional basal separation with few spaces. Numerous activated macrophages, leucocytes and amorphous debris were present. Chronic PACG eyes had a few, thickened posterior uveal trabecular beams visible. A homogenous deposit covered the anterior uveal trabeculae and spaces. A converging, fan-shaped trabecular beam configuration corresponded to gonioscopic areas of peripheral anterior synechiae. In POAG eyes, anterior uveal trabecular beams were thin and strap-like, while those posteriorly were wide, with a homogenous deposit covering and bridging intertrabecular spaces, especially posteriorly. Underlying corneoscleral trabecular layers and spaces were visualized in some areas. In acute PACG, marked edema of the endothelium probably contributes to the acute and marked intraocular pressure (IOP) elevation. Chronically raised IOP in chronic PACG and POAG probably results, at least in part, from decreased aqueous outflow secondary to widening and fusion of adjacent trabecular beams, together with the homogenous deposit enmeshing trabecular beams and spaces.
Probabilistically modeling lava flows with MOLASSES
NASA Astrophysics Data System (ADS)
Richardson, J. A.; Connor, L.; Connor, C.; Gallant, E.
2017-12-01
Modeling lava flows through Cellular Automata methods enables a computationally inexpensive means to quickly forecast lava flow paths and ultimate areal extents. We have developed a lava flow simulator, MOLASSES, that forecasts lava flow inundation over an elevation model from a point source eruption. This modular code can be implemented in a deterministic fashion with given user inputs that will produce a single lava flow simulation. MOLASSES can also be implemented in a probabilistic fashion, where given user inputs define parameter distributions that are randomly sampled to create many lava flow simulations. This probabilistic approach enables uncertainty in input data to be expressed in the model results, and MOLASSES outputs a probability map of inundation instead of a determined lava flow extent. Since the code is comparatively fast, we use it probabilistically to investigate where potential vents are located that may impact specific sites and areas, as well as the unconditional probability of lava flow inundation of sites or areas from any vent. We have validated the MOLASSES code against community-defined benchmark tests and against the real-world lava flows at Tolbachik (2012-2013) and Pico do Fogo (2014-2015). To determine the efficacy of the MOLASSES simulator at accurately and precisely mimicking the inundation area of real flows, we report goodness of fit using both model sensitivity and the Positive Predictive Value, the latter of which is a Bayesian posterior statistic. Model sensitivity is often used in evaluating lava flow simulators, as it describes how much of the lava flow was successfully modeled by the simulation. We argue that the positive predictive value is equally important in determining how good a simulator is, as it describes the percentage of the simulation space that was actually inundated by lava.
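Both goodness-of-fit metrics reduce to confusion-matrix counts on the inundation grids. In the sketch below, the observed and simulated footprints are random stand-ins; real use would load the mapped flow outline and a thresholded MOLASSES output:

```python
import numpy as np

rng = np.random.default_rng(7)

# Boolean inundation grids; random stand-ins for the observed flow footprint
# and a thresholded simulated footprint.
observed = rng.random((200, 200)) < 0.15
simulated = rng.random((200, 200)) < 0.18

tp = np.sum(simulated & observed)    # cells correctly flagged as inundated
fn = np.sum(~simulated & observed)   # real lava the model missed
fp = np.sum(simulated & ~observed)   # modeled lava where none occurred

sensitivity = tp / (tp + fn)         # fraction of the real flow captured
ppv = tp / (tp + fp)                 # fraction of the simulated flow that is real
print(f"sensitivity = {sensitivity:.2f}, PPV = {ppv:.2f}")
```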
Brooks, Billy; McBee, Matthew; Pack, Robert; Alamian, Arsham
2017-05-01
Rates of accidental overdose mortality from substance use disorder (SUD) have risen dramatically in the United States since 1990. Between 1999 and 2004 alone, rates increased 62% nationwide, with rural overdose mortality increasing at a rate 3 times that seen in urban populations. Cultural differences between rural and urban populations (e.g., educational attainment, unemployment rates, social characteristics, etc.) affect the nature of SUD, leading to disparate risk of overdose across these communities. Multiple-groups latent class analysis with covariates was applied to data from the 2011 and 2012 National Survey on Drug Use and Health (n = 12,140) to examine potential differences in latent classifications of SUD between rural and urban adult (aged 18 years and older) populations. Nine drug categories were used to identify latent classes of SUD defined by probability of diagnosis within these categories. Once the class structures were established for rural and urban samples, posterior membership probabilities were entered into a multinomial regression analysis of socio-demographic predictors' association with the likelihood of SUD latent class membership. Latent class structures differed across the sub-groups, with the rural sample fitting a 3-class structure (Bootstrap Likelihood Ratio Test P value = 0.03) and the urban fitting a 6-class model (Bootstrap Likelihood Ratio Test P value < 0.0001). Overall the rural class structure exhibited less diversity in class structure and lower prevalence of SUD in multiple drug categories (e.g., cocaine, hallucinogens, and stimulants). This result supports the hypothesis that different underlying elements exist in the two populations that affect SUD patterns, and thus can inform the development of surveillance instruments, clinical services, and prevention programming tailored to specific communities. Copyright © 2017 Elsevier Ltd. All rights reserved.
Fujimoto, Eisaku; Sasashige, Yoshiaki; Masuda, Yasuji; Hisatome, Takashi; Eguchi, Akio; Masuda, Tetsuo; Sawa, Mikiya; Nagata, Yoshinori
2013-12-01
The intra-operative femorotibial joint gap and ligament balance, the predictors affecting these gaps and their balances, as well as the postoperative knee flexion, were examined. These factors were assessed radiographically after a posterior cruciate-retaining total knee arthroplasty (TKA). The posterior condylar offset and posterior tibial slope have been reported as the most important intra-operative factors affecting cruciate-retaining-type TKAs. The joint gap and balance have not been investigated in assessments of the posterior condylar offset and the posterior tibial slope. The femorotibial gap and medial/lateral ligament balance were measured with an offset-type tensor. The femorotibial gaps were measured at 0°, 45°, 90° and 135° of knee flexion, and various gap changes were calculated at 0°-90° and 0°-135°. Cruciate-retaining-type arthroplasties were performed in 98 knees with varus osteoarthritis. The 0°-90° femorotibial gap change was strongly affected by the posterior condylar offset value (postoperative posterior condylar offset subtracted by the preoperative posterior condylar offset). The 0°-135° femorotibial gap change was significantly correlated with the posterior tibial slope and the 135° medial/lateral ligament balance. The postoperative flexion angle was positively correlated with the preoperative flexion angle, γ angle and the posterior tibial slope. Multiple-regression analysis demonstrated that the preoperative flexion angle, γ angle, posterior tibial slope and 90° medial/lateral ligament balance were significant independent factors for the postoperative knee flexion angle. The flexion angle change (postoperative flexion angle subtracted by the preoperative flexion angle) was also strongly correlated with the preoperative flexion angle, posterior tibial slope and 90° medial/lateral ligament balance. The postoperative flexion angle is affected by multiple factors, especially in cruciate-retaining-type TKAs. However, it is important to pay attention not only to the posterior tibial slope, but also to the flexion medial/lateral ligament balance during surgery. A cruciate-retaining-type TKA has the potential to achieve both stability and a wide range of motion and to improve the patients' activities of daily living.
Fanshawe, T. R.
2015-01-01
There are many examples from the scientific literature of visual search tasks in which the length, scope and success rate of the search have been shown to vary according to the searcher's expectations of whether the search target is likely to be present. This phenomenon has major practical implications, for instance in cancer screening, when the prevalence of the condition is low and the consequences of a missed disease diagnosis are severe. We consider this problem from an empirical Bayesian perspective to explain how the effect of a low prior probability, subjectively assessed by the searcher, might impact on the extent of the search. We show how the searcher's posterior probability that the target is present depends on the prior probability and the proportion of possible target locations already searched, and also consider the implications of imperfect search, when the probability of false-positive and false-negative decisions is non-zero. The theoretical results are applied to two studies of radiologists' visual assessment of pulmonary lesions on chest radiographs. Further application areas in diagnostic medicine and airport security are also discussed. PMID:26587267
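The dependence of the searcher's posterior on the prior and the fraction searched follows directly from Bayes' rule. The formula below is the standard derivation assuming a uniformly distributed target and no false positives; the paper's exact formulation may differ in detail:

```python
def posterior_present(prior, frac_searched, p_detect=1.0):
    """Posterior probability the target is present after a fruitless search of
    a fraction of the locations; p_detect < 1 models false-negative-prone search."""
    miss = 1.0 - frac_searched * p_detect      # P(not yet found | target present)
    return prior * miss / (prior * miss + (1.0 - prior))

# A low-prevalence screening read: prior 2%, half the image already searched.
print(round(posterior_present(0.02, 0.5), 4))                 # perfect search
print(round(posterior_present(0.02, 0.5, p_detect=0.8), 4))   # imperfect search
```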
VizieR Online Data Catalog: Close encounters to the Sun in Gaia DR1 (Bailer-Jones, 2018)
NASA Astrophysics Data System (ADS)
Bailer-Jones, C. A. L.
2017-08-01
The table gives the perihelion (closest approach) parameters of stars in the Gaia-DR1 TGAS catalogue which are found by numerical integration through a Galactic potential to approach within 10pc of the Sun. These parameters are the time (relative to the Gaia measurement epoch), heliocentric distance, and heliocentric speed of the star at perihelion. Uncertainties in these have been calculated by a Monte Carlo sampling of the data to give the posterior probability density function (PDF) over the parameters. For each parameter three summary values of this PDF are reported: the median, the 5% lower bound, the 95% upper bound. The latter two give a 90% confidence interval. The table also reports the probability that each star approaches the Sun within 0.5, 1.0, and 2.0pc, as well as the measured parallax, proper motion, and radial velocity (plus uncertainties) of the stars. Table 3 in the article lists the first 20 lines of this data table (stars with median perihelion distances below 2pc). Some stars are duplicated in this table, i.e. there are rows with the same ID, but different data. Stars with problematic data have not been removed, so some encounters are not reliable. Most IDs are Tycho, but in a few cases they are Hipparcos. (1 data file).
Three Insights from a Bayesian Interpretation of the One-Sided "P" Value
ERIC Educational Resources Information Center
Marsman, Maarten; Wagenmakers, Eric-Jan
2017-01-01
P values have been critiqued on several grounds but remain entrenched as the dominant inferential method in the empirical sciences. In this article, we elaborate on the fact that in many statistical models, the one-sided "P" value has a direct Bayesian interpretation as the approximate posterior mass for values lower than zero. The…
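In the simplest conjugate case the correspondence is exact rather than approximate: for normal data with known variance and a flat prior on the mean, the one-sided p-value equals the posterior mass below zero. A quick numerical check (illustrative numbers):

```python
import numpy as np
from scipy.stats import norm

# Normal data with known sigma = 1 and a flat prior on the mean mu.
n, xbar = 25, 0.3
z = xbar * np.sqrt(n)                    # test statistic under H0: mu = 0

p_one_sided = 1 - norm.cdf(z)            # P(statistic at least this large | mu = 0)
post_below_zero = norm.cdf(0, loc=xbar, scale=1 / np.sqrt(n))  # P(mu < 0 | data)

print(f"one-sided p      = {p_one_sided:.4f}")
print(f"P(mu < 0 | data) = {post_below_zero:.4f}")  # identical in this case
```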
Multiple Neural Mechanisms of Decision Making and Their Competition under Changing Risk Pressure
Kolling, Nils; Wittmann, Marco; Rushworth, Matthew F.S.
2014-01-01
Summary Sometimes when a choice is made, the outcome is not guaranteed and there is only a probability of its occurrence. Each individual’s attitude to probability, sometimes called risk proneness or aversion, has been assumed to be static. Behavioral ecological studies, however, suggest such attitudes are dynamically modulated by the context an organism finds itself in; in some cases, it may be optimal to pursue actions with a low probability of success but which are associated with potentially large gains. We show that human subjects rapidly adapt their use of probability as a function of current resources, goals, and opportunities for further foraging. We demonstrate that dorsal anterior cingulate cortex (dACC) carries signals indexing the pressure to pursue unlikely choices and signals related to the taking of such choices. We show that dACC exerts this control over behavior when it, rather than ventromedial prefrontal cortex, interacts with posterior cingulate cortex. PMID:24607236
Yu, Fang; Chen, Ming-Hui; Kuo, Lynn; Talbott, Heather; Davis, John S
2015-08-07
Recently, the Bayesian method has become more popular for analyzing high dimensional gene expression data, as it allows us to borrow information across different genes and provides powerful estimators for evaluating gene expression levels. It is crucial to develop a simple but efficient gene selection algorithm for detecting differentially expressed (DE) genes based on the Bayesian estimators. In this paper, by extending the two-criterion idea of Chen et al. (Chen M-H, Ibrahim JG, Chi Y-Y. A new class of mixture models for differential gene expression in DNA microarray data. J Stat Plan Inference. 2008;138:387-404), we propose two new gene selection algorithms for general Bayesian models and name these new methods the confident difference criterion methods. One is based on the standardized differences between two mean expression values among genes; the other adds the differences between two variances to it. The proposed confident difference criterion methods first evaluate the posterior probability of a gene having different gene expressions between competitive samples and then declare a gene to be DE if the posterior probability is large. The theoretical connection between the proposed first method based on the means and the Bayes factor approach proposed by Yu et al. (Yu F, Chen M-H, Kuo L. Detecting differentially expressed genes using calibrated Bayes factors. Statistica Sinica. 2008;18:783-802) is established under the normal-normal model with equal variances between two samples. The empirical performance of the proposed methods is examined and compared to those of several existing methods via several simulations. The results from these simulation studies show that the proposed confident difference criterion methods outperform the existing methods when comparing gene expressions across different conditions for both microarray studies and sequence-based high-throughput studies. A real dataset is used to further demonstrate the proposed methodology. In the real data application, the confident difference criterion methods successfully identified more clinically important DE genes than the other methods. The confident difference criterion method proposed in this paper provides a new efficient approach for both microarray studies and sequence-based high-throughput studies to identify differentially expressed genes.
Michener, Lori A.; Doukas, William C.; Murphy, Kevin P.; Walsworth, Matthew K.
2011-01-01
Context: Type I superior labrum anterior-posterior (SLAP) lesions involve degenerative fraying and probably are not the cause of shoulder pain. Type II to IV SLAP lesions are tears of the labrum. Objective: To determine the diagnostic accuracy of patient history and the active compression, anterior slide, and crank tests for type I and type II to IV SLAP lesions. Design: Cohort study. Setting: Clinic. Patients or Other Participants: Fifty-five patients (47 men, 8 women; age = 40.6 ± 15.1 years) presenting with shoulder pain. Intervention(s): For each patient, an orthopaedic surgeon conducted a clinical examination of history of trauma; sudden onset of symptoms; history of popping, clicking, or catching; age; and active compression, crank, and anterior slide tests. The reference standard was the intraoperative diagnosis. The operating surgeon was blinded to the results of the clinical examination. Main Outcome Measure(s): Diagnostic utility was calculated using the receiver operating characteristic curve and area under the curve (AUC), sensitivity, specificity, positive likelihood ratio (+LR), and negative likelihood ratio (−LR). Forward stepwise binary regression was used to determine a combination of tests for diagnosis. Results: No history item or physical examination test had diagnostic accuracy for type I SLAP lesions (n = 13). The anterior slide test had utility (AUC = 0.70, +LR = 2.25, −LR = 0.44) to confirm and exclude type II to IV SLAP lesions (n = 10). The combination of a history of popping, clicking, or catching and the anterior slide test demonstrated diagnostic utility for confirming type II to IV SLAP lesions (+LR = 6.00). Conclusions: The anterior slide test had limited diagnostic utility for confirming and excluding type II to IV SLAP lesions; diagnostic values indicated only small shifts in probability. However, the combination of the anterior slide test with a history of popping, clicking, or catching had moderate diagnostic utility for confirming type II to IV SLAP lesions. No single item or combination of history items and physical examination tests had diagnostic utility for type I SLAP lesions. PMID:21944065
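Likelihood ratios act on pre-test odds, which is what the "shifts in probability" above quantify. A sketch using the abstract's own figures, with the pre-test probability taken (as an assumption) to be the 10/55 study prevalence of type II to IV lesions:

```python
def post_test_probability(pretest, lr):
    """Push a pre-test probability through a likelihood ratio via odds."""
    odds = pretest / (1.0 - pretest)
    post_odds = odds * lr
    return post_odds / (1.0 + post_odds)

pre = 10 / 55   # assumed pre-test probability: study prevalence of type II-IV tears
print(round(post_test_probability(pre, 6.00), 2))  # history + anterior slide, +LR = 6.0
print(round(post_test_probability(pre, 2.25), 2))  # anterior slide alone, +LR = 2.25
```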
NASA Astrophysics Data System (ADS)
Arhonditsis, George B.; Papantou, Dimitra; Zhang, Weitao; Perhar, Gurbir; Massos, Evangelia; Shi, Molu
2008-09-01
Aquatic biogeochemical models have been an indispensable tool for addressing pressing environmental issues, e.g., understanding oceanic response to climate change, elucidation of the interplay between plankton dynamics and atmospheric CO2 levels, and examination of alternative management schemes for eutrophication control. Their ability to form the scientific basis for environmental management decisions can be undermined by the underlying structural and parametric uncertainty. In this study, we outline how we can attain realistic predictive links between management actions and ecosystem response through a probabilistic framework that accommodates rigorous uncertainty analysis of a variety of error sources, i.e., measurement error, parameter uncertainty, and discrepancy between model and natural system. Because model uncertainty analysis essentially aims to quantify the joint probability distribution of model parameters and to make inference about this distribution, we believe that the iterative nature of Bayes' Theorem is a logical means to incorporate existing knowledge and update the joint distribution as new information becomes available. The statistical methodology begins with the characterization of parameter uncertainty in the form of probability distributions; then water quality data are used to update the distributions and yield posterior parameter estimates along with predictive uncertainty bounds. Our illustration is based on a six-state-variable (nitrate, ammonium, dissolved organic nitrogen, phytoplankton, zooplankton, and bacteria) ecological model developed for gaining insight into the mechanisms that drive plankton dynamics in a coastal embayment, the Gulf of Gera, Island of Lesvos, Greece. The lack of analytical expressions for the posterior parameter distributions was overcome using Markov chain Monte Carlo simulations, a convenient way to obtain representative samples of parameter values. The Bayesian calibration resulted in realistic reproduction of the key temporal patterns of the system, offered insights into the degree of information the data contain about model inputs, and also allowed the quantification of the dependence structure among the parameter estimates. Finally, our study uses two synthetic datasets to examine the ability of the updated model to provide estimates of predictive uncertainty for water quality variables of environmental management interest.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Korah, Mariam P., E-mail: mariam.philip@gmail.com; Deyrup, Andrea T.; Monson, David K.
2012-02-01
Purpose: To examine the influence of anatomic location in the upper extremity (UE) vs. lower extremity (LE) on the presentation and outcomes of adult soft tissue sarcomas (STS). Methods and Materials: From 2001 to 2008, 118 patients underwent limb-sparing surgery (LSS) and external beam radiotherapy (RT) with curative intent for nonrecurrent extremity STS. RT was delivered preoperatively in 96 and postoperatively in 22 patients. Lesions arose in the UE in 28 and in the LE in 90 patients. Patients with UE lesions had smaller tumors (4.5 vs. 9.0 cm, p < 0.01), were more likely to undergo a prior excision (43 vs. 22%, p = 0.03), to have close or positive margins after resection (71 vs. 49%, p = 0.04), and to undergo postoperative RT (32 vs. 14%, p = 0.04). Results: Five-year actuarial local recurrence-free and distant metastasis-free survival rates for the entire group were 85 and 74%, with no difference observed between the UE and LE cohorts. Five-year actuarial wound reoperation rates were 4% vs. 29% (p < 0.01) in the UE and LE, respectively. Thigh lesions accounted for 84% of the required wound reoperations. The distribution of tumors within the anterior, medial, and posterior thigh compartments was 51%, 26%, and 23%. Subset analysis by compartment showed no difference in the probability of wound reoperation between the anterior and medial/posterior compartments (29 vs. 30%, p = 0.68). Neurolysis was performed during resection in 15%, 5%, and 67% of tumors in the anterior, medial, and posterior compartments, respectively (p < 0.01). Conclusions: Tumors in the UE and LE differ significantly with respect to size and management details. The anatomy of the UE poses technical impediments to an R0 resection. Thigh tumors are associated with higher wound reoperation rates. Tumor resection in the posterior thigh compartment is more likely to result in nerve injury. A better understanding of the inherent differences between tumors in various extremity sites will assist in individualizing treatment.
Sun, Chuan-bin; You, Yong-sheng; Liu, Zhe; Zheng, Lin-yan; Chen, Pei-qing; Yao, Ke; Xue, An-quan
2016-01-01
To investigate the morphological characteristics of myopic macular retinoschisis (MRS) in teenagers with high myopia, 6 male (9 eyes) and 3 female (4 eyes) teenagers with typical MRS identified from chart review were evaluated. All cases underwent complete ophthalmic examinations including best corrected visual acuity (BCVA), indirect ophthalmoscopy, colour fundus photography, B-type ultrasonography, axial length measurement, and spectral-domain optical coherence tomography (SD-OCT). The average age was 17.8 ± 1.5 years, average refractive error was −17.04 ± 3.04 D, average BCVA was 0.43 ± 0.61, and average axial length was 30.42 ± 1.71 mm. Myopic macular degenerative changes (MDC) on colour fundus photographs revealed Ohno-Matsui Category 1 in 4 eyes and Category 2 in 9 eyes. Posterior staphyloma was found in 9 eyes. SD-OCT showed outer MRS in all 13 eyes, internal limiting membrane detachment in 7 eyes, vascular microfolds in 2 eyes, and inner MRS in 1 eye. No premacular structures such as macular epiretinal membrane or partially detached posterior hyaloids were found. Our results showed that MRS rarely occurred in highly myopic teenagers, and was not accompanied by premacular structures, severe MDC, or even obvious posterior staphyloma. This finding indicates that posterior scleral expansion is probably the main cause of MRS. PMID:27294332
Random Partition Distribution Indexed by Pairwise Information
Dahl, David B.; Day, Ryan; Tsai, Jerry W.
2017-01-01
We propose a random partition distribution indexed by pairwise similarity information such that partitions compatible with the similarities are given more probability. The use of pairwise similarities, in the form of distances, is common in some clustering algorithms (e.g., hierarchical clustering), but we show how to use this type of information to define a prior partition distribution for flexible Bayesian modeling. A defining feature of the distribution is that it allocates probability among partitions within a given number of subsets, but it does not shift probability among sets of partitions with different numbers of subsets. Our distribution places more probability on partitions that group similar items yet keeps the total probability of partitions with a given number of subsets constant. The distribution of the number of subsets (and its moments) is available in closed-form and is not a function of the similarities. Our formulation has an explicit probability mass function (with a tractable normalizing constant) so the full suite of MCMC methods may be used for posterior inference. We compare our distribution with several existing partition distributions, showing that our formulation has attractive properties. We provide three demonstrations to highlight the features and relative performance of our distribution. PMID:29276318
NASA Astrophysics Data System (ADS)
Pan, J.; Durand, M. T.; Vanderjagt, B. J.
2015-12-01
The Markov chain Monte Carlo (MCMC) method is a retrieval algorithm based on Bayes' rule, which starts from an initial state of snow/soil parameters and updates it to a series of new states by comparing the posterior probability of simulated snow microwave signals before and after each random-walk step. It is a realization of Bayes' rule, giving an approximation to the probability of the snow/soil parameters conditional on the measured microwave TB signals at different bands. Although this method can solve for all snow parameters, including depth, density, snow grain size and temperature, at the same time, it still needs prior information on these parameters for the posterior probability calculation. How the priors influence the SWE retrieval is a major concern. Therefore, in this paper, a sensitivity test will first be carried out to study how accurate the snow emission models, and how explicit the snow priors, need to be to keep the SWE error within a certain amount. The synthetic TB simulated from the measured snow properties plus a 2-K observation error will be used for this purpose. It aims to provide guidance on MCMC application under different circumstances. Later, the method will be used for the snowpits at different sites, including Sodankyla, Finland; Churchill, Canada; and Colorado, USA, using the measured TB from ground-based radiometers at different bands. Based on the previous work, the error in these practical cases will be studied, and the error sources will be separated and quantified.
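The random-walk acceptance rule described above is the Metropolis step. A self-contained one-parameter sketch is given below; the linear TB-depth relation, the prior, and the 2-K observation error are stand-ins, not the actual snow emission model:

```python
import numpy as np

rng = np.random.default_rng(3)

def log_post(depth):
    """Stand-in log posterior for snow depth (m): prior times TB likelihood.
    Real use compares a radiative transfer simulation with measured TB."""
    log_prior = -0.5 * ((depth - 1.0) / 0.5) ** 2      # prior: roughly 1 m of snow
    tb_sim = 250.0 - 30.0 * depth                      # toy linear emission model (K)
    log_like = -0.5 * ((tb_sim - 215.0) / 2.0) ** 2    # 2-K observation error
    return log_prior + log_like

depth, chain = 1.0, []
for _ in range(20000):
    proposal = depth + rng.normal(0.0, 0.05)           # random-walk proposal
    # Compare posterior probability before and after the step (Metropolis rule):
    if np.log(rng.random()) < log_post(proposal) - log_post(depth):
        depth = proposal
    chain.append(depth)

post = np.array(chain[5000:])                          # drop burn-in
print(f"depth: {post.mean():.2f} +/- {post.std():.2f} m")
```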
Del Águila-Carrasco, Antonio J; Domínguez-Vicent, Alberto; Pérez-Vives, Cari; Ferrer-Blasco, Teresa; Montés-Micó, Robert
2015-01-01
To assess the effect of different disposable soft contact lenses on several corneal parameters (thickness, anterior and posterior curvature, and volume) by means of a Scheimpflug imaging-based device (Pentacam HR). Diurnal variations of these parameters were taken into account. Twenty-one young, healthy subjects wore 4 different types of daily disposable soft contact lenses on 4 different days: Dailies AquaComfort Plus, SofLens, Dailies Total1, and Acuvue TruEye. The lenses had different materials and water contents. Pachymetry and curvature maps and corneal volume values were obtained using the Pentacam HR twice a day: once before putting the lens on and once after an 8-hour period of contact lens wear. Measurements were also taken without any contact lenses being worn. Regarding corneal thickness, the lens with the behavior most similar to the naked eye scenario was the Dailies Total1, causing a thickening of 0.2 ± 0.1% in the central zone and 0.6 ± 0.2% in the periphery. All 4 lenses caused a slight but not significant flattening in the anterior corneal curvature, whereas the posterior corneal curvature only experienced a significant but small steepening with the SofLens. The use of these lenses increased corneal volume slightly. Variations in corneal parameters seem to depend on the type of contact lens used (material, oxygen transmissibility, water content). However, the magnitude of the changes introduced by the use of soft contact lenses over the 8-hour period was small and probably not large enough to influence either visual acuity or comfort.
Liu, Peigui; Elshall, Ahmed S.; Ye, Ming; ...
2016-02-05
Evaluating marginal likelihood is the most critical and computationally expensive task when conducting Bayesian model averaging to quantify parametric and model uncertainties. The evaluation is commonly done by using Laplace approximations to evaluate semianalytical expressions of the marginal likelihood or by using Monte Carlo (MC) methods to evaluate the arithmetic or harmonic mean of a joint likelihood function. This study introduces a new MC method, i.e., thermodynamic integration, which has not been attempted in environmental modeling. Instead of using samples only from the prior parameter space (as in arithmetic mean evaluation) or the posterior parameter space (as in harmonic mean evaluation), the thermodynamic integration method uses samples generated gradually from the prior to the posterior parameter space. This is done through a path sampling that conducts Markov chain Monte Carlo simulation with different power coefficient values applied to the joint likelihood function. The thermodynamic integration method is evaluated using three analytical functions by comparing the method with two variants of the Laplace approximation method and three MC methods, including the nested sampling method that was recently introduced into environmental modeling. The thermodynamic integration method outperforms the other methods in terms of accuracy, convergence, and consistency. The thermodynamic integration method is also applied to a synthetic case of groundwater modeling with four alternative models. The application shows that model probabilities obtained using the thermodynamic integration method improve the predictive performance of Bayesian model averaging. As a result, the thermodynamic integration method is mathematically rigorous, and its MC implementation is computationally general for a wide range of environmental problems.
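On a one-dimensional toy problem, the thermodynamic integration identity log Z = integral over beta from 0 to 1 of E_beta[log L] can be checked against the exact marginal likelihood. Here the power-posterior expectations are computed by grid evaluation rather than MCMC, and all numbers are illustrative:

```python
import numpy as np
from scipy.stats import norm

# Toy model: prior theta ~ N(0, 2^2) and one observation y = 1.5 with unit noise.
theta = np.linspace(-10.0, 10.0, 4001)
log_prior = norm.logpdf(theta, 0.0, 2.0)
log_like = norm.logpdf(1.5, theta, 1.0)

def expected_log_like(beta):
    """E[log L] under the power posterior p_beta ~ prior * L^beta (grid version)."""
    w = np.exp(log_prior + beta * log_like)
    w /= w.sum()
    return np.sum(w * log_like)

betas = np.linspace(0.0, 1.0, 51)    # path from the prior (0) to the posterior (1)
path = np.array([expected_log_like(b) for b in betas])
log_z_ti = np.sum(0.5 * (path[1:] + path[:-1]) * np.diff(betas))   # trapezoid rule

log_z_exact = norm.logpdf(1.5, 0.0, np.sqrt(5.0))   # analytic marginal likelihood
print(f"TI estimate {log_z_ti:.4f} vs exact {log_z_exact:.4f}")
```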
Azevedo, Luciene Ferreira; Perlingeiro, Patricia; Hachul, Denise Tessariol; Gomes-Santos, Igor Lucas; Tsutsui, Jeane Mike; Negrao, Carlos Eduardo; De Matos, Luciana D N J
2016-01-01
Different season trainings may influence autonomic and non-autonomic cardiac control of heart rate and provoke specific adaptations in the heart's structure in athletes. We investigated the influence of transition training (TT) and competitive training (CT) on resting heart rate, its mechanisms of control, spontaneous baroreflex sensitivity (BRS), and the relationships between heart rate mechanisms and cardiac structure in professional cyclists (N = 10). Heart rate (ECG) and arterial blood pressure (pulse tonometry) were recorded continuously. Autonomic blockade was performed (atropine, 0.04 mg.kg-1; esmolol, 500 μg.kg-1 = 0.5 mg.kg-1). Vagal effect, intrinsic heart rate, parasympathetic (n) and sympathetic (m) modulations, autonomic influence, autonomic balance and BRS were calculated. Plasma norepinephrine (high-pressure liquid chromatography) and cardiac structure (echocardiography) were evaluated. Resting heart rate was similar in TT and CT. However, the vagal effect, intrinsic heart rate, autonomic influence and parasympathetic modulation (higher n value) decreased in CT (P≤0.05). Sympathetic modulation was similar in both trainings. The autonomic balance increased in CT but still showed parasympathetic predominance. Cardiac diameter, septum and posterior wall thickness and left ventricular mass also increased in CT (P<0.05), as did diastolic function. We observed an inverse correlation of left ventricular diastolic diameter, septum and posterior wall thickness, and left ventricular mass with intrinsic heart rate. Blood pressure and BRS were similar in both trainings. The intrinsic heart rate mechanism predominates over the vagal effect during CT, despite similar resting heart rate. Preserved blood pressure levels and BRS during CT are probably due to similar sympathetic modulation in both trainings.
Current concepts of etiology and treatment of chondromalacia patellae.
Bentley, G; Dowd, G
1984-10-01
Chondromalacia patellae is a distinct clinical entity characterized by retropatellar pain that is associated with recognizable changes in the articular cartilage of the posterior surface of the patella. Several factors may be involved in the etiology, such as severe patella alta, trauma, and, in rare cases, abnormal patellar tracking. Clinical symptoms and signs are reliable in only 50% of cases, but measurable quadriceps wasting, palpable patellofemoral crepitus, and effusion are strongly suggestive. Diagnosis must be confirmed by arthroscopy or direct examination of the posterior surface of the patella. Radiologic measurements of patellofemoral relations are of limited value in diagnosis. The initial pathologic finding is usually surface cartilage breakdown. Radioisotope studies show cartilage cell replication which suggests a healing capacity in early cases following treatment that alters the load through the affected cartilage. There is no evidence of progression to patellofemoral osteoarthritis, which is probably a different entity. The treatment should be conservative where possible with isometric quadriceps exercises and simple anti-inflammatory drugs such as aspirin. Operative treatment is indicated for patients with persistent pain and macroscopic involvement of more than half a centimeter of the articular cartilage surface. The simplest effective procedure that avoids quadriceps dysfunction and fibrosis is a distal patellar tendon medial realignment with lateral release and medial reefing of the quadriceps expansion. Patellectomy is indicated in more extensive involvement of the patella of 2 or more centimeters in diameter, but this must be performed only when the patient has excellent quadriceps function before surgery and is motivated to exercise after surgery.
Ibáñez-Escriche, N; López de Maturana, E; Noguera, J L; Varona, L
2010-11-01
We developed and implemented change-point recursive models and compared them with a linear recursive model and a standard mixed model (SMM), in the scope of the relationship between litter size (LS) and number of stillborns (NSB) in pigs. The proposed approach allows us to estimate the point of change in multiple-segment modeling of a nonlinear relationship between phenotypes. We applied the procedure to a data set provided by a commercial Large White selection nucleus. The data file consisted of LS and NSB records of 4,462 parities. The results of the analysis clearly identified the location of the change points between different structural regression coefficients. The magnitude of these coefficients increased with LS, indicating an increasing incidence of LS on the NSB ratio. However, posterior distributions of correlations were similar across subpopulations (defined by the change points on LS), except for those between residuals. The heritability estimates of NSB did not present differences between recursive models. Nevertheless, these heritabilities were greater than those obtained for SMM (0.05) with a posterior probability of 85%. These results suggest a nonlinear relationship between LS and NSB, which supports the adequacy of a change-point recursive model for its analysis. Furthermore, the results from model comparisons support the use of recursive models. However, the adequacy of the different recursive models depended on the criteria used: the linear recursive model was preferred on account of its smallest deviance value, whereas nonlinear recursive models provided a better fit and predictive ability based on the cross-validation approach.
Web processing service for landslide hazard assessment
NASA Astrophysics Data System (ADS)
Sandric, I.; Ursaru, P.; Chitu, D.; Mihai, B.; Savulescu, I.
2012-04-01
Hazard analysis requires heavy computation and specialized software. Web processing services can offer complex solutions that can be accessed through a light client (web or desktop). This paper presents a web processing service (both WPS and Esri Geoprocessing Service) for landslide hazard assessment. The web processing service was built with the Esri ArcGIS Server solution and Python, developed using ArcPy, the GDAL Python bindings, and NumPy. A complex model for landslide hazard analysis, using both predisposing and triggering factors combined into a Bayesian temporal network with uncertainty propagation, was built and published as a WPS and Geoprocessing service using ArcGIS Standard Enterprise 10.1. The model uses as predisposing factors the first and second derivatives from the DEM, effective precipitation, runoff, lithology, and land use. All these parameters can be supplied by the client from other WFS services or by uploading and processing the data on the server. The user can choose to create the first and second derivatives from the DEM automatically on the server or to upload already calculated data. One of the main dynamic factors in the landslide analysis model is the leaf area index (LAI). The LAI offers the advantage of modelling not just the changes between different time periods expressed in years, but also the seasonal changes in land use throughout a year. The LAI can be derived from various satellite images or downloaded as a product. Uploading such data (time series) is possible using the NetCDF file format. The model runs at a monthly time step, and for each time step all parameter values and the a priori, conditional, and posterior probabilities are obtained and stored in a log file. The validation process uses landslides that occurred during the period up to the active time step and checks the recorded probabilities and parameter values for those time steps against the values of the active time step. Each time a landslide is positively identified, new a priori probabilities are recorded for each parameter. A complete log for the entire model is saved and used for statistical analysis, and a NetCDF file is created that can be downloaded from the server together with the log file.
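The monthly update loop described above amounts to repeated application of Bayes' theorem, with each step's posterior becoming the next step's a priori probability. The following is a hedged, stand-alone sketch of that loop; the conditional probabilities and variable names are assumptions, not values from the service.

```python
# Hedged sketch of the monthly Bayesian update: when evidence is observed in
# a cell, the prior is replaced by the posterior for the next time step.
import numpy as np

prior = 0.02                      # a priori landslide probability for a cell
p_factor_given_slide = 0.7        # P(factor observed | landslide), assumed
p_factor_given_stable = 0.1       # P(factor observed | no landslide), assumed

for month, factor_observed in enumerate([False, True, True], start=1):
    like = p_factor_given_slide if factor_observed else 1 - p_factor_given_slide
    like0 = p_factor_given_stable if factor_observed else 1 - p_factor_given_stable
    posterior = like * prior / (like * prior + like0 * (1 - prior))  # Bayes' theorem
    print(f"month {month}: prior={prior:.3f} posterior={posterior:.3f}")
    prior = posterior             # posterior becomes next month's a priori value
```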
Reddy, Vivek Y; Sievert, Horst; Halperin, Jonathan; Doshi, Shephal K; Buchbinder, Maurice; Neuzil, Petr; Huber, Kenneth; Whisenant, Brian; Kar, Saibal; Swarup, Vijay; Gordon, Nicole; Holmes, David
2014-11-19
While effective in preventing stroke in patients with atrial fibrillation (AF), warfarin is limited by a narrow therapeutic profile, a need for lifelong coagulation monitoring, and multiple drug and diet interactions. To determine whether a local strategy of mechanical left atrial appendage (LAA) closure was noninferior to warfarin. PROTECT AF was a multicenter, randomized (2:1), unblinded, Bayesian-designed study conducted at 59 hospitals of 707 patients with nonvalvular AF and at least 1 additional stroke risk factor (CHADS2 score ≥1). Enrollment occurred between February 2005 and June 2008 and included 4-year follow-up through October 2012. Noninferiority required a posterior probability greater than 97.5% and superiority a probability of 95% or greater; the noninferiority margin was a rate ratio of 2.0 comparing event rates between treatment groups. Left atrial appendage closure with the device (n = 463) or warfarin (n = 244; target international normalized ratio, 2-3). A composite efficacy end point including stroke, systemic embolism, and cardiovascular/unexplained death, analyzed by intention-to-treat. At a mean (SD) follow-up of 3.8 (1.7) years (2621 patient-years), there were 39 events among 463 patients (8.4%) in the device group for a primary event rate of 2.3 events per 100 patient-years, compared with 34 events among 244 patients (13.9%) for a primary event rate of 3.8 events per 100 patient-years with warfarin (rate ratio, 0.60; 95% credible interval, 0.41-1.05), meeting prespecified criteria for both noninferiority (posterior probability, >99.9%) and superiority (posterior probability, 96.0%). Patients in the device group demonstrated lower rates of both cardiovascular mortality (1.0 events per 100 patient-years for the device group [17/463 patients, 3.7%] vs 2.4 events per 100 patient-years with warfarin [22/244 patients, 9.0%]; hazard ratio [HR], 0.40; 95% CI, 0.21-0.75; P = .005) and all-cause mortality (3.2 events per 100 patient-years for the device group [57/466 patients, 12.3%] vs 4.8 events per 100 patient-years with warfarin [44/244 patients, 18.0%]; HR, 0.66; 95% CI, 0.45-0.98; P = .04). After 3.8 years of follow-up among patients with nonvalvular AF at elevated risk for stroke, percutaneous LAA closure met criteria for both noninferiority and superiority, compared with warfarin, for preventing the combined outcome of stroke, systemic embolism, and cardiovascular death, as well as superiority for cardiovascular and all-cause mortality. clinicaltrials.gov Identifier: NCT00129545.
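As a rough illustration of how such posterior probabilities arise, the sketch below places independent Gamma posteriors on the two event rates and evaluates the posterior mass of the rate ratio below the noninferiority margin by Monte Carlo. This is not the trial's actual Bayesian model; the exposures are back-calculated from the reported rates.

```python
# Minimal sketch of a Bayesian noninferiority calculation from the counts
# reported in the abstract, assuming independent Gamma rate posteriors.
import numpy as np

rng = np.random.default_rng(1)
# events and approximate patient-years implied by the reported rates
dev_events, dev_py = 39, 39 / 0.023       # 2.3 events per 100 patient-years
war_events, war_py = 34, 34 / 0.038       # 3.8 events per 100 patient-years

# Jeffreys prior -> Gamma(events + 0.5, rate = patient-years) posterior
dev_rate = rng.gamma(dev_events + 0.5, 1 / dev_py, size=200_000)
war_rate = rng.gamma(war_events + 0.5, 1 / war_py, size=200_000)
rr = dev_rate / war_rate

print("P(noninferior, RR < 2.0):", (rr < 2.0).mean())
print("P(superior,    RR < 1.0):", (rr < 1.0).mean())
```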
Safa, Ben; Gollish, Jeffrey; Haslam, Lynn; McCartney, Colin J L
2014-06-01
Peripheral nerve blocks appear to provide effective analgesia for patients undergoing total knee arthroplasty. Although the literature supports the use of femoral nerve block, the addition of sciatic nerve block is controversial. In this study, we investigated the value of sciatic nerve block and of an alternative technique, posterior capsule local anesthetic infiltration analgesia. One hundred patients were prospectively randomized into three groups. Group 1: sciatic nerve block; Group 2: posterior local anesthetic infiltration; Group 3: control. All patients received a femoral nerve block and spinal anesthesia. There were no differences in pain scores between groups. Sciatic nerve block provided a brief, clinically insignificant opioid-sparing effect. We conclude that sciatic nerve block and posterior local anesthetic infiltration do not provide significant analgesic benefits. Copyright © 2014 Elsevier Inc. All rights reserved.
Lynch, Maria Isabel; Malagueño, Elizabeth; Lynch, Luiz Felipe; Ferreira, Silvana; Stheling, Raphael; Oréfice, Fernando
2009-09-01
Toxoplasma gondii causes posterior uveitis and the specific diagnosis is based on clinical criteria. The presence of anti-T. gondii secretory IgA (sIgA) antibodies in patients' tears has been reported and an association was found between ocular toxoplasmosis and the anti-T. gondii sIgA isotype in Brazilian patients. The purpose of this study was to provide an objective validation of the published ELISA test for determining the presence of anti-T. gondii sIgA in the tears of individuals with ocular toxoplasmosis. Tears from 156 patients with active posterior uveitis were analysed; 82 of them presented characteristics of ocular toxoplasmosis (standard lesion) and 74 patients presented uveitis due to other aetiologies. Cases of active posterior uveitis were considered standard when a new inflammatory focus satellite to old retinochoroidal scars was observed. The determination of anti-T. gondii sIgA was made using an ELISA test with crude tachyzoite antigenic extracts. Tears were collected without previous stimulation. Detection of sIgA showed 65.9% sensitivity (95% CI = 54.5-74.4), 71.6% specificity (95% CI = 59.8-81.2), a positive predictive value of 72% (95% CI = 60.3-81.5) and a negative predictive value of 65.4% (95% CI = 54.0-75.4). sIgA reactivity was higher in the tears of patients with active posterior uveitis due to T. gondii (p < 0.05). The test is useful for differentiating active posterior uveitis due to toxoplasmosis from uveitis caused by other diseases.
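The reported predictive values are consistent with a direct Bayes'-theorem calculation from the sensitivity, specificity, and the sample prevalence of toxoplasmic uveitis (82/156), as the short check below shows.

```python
# Predictive values via Bayes' theorem from the figures in the abstract.
sens, spec = 0.659, 0.716
prev = 82 / 156                                  # pre-test probability

ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
print(f"PPV = {ppv:.2f}, NPV = {npv:.2f}")       # ~0.72 and ~0.65, as reported
```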
Ishibashi, Hiroki; Miyamoto, Morikazu; Shinnmoto, Hiroshi; Murakami, Wakana; Soyama, Hiroaki; Nakatsuka, Masaya; Natsuyama, Takahiro; Yoshida, Masashi; Takano, Masashi; Furuya, Kenichi
2017-10-01
The aim of this study was to prenatally predict placenta accreta in posterior placenta previa using magnetic resonance imaging (MRI). This retrospective study was approved by the Institutional Review Board of our hospital. We identified 81 patients with singleton pregnancy who had undergone cesarean section due to posterior placenta previa at our hospital between January 2012 and December 2016. We calculated the sensitivity and specificity of several well-known findings, and of cervical varicosities quantified using magnetic resonance imaging, in predicting placenta accreta in posterior placenta previa. To quantify cervical varicosities, we calculated the A/B ratio, where "A" was the minimum distance from the most dorsal cervical varicosity to the decidual surface of the placenta, and "B" was the minimum distance from the most dorsal cervical varicosity to the amniotic surface of the placenta. The appropriate cut-off value of the A/B ratio was determined using a receiver operating characteristic (ROC) curve. Three patients (3.7%) were diagnosed as having placenta accreta. The sensitivity and specificity of the well-known findings were 0 and 97.4%, respectively. Furthermore, the A/B ratio ranged from 0.02 to 0.79. ROC curve analysis revealed that the area under the curve for the A/B ratio in predicting placenta accreta was 0.96. When the cut-off value of the A/B ratio was set at 0.18, the sensitivity and specificity were 100% and 91%, respectively. It was difficult to diagnose placenta accreta in posterior placenta previa using the well-known findings. The quantification of cervical varicosities could effectively predict placenta accreta.
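A generic sketch of the cut-off selection step follows, using scikit-learn's ROC utilities and Youden's J statistic on synthetic placeholder data (the study's individual A/B ratios are not reproduced here); since a low A/B ratio indicates accreta, the score is negated.

```python
# Generic ROC cut-off selection on synthetic stand-in data.
import numpy as np
from sklearn.metrics import roc_curve, auc

rng = np.random.default_rng(2)
y_true = np.r_[np.ones(3), np.zeros(78)]                 # 3 accreta, 78 without
scores = np.r_[rng.uniform(0.0, 0.15, 3),                # low A/B ratio = higher risk
               rng.uniform(0.1, 0.8, 78)]

fpr, tpr, thresholds = roc_curve(y_true, -scores)        # negate: low ratio is positive
print("AUC:", auc(fpr, tpr))
best = np.argmax(tpr - fpr)                              # Youden's J statistic
print("cut-off A/B ratio:", -thresholds[best])
```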
Probabilistic Damage Characterization Using the Computationally-Efficient Bayesian Approach
NASA Technical Reports Server (NTRS)
Warner, James E.; Hochhalter, Jacob D.
2016-01-01
This work presents a computationally-efficient approach for damage determination that quantifies uncertainty in the provided diagnosis. Given strain sensor data that are polluted with measurement errors, Bayesian inference is used to estimate the location, size, and orientation of damage. This approach uses Bayes' Theorem to combine any prior knowledge an analyst may have about the nature of the damage with information provided implicitly by the strain sensor data to form a posterior probability distribution over possible damage states. The unknown damage parameters are then estimated based on samples drawn numerically from this distribution using a Markov Chain Monte Carlo (MCMC) sampling algorithm. Several modifications are made to the traditional Bayesian inference approach to provide significant computational speedup. First, an efficient surrogate model is constructed using sparse grid interpolation to replace a costly finite element model that must otherwise be evaluated for each sample drawn with MCMC. Next, the standard Bayesian posterior distribution is modified using a weighted likelihood formulation, which is shown to improve the convergence of the sampling process. Finally, a robust MCMC algorithm, Delayed Rejection Adaptive Metropolis (DRAM), is adopted to sample the probability distribution more efficiently. Numerical examples demonstrate that the proposed framework effectively provides damage estimates with uncertainty quantification and can yield orders of magnitude speedup over standard Bayesian approaches.
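For orientation, a plain random-walk Metropolis sampler applied to a toy damage problem is sketched below. It stands in for the paper's DRAM sampler and sparse-grid surrogate, which share the same accept/reject core; the forward model and all values are invented.

```python
# Minimal random-walk Metropolis sketch (a simplification of DRAM).
import numpy as np

def log_posterior(theta, strain_obs, forward, sigma=0.01):
    """Gaussian measurement-error likelihood times a flat prior."""
    resid = strain_obs - forward(theta)
    return -0.5 * np.sum(resid ** 2) / sigma ** 2

def metropolis(log_post, theta0, n_samples=5000, step=0.05):
    theta = np.asarray(theta0, float)
    lp = log_post(theta)
    rng = np.random.default_rng(3)
    chain = []
    for _ in range(n_samples):
        prop = theta + step * rng.normal(size=theta.size)   # random-walk proposal
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:            # accept/reject step
            theta, lp = prop, lp_prop
        chain.append(theta.copy())
    return np.array(chain)

# toy "damage" forward model: strains depend on two damage parameters
forward = lambda th: np.array([th[0] + th[1], th[0] * th[1], th[1] ** 2])
obs = forward(np.array([0.4, 0.2])) + 0.01 * np.random.default_rng(4).normal(size=3)
chain = metropolis(lambda th: log_posterior(th, obs, forward), [0.5, 0.5])
print("posterior mean:", chain[len(chain) // 2:].mean(axis=0))  # discard burn-in
```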
A novel Bayesian framework for discriminative feature extraction in Brain-Computer Interfaces.
Suk, Heung-Il; Lee, Seong-Whan
2013-02-01
As the learning load has shifted from the human subject to the computer, machine learning has come to be considered a useful tool for Brain-Computer Interfaces (BCIs). In this paper, we propose a novel Bayesian framework for discriminative feature extraction for motor imagery classification in an EEG-based BCI, in which the class-discriminative frequency bands and the corresponding spatial filters are optimized by means of probabilistic and information-theoretic approaches. In our framework, the problem of simultaneous spatiospectral filter optimization is formulated as the estimation of an unknown posterior probability density function (pdf) that represents the probability that a single-trial EEG of predefined mental tasks can be discriminated in a state. In order to estimate the posterior pdf, we propose a particle-based approximation method by extending a factored-sampling technique with a diffusion process. An information-theoretic observation model is also devised to measure the discriminative power of features between classes. From the viewpoint of classifier design, the proposed method naturally allows us to construct a spectrally weighted label decision rule by linearly combining the outputs from multiple classifiers. We demonstrate the feasibility and effectiveness of the proposed method by analyzing its results on three public databases.
Large Scale Crop Classification in Ukraine using Multi-temporal Landsat-8 Images with Missing Data
NASA Astrophysics Data System (ADS)
Kussul, N.; Skakun, S.; Shelestov, A.; Lavreniuk, M. S.
2014-12-01
At present, there are no globally available Earth observation (EO) derived crop map products. This issue is being addressed within the Sentinel-2 for Agriculture initiative, in which a number of test sites (including from JECAM) participate to provide coherent protocols and best practices for various global agriculture systems, and subsequently crop maps from Sentinel-2. One of the problems in dealing with optical images over large territories (more than 10,000 sq. km) is the presence of clouds and shadows, which results in missing values in the data sets. In this abstract, a new approach to the classification of multi-temporal optical satellite imagery with missing data due to clouds and shadows is proposed. First, self-organizing Kohonen maps (SOMs) are used to restore missing pixel values in a time series of satellite imagery. SOMs are trained for each spectral band separately using non-missing values. Missing values are restored through a special procedure that substitutes an input sample's missing components with the corresponding neuron's weight coefficients. After missing data restoration, a supervised classification is performed on the multi-temporal satellite images. For this, an ensemble of neural networks, in particular multilayer perceptrons (MLPs), is proposed. Ensembling of the neural networks is done with the average-committee technique, i.e. calculating the average class probability over classifiers and selecting the class with the highest average posterior probability for the given input sample. The proposed approach is applied to large scale crop classification using multi-temporal Landsat-8 images for the JECAM test site in Ukraine [1-2]. It is shown that the ensemble of MLPs provides better performance than a single neural network in terms of overall classification accuracy and kappa coefficient. The obtained classification map is also validated through estimated crop and forest areas and comparison to official statistics. 1. A.Yu. Shelestov et al., "Geospatial information system for agricultural monitoring," Cybernetics Syst. Anal., vol. 49, no. 1, pp. 124-132, 2013. 2. J. Gallego et al., "Efficiency Assessment of Different Approaches to Crop Classification Based on Satellite and Ground Observations," J. Autom. Inform. Sci., vol. 44, no. 5, pp. 67-80, 2012.
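The average-committee rule itself is a one-liner over the stacked classifier outputs, as this minimal sketch (with made-up probabilities) illustrates.

```python
# Average committee: mean the class probabilities over ensemble members and
# pick the class with the highest mean posterior.
import numpy as np

# assumed shape: (n_classifiers, n_samples, n_classes) of softmax outputs
probs = np.array([
    [[0.6, 0.3, 0.1], [0.2, 0.5, 0.3]],
    [[0.5, 0.4, 0.1], [0.1, 0.6, 0.3]],
    [[0.7, 0.2, 0.1], [0.3, 0.3, 0.4]],
])

mean_posterior = probs.mean(axis=0)          # average over ensemble members
labels = mean_posterior.argmax(axis=1)       # committee decision per sample
print(mean_posterior, labels)
```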
Ensemble-Based Parameter Estimation in a Coupled GCM Using the Adaptive Spatial Average Method
Liu, Y.; Liu, Z.; Zhang, S.; ...
2014-05-29
Ensemble-based parameter estimation for a climate model is emerging as an important topic in climate research. For a complex system such as a coupled ocean–atmosphere general circulation model, the sensitivity and response of a model variable to a model parameter can vary spatially and temporally. An adaptive spatial average (ASA) algorithm is proposed to increase the efficiency of parameter estimation. Refined from a previous spatial average method, the ASA uses the ensemble spread as the criterion for selecting "good" values from the spatially varying posterior estimated parameter values; these good values are then averaged to give the final global uniform posterior parameter. In comparison with existing methods, the ASA parameter estimation shows superior performance: faster convergence and an enhanced signal-to-noise ratio.
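One plausible reading of the ASA selection step is sketched below: grid points whose ensemble spread falls below a threshold are treated as "good", and their posterior parameter values are averaged into a single global value. The threshold choice and array shapes are assumptions for illustration.

```python
# Illustrative ASA-style selection and averaging on a synthetic grid.
import numpy as np

rng = np.random.default_rng(5)
post_param = rng.normal(1.0, 0.1, size=(64, 128))   # posterior parameter per grid point
spread = rng.gamma(2.0, 0.05, size=(64, 128))       # ensemble spread per grid point

good = spread < np.quantile(spread, 0.25)           # keep well-constrained points
global_estimate = post_param[good].mean()           # final uniform parameter value
print(global_estimate)
```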
Mashimo, Yuta; Fukui, Makiko; Machida, Ryuichiro
2016-11-01
The egg structure of Paterdecolyus yanbarensis was examined using light, scanning electron and transmission electron microscopy. The egg surface shows a distinct honeycomb pattern formed by exochorionic ridges. Several micropyles are clustered on the ventral side of the egg. The egg membrane is composed of an exochorion penetrated with numerous aeropyles, an endochorion, and an extremely thin vitelline membrane. The endochorion is thickened at the posterior egg pole, probably associated with water absorption. A comparison of egg structure among Orthoptera revealed that the micropylar distribution pattern is conserved in Ensifera and Caelifera and might be regarded as a groundplan feature for each group; in Ensifera, multiple micropyles are clustered on the ventral side of the egg, whereas in Caelifera, micropyles are arranged circularly around the posterior pole of the egg. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Granade, Christopher; Wiebe, Nathan
2017-08-01
A major challenge facing existing sequential Monte Carlo methods for parameter estimation in physics stems from the inability of existing approaches to robustly deal with experiments that have different mechanisms that yield the results with equivalent probability. We address this problem here by proposing a form of particle filtering that clusters the particles that comprise the sequential Monte Carlo approximation to the posterior before applying a resampler. Through a new graphical approach to thinking about such models, we are able to devise an artificial-intelligence based strategy that automatically learns the shape and number of the clusters in the support of the posterior. We demonstrate the power of our approach by applying it to randomized gap estimation and a form of low circuit-depth phase estimation where existing methods from the physics literature either exhibit much worse performance or even fail completely.
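A schematic of cluster-aware resampling on a bimodal posterior is given below, assuming a fixed number of k-means clusters (the paper's method learns the shape and number of clusters automatically). Resampling within each cluster, in proportion to the cluster's total weight, prevents the resampler from collapsing one mode onto the other.

```python
# Schematic cluster-then-resample step for a bimodal particle approximation.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(6)
particles = np.r_[rng.normal(-2, 0.3, 500), rng.normal(2, 0.3, 500)][:, None]
weights = np.full(1000, 1e-3)                        # uniform weights here

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(particles)
new_particles = []
for k in range(2):
    idx = np.flatnonzero(labels == k)
    w = weights[idx] / weights[idx].sum()
    n_k = int(round(weights[idx].sum() / weights.sum() * len(particles)))
    draw = rng.choice(idx, size=n_k, p=w)            # resample within the cluster
    new_particles.append(particles[draw])
particles = np.vstack(new_particles)
print(particles.mean(), len(particles))              # both modes are preserved
```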
The right parietal cortex and time perception: back to Critchley and the Zeitraffer phenomenon.
Alexander, Iona; Cowey, Alan; Walsh, Vincent
2005-05-01
We investigated the involvement of the posterior parietal cortex in time perception by temporarily disrupting normal functioning in this region, in subjects making prospective judgements of time or pitch. Disruption of the right posterior parietal cortex significantly slowed reaction times when making time, but not pitch, judgements. Similar interference with the left parietal cortex and control stimulation over the vertex did not significantly change performance on either pitch or time tasks. The results show that the information processing necessary for temporal judgements involves the parietal cortex, probably to optimise spatiotemporal accuracy in voluntary action. The results are in agreement with a recent neuroimaging study and are discussed with regard to a psychological model of temporal processing and a recent proposal that time is part of a parietal cortex system for encoding magnitude information relevant for action.
NASA Astrophysics Data System (ADS)
Escriba, P. A.; Callado, A.; Santos, D.; Santos, C.; Simarro, J.; García-Moya, J. A.
2009-09-01
At 00 UTC on 24 January 2009, an explosive cyclogenesis that originated over the Atlantic Ocean reached its maximum intensity, with observed surface pressures below 970 hPa at its center, located over the Gulf of Vizcaya. During its path across southern France, this low caused strong westerly and north-westerly winds over the Iberian Peninsula, exceeding 150 km/h in some places. These extreme winds left 10 casualties in Spain, 8 of them in Catalonia. The aim of this work is to show whether there is added value in the short-range prediction of the 24 January 2009 strong winds when using the Short Range Ensemble Prediction System (SREPS) of the Spanish Meteorological Agency (AEMET), with respect to the operational forecasting tools. This study emphasizes two aspects of probabilistic forecasting: the ability of a 3-day forecast to warn of an extreme wind event, and the ability to quantify the predictability of the event, thereby giving value to the deterministic forecast. Two types of probabilistic wind forecasts are carried out, a non-calibrated one and one calibrated using Bayesian Model Averaging (BMA). AEMET runs SREPS experimentally twice a day (00 and 12 UTC). This system consists of 20 members constructed by integrating 5 local area models, COSMO (COSMO), HIRLAM (HIRLAM Consortium), HRM (DWD), MM5 (NOAA) and UM (UKMO), at 25 km horizontal resolution. Each model uses 4 different initial and boundary conditions, from the global models GFS (NCEP), GME (DWD), IFS (ECMWF) and UM. In this way, a probabilistic forecast is obtained that takes into account initial-condition, boundary-condition and model errors. BMA is a statistical tool for combining predictive probability functions from different sources. The BMA predictive probability density function (PDF) is a weighted average of PDFs centered on the individual bias-corrected forecasts. The weights are equal to the posterior probabilities of the models generating the forecasts and reflect the skill of the ensemble members. Here BMA is applied to provide probabilistic forecasts of wind speed. In this work, several forecasts for different time ranges (H+72, H+48 and H+24) of 10-meter wind speed over Catalonia are verified subjectively at one of the instants of maximum intensity, 12 UTC 24 January 2009. On one hand, three probabilistic forecasts are compared: ECMWF EPS, non-calibrated SREPS and calibrated SREPS. On the other hand, the relationship between predictability and the skill of the deterministic forecast is studied by looking at HIRLAM 0.16 deterministic forecasts of the event. Verification focuses on the location and intensity of the 10-meter wind speed, and 10-minute measurements from AEMET automatic ground stations are used as observations. The results indicate that SREPS is able to forecast, three days ahead, mean winds higher than 36 km/h and that it correctly localizes them with a significant probability of occurrence in the affected area. The probability is higher after BMA calibration of the ensemble. The fact that the probability of strong winds is high allows us to state that the predictability of the event is also high and, as a consequence, deterministic forecasts are more reliable. This is confirmed when verifying the HIRLAM deterministic forecasts against observed values.
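The BMA predictive mixture described above can be written down compactly; the sketch below uses assumed weights and spreads (in practice these are fitted by maximum likelihood via EM) to evaluate an exceedance probability for a wind-speed threshold.

```python
# BMA predictive PDF as a weighted mixture of Gaussians centered on
# bias-corrected member forecasts; all numbers here are assumed.
import numpy as np
from scipy import stats

member_forecasts = np.array([32.0, 38.0, 41.0, 35.0, 44.0])  # km/h, bias-corrected
weights = np.array([0.10, 0.30, 0.25, 0.15, 0.20])            # posterior model weights
sigma = 4.0                                                   # per-member spread

def bma_pdf(x):
    return np.sum(weights * stats.norm.pdf(x, loc=member_forecasts, scale=sigma))

def prob_exceed(threshold, grid=np.linspace(0, 100, 2001)):
    pdf = np.array([bma_pdf(x) for x in grid])
    return np.trapz(pdf[grid > threshold], grid[grid > threshold])

print("P(wind > 36 km/h) =", round(prob_exceed(36.0), 3))
```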
A Dirichlet-Multinomial Bayes Classifier for Disease Diagnosis with Microbial Compositions.
Gao, Xiang; Lin, Huaiying; Dong, Qunfeng
2017-01-01
Dysbiosis of microbial communities is associated with various human diseases, raising the possibility of using microbial compositions as biomarkers for disease diagnosis. We have developed a Bayes classifier by modeling microbial compositions with Dirichlet-multinomial distributions, which are widely used to model multicategorical count data with extra variation. The parameters of the Dirichlet-multinomial distributions are estimated from training microbiome data sets based on maximum likelihood. The posterior probability of a microbiome sample belonging to a disease or healthy category is calculated based on Bayes' theorem, using the likelihood values computed from the estimated Dirichlet-multinomial distribution, as well as a prior probability estimated from the training microbiome data set or previously published information on disease prevalence. When tested on real-world microbiome data sets, our method, called DMBC (for Dirichlet-multinomial Bayes classifier), shows better classification accuracy than the only existing Bayesian microbiome classifier based on a Dirichlet-multinomial mixture model and the popular random forest method. The advantage of DMBC is its built-in automatic feature selection, capable of identifying a subset of microbial taxa with the best classification accuracy between different classes of samples based on cross-validation. This unique ability enables DMBC to maintain and even improve its accuracy at modeling species-level taxa. The R package for DMBC is freely available at https://github.com/qunfengdong/DMBC. IMPORTANCE By incorporating prior information on disease prevalence, Bayes classifiers have the potential to estimate disease probability better than other common machine-learning methods. Thus, it is important to develop Bayes classifiers specifically tailored for microbiome data. Our method shows higher classification accuracy than the only existing Bayesian classifier and the popular random forest method, and thus provides an alternative option for using microbial compositions for disease diagnosis.
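The decision rule reduces to comparing prior-weighted Dirichlet-multinomial likelihoods across classes. Below is a hedged sketch with invented parameters; the multinomial coefficient is omitted because it cancels between classes, and in DMBC the alpha vectors would come from maximum-likelihood fits to training data.

```python
# Sketch of a Dirichlet-multinomial Bayes decision rule with assumed alphas.
import numpy as np
from scipy.special import gammaln

def dm_log_lik(counts, alpha):
    """log Dirichlet-multinomial likelihood up to the multinomial coefficient."""
    n, a = counts.sum(), alpha.sum()
    return (gammaln(a) - gammaln(n + a)
            + np.sum(gammaln(counts + alpha) - gammaln(alpha)))

alpha_healthy = np.array([8.0, 5.0, 2.0, 1.0])   # assumed fitted parameters
alpha_disease = np.array([2.0, 2.0, 6.0, 6.0])
prior = {"healthy": 0.9, "disease": 0.1}         # e.g. from disease prevalence

sample = np.array([3, 1, 14, 10])                # taxon counts for one microbiome
log_post = {
    "healthy": np.log(prior["healthy"]) + dm_log_lik(sample, alpha_healthy),
    "disease": np.log(prior["disease"]) + dm_log_lik(sample, alpha_disease),
}
z = np.logaddexp(log_post["healthy"], log_post["disease"])
print({k: np.exp(v - z) for k, v in log_post.items()})   # posterior probabilities
```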
Evidence from tooth surface morphology for a posterior maxillary origin of the proteroglyph fang
Jackson, K.; Fritts, T.H.
1995-01-01
Although the front-fanged venom delivery system of the Elapidae is believed to be derived from an aglyphous or opisthoglyphous colubroid ancestor, opinion is divided as to the end of the maxilla on which the proteroglyph fang originated. This study was undertaken to determine whether the evolutionary precursor of the proteroglyph fang was (a) a grooved posterior fang which migrated anteriorly, or (b) an enlarged anterior tooth which secondarily developed a groove for the conduction of venom. The surface morphology of the maxillary teeth of colubrid genera was examined using scanning electron microscopy. Ridges present on the lingual and labial surfaces of anterior maxillary teeth and on the anterior and posterior surfaces of posterior maxillary teeth were identified as morphological markers of potential value in distinguishing the anterior and posterior maxillary teeth of colubrid snakes, and in determining the origin of the proteroglyph fang. Patterns of ridges on the surfaces of elapid fangs examined were found to be consistent with the hypothesis that the evolutionary precursor of the proteroglyph fang was an opisthoglyph fang which migrated anteriorly.
The improved business valuation model for RFID company based on the community mining method.
Li, Shugang; Yu, Zhaoxu
2017-01-01
Nowadays, the appetite for investment and mergers and acquisitions (M&A) activity in RFID companies is growing rapidly. Although a huge number of papers have addressed the topic of business valuation models based on statistical or neural network methods, only a few are dedicated to constructing a general framework for business valuation that improves performance with a network graph (NG) and the corresponding community mining (CM) method. In this study, an NG-based business valuation model is proposed, in which a real options approach (ROA) integrating the CM method is designed to predict the company's net profit as well as estimate the company value. Three improvements are made in the proposed valuation model. Firstly, our model determines the credibility of each node's membership in each community and clusters the network according to an evolutionary Bayesian method. Secondly, the improved bacterial foraging optimization algorithm (IBFOA) is adopted to calculate the optimized Bayesian posterior probability function. Finally, in IBFOA, a bi-objective method is used to assess the accuracy of prediction, and these two objectives are combined into one objective function using a new Pareto boundary method. The proposed method returns lower forecasting error than 10 well-known forecasting models on 3 different time-interval valuing tasks for the real-life simulation of RFID companies.
Estimators of The Magnitude-Squared Spectrum and Methods for Incorporating SNR Uncertainty
Lu, Yang; Loizou, Philipos C.
2011-01-01
Statistical estimators of the magnitude-squared spectrum are derived based on the assumption that the magnitude-squared spectrum of the noisy speech signal can be computed as the sum of the (clean) signal and noise magnitude-squared spectra. Maximum a posteriori (MAP) and minimum mean square error (MMSE) estimators are derived based on a Gaussian statistical model. The gain function of the MAP estimator was found to be identical to the gain function used in the ideal binary mask (IdBM) that is widely used in computational auditory scene analysis (CASA). As such, it was binary, assuming the value of 1 if the local SNR exceeded 0 dB and the value of 0 otherwise. By modeling the local instantaneous SNR as an F-distributed random variable, soft masking methods were derived that incorporate SNR uncertainty. In particular, the soft masking method that weighted the noisy magnitude-squared spectrum by the a priori probability that the local SNR exceeds 0 dB was shown to be identical to the Wiener gain function. Results indicated that the proposed estimators yielded significantly better speech quality than the conventional MMSE spectral power estimators, in terms of lower residual noise and lower speech distortion. PMID:21886543
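The contrast between the two gain functions is easy to tabulate: writing xi for the a priori SNR on a linear scale, the binary mask is an indicator of xi > 1 (0 dB), while the Wiener gain is xi/(1+xi). A short sketch:

```python
# Binary (IdBM-style) gain versus soft Wiener gain over a range of SNRs.
import numpy as np

xi = np.logspace(-2, 2, 9)               # a priori SNR (linear scale)
binary_gain = (xi > 1.0).astype(float)   # 1 above 0 dB, else 0
wiener_gain = xi / (1.0 + xi)            # soft mask / Wiener gain

for s, b, w in zip(10 * np.log10(xi), binary_gain, wiener_gain):
    print(f"SNR {s:6.1f} dB  binary {b:.0f}  wiener {w:.3f}")
```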
NASA Astrophysics Data System (ADS)
Morton, Timothy D.; Bryson, Stephen T.; Coughlin, Jeffrey L.; Rowe, Jason F.; Ravichandran, Ganesh; Petigura, Erik A.; Haas, Michael R.; Batalha, Natalie M.
2016-05-01
We present astrophysical false positive probability calculations for every Kepler Object of Interest (KOI)—the first large-scale demonstration of a fully automated transiting planet validation procedure. Out of 7056 KOIs, we determine that 1935 have probabilities <1% of being astrophysical false positives, and thus may be considered validated planets. Of these, 1284 have not yet been validated or confirmed by other methods. In addition, we identify 428 KOIs that are likely to be false positives but have not yet been identified as such, though some of these may be a result of unidentified transit timing variations. A side product of these calculations is full stellar property posterior samplings for every host star, modeled as single, binary, and triple systems. These calculations use vespa, a publicly available Python package that can be easily applied to any transiting exoplanet candidate.
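Conceptually, the false positive probability is a posterior computed over a set of astrophysical scenarios. The toy calculation below uses invented priors and likelihoods and is not vespa's API; it only shows how the scenario probabilities combine.

```python
# Toy false positive probability: prior x likelihood per scenario, normalized.
priors = {"planet": 0.4, "eclipsing_binary": 0.1,
          "background_EB": 0.3, "hierarchical_EB": 0.2}     # assumed values
likelihoods = {"planet": 0.8, "eclipsing_binary": 0.01,
               "background_EB": 0.002, "hierarchical_EB": 0.005}

post = {k: priors[k] * likelihoods[k] for k in priors}
norm = sum(post.values())
fpp = 1.0 - post["planet"] / norm
print(f"FPP = {fpp:.4f}")        # < 0.01 would count as validated here
```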
Prospect evaluation as a function of numeracy and probability denominator.
Millroth, Philip; Juslin, Peter
2015-05-01
This study examines how numeracy and probability denominator (a direct-ratio probability, a relative frequency with denominator 100, a relative frequency with denominator 10,000) affect the evaluation of prospects in an expected-value based pricing task. We expected that numeracy would affect the results due to differences in the linearity of number perception and the susceptibility to denominator neglect with different probability formats. An analysis with functional measurement verified that participants integrated value and probability into an expected value. However, a significant interaction between numeracy and probability format and subsequent analyses of the parameters of cumulative prospect theory showed that the manipulation of probability denominator changed participants' psychophysical response to probability and value. Standard methods in decision research may thus confound people's genuine risk attitude with their numerical capacities and the probability format used. Copyright © 2015 Elsevier B.V. All rights reserved.
Othenin-Girard, V; Boulvain, M; Guittier, M-J
2018-02-01
To describe the maternal and foetal outcomes of an occiput posterior foetal position at delivery and to evaluate predictive factors of anterior rotation during labour. Descriptive retrospective analysis of a cohort of 439 women with foetuses in occiput posterior position during labour. Logistic regression analysis was used to quantify the effect of factors that may favour anterior rotation. Most foetuses (64%) rotated anteriorly during labour and 13% during the expulsive phase. The consequences of a persistent foetal occiput posterior position at delivery were a significantly longer second stage of labour compared with other positions (65.19 minutes vs. 43.29, P=0.001); a higher percentage of caesarean sections (72.0% versus 4.7%, P<0.001) and instrumental deliveries (among vaginal deliveries, 60.7% versus 25.2%, P<0.001); more frequent third-degree perineal tears (14.3% vs. 0.6%, P<0.001); and more abundant blood loss (560 mL versus 344 mL, P<0.001). In a multivariable model including nulliparity, station of the presenting part, and degree of flexion of the foetal head at complete dilatation, the only independent predictive factor of rotation at delivery was good flexion of the foetal head at complete dilatation, which multiplied the probability of anterior rotation by six. Good flexion of the foetal head is significantly associated with anterior rotation. Further studies exploring ways to increase anterior rotation during labour are needed to reduce the very high risk of caesarean section and instrumental delivery associated with the foetal occiput posterior position. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
Tekçe, Neslihan; Pala, Kansad; Demirci, Mustafa; Tuncer, Safa
2016-11-01
To evaluate changes in the surface characteristics of two different resin composites after 1 year of water storage using a profilometer, Vickers hardness, scanning electron microscopy (SEM), and atomic force microscopy (AFM). A total of 46 composite disk specimens (10 mm in diameter and 2 mm thick) were fabricated using Clearfil Majesty Esthetic and Clearfil Majesty Posterior (Kuraray Medical Co, Tokyo, Japan). Ten specimens from each composite were used for the surface roughness and microhardness tests (n = 10). For each composite, scanning electron microscope (SEM, n = 2) and atomic force microscope (AFM, n = 1) images were obtained after 24 h and after 1 year of water storage. The data were analyzed using two-way analysis of variance and a post-hoc Bonferroni test. Microhardness values of Clearfil Majesty Esthetic decreased significantly (78.15 to 63.74, p = 0.015), while its surface roughness values did not change after 1 year of water storage (0.36 to 0.39, p = 0.464). Clearfil Majesty Posterior microhardness values were quite stable (138.74 to 137.25, p = 0.784), whereas its surface roughness values increased significantly (0.39 to 0.48, p = 0.028) over 1 year. Thus, 1 year of water storage caused the microhardness of Clearfil Majesty Esthetic to decrease and the surface roughness of Clearfil Majesty Posterior to increase. AFM and SEM images demonstrated surface deterioration of the materials after 1 year and showed results consistent with the quantitative test methods. SCANNING 38:694-700, 2016. © 2016 Wiley Periodicals, Inc.
Posterior corneal curvature changes following Refractive Small Incision Lenticule Extraction
Ganesh, Sri; Patel, Utsav; Brar, Sheetal
2015-01-01
Purpose To compare the posterior corneal curvature changes, in terms of corneal power and asphericity, following the Refractive Small Incision Lenticule Extraction (ReLEx SMILE) procedure for low, moderate, and high myopia. Methods This retrospective, non-randomized, comparative, interventional trial included 52 eyes of 26 patients, divided into three groups: low myopia (myopia ≤3 D [diopters] spherical equivalent [SE]), moderate myopia (myopia >3 D and <6 D SE), and high myopia (myopia ≥6 D SE). All patients were treated for myopia and myopic astigmatism using ReLEx SMILE. The eyes were examined pre-operatively and 3 months post-operatively using SCHWIND SIRIUS, a three-dimensional rotating Scheimpflug camera with a Placido disc topographer, to assess corneal changes with regard to keratometric power and asphericity of the cornea. Results A statistically significant increase in mean keratometric power in the 3, 5, and 7 mm zones of the posterior corneal surface compared with its pre-ReLEx SMILE value was detected after 3 months in the moderate myopia group (pre-operative [pre-op] −6.14±0.23, post-operative [post-op] −6.29±0.22, P<0.001) and high myopia group (pre-op −6.19±0.16, post-op −6.4±0.18, P<0.001), but there was no significant change in keratometric power of the posterior surface in the low myopia group (pre-op −5.87±0.17, post-op −6.06±0.29, P=0.143). Asphericity (Q-value) of the posterior surface changed significantly (P<0.001) after ReLEx SMILE in the moderate myopia group in the 3, 5, and 7 mm zones, and in the high myopia group in the 3 and 7 mm zones; but there was no significant change in the Q-value in the low myopia group in all three zones (pre-op 0.23±0.43, post-op −0.40±0.71, P=0.170), or in the high myopia group in the 5 mm zone (P=0.228). Conclusion ReLEx SMILE causes significant changes in posterior corneal keratometric power and asphericity in moderate and high myopia, but the effect is subtle and insignificant in low myopia. PMID:26229428
Hu, Min; Hu, Haoyu; Cai, Wei; Mo, Zhikang; Xiang, Nan; Yang, Jian; Fang, Chihua
2018-05-01
Hepatectomy is the optimal treatment for liver cancer, and virtual liver resection based on three-dimensional visualization technology (3-DVT) can provide a better preoperative strategy for the surgeon. We aim to introduce right posterior lobe allied with part of V and VIII sectionectomy assisted by 3-DVT as a promising treatment for massive or multiple right hepatic malignancies, retaining maximum residual liver volume on the basis of R0 resection. Among 126 consecutive patients who underwent hepatectomy, 9 (7%) underwent right posterior lobe allied with part of V and VIII sectionectomy and 21 (17%) underwent right hemihepatectomy (RH). The virtual RH was performed with 3-DVT, which provided better observation of the spatial relationship between tumor and vessels and a more accurate estimation of the remnant liver volume. If the remnant liver volume was <40%, right posterior lobe allied with part of V and VIII sectionectomy was to be performed. The precut line was then planned so as to protect the portal branches of subsegments 5 and 8. The postoperative outcomes of patients were compared before and after propensity score matching. Nine patients meeting the eligibility criteria received right posterior lobe allied with part of V and VIII sectionectomy. The variables, including overall mean operation time, blood transfusion, operation length, liver function, and postoperative complications, were similar between the two groups before and after propensity matching. The mean values of aspartate aminotransferase (AST), alanine aminotransferase (ALT), albumin (ALB), and total bilirubin on postoperative days 1, 3, 5, and 7 showed no significant difference compared with preoperative values. One patient in each group had recurrence six months after surgery. Right posterior lobe allied with part of V and VIII sectionectomy based on 3-DVT is a safe and feasible procedure and can be a very promising method for treating massive or multiple right hepatic malignancies.
Variational Gaussian approximation for Poisson data
NASA Astrophysics Data System (ADS)
Arridge, Simon R.; Ito, Kazufumi; Jin, Bangti; Zhang, Chen
2018-02-01
The Poisson model is frequently employed to describe count data, but in a Bayesian context it leads to an analytically intractable posterior probability distribution. In this work, we analyze a variational Gaussian approximation to the posterior distribution arising from the Poisson model with a Gaussian prior. This is achieved by seeking an optimal Gaussian distribution minimizing the Kullback-Leibler divergence from the posterior distribution to the approximation, or equivalently maximizing the lower bound for the model evidence. We derive an explicit expression for the lower bound, and show the existence and uniqueness of the optimal Gaussian approximation. The lower bound functional can be viewed as a variant of classical Tikhonov regularization that penalizes also the covariance. Then we develop an efficient alternating direction maximization algorithm for solving the optimization problem, and analyze its convergence. We discuss strategies for reducing the computational complexity via low rank structure of the forward operator and the sparsity of the covariance. Further, as an application of the lower bound, we discuss hierarchical Bayesian modeling for selecting the hyperparameter in the prior distribution, and propose a monotonically convergent algorithm for determining the hyperparameter. We present extensive numerical experiments to illustrate the Gaussian approximation and the algorithms.
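In standard notation (assumed here; the paper's exact expressions may differ), with a Gaussian approximation q = N(m, C) to the posterior and a Gaussian prior N(x_0, Σ) in d dimensions, the lower bound takes the familiar form

```latex
% Standard-form sketch of the evidence lower bound with Gaussian q and prior.
\log p(y) \;\ge\; F(m, C)
  = \mathbb{E}_{q}\!\left[\log p(y \mid x)\right]
  - \mathrm{KL}\!\left(\mathcal{N}(m, C)\,\|\,\mathcal{N}(x_0, \Sigma)\right),
\qquad
\mathrm{KL} = \tfrac{1}{2}\!\left[\operatorname{tr}(\Sigma^{-1}C)
  + (m - x_0)^{\top}\Sigma^{-1}(m - x_0) - d
  + \log\frac{\det\Sigma}{\det C}\right],
```

where the penalty on the covariance C in the KL term is what gives the bound its Tikhonov-like flavor.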
Liesenjohann, Thilo; Neuhaus, Birger; Schmidt-Rhaesa, Andreas
2006-08-01
The anterior and posterior head sensory organs of Dactylopodola baltica (Macrodasyida, Gastrotricha) were investigated by transmission electron microscopy (TEM). In addition, whole individuals were labeled with phalloidin to mark F-actin and with anti-alpha-tubulin antibodies to mark microtubuli and studied with confocal laser scanning microscopy. Immunocytochemistry reveals that the large number of ciliary processes in the anterior head sensory organ contain F-actin; no signal could be detected for alpha-tubulin. Labeling with anti-alpha-tubulin antibodies revealed that the anterior and posterior head sensory organs are innervated by a common stem of nerves from the lateral nerve cords just anterior of the dorsal brain commissure. TEM studies showed that the anterior head sensory organ is composed of one sheath cell and one sensory cell with a single branching cilium that possesses a basal inflated part and regularly arranged ciliary processes. Each ciliary process contains one central microtubule. The posterior head sensory organ consists of at least one pigmented sheath cell and several probably monociliary sensory cells. Each cilium branches into irregularly arranged ciliary processes. These characters are assumed to belong to the ground pattern of the Gastrotricha. Copyright 2006 Wiley-Liss, Inc.
NASA Astrophysics Data System (ADS)
Dettmer, Jan; Molnar, Sheri; Steininger, Gavin; Dosso, Stan E.; Cassidy, John F.
2012-02-01
This paper applies a general trans-dimensional Bayesian inference methodology and hierarchical autoregressive data-error models to the inversion of microtremor array dispersion data for shear wave velocity (vs) structure. This approach accounts for the limited knowledge of the optimal earth model parametrization (e.g. the number of layers in the vs profile) and of the data-error statistics in the resulting vs parameter uncertainty estimates. The assumed earth model parametrization influences estimates of parameter values and uncertainties due to different parametrizations leading to different ranges of data predictions. The support of the data for a particular model is often non-unique and several parametrizations may be supported. A trans-dimensional formulation accounts for this non-uniqueness by including a model-indexing parameter as an unknown so that groups of models (identified by the indexing parameter) are considered in the results. The earth model is parametrized in terms of a partition model with interfaces given over a depth-range of interest. In this work, the number of interfaces (layers) in the partition model represents the trans-dimensional model indexing. In addition, serial data-error correlations are addressed by augmenting the geophysical forward model with a hierarchical autoregressive error model that can account for a wide range of error processes with a small number of parameters. Hence, the limited knowledge about the true statistical distribution of data errors is also accounted for in the earth model parameter estimates, resulting in more realistic uncertainties and parameter values. Hierarchical autoregressive error models do not rely on point estimates of the model vector to estimate data-error statistics, and have no requirement for computing the inverse or determinant of a data-error covariance matrix. This approach is particularly useful for trans-dimensional inverse problems, as point estimates may not be representative of the state space that spans multiple subspaces of different dimensionalities. The order of the autoregressive process required to fit the data is determined here by posterior residual-sample examination and statistical tests. Inference for earth model parameters is carried out on the trans-dimensional posterior probability distribution by considering ensembles of parameter vectors. In particular, vs uncertainty estimates are obtained by marginalizing the trans-dimensional posterior distribution in terms of vs-profile marginal distributions. The methodology is applied to microtremor array dispersion data collected at two sites with significantly different geology in British Columbia, Canada. At both sites, results show excellent agreement with estimates from invasive measurements.
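To make the autoregressive error model concrete, the sketch below evaluates a conditional Gaussian log-likelihood under an AR(1) process by whitening the residuals, with no covariance-matrix inverse or determinant; the order, phi, and sigma are illustrative assumptions (the paper selects the order from posterior residual diagnostics).

```python
# Conditional AR(1) data-error log-likelihood via residual whitening.
import numpy as np

def ar1_log_likelihood(residuals, phi, sigma):
    """Gaussian log-likelihood of residuals under an AR(1) error process."""
    e = residuals[1:] - phi * residuals[:-1]        # innovations (whitened errors)
    n = e.size
    return -0.5 * n * np.log(2 * np.pi * sigma**2) - 0.5 * np.sum(e**2) / sigma**2

rng = np.random.default_rng(7)
r = np.zeros(200)
for t in range(1, 200):                             # simulate correlated errors
    r[t] = 0.6 * r[t - 1] + rng.normal(0, 0.1)
print(ar1_log_likelihood(r, phi=0.6, sigma=0.1))    # higher than with phi = 0
print(ar1_log_likelihood(r, phi=0.0, sigma=0.1))
```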
Nagai, Kanto; Muratsu, Hirotsugu; Takeoka, Yoshiki; Tsubosaka, Masanori; Kuroda, Ryosuke; Matsumoto, Tomoyuki
2017-10-01
In the modified gap-balancing technique, there is no consensus on the best method for obtaining appropriate soft-tissue balance and determining femoral component rotation. Sixty-five varus osteoarthritic patients underwent primary posterior-stabilized total knee arthroplasty using the modified gap-balancing technique. The influence of joint distraction force on soft-tissue balance measurement was evaluated with an Offset Repo-Tensor between the osteotomized surfaces at extension, and between the femoral posterior condyles and the tibial osteotomized surface at flexion, before resection of the femoral posterior condyles. The joint center gap (millimeters) and varus ligament balance (°) were measured under 20, 40, and 60 pounds of joint distraction force, and the differences between these values at extension and flexion (the value at flexion minus the value at extension) were also calculated. The differences in joint center gap (-6.7, -6.8, and -6.9 mm for 20, 40, and 60 pounds, respectively) and varus ligament balance (3.5°, 3.8°, and 3.8°) did not differ significantly among the joint distraction forces, although the joint center gap and varus ligament balance increased stepwise at both extension and flexion as the joint distraction force increased. The differences in joint center gap and varus ligament balance between extension and flexion were thus consistent across the different joint distraction forces. This novel index would be useful for determining femoral component rotation during the modified gap-balancing technique. Copyright © 2017 Elsevier Inc. All rights reserved.
Models based on value and probability in health improve shared decision making.
Ortendahl, Monica
2008-10-01
Diagnostic reasoning and treatment decisions are a key competence of doctors. A model based on values and probability provides a conceptual framework for clinical judgments and decisions, and also facilitates the integration of clinical and biomedical knowledge into a diagnostic decision. Both value and probability are usually estimated values in clinical decision making. Therefore, model assumptions and parameter estimates should be continually assessed against data, and models should be revised accordingly. Introducing parameter estimates for both value and probability, which usually pertain in clinical work, gives the model labelled subjective expected utility. Estimated values and probabilities are involved sequentially for every step in the decision-making process. Introducing decision-analytic modelling gives a more complete picture of variables that influence the decisions carried out by the doctor and the patient. A model revised for perceived values and probabilities by both the doctor and the patient could be used as a tool for engaging in a mutual and shared decision-making process in clinical work.
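A toy subjective-expected-utility calculation, with invented probabilities and utilities, makes the model concrete: each option's SEU is the probability-weighted sum of outcome utilities, and the option with the highest SEU is preferred.

```python
# Toy subjective expected utility: all values below are invented.
options = {
    "treat":      [(0.85, 0.9), (0.15, 0.3)],   # (probability, utility) pairs
    "watch-wait": [(0.60, 0.8), (0.40, 0.5)],
}
seu = {name: sum(p * u for p, u in outcomes) for name, outcomes in options.items()}
print(seu, "->", max(seu, key=seu.get))
```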
RadVel: General toolkit for modeling Radial Velocities
NASA Astrophysics Data System (ADS)
Fulton, Benjamin J.; Petigura, Erik A.; Blunt, Sarah; Sinukoff, Evan
2018-01-01
RadVel models Keplerian orbits in radial velocity (RV) time series. The code is written in Python with a fast Kepler's equation solver written in C. It provides a framework for fitting RVs using maximum a posteriori optimization and computing robust confidence intervals by sampling the posterior probability density via Markov Chain Monte Carlo (MCMC). RadVel can perform Bayesian model comparison and produces publication quality plots and LaTeX tables.
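The same MAP-then-MCMC workflow can be sketched without RadVel itself; the example below (not RadVel's API) fits a circular Keplerian signal with emcee, using a bounded flat prior and reporting percentile credible intervals.

```python
# Stripped-down RV posterior sampling with emcee (circular orbit only).
import numpy as np
import emcee

rng = np.random.default_rng(8)
t = np.sort(rng.uniform(0, 100, 60))

def rv_model(theta, t):
    per, tc, k = theta                                # period, phase reference, amplitude
    return -k * np.sin(2 * np.pi * (t - tc) / per)

truth = np.array([23.0, 5.0, 12.0])
rv = rv_model(truth, t) + rng.normal(0, 2.0, t.size)  # synthetic observations

def log_prob(theta):
    per, tc, k = theta
    if not (1 < per < 100 and k > 0):                 # flat prior with bounds
        return -np.inf
    return -0.5 * np.sum((rv - rv_model(theta, t)) ** 2) / 2.0 ** 2

ndim, nwalkers = 3, 32
p0 = truth + 1e-3 * rng.normal(size=(nwalkers, ndim)) # start near the MAP estimate
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 2000, progress=False)
samples = sampler.get_chain(discard=500, flat=True)
print(np.percentile(samples, [16, 50, 84], axis=0))   # credible intervals
```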
Scanning-slit topography in patients with keratoconus.
Módis, László; Németh, Gábor; Szalai, Eszter; Flaskó, Zsuzsa; Seitz, Berthold
2017-01-01
To evaluate the anterior and posterior corneal surfaces using scanning-slit topography and to determine the diagnostic ability of the measured corneal parameters in keratoconus. Orbscan II measurements were taken in 39 keratoconic corneas previously diagnosed by corneal topography and in 39 healthy eyes. The central minimum, maximum, and astigmatic simulated keratometry (K) and anterior axial power values were determined. Spherical and cylindrical mean power diopters were obtained at the central point and at the steepest point of the cornea, both on anterior and on posterior mean power maps. Pachymetry evaluations were taken at the center and paracentrally in the 3 mm zone, at locations every 45 degrees around the center. Receiver operating characteristic (ROC) analysis was used to determine the best cut-off values and to evaluate the utility of the measured parameters in identifying patients with keratoconus. The minimum, maximum, and astigmatic simulated K readings were 44.80±3.06 D, 47.17±3.67 D, and 2.42±1.84 D, respectively, in keratoconus patients, and these values differed significantly (P<0.0001 for all comparisons) from those of healthy subjects. For all pachymetry measurements and for anterior and posterior mean power values, significant differences were found between the two groups. Moreover, anterior central cylindrical power had the best discrimination ability (area under the ROC curve=0.948). The results suggest that scanning-slit topography and pachymetry are accurate methods both for keratoconus screening and for confirmation of the diagnosis.
Anterior segment parameters of rabbits with rotating Scheimpflug camera.
Yüksel, Harun; Türkcü, Fatih M; Ari, Şeyhmus; Çinar, Yasin; Cingü, Abdullah K; Şahin, Muhammed; Şahin, Alparslan; Özkurt, Zeynep; Çaça, İhsan
2015-05-01
Rabbits are among the most commonly used experimental animals for corneal studies owing to the similarity in size to the human cornea and ease of manipulation. In this study, we assessed anterior segment parameters of healthy rabbit eyes with the Pentacam HR (Oculus, Wetzlar, Germany). Thirty six-month-old female New Zealand rabbits weighing approximately 2.5-3 kg were used in the study. The right eye of each rabbit was imaged with the Pentacam HR under intramuscular ketamine hydrochloride (Ketalar; Eczacibasi, Turkey) anesthesia (50 mg/kg). After imaging, rabbits with blinking errors, which result in low-quality images, were excluded from the study. Keratometric readings, central corneal thickness (CCT), anterior chamber depth (ACD), anterior and posterior elevation values, and lens density were noted. The mean, flattest, and steepest keratometric values were 43.34 ± 1.86, 42.7 ± 2.0, and 43.9 ± 1.9 diopters, respectively. The mean CCT and ACD of the rabbits were 388 ± 39 μm and 2.08 ± 0.16 mm, respectively. The mean anterior and posterior elevations at the thinnest point were 1.29 ± 4.28 and 3.91 ± 6.17 μm, respectively. Keratometric readings and anterior and posterior elevation values of rabbits were similar to those of humans; however, corneal thickness and ACD values were lower than in humans. © 2014 American College of Veterinary Ophthalmologists.
Optimal Post-Operative Immobilisation for Supracondylar Humeral Fractures.
Azzolin, Lucas; Angelliaume, Audrey; Harper, Luke; Lalioui, Abdelfettah; Delgove, Anaïs; Lefèvre, Yan
2018-05-25
Supracondylar humeral fractures (SCHFs) are very common in paediatric patients. In France, percutaneous fixation with two lateral-entry pins is widely used after successful closed reduction. Post-operative immobilisation is typically with a long arm cast combined with a tubular-bandage sling that immobilises the shoulder and holds the arm in adduction and internal rotation to prevent external rotation of the shoulder, which might cause secondary displacement. The objective of this study was to compare this standard immobilisation technique to a posterior plaster splint with a simple sling. The working hypothesis was that secondary displacement is not more common with a posterior plaster splint and sling than with a long arm cast. One hundred patients with extension Gartland type III SCHFs managed by closed reduction and percutaneous fixation with two lateral-entry pins between December 2011 and December 2015 were assessed retrospectively. Post-operative immobilisation was with a posterior plaster splint and a simple sling worn for 4 weeks. Radiographs were obtained on days 1, 45, and 90. Secondary displacement occurred in 8% of patients. No patient required revision surgery. The secondary displacement rate was comparable to that in earlier reports. Of the 8 secondary displacements, 5 were ascribable to technical errors. The remaining 3 were not caused by rotation of the arm and would probably not have been prevented by using the tubular-bandage sling. A posterior plaster splint combined with a simple sling is a simple and effective immobilisation method for SCHFs provided internal fixation is technically optimal. IV, retrospective observational study. Copyright © 2018. Published by Elsevier Masson SAS.
Counsell, Serena J; Shen, Yuji; Boardman, James P; Larkman, David J; Kapellou, Olga; Ward, Philip; Allsop, Joanna M; Cowan, Frances M; Hajnal, Joseph V; Edwards, A David; Rutherford, Mary A
2006-02-01
Diffuse excessive high signal intensity (DEHSI) is observed in the majority of preterm infants at term-equivalent age on conventional MRI, and diffusion-weighted imaging has shown that apparent diffusion coefficient values are elevated in the white matter (WM) in DEHSI. Our aim was to obtain diffusion tensor imaging on preterm infants at term-equivalent age and term control infants to test the hypothesis that radial diffusivity was significantly different in the WM in preterm infants with DEHSI compared with both preterm infants with normal-appearing WM on conventional MRI and term control infants. Diffusion tensor imaging was obtained on 38 preterm infants at term-equivalent age and 8 term control infants. Values for axial (lambda1) and radial [(lambda2 + lambda3)/2] diffusivity were calculated in regions of interest positioned in the central WM at the level of the centrum semiovale, frontal WM, posterior periventricular WM, occipital WM, anterior and posterior portions of the posterior limb of the internal capsule, and the genu and splenium of the corpus callosum. Radial diffusivity was elevated significantly in the posterior portion of the posterior limb of the internal capsule and the splenium of the corpus callosum, and both axial and radial diffusivity were elevated significantly in the WM at the level of the centrum semiovale, the frontal WM, the periventricular WM, and the occipital WM in preterm infants with DEHSI compared with preterm infants with normal-appearing WM and term control infants. There was no significant difference between term control infants and preterm infants with normal-appearing WM in any region studied. These findings suggest that DEHSI represents an oligodendrocyte and/or axonal abnormality that is widespread throughout the cerebral WM.
Xu, Zhihong; Chen, Dongyang; Shi, Dongquan; Dai, Jin; Yao, Yao; Jiang, Qing
2016-03-01
Hypoplasia of the lateral femoral condyle has been reported in discoid lateral meniscus patients, but associated imaging findings in the axial plane have not been characterized. In this study, we aimed to identify differences in the lateral femoral condyle between patients with discoid lateral meniscus and those with normal menisci using axial MRI images. Twenty-three patients (24 knees) with complete discoid lateral meniscus, 43 (45 knees) with incomplete discoid lateral meniscus, and 50 with normal menisci (50 knees) were enrolled and distributed into three groups. Two new angles, posterior lateral condylar angle (PLCA) and posterior medial condylar angle (PMCA), were measured on axial MRI images; the posterior condylar angle (PCA) was also measured. Differences between the three groups in the PLCA, PMCA, PCA, and PLCA/PMCA were analysed. The predictive value of PLCA and PLCA/PMCA for complete discoid lateral meniscus was assessed. In the complete discoid lateral meniscus group, PLCA and PLCA/PMCA were significantly smaller compared with the normal meniscus group and the incomplete discoid lateral meniscus group (P < 0.001). A significantly larger PCA was identified in the complete discoid lateral meniscus group compared with the incomplete discoid lateral meniscus group (P < 0.05) and normal meniscus group (P < 0.05). Both PLCA and PLCA/PMCA showed excellent predictive value for complete discoid lateral meniscus. Hypoplasia of the posterior lateral femoral condyle is typically seen in patients with complete discoid lateral meniscus. PLCA and PLCA/PMCA can be measured from axial MRI images and used as excellent predictive parameters for complete discoid lateral meniscus. Diagnostic study, Level III.
Thiarawat, Peeraphong; Jahromi, Behnam Rezai; Kozyrev, Danil A; Intarakhao, Patcharin; Teo, Mario K; Choque-Velasquez, Joham; Niemelä, Mika; Hernesniemi, Juha
2018-05-21
Fetal-type posterior cerebral arteries (F-PCAs) might result in alterations in hemodynamic flow patterns and may predispose an individual to an increased risk of posterior communicating artery aneurysms (PCoAAs). To determine the association between PCoAAs and the presence of ipsilateral F-PCAs. We retrospectively reviewed the radiographic findings from 185 patients harboring 199 PCoAAs that were treated at our institution between 2005 and 2015. Our study population consisted of 4 cohorts: (A) patients with 171 internal carotid arteries (ICAs) harboring unilateral PCoAAs; (B) 171 unaffected ICAs in the same patients from the first group; (C) 28 ICAs of 14 patients with bilateral PCoAAs; and (D) 180 ICAs of 90 patients with aneurysms in other locations. We then determined the presence of ipsilateral F-PCAs and recorded all aneurysm characteristics. Group A had the highest prevalence of F-PCAs (42%) compared to 19% in group B, 3% in group C, and 14% in group D (odds ratio A : B = 3.041; A : C = 19.626; and A : D = 4.308; P < .001). PCoAAs were associated with larger diameters of the posterior communicating arteries (median value 1.05 vs 0.86 mm; P = .001). The presence of F-PCAs was associated with larger sizes of the aneurysm necks (median value 3.3 vs 3.0 mm; P = .02). PCoAAs were associated with a higher prevalence of ipsilateral F-PCAs. This variant was associated with larger sizes of the aneurysm necks but was not associated with the sizes of the aneurysm domes or with their rupture statuses.
[Correlation between axial length and corneal curvature and spherical aberration].
Wang, X J; Bao, Y Z
2017-04-11
Objective: To investigate the correlation between axial length, corneal curvature, and corneal spherical aberration in a group of cataract patients with axial length greater than 24 mm. Methods: Retrospective case series. This study comprised 117 age-related cataract patients (234 eyes). There were 51 men (43.59%) and 66 women (56.41%), with a mean age of (69.0±8.7) years (range, 52.0 to 85.0 years). The average axial length was (27.6±1.8) mm (range, 24.2 to 31.9 mm). We divided the patients into four groups according to axial length. A-scan was used to measure axial length, and Pentacam was used to obtain the corneal curvature and corneal spherical aberration of both the anterior and posterior surfaces. The Kolmogorov-Smirnov test was used to check for normal distribution, ANOVA was used to compare each corneal parameter among the groups, and Pearson correlation analysis was used to assess the correlations of corneal parameters within groups. Results: There were correlations between axial length and the anterior and posterior corneal curvature (r=-0.213 and r=0.174, respectively; P<0.05). No correlation was found between axial length and anterior or posterior corneal spherical aberration (r=-0.114 and 0.055, respectively; P>0.05). Mean values of corneal anterior surface curvature were (45.26±1.60) D (group 1), (44.17±1.45) D (group 2), (44.40±1.99) D (group 3), and (44.53±1.69) D (group 4). Mean values of corneal posterior surface curvature were (-6.57±0.26) D (group 1), (-6.40±0.24) D (group 2), (-6.41±0.38) D (group 3), and (-6.43±0.26) D (group 4). There were significant differences in corneal anterior and posterior surface curvature among the four groups (P=0.004, P=0.001). Anterior surface curvature in group 1 differed significantly from groups 2 and 3 (P<0.01, P=0.01), as did posterior surface curvature (P<0.01). Mean values of anterior surface corneal spherical aberration were (2.09±0.53) μm (group 1), (1.90±0.44) μm (group 2), (2.00±0.74) μm (group 3), and (1.78±0.52) μm (group 4). Mean values of posterior surface corneal spherical aberration were (2.69±1.15) μm (group 1), (2.46±1.16) μm (group 2), (2.92±2.51) μm (group 3), and (2.69±1.13) μm (group 4). Neither anterior nor posterior surface corneal spherical aberration differed significantly among the groups (P>0.05). Conclusions: Eyes with a longer axial length have a flatter cornea, and the cornea fails to compensate for axial elongation when the axial length exceeds 28 mm. Corneal spherical aberration varies among individuals, suggesting that customized measurement should be performed before cataract surgery to guide selection of an aspherical intraocular lens. (Chin J Ophthalmol, 2017, 53: 255-259).
Addition of right-sided and posterior precordial leads during stress testing.
Shry, Eric A; Eckart, Robert E; Furgerson, James L; Stajduhar, Karl C; Krasuski, Richard A
2003-12-01
Exercise treadmill testing has limited sensitivity for the detection of coronary artery disease, frequently requiring the addition of imaging modalities to enhance the predictive value of the test. Recently, there has been interest in using nonstandard electrocardiographic (ECG) leads during exercise testing. We consecutively enrolled all patients undergoing exercise myocardial imaging with four additional leads recorded (V4R, V7, V8, and V9). The test characteristics of the 12-lead, the 15-lead (12-lead, V7, V8, V9), and the 16-lead (12-lead, V4R, V7, V8, V9) ECGs were compared with stress imaging in all patients. In the subset of patients who underwent angiography within 60 days of stress testing, these lead arrays were compared with the catheterization findings. There were 727 subjects who met entry criteria. The mean age was 58.5 +/- 12.3 years, and 366 (50.3%) were women. Pretest probability for disease was high in 241 (33.1%), intermediate in 347 (47.7%), and low in 139 (19.1%). A total of 166 subjects had an abnormal 12-lead ECG during exercise. The addition of 3 posterior leads to the standard 12-lead ECG resulted in 7 additional subjects having an abnormal electrocardiographic response to exercise. The addition of V4R resulted in only 1 additional patient having an abnormal ECG during exercise. The sensitivity of the ECG for detecting ischemia as determined by stress imaging was 36.6%, 39.2%, and 40.0% (P = NS) for the 12-lead, 15-lead, and 16-lead ECGs, respectively. In those with catheterization data (n = 123), the sensitivity for determining obstructive coronary artery disease was 43.5%, 45.2%, and 45.2% (P = NS) for the 12-lead, 15-lead, and 16-lead ECGs, respectively. The sensitivity of imaging modalities was 77.4% when compared with catheterization. In patients undergoing stress imaging studies, the addition of right-sided and posterior leads did not significantly increase the sensitivity of the ECG for the detection of myocardial ischemia. Additional leads should not be used to replace imaging modalities for the detection of coronary artery disease.
Bayesian anomaly detection in monitoring data applying relevance vector machine
NASA Astrophysics Data System (ADS)
Saito, Tomoo
2011-04-01
A method is developed for automatically classifying monitoring data into two categories, normal and anomalous, in order to remove anomalous records from the enormous volume of monitoring data. The relevance vector machine (RVM) is applied to a probabilistic discriminative model with basis functions and weight parameters whose posterior PDF (probability density function), conditional on the learning data set, is given by Bayes' theorem. The proposed framework is applied to actual monitoring data sets containing some anomalous data collected at two buildings in Tokyo, Japan. The trained models discriminate anomalous data from normal data very clearly, giving high probabilities of being normal to normal data and low probabilities of being normal to anomalous data.
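As a rough illustration of this kind of discriminative model, the sketch below fits a MAP logistic classifier over Gaussian basis functions to synthetic monitoring-style data. It simplifies the RVM by using a single Gaussian prior on the weights instead of per-weight relevance priors, and the data, basis centres, and kernel width are invented; the Tokyo building records are not public.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Synthetic stand-in for two building-monitoring features: normal records
# cluster near the origin, anomalous records are shifted.
X = np.vstack([rng.normal(0.0, 1.0, size=(200, 2)),
               rng.normal(3.0, 1.0, size=(20, 2))])
y = np.concatenate([np.ones(200), np.zeros(20)])   # 1 = normal, 0 = anomaly

# Gaussian basis functions centred on a subset of training points, as in
# RVM-style discriminative models, plus a bias term.
centres = X[rng.choice(len(X), 30, replace=False)]

def design(X, width=1.0):
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.hstack([np.ones((len(X), 1)), np.exp(-d2 / (2 * width**2))])

Phi = design(X)

def neg_log_posterior(w, alpha=1.0):
    # Bernoulli likelihood with a logistic link + zero-mean Gaussian prior.
    z = Phi @ w
    log_lik = np.sum(y * z - np.logaddexp(0.0, z))
    return -(log_lik - 0.5 * alpha * w @ w)

w_map = minimize(neg_log_posterior, np.zeros(Phi.shape[1])).x
p_normal = 1.0 / (1.0 + np.exp(-(Phi @ w_map)))
print(f"mean P(normal|x): normal data {p_normal[:200].mean():.2f}, "
      f"anomalies {p_normal[200:].mean():.2f}")
```

On data this cleanly separated, the fitted model assigns probabilities of being normal near 1 to the normal records and near 0 to the anomalies, mirroring the behaviour the abstract reports.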
The preference of probability over negative values in action selection.
Neyedli, Heather F; Welsh, Timothy N
2015-01-01
It has previously been found that when participants are presented with a pair of motor prospects, they can select the prospect with the largest maximum expected gain (MEG). Many of those decisions, however, were trivial because of large differences in MEG between the prospects. The purpose of the present study was to explore participants' preferences when making non-trivial decisions between two motor prospects. Participants were presented with pairs of prospects that: 1) differed in MEG with either only the values or only the probabilities differing between the prospects; and 2) had similar MEG with one prospect having a larger probability of hitting the target and a higher penalty value and the other prospect a smaller probability of hitting the target but a lower penalty value. In different experiments, participants either had 400 ms or 2000 ms to decide between the prospects. It was found that participants chose the configuration with the larger MEG more often when the probability varied between prospects than when the value varied. In pairs with similar MEGs, participants preferred a larger probability of hitting the target over a smaller penalty value. These results indicate that participants prefer probability information over negative value information in a motor selection task.
Bayesian parameter estimation for chiral effective field theory
NASA Astrophysics Data System (ADS)
Wesolowski, Sarah; Furnstahl, Richard; Phillips, Daniel; Klco, Natalie
2016-09-01
The low-energy constants (LECs) of a chiral effective field theory (EFT) interaction in the two-body sector are fit to observable data using a Bayesian parameter estimation framework. By using Bayesian prior probability distributions (pdfs), we quantify relevant physical expectations such as LEC naturalness and include them in the parameter estimation procedure. The final result is a posterior pdf for the LECs, which can be used to propagate uncertainty from the fit to the final observable predictions. The posterior pdf also allows an empirical test of operator redundancy and other features of the potential. We compare results of our framework with other fitting procedures, interpreting the underlying assumptions in Bayesian probabilistic language. We also compare results from fitting all partial waves of the interaction simultaneously to cross-section data with those from fitting to extracted phase shifts, appropriately accounting for correlations in the data. Supported in part by the NSF and DOE.
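A minimal sketch of how a naturalness prior enters such a fit, assuming a toy polynomial model in place of the EFT interaction: with a Gaussian likelihood and order-one Gaussian priors on the coefficients, the posterior is available in closed form, and its widths propagate the fit uncertainty. The model, noise level, and prior scale cbar are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for an EFT observable: y = c0 + c1*x + c2*x**2 + noise,
# with "natural" coefficients of order one.
c_true = np.array([0.5, -1.2, 0.8])
x = np.linspace(0, 1, 25)
X = np.vander(x, 3, increasing=True)          # columns: 1, x, x^2
sigma = 0.05                                  # assumed known noise level
y = X @ c_true + rng.normal(0, sigma, x.size)

# Naturalness prior: c_i ~ N(0, cbar^2) with cbar = 1.
cbar = 1.0
A = X.T @ X / sigma**2 + np.eye(3) / cbar**2  # posterior precision matrix
cov_post = np.linalg.inv(A)
mean_post = cov_post @ (X.T @ y) / sigma**2

for i, (m, s) in enumerate(zip(mean_post, np.sqrt(np.diag(cov_post)))):
    print(f"c{i}: posterior {m:+.3f} +/- {s:.3f} (true {c_true[i]:+.1f})")
```

The prior term I/cbar**2 in the posterior precision is what encodes naturalness: it gently shrinks poorly constrained coefficients toward order-one values without overriding informative data.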
Multiclass feature selection for improved pediatric brain tumor segmentation
NASA Astrophysics Data System (ADS)
Ahmed, Shaheen; Iftekharuddin, Khan M.
2012-03-01
In our previous work, we showed that fractal-based texture features are effective in the detection, segmentation, and classification of posterior-fossa (PF) pediatric brain tumors in multimodality MRI. We exploited an information-theoretic approach, the Kullback-Leibler divergence (KLD), for selecting and ranking different texture features. We further combined the feature selection technique with a segmentation method, expectation maximization (EM), for segmentation of tumor (T) and non-tumor (NT) tissues. In this work, we extend the two-class KLD technique to multiclass in order to effectively select the best features for brain tumor (T), cyst (C), and non-tumor (NT) tissues. We further assess segmentation robustness for each tissue type by computing Bayes' posterior probabilities and the corresponding number of pixels for each tissue segment in patient MR images. We evaluate the improved tumor segmentation robustness using different similarity metrics for 5 patients in T1, T2, and FLAIR modalities.
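The abstract does not spell out the multiclass extension; one plausible reading, sketched below, scores each feature by the sum of pairwise symmetric KL divergences between its class-conditional histograms. The synthetic features stand in for the fractal texture features computed from MRI.

```python
import numpy as np

rng = np.random.default_rng(2)

def kl_divergence(p, q, eps=1e-12):
    p = (p + eps) / (p + eps).sum()
    q = (q + eps) / (q + eps).sum()
    return np.sum(p * np.log(p / q))

def multiclass_kld_score(feature, labels, classes, bins=32):
    """Sum of pairwise symmetric KLDs between class-conditional histograms."""
    edges = np.histogram_bin_edges(feature, bins=bins)
    hists = [np.histogram(feature[labels == c], bins=edges)[0].astype(float)
             for c in classes]
    score = 0.0
    for i in range(len(classes)):
        for j in range(i + 1, len(classes)):
            score += (kl_divergence(hists[i], hists[j])
                      + kl_divergence(hists[j], hists[i]))
    return score

# Synthetic stand-ins for texture features over tumour (T), cyst (C) and
# non-tumour (NT) pixels; real fractal features would come from MRI.
n = 500
labels = rng.choice(["T", "C", "NT"], size=n)
shift = {"T": 2.0, "C": 1.0, "NT": 0.0}
informative = np.array([rng.normal(shift[c]) for c in labels])
noise = rng.normal(size=n)

for name, f in [("informative", informative), ("noise", noise)]:
    print(name, round(multiclass_kld_score(f, labels, ["T", "C", "NT"]), 2))
```

Ranking features by this score keeps those whose distributions actually separate the three tissue classes and discards features that look the same under every label.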
Jedidi, H; Daury, N; Capa, R; Bahri, M A; Collette, F; Feyers, D; Bastin, C; Maquet, P; Salmon, E
2015-11-01
Capgras delusion is characterized by the misidentification of people and by the delusional belief that the misidentified persons have been replaced by impostors, generally perceived as persecutors. Since little is known regarding the neural correlates of Capgras syndrome, the cerebral metabolic pattern of a patient with probable Alzheimer's disease (AD) and Capgras syndrome was compared with those of 24 healthy elderly participants and 26 patients with AD without delusional syndrome. Compared with both the healthy group and the AD group, the patient had significant hypometabolism in frontal and posterior midline structures. In the light of current neural models of face perception, the Capgras syndrome in our patient may be related to impaired recognition of a familiar face, subserved by the posterior cingulate/precuneus cortex, and impaired reflection about personally relevant knowledge related to a face, subserved by the dorsomedial prefrontal cortex. © The Author(s) 2013.
Using Latent Class Analysis to Model Temperament Types.
Loken, Eric
2004-10-01
Mixture models are appropriate for data that arise from a set of qualitatively different subpopulations. In this study, latent class analysis was applied to observational data from a laboratory assessment of infant temperament at four months of age. The EM algorithm was used to fit the models, and the Bayesian method of posterior predictive checks was used for model selection. Results show at least three types of infant temperament, with patterns consistent with those identified by previous researchers who classified the infants using a theoretically based system. Multiple imputation of group memberships is proposed as an alternative to assigning subjects to the latent class with maximum posterior probability in order to reflect variance due to uncertainty in the parameter estimation. Latent class membership at four months of age predicted longitudinal outcomes at four years of age. The example illustrates issues relevant to all mixture models, including estimation, multi-modality, model selection, and comparisons based on the latent group indicators.
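A sketch of this workflow, substituting a Gaussian mixture fitted by EM for the categorical-indicator latent class model and using synthetic two-dimensional scores in place of the laboratory temperament data: posterior class probabilities support either modal assignment or the multiple imputation of memberships that the abstract proposes.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)

# Synthetic stand-in for infant-temperament scores: three latent classes
# in two dimensions (the real observational data are not public).
means = np.array([[0.0, 0.0], [3.0, 0.0], [1.5, 3.0]])
X = np.vstack([rng.normal(m, 0.8, size=(100, 2)) for m in means])

# EM fitting of a three-class mixture, as in the latent class analysis.
gm = GaussianMixture(n_components=3, n_init=10, random_state=0).fit(X)
post = gm.predict_proba(X)            # posterior class probabilities

# Hard assignment: maximum posterior probability.
hard = post.argmax(axis=1)

# Multiple imputation: draw memberships from the posterior so downstream
# analyses reflect classification uncertainty instead of ignoring it.
n_imputations = 5
imputed = np.array([[rng.choice(3, p=p) for p in post]
                    for _ in range(n_imputations)])
agreement = (imputed == hard).mean()
print(f"imputed draws agree with the modal class {agreement:.1%} of the time")
```

The gap between the imputed draws and the modal assignment is exactly the uncertainty that hard classification hides; analyses repeated across imputations propagate it into the final estimates.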
Modeling Dynamic Contrast-Enhanced MRI Data with a Constrained Local AIF.
Duan, Chong; Kallehauge, Jesper F; Pérez-Torres, Carlos J; Bretthorst, G Larry; Beeman, Scott C; Tanderup, Kari; Ackerman, Joseph J H; Garbow, Joel R
2018-02-01
This study aims to develop a constrained local arterial input function (cL-AIF) to improve quantitative analysis of dynamic contrast-enhanced (DCE)-magnetic resonance imaging (MRI) data by accounting for the contrast-agent bolus amplitude error in the voxel-specific AIF. Bayesian probability theory-based parameter estimation and model selection were used to compare tracer kinetic modeling employing either the measured remote-AIF (R-AIF, i.e., the traditional approach) or an inferred cL-AIF against both in silico DCE-MRI data and clinical, cervical cancer DCE-MRI data. When the data model included the cL-AIF, tracer kinetic parameters were correctly estimated from in silico data under contrast-to-noise conditions typical of clinical DCE-MRI experiments. Considering the clinical cervical cancer data, Bayesian model selection was performed for all tumor voxels of the 16 patients (35,602 voxels in total). Among those voxels, a tracer kinetic model that employed the voxel-specific cL-AIF was preferred (i.e., had a higher posterior probability) in 80 % of the voxels compared to the direct use of a single R-AIF. Maps of spatial variation in voxel-specific AIF bolus amplitude and arrival time for heterogeneous tissues, such as cervical cancer, are accessible with the cL-AIF approach. The cL-AIF method, which estimates unique local-AIF amplitude and arrival time for each voxel within the tissue of interest, provides better modeling of DCE-MRI data than the use of a single, measured R-AIF. The Bayesian-based data analysis described herein affords estimates of uncertainties for each model parameter, via posterior probability density functions, and voxel-wise comparison across methods/models, via model selection in data modeling.
Distribution of Marburg virus in Africa: An evolutionary approach.
Zehender, Gianguglielmo; Sorrentino, Chiara; Veo, Carla; Fiaschi, Lisa; Gioffrè, Sonia; Ebranati, Erika; Tanzi, Elisabetta; Ciccozzi, Massimo; Lai, Alessia; Galli, Massimo
2016-10-01
The aim of this study was to investigate the origin and geographical dispersion of Marburg virus, the first member of the Filoviridae family to be discovered. Seventy-three complete genome sequences of Marburg virus isolated from animals and humans were retrieved from public databases and analysed using a Bayesian phylogeographical framework. The phylogenetic tree of the Marburg virus data set showed two significant evolutionary lineages: Ravn virus (RAVV) and Marburg virus (MARV). MARV divided into two main clades; clade A included isolates from Uganda (five from the European epidemic in 1967), Kenya (1980) and Angola (from the epidemic of 2004-2005); clade B included most of the isolates obtained during the 1999-2000 epidemic in the Democratic Republic of the Congo (DRC) and a group of Ugandan isolates obtained in 2007-2009. The estimated mean evolutionary rate of the whole genome was 3.3×10⁻⁴ substitutions/site/year (credibility interval 2.0-4.8). The MARV strain had a mean root time of the most recent common ancestor of 177.9 years ago (YA) (95% highest posterior density 87-284), indicating that it probably originated in the mid-19th century, whereas the RAVV strain had a later origin dating back to a mean of 33.8 YA. The most probable location of the MARV ancestor was Uganda (state posterior probability, spp=0.41), whereas that of the RAVV ancestor was Kenya (spp=0.71). There were significant migration rates from Uganda to the DRC (Bayes factor, BF=42.0) and in the opposite direction (BF=5.7). Our data suggest that Uganda may have been the cradle of Marburg virus in Africa. Copyright © 2016 Elsevier B.V. All rights reserved.
The evolutionary history of vertebrate cranial placodes II. Evolution of ectodermal patterning.
Schlosser, Gerhard; Patthey, Cedric; Shimeld, Sebastian M
2014-05-01
Cranial placodes are evolutionary innovations of vertebrates. However, they most likely evolved by redeployment, rewiring and diversification of preexisting cell types and patterning mechanisms. In the second part of this review we compare vertebrates with other animal groups to elucidate the evolutionary history of ectodermal patterning. We show that several transcription factors have ancient bilaterian roles in dorsoventral and anteroposterior regionalisation of the ectoderm. Evidence from amphioxus suggests that ancestral chordates then concentrated neurosecretory cells in the anteriormost non-neural ectoderm. This anterior proto-placodal domain subsequently gave rise to the oral siphon primordia in tunicates (with neurosecretory cells being lost) and anterior (adenohypophyseal, olfactory, and lens) placodes of vertebrates. Likewise, tunicate atrial siphon primordia and posterior (otic, lateral line, and epibranchial) placodes of vertebrates probably evolved from a posterior proto-placodal region in the tunicate-vertebrate ancestor. Since both siphon primordia in tunicates give rise to sparse populations of sensory cells, both proto-placodal domains probably also gave rise to some sensory receptors in the tunicate-vertebrate ancestor. However, proper cranial placodes, which give rise to high-density arrays of specialised sensory receptors and neurons, evolved from these domains only in the vertebrate lineage. We propose that this may have involved rewiring of the regulatory network upstream and downstream of Six1/2 and Six4/5 transcription factors and their Eya family cofactors. These proteins, which play ancient roles in neuronal differentiation, were first recruited to the dorsal non-neural ectoderm in the tunicate-vertebrate ancestor but subsequently probably acquired new target genes in the vertebrate lineage, allowing them to adopt new functions in regulating proliferation and patterning of neuronal progenitors. Copyright © 2014 Elsevier Inc. All rights reserved.
Informatic parcellation of the network involved in the computation of subjective value
Rangel, Antonio
2014-01-01
Understanding how the brain computes value is a basic question in neuroscience. Although individual studies have driven progress on this question, meta-analyses provide an opportunity to test hypotheses that require large collections of data. We carry out a meta-analysis of a large set of functional magnetic resonance imaging studies of value computation to address several key questions. First, what is the full set of brain areas that reliably correlate with stimulus values when they need to be computed? Second, is this set of areas organized into dissociable functional networks? Third, is a distinct network of regions involved in the computation of stimulus values at decision and outcome? Finally, are different brain areas involved in the computation of stimulus values for different reward modalities? Our results demonstrate the centrality of ventromedial prefrontal cortex (VMPFC), ventral striatum and posterior cingulate cortex (PCC) in the computation of value across tasks, reward modalities and stages of the decision-making process. We also find evidence of distinct subnetworks of co-activation within VMPFC, one involving central VMPFC and dorsal PCC and another involving more anterior VMPFC, left angular gyrus and ventral PCC. Finally, we identify a posterior-to-anterior gradient of value representations corresponding to concrete-to-abstract rewards. PMID:23887811
Shah, P L; Slebos, D-J; Cardoso, P F G; Cetti, E; Voelker, K; Levine, B; Russell, M E; Goldin, J; Brown, M; Cooper, J D; Sybrecht, G W
2011-09-10
Airway bypass is a bronchoscopic lung-volume reduction procedure for emphysema whereby transbronchial passages into the lung are created to release trapped air, supported with paclitaxel-coated stents to ease the mechanics of breathing. The aim of the EASE (Exhale airway stents for emphysema) trial was to evaluate safety and efficacy of airway bypass in people with severe homogeneous emphysema. We undertook a randomised, double-blind, sham-controlled study in 38 specialist respiratory centres worldwide. We recruited 315 patients who had severe hyperinflation (ratio of residual volume [RV] to total lung capacity of ≥0·65). By computer using a random number generator, we randomly allocated participants (in a 2:1 ratio) to either airway bypass (n=208) or sham control (107). We divided investigators into team A (masked), who completed pre-procedure and post-procedure assessments, and team B (unmasked), who only did bronchoscopies without further interaction with patients. Participants were followed up for 12 months. The 6-month co-primary efficacy endpoint required 12% or greater improvement in forced vital capacity (FVC) and 1 point or greater decrease in the modified Medical Research Council dyspnoea score from baseline. The composite primary safety endpoint incorporated five severe adverse events. We did Bayesian analysis to show the posterior probability that airway bypass was superior to sham control (success threshold, 0·965). Analysis was by intention to treat. This study is registered with ClinicalTrials.gov, number NCT00391612. All recruited patients were included in the analysis. At 6 months, no difference between treatment arms was noted with respect to the co-primary efficacy endpoint (30 of 208 for airway bypass vs 12 of 107 for sham control; posterior probability 0·749, below the Bayesian success threshold of 0·965). The 6-month composite primary safety endpoint was 14·4% (30 of 208) for airway bypass versus 11·2% (12 of 107) for sham control (judged non-inferior, with a posterior probability of 1·00 [Bayesian success threshold >0·95]). Although our findings showed safety and transient improvements, no sustainable benefit was recorded with airway bypass in patients with severe homogeneous emphysema. Broncus Technologies. Copyright © 2011 Elsevier Ltd. All rights reserved.
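For intuition about how such a Bayesian success criterion works, here is a simplified Beta-Binomial sketch using the trial's 6-month efficacy counts. The uniform Beta(1, 1) priors and the direct superiority comparison are assumptions of this illustration; the trial's prespecified Bayesian model need not match it, so the number printed is not expected to reproduce the reported 0.749 exactly.

```python
import numpy as np

rng = np.random.default_rng(4)

# Counts from the EASE trial's co-primary efficacy endpoint at 6 months.
success_bypass, n_bypass = 30, 208
success_sham, n_sham = 12, 107

# Uniform Beta(1, 1) priors give Beta posteriors for each response rate.
draws = 200_000
p_bypass = rng.beta(1 + success_bypass, 1 + n_bypass - success_bypass, draws)
p_sham = rng.beta(1 + success_sham, 1 + n_sham - success_sham, draws)

# Posterior probability that airway bypass beats sham on this endpoint.
posterior_prob = (p_bypass > p_sham).mean()
print(f"P(bypass rate > sham rate | data) ~= {posterior_prob:.3f}")
print("superiority declared" if posterior_prob > 0.965 else
      "below the 0.965 success threshold")
```

The logic mirrors the trial's decision rule: superiority is declared only if the posterior probability clears the prespecified 0.965 threshold, which the observed 6-month data did not.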
Analysis of Spectral-type A/B Stars in Five Open Clusters
NASA Astrophysics Data System (ADS)
Wilhelm, Ronald J.; Rafuil Islam, M.
2014-01-01
We have obtained low-resolution (R = 1000) spectroscopy of N = 68 spectral-type A/B stars in five nearby open star clusters using the McDonald Observatory 2.1 m telescope. The sample of blue stars in the various clusters was selected to test our new technique for determining interstellar reddening and distances in areas where interstellar reddening is high. We use a Bayesian approach to find the posterior distribution for Teff, log g, and [Fe/H] from a combination of reddened photometric colors and spectroscopic line strengths. We will present calibration results for this technique using open cluster star data with known reddening and distances. Preliminary results suggest our technique can produce both reddening and distance determinations to within 10% of cluster values. Our technique opens the possibility of determining distances for blue stars at low Galactic latitudes where extinction can be large and differential. We will also compare our stellar parameter determinations to previously reported MK spectral classifications and discuss the probability that some of our stars are not members of their reported clusters.
Active contour-based visual tracking by integrating colors, shapes, and motions.
Hu, Weiming; Zhou, Xue; Li, Wei; Luo, Wenhan; Zhang, Xiaoqin; Maybank, Stephen
2013-05-01
In this paper, we present a framework for active contour-based visual tracking using level sets. The main components of our framework include contour-based tracking initialization, color-based contour evolution, adaptive shape-based contour evolution for non-periodic motions, dynamic shape-based contour evolution for periodic motions, and the handling of abrupt motions. For the initialization of contour-based tracking, we develop an optical flow-based algorithm for automatically initializing contours at the first frame. For the color-based contour evolution, Markov random field theory is used to measure correlations between values of neighboring pixels for posterior probability estimation. For adaptive shape-based contour evolution, the global shape information and the local color information are combined to hierarchically evolve the contour, and a flexible shape updating model is constructed. For the dynamic shape-based contour evolution, a shape mode transition matrix is learnt to characterize the temporal correlations of object shapes. For the handling of abrupt motions, particle swarm optimization is adopted to capture the global motion which is applied to the contour in the current frame to produce an initial contour in the next frame.
Worry about one's own children, psychological well-being, and interest in psychosocial intervention.
Stinesen-Kollberg, Karin; Thorsteinsdottir, Thordis; Wilderäng, Ulrica; Steineck, Gunnar
2013-09-01
This study investigated the association between worrying about own children and low psychological well-being during the year that follows breast cancer. In an observational population-based study, we collected data from 313 women operated for breast cancer at Sahlgrenska University Hospital in Gothenburg, Sweden. Worrying about one's own children (3-7 on a 1-7 visual digital scale) was, among other variables, significantly associated with low psychological well-being 1 year after breast cancer surgery (relative risk 2.63; 95% CI 1.77-3.90; posterior probability value 98.8%). In this group of women operated for breast cancer, we found an association between worrying about one's own children and low psychological well-being. In a healthcare system where resources are scarce, it becomes imperative to identify to whom resources should be directed. Therefore, we may consider prioritizing psychological interventions for mothers with younger children and develop effective means to communicate about issues related to the children to increase chances of an effective, successful rehabilitation. Copyright © 2013 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
The, Matthew; MacCoss, Michael J.; Noble, William S.; Käll, Lukas
2016-11-01
Percolator is a widely used software tool that increases yield in shotgun proteomics experiments and assigns reliable statistical confidence measures, such as q values and posterior error probabilities, to peptides and peptide-spectrum matches (PSMs) from such experiments. Percolator's processing speed has been sufficient for typical data sets consisting of hundreds of thousands of PSMs. With our new scalable approach, we can now also analyze millions of PSMs in a matter of minutes on a commodity computer. Furthermore, with increasing awareness of the need for reliable statistics on the protein level, we compared several easy-to-understand protein inference methods and implemented the best-performing method—grouping proteins by their corresponding sets of theoretical peptides and then considering only the best-scoring peptide for each protein—in the Percolator package. We used Percolator 3.0 to analyze the data from a recent study of the draft human proteome containing 25 million spectra (PM:24870542). The source code and Ubuntu, Windows, MacOS, and Fedora binary packages are available from http://percolator.ms/ under an Apache 2.0 license.
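The connection between the two confidence measures Percolator reports can be sketched numerically: for PSMs sorted by score, the estimated FDR of the top-k list is the mean of their posterior error probabilities (PEPs), and each PSM's q value is the minimum FDR at which it would be accepted. The PEP values below are made up for illustration.

```python
import numpy as np

# Hypothetical PEPs for eight PSMs, sorted from best to worst score.
peps = np.array([0.001, 0.002, 0.004, 0.01, 0.03, 0.08, 0.2, 0.4])

# FDR of the top-k list = running mean of the accepted PSMs' PEPs.
fdr_at_k = np.cumsum(peps) / np.arange(1, len(peps) + 1)

# q value = minimal FDR over all lists that include this PSM.
q_values = np.minimum.accumulate(fdr_at_k[::-1])[::-1]

for pep, q in zip(peps, q_values):
    print(f"PEP={pep:.3f}  q-value={q:.4f}")
```

This shows why the two measures answer different questions: the PEP is the error probability of one individual match, while the q value describes the error rate of the whole accepted list that contains it.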
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martinez, M.; Campion, D.; Babron, M.C.
1996-02-16
Segregation analysis of Alzheimer disease (AD) in 92 families ascertained through early-onset (age ≤60 years) AD (EOAD) probands has been carried out, allowing for a mixture in AD inheritance among probands. The goal was to quantify the proportion of probands that could be explained by autosomal inheritance of a rare disease allele "a" at a Mendelian dominant gene (MDG). Our data provide strong evidence for a mixture of two distributions; AD transmission is fully explained by MDG inheritance in <20% of probands. Male and female age-of-onset distributions are significantly different for "AA" but not for "aA" subjects. For "aA" subjects the estimated penetrance value was close to 1 by age 60. For "AA" subjects, it reaches, by age 90, 10% (males) and 30% (females). We show a clear cutoff in the posterior probability of being an MDG case.
Enhanced optical alignment of a digital micro mirror device through Bayesian adaptive exploration
NASA Astrophysics Data System (ADS)
Wynne, Kevin B.; Knuth, Kevin H.; Petruccelli, Jonathan
2017-12-01
As the use of Digital Micro Mirror Devices (DMDs) becomes more prevalent in optics research, the ability to precisely locate the Fourier "footprint" of an image beam at the Fourier plane becomes a pressing need. In this approach, Bayesian adaptive exploration techniques were employed to characterize the size and position of the beam on a DMD located at the Fourier plane. It couples a Bayesian inference engine with an inquiry engine to implement the search. The inquiry engine explores the DMD by engaging mirrors and recording light intensity values based on the maximization of the expected information gain. Using the data collected from this exploration, the Bayesian inference engine updates the posterior probability describing the beam's characteristics. The process is iterated until the beam is located to within the desired precision. This methodology not only locates the center and radius of the beam with remarkable precision but accomplishes the task in far less time than a brute force search. The employed approach has applications to system alignment for both Fourier processing and coded aperture design.
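A one-dimensional toy version of this search loop, with invented geometry and likelihood: the unknown beam centre lives on a grid, each "mirror" probe returns a bright/dark reading, and the next probe is placed where the expected reduction in posterior entropy (the expected information gain) is largest. The real system works in two dimensions and also infers the beam radius.

```python
import numpy as np

rng = np.random.default_rng(5)

grid = np.linspace(0.0, 1.0, 201)        # hypotheses for the beam centre
post = np.full(grid.size, 1.0 / grid.size)
c_true, width = 0.37, 0.05               # invented ground truth

def p_bright(x, c):
    # Probability of a bright reading when probing at x, centre at c.
    return np.exp(-0.5 * ((x - c) / width) ** 2)

def entropy(p):
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def expected_entropy(x, post):
    like1 = p_bright(x, grid)            # P(bright | c) for every hypothesis
    m1 = (like1 * post).sum()            # marginal P(bright at x)
    post1 = like1 * post / max(m1, 1e-12)
    post0 = (1 - like1) * post / max(1 - m1, 1e-12)
    return m1 * entropy(post1) + (1 - m1) * entropy(post0)

for step in range(10):
    # Inquiry engine: probe where expected posterior entropy is lowest,
    # i.e. where the expected information gain is highest.
    ee = np.array([expected_entropy(x, post) for x in grid])
    x_probe = grid[ee.argmin()]
    bright = rng.random() < p_bright(x_probe, c_true)    # simulated reading
    like = p_bright(x_probe, grid) if bright else 1 - p_bright(x_probe, grid)
    post = like * post                   # inference engine: Bayes update
    post /= post.sum()

print(f"estimated centre {grid[post.argmax()]:.3f} (true {c_true})")
```

Because each probe is chosen to be maximally informative, the posterior collapses onto the true centre in far fewer measurements than a raster scan of the grid would need, which is the efficiency gain the abstract describes.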
Dai, Can; Guo, Lin; Yang, Liu; Wu, Yi; Gou, Jingyue; Li, Bangchun
2015-02-01
We studied anterior cruciate ligament (ACL) tibial insertion architecture in humans and investigated regional differences that could suggest unequal force transmission from ligament to bone. ACL tibial insertions were processed histologically. With Photoshop software, digital images taken from the histological slides were collaged, contour lines were drawn, and different gray values were filled based on the structure. The data were exported to Amira software for three-dimensional reconstruction. The uncalcified fibrocartilage (UF) layer was divided into three regions: lateral, medial and posterior according to the architecture. The UF zone was significantly thicker laterally than medially or posteriorly (p < 0.05). Similarly, the calcified fibrocartilage (CF) thickness was significantly greater in the lateral part of the enthesis compared to the medial and posterior parts (p < 0.05). The UF quantity (more UF laterally) corresponding to the CF quantity (more CF laterally) at the ACL tibial insertion provides further evidence suggesting that the load transferred from the ACL to the tibia was greater laterally than medially and posteriorly.
Barros, Marcos Alexandre; Cervone, Gabriel Lopes de Faria; Costa, André Luis Serigatti
2015-01-01
Objective: To objectively and subjectively evaluate the functional result from before to after surgery among patients with a diagnosis of an isolated avulsion fracture of the posterior cruciate ligament who were treated surgically. Method: Five patients were evaluated by means of reviewing the medical files, applying the Lysholm questionnaire, physical examination and radiological examination. For the statistical analysis, a significance level of 0.10 and 95% confidence interval were used. Results: According to the Lysholm criteria, all the patients were classified as poor (<64 points) before the operation and evolved to a mean of 96 points six months after the operation. We observed that 100% of the posterior drawer cases became negative, taking values less than 5 mm to be negative. Conclusion: Surgical methods with stable fixation for treating avulsion fractures at the tibial insertion of the posterior cruciate ligament produce acceptable functional results from the surgical and radiological points of view, with a significance level of 0.042. PMID:27218073
Vanderwegen, Jan; Guns, Cindy; Van Nuffelen, Gwen; Elen, Rik; De Bodt, Marc
2013-06-01
This study collected data on the maximum anterior and posterior tongue strength and endurance in 420 healthy Belgians across the adult life span to explore the influence of age, sex, bulb position, visual feedback, and order of testing. Measures were obtained using the Iowa Oral Performance Instrument (IOPI). Older participants (more than 70 years old) demonstrated significantly lower strength than younger persons at the anterior and the posterior tongue. Endurance remains stable throughout the major part of life. Gender influence remains significant but minor throughout life, with males showing higher pressures and longer endurance. The anterior part of the tongue has both higher strength and longer endurance than the posterior part. Mean maximum tongue pressures in this European population seem to be lower than American values and are closer to Asian results. The normative data can be used for objective assessment of tongue weakness and subsequent therapy planning of dysphagic patients.
Rodriguez Gutierrez, D; Awwad, A; Meijer, L; Manita, M; Jaspan, T; Dineen, R A; Grundy, R G; Auer, D P
2014-05-01
Qualitative radiologic MR imaging review affords limited differentiation among types of pediatric posterior fossa brain tumors and cannot detect histologic or molecular subtypes, which could help to stratify treatment. This study aimed to improve current posterior fossa discrimination of histologic tumor type by using support vector machine classifiers on quantitative MR imaging features. This retrospective study included preoperative MRI in 40 children with posterior fossa tumors (17 medulloblastomas, 16 pilocytic astrocytomas, and 7 ependymomas). Shape, histogram, and textural features were computed from contrast-enhanced T2WI and T1WI and diffusivity (ADC) maps. Combinations of features were used to train tumor-type-specific classifiers for medulloblastoma, pilocytic astrocytoma, and ependymoma types in separation and as a joint posterior fossa classifier. A tumor-subtype classifier was also produced for classic medulloblastoma. The performance of different classifiers was assessed and compared by using randomly selected subsets of training and test data. ADC histogram features (25th and 75th percentiles and skewness) yielded the best classification of tumor type (on average >95.8% of medulloblastomas, >96.9% of pilocytic astrocytomas, and >94.3% of ependymomas by using 8 training samples). The resulting joint posterior fossa classifier correctly assigned >91.4% of the posterior fossa tumors. For subtype classification, 89.4% of classic medulloblastomas were correctly classified on the basis of ADC texture features extracted from the Gray-Level Co-Occurrence Matrix. Support vector machine-based classifiers using ADC histogram features yielded very good discrimination among pediatric posterior fossa tumor types, and ADC textural features show promise for further subtype discrimination. These findings suggest an added diagnostic value of quantitative feature analysis of diffusion MR imaging in pediatric neuro-oncology. © 2014 by American Journal of Neuroradiology.
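A sketch of the classification step, assuming three synthetic ADC histogram features (25th and 75th percentiles, skewness) per tumour in place of the real patient measurements; scikit-learn's SVC stands in for the paper's support vector machine classifiers, and the class means, spreads, and RBF kernel settings are invented.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(6)

# Synthetic stand-ins for per-tumour ADC histogram features, with the
# same class sizes as the study (17 / 16 / 7).
def simulate(mean, n):
    return rng.normal(mean, 0.3, size=(n, 3))

X = np.vstack([simulate([0.6, 0.9, 0.4], 17),    # medulloblastoma-like
               simulate([1.4, 1.9, -0.1], 16),   # pilocytic-astrocytoma-like
               simulate([1.0, 1.3, 0.2], 7)])    # ependymoma-like
y = np.array([0] * 17 + [1] * 16 + [2] * 7)

# Standardize features, then fit an RBF-kernel SVM; evaluate with
# cross-validation, echoing the paper's random train/test splits.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```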
Pang, Xiaoyang; Wu, Ping; Shen, Xiongjie; Li, Dongzhe; Luo, Chenke; Wang, Xiyang
2013-08-01
This was a retrospective analysis of the clinical efficacy and feasibility of one-stage posterior transforaminal lumbar debridement, 360° interbody fusion, and posterior instrumentation in treating lumbosacral spinal tuberculosis. A total of 21 patients with lumbosacral tuberculosis (TB), collected from January 2004 to January 2010, underwent one-stage posterior transforaminal lumbar debridement, 360° interbody fusion, and posterior instrumentation. Clinical efficacy was evaluated based on the lumbosacral angle, neurological status recorded on the American Spinal Injury Association (ASIA) Impairment Scale, erythrocyte sedimentation rate (ESR), and C-reactive protein (CRP), collected at specific time points. All cases were followed up for 16-36 months (average 24.9 ± 6.44 months). Eighteen patients suffered from evident neurological deficits preoperatively, of whom 16 returned to normal at the final follow-up. Two patients whose neurological dysfunction was aggravated postoperatively experienced significant partial neurological recovery. With effective, standard anti-TB chemotherapy, ESR and CRP values returned to normal levels by 3 months postoperatively and remained normal until the final follow-up. The lumbosacral angle improved significantly from 20.89 ± 2.32° preoperatively to 29.62 ± 1.41° postoperatively, with only 1-3° of correction loss during long-term follow-up. With effective and standard anti-TB chemotherapy, one-stage posterior transforaminal lumbar debridement, 360° interbody fusion, and posterior instrumentation for lumbosacral tuberculosis can effectively relieve pain symptoms, improve neurological function, and reconstruct spinal stability.
Objective estimates based on experimental data and initial and final knowledge
NASA Technical Reports Server (NTRS)
Rosenbaum, B. M.
1972-01-01
An extension of the method of Jaynes, whereby least biased probability estimates are obtained, permits such estimates to be made which account for experimental data on hand as well as prior and posterior knowledge. These estimates can be made for both discrete and continuous sample spaces. The method allows a simple interpretation of Laplace's two rules: the principle of insufficient reason and the rule of succession. Several examples are analyzed by way of illustration.
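As a concrete instance of the two rules mentioned, under a uniform Beta(1, 1) prior on an unknown success probability, the posterior-predictive probability of success after s successes in n trials is (s + 1)/(n + 2); with no data this reduces to 1/2, the principle of insufficient reason.

```python
from fractions import Fraction

# Laplace's rule of succession as a posterior-predictive statement:
# with a Beta(1, 1) prior, P(next trial succeeds | s successes in n trials)
# equals (s + 1) / (n + 2).
def rule_of_succession(s, n):
    return Fraction(s + 1, n + 2)

print(rule_of_succession(0, 0))    # 1/2: insufficient reason, no data yet
print(rule_of_succession(9, 10))   # 5/6 after 9 successes in 10 trials
```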
[Convergence nystagmus and vertical gaze palsy of vascular origin].
Jouvent, E; Benisty, S; Fenelon, G; Créange, A; Pierrot-Deseilligny, C
2005-05-01
A case of convergence-retraction nystagmus with upward vertical gaze paralysis and skew deviation (right hypotropia), without any other neurological signs, is reported. The lesion, probably vascular in origin, was located at the mesodiencephalic junction, lying between the right border of the posterior commissure, the right interstitial nucleus of Cajal, and the periaqueductal grey matter, accounting for the three ocular motor signs. The particular interest of this case lies in the relative smallness of the lesion.
Unified Description of Scattering and Propagation FY15 Annual Report
2015-09-30
the Texas coast. For both cases a conditional posterior probability distribution (PPD) is formed for a parameter space that includes both geoacoustic... for this second application of ME. For each application of ME it is important to note that a new likelihood function, and thus PPD, is computed. One... the 50-700 Hz band. These data offered a means by which the results of using the ship radiated noise could be partially validated. The conditional PPD
Levy, Jonathan I.; Diez, David; Dou, Yiping; Barr, Christopher D.; Dominici, Francesca
2012-01-01
Health risk assessments of particulate matter less than 2.5 μm in diameter (PM2.5) often assume that all constituents of PM2.5 are equally toxic. While investigators in previous epidemiologic studies have evaluated health risks from various PM2.5 constituents, few have conducted the analyses needed to directly inform risk assessments. In this study, the authors performed a literature review and conducted a multisite time-series analysis of hospital admissions and exposure to PM2.5 constituents (elemental carbon, organic carbon matter, sulfate, and nitrate) in a population of 12 million US Medicare enrollees for the period 2000–2008. The literature review illustrated a general lack of multiconstituent models or insight about probabilities of differential impacts per unit of concentration change. Consistent with previous results, the multisite time-series analysis found statistically significant associations between short-term changes in elemental carbon and cardiovascular hospital admissions. Posterior probabilities from multiconstituent models provided evidence that some individual constituents were more toxic than others, and posterior parameter estimates coupled with correlations among these estimates provided necessary information for risk assessment. Ratios of constituent toxicities, commonly used in risk assessment to describe differential toxicity, were extremely uncertain for all comparisons. These analyses emphasize the subtlety of the statistical techniques and epidemiologic studies necessary to inform risk assessments of particle constituents. PMID:22510275
Objectively combining AR5 instrumental period and paleoclimate climate sensitivity evidence
NASA Astrophysics Data System (ADS)
Lewis, Nicholas; Grünwald, Peter
2018-03-01
Combining instrumental period evidence regarding equilibrium climate sensitivity with largely independent paleoclimate proxy evidence should enable a more constrained sensitivity estimate to be obtained. Previous, subjective Bayesian approaches involved selection of a prior probability distribution reflecting the investigators' beliefs about climate sensitivity. Here a recently developed approach employing two different statistical methods—objective Bayesian and frequentist likelihood-ratio—is used to combine instrumental period and paleoclimate evidence based on data presented and assessments made in the IPCC Fifth Assessment Report. Probabilistic estimates from each source of evidence are represented by posterior probability density functions (PDFs) of physically-appropriate form that can be uniquely factored into a likelihood function and a noninformative prior distribution. The three-parameter form is shown accurately to fit a wide range of estimated climate sensitivity PDFs. The likelihood functions relating to the probabilistic estimates from the two sources are multiplicatively combined and a prior is derived that is noninformative for inference from the combined evidence. A posterior PDF that incorporates the evidence from both sources is produced using a single-step approach, which avoids the order-dependency that would arise if Bayesian updating were used. Results are compared with an alternative approach using the frequentist signed root likelihood ratio method. Results from these two methods are effectively identical, and provide a 5-95% range for climate sensitivity of 1.1-4.05 K (median 1.87 K).
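A grid-based sketch of the single-step combination, with invented lognormal likelihood shapes standing in for the AR5-based instrumental and paleoclimate estimates, and a simple 1/S stand-in for the paper's formally derived noninformative prior: the two likelihoods are multiplied, the product is normalized under the prior, and quantiles are read off the combined posterior.

```python
import numpy as np

S = np.linspace(0.5, 8.0, 2000)   # climate sensitivity grid (K)
dS = S[1] - S[0]

def lognormal_like(S, mu, sigma):
    # Unnormalized likelihood shape; parameters here are illustrative only.
    return np.exp(-0.5 * ((np.log(S) - mu) / sigma) ** 2)

like_instr = lognormal_like(S, np.log(1.8), 0.45)   # instrumental evidence
like_paleo = lognormal_like(S, np.log(2.6), 0.55)   # paleoclimate evidence
prior = 1.0 / S   # crude noninformative stand-in, not the paper's prior

# Single-step combination: multiply likelihoods, normalize under the prior.
post = like_instr * like_paleo * prior
post /= post.sum() * dS

cdf = np.cumsum(post) * dS
for q in (0.05, 0.50, 0.95):
    print(f"{q:.0%} quantile: {S[np.searchsorted(cdf, q)]:.2f} K")
```

Because the two likelihoods are multiplied once and a single prior is applied to the product, the result has no order dependence, which is the advantage over sequential Bayesian updating that the abstract highlights.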
NASA Astrophysics Data System (ADS)
Hanish Nithin, Anu; Omenzetter, Piotr
2017-04-01
Optimization of the life-cycle costs and reliability of offshore wind turbines (OWTs) is an area of immense interest due to the widespread increase in wind power generation across the world. Most existing studies have used structural reliability and Bayesian pre-posterior analysis for optimization. This paper proposes an extension of previous approaches: a framework for probabilistic optimization of the total life-cycle costs and reliability of OWTs that combines elements of structural reliability/risk analysis (SRA) and Bayesian pre-posterior analysis with optimization through a genetic algorithm (GA). The SRA techniques are adopted to compute the probabilities of damage occurrence and failure associated with the deterioration model. The probabilities are used in the decision tree and are updated using Bayesian analysis. The output of this framework determines the optimal structural health monitoring and maintenance schedules to be implemented during the life span of OWTs while maintaining a trade-off between life-cycle costs and the risk of structural failure. Numerical illustrations with a generic deterioration model for one monitoring exercise in the life cycle of a system are demonstrated. Two case scenarios, namely building an initially expensive but robust structure versus a cheaper but more quickly deteriorating one, and adopting an expensive monitoring system, are presented to aid the decision-making process.
Browning, Brian L.; Browning, Sharon R.
2009-01-01
We present methods for imputing data for ungenotyped markers and for inferring haplotype phase in large data sets of unrelated individuals and parent-offspring trios. Our methods make use of known haplotype phase when it is available, and our methods are computationally efficient so that the full information in large reference panels with thousands of individuals is utilized. We demonstrate that substantial gains in imputation accuracy accrue with increasingly large reference panel sizes, particularly when imputing low-frequency variants, and that unphased reference panels can provide highly accurate genotype imputation. We place our methodology in a unified framework that enables the simultaneous use of unphased and phased data from trios and unrelated individuals in a single analysis. For unrelated individuals, our imputation methods produce well-calibrated posterior genotype probabilities and highly accurate allele-frequency estimates. For trios, our haplotype-inference method is four orders of magnitude faster than the gold-standard PHASE program and has excellent accuracy. Our methods enable genotype imputation to be performed with unphased trio or unrelated reference panels, thus accounting for haplotype-phase uncertainty in the reference panel. We present a useful measure of imputation accuracy, allelic R2, and show that this measure can be estimated accurately from posterior genotype probabilities. Our methods are implemented in version 3.0 of the BEAGLE software package. PMID:19200528
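The paper defines its own allelic R2 measure; as an illustration of the general idea of scoring imputation quality from posterior genotype probabilities, the sketch below computes a dosage-based estimator of the kind used by comparable tools: the ratio of the variance of the posterior-mean allele dosage to the Hardy-Weinberg binomial variance 2p(1-p). The genotype frequencies and the noise model for the posterior probabilities are invented.

```python
import numpy as np

rng = np.random.default_rng(7)

n = 1000
p_true = 0.12                      # alternate-allele frequency
g = rng.binomial(2, p_true, n)     # true genotypes coded 0/1/2

# Simulated posterior genotype probabilities: concentrated on the truth
# with some imputation noise spread over the other two genotypes.
noise = 0.15
probs = np.full((n, 3), noise / 2)
probs[np.arange(n), g] = 1 - noise

# Posterior-mean allele dosage for each individual.
dosage = probs @ np.array([0.0, 1.0, 2.0])

# Dosage-variance estimator of imputation quality.
p_hat = dosage.mean() / 2
allelic_r2 = dosage.var() / (2 * p_hat * (1 - p_hat))
print(f"estimated allelic R^2: {allelic_r2:.3f}")
```

The intuition: confidently imputed genotypes make the dosage distribution as dispersed as real genotypes (R2 near 1), while uncertain posteriors shrink every dosage toward the allele-frequency mean and drag R2 toward 0.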
Sugar, Elizabeth A; Holbrook, Janet T; Kempen, John H; Burke, Alyce E; Drye, Lea T; Thorne, Jennifer E; Louis, Thomas A; Jabs, Douglas A; Altaweel, Michael M; Frick, Kevin D
2014-10-01
To evaluate the 3-year incremental cost-effectiveness of fluocinolone acetonide implant versus systemic therapy for the treatment of noninfectious intermediate, posterior, and panuveitis. Randomized, controlled, clinical trial. Patients with active or recently active intermediate, posterior, or panuveitis enrolled in the Multicenter Uveitis Steroid Treatment Trial. Data on cost and health utility during 3 years after randomization were evaluated at 6-month intervals. Analyses were stratified by disease laterality at randomization (31 unilateral vs 224 bilateral) because of the large upfront cost of the implant. The primary outcome was the incremental cost-effectiveness ratio (ICER) over 3 years: the ratio of the difference in cost (in United States dollars) to the difference in quality-adjusted life-years (QALYs). Costs of medications, surgeries, hospitalizations, and regular procedures (e.g., laboratory monitoring for systemic therapy) were included. We computed QALYs as a weighted average of EQ-5D scores over 3 years of follow-up. The ICER at 3 years was $297,800/QALY for bilateral disease, driven by the high cost of implant therapy (difference implant - systemic [Δ]: $16,900; P < 0.001) and the modest gains in QALYs (Δ = 0.057; P = 0.22). The probability of the ICER being cost-effective at thresholds of $50,000/QALY and $100,000/QALY was 0.003 and 0.04, respectively. The ICER for unilateral disease was more favorable, namely, $41,200/QALY at 3 years, because of a smaller difference in cost between the 2 therapies (Δ = $5300; P = 0.44) and a larger benefit in QALYs with the implant (Δ = 0.130; P = 0.12). The probability of the ICER being cost-effective at thresholds of $50,000/QALY and $100,000/QALY was 0.53 and 0.74, respectively. Fluocinolone acetonide implant therapy was reasonably cost-effective compared with systemic therapy for individuals with unilateral intermediate, posterior, or panuveitis but not for those with bilateral disease. These results do not apply to the use of implant therapy when systemic therapy has failed or is contraindicated. Should the duration of implant effect prove to be substantially >3 years or should large changes in therapy pricing occur, the cost-effectiveness of implant versus systemic therapy would need to be reevaluated. Copyright © 2014 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
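The headline ratios follow directly from the reported differences; a two-line check (expect small rounding discrepancies from the published $297,800/QALY and $41,200/QALY figures, since the abstract's inputs are themselves rounded):

```python
# Incremental cost-effectiveness ratio: difference in cost divided by
# difference in QALYs (implant minus systemic), from the abstract.
def icer(delta_cost, delta_qaly):
    return delta_cost / delta_qaly

print(f"bilateral:  ${icer(16_900, 0.057):,.0f}/QALY")
print(f"unilateral: ${icer(5_300, 0.130):,.0f}/QALY")
```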
Bayesian model selection: Evidence estimation based on DREAM simulation and bridge sampling
NASA Astrophysics Data System (ADS)
Volpi, Elena; Schoups, Gerrit; Firmani, Giovanni; Vrugt, Jasper A.
2017-04-01
Bayesian inference has found widespread application in Earth and Environmental Systems Modeling, providing an effective tool for prediction, data assimilation, parameter estimation, uncertainty analysis and hypothesis testing. Under multiple competing hypotheses, the Bayesian approach also provides an attractive alternative to traditional information criteria (e.g. AIC, BIC) for model selection. The key variable for Bayesian model selection is the evidence (or marginal likelihood) that is the normalizing constant in the denominator of Bayes theorem; while it is fundamental for model selection, the evidence is not required for Bayesian inference. It is computed for each hypothesis (model) by averaging the likelihood function over the prior parameter distribution, rather than maximizing it as by information criteria; the larger a model evidence the more support it receives among a collection of hypothesis as the simulated values assign relatively high probability density to the observed data. Hence, the evidence naturally acts as an Occam's razor, preferring simpler and more constrained models against the selection of over-fitted ones by information criteria that incorporate only the likelihood maximum. Since it is not particularly easy to estimate the evidence in practice, Bayesian model selection via the marginal likelihood has not yet found mainstream use. We illustrate here the properties of a new estimator of the Bayesian model evidence, which provides robust and unbiased estimates of the marginal likelihood; the method is coined Gaussian Mixture Importance Sampling (GMIS). GMIS uses multidimensional numerical integration of the posterior parameter distribution via bridge sampling (a generalization of importance sampling) of a mixture distribution fitted to samples of the posterior distribution derived from the DREAM algorithm (Vrugt et al., 2008; 2009). Some illustrative examples are presented to show the robustness and superiority of the GMIS estimator with respect to other commonly used approaches in the literature.
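A toy check of the evidence-estimation idea, simplified in two ways that should be flagged: plain importance sampling replaces the bridge-sampling step of GMIS, and exact conjugate posterior samples stand in for DREAM output. A 1-D normal-normal model is used so the marginal likelihood is known in closed form for comparison.

```python
import numpy as np
from scipy import stats
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(8)

# Normal-normal toy: prior N(0, 1), likelihood N(theta, 0.5) for one
# observation, so the true evidence is N(y; 0, sqrt(1 + 0.5^2)).
y_obs, sigma_lik = 1.2, 0.5
prior = stats.norm(0.0, 1.0)

# Stand-in for DREAM output: exact posterior samples from conjugacy.
var_post = 1.0 / (1.0 + 1.0 / sigma_lik**2)
mean_post = var_post * y_obs / sigma_lik**2
samples = rng.normal(mean_post, np.sqrt(var_post), size=(4000, 1))

# Fit a Gaussian mixture to the posterior samples, then use it as the
# importance-sampling proposal: Z = E_q[ p(y|t) p(t) / q(t) ].
gm = GaussianMixture(n_components=2, random_state=0).fit(samples)
draws, _ = gm.sample(20000)
theta = draws[:, 0]
log_w = (stats.norm(theta, sigma_lik).logpdf(y_obs)
         + prior.logpdf(theta)
         - gm.score_samples(draws))
evidence = np.exp(log_w).mean()

true_evidence = stats.norm(0.0, np.sqrt(1.0 + sigma_lik**2)).pdf(y_obs)
print(f"IS estimate {evidence:.4f} vs analytic {true_evidence:.4f}")
```

Fitting the proposal to the posterior samples is the key design choice: a proposal that closely tracks the posterior keeps the importance weights nearly constant, which is what gives the evidence estimate its low variance.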
A mesostate-space model for EEG and MEG.
Daunizeau, Jean; Friston, Karl J
2007-10-15
We present a multi-scale generative model for EEG that entails a minimum number of assumptions about evoked brain responses, namely: (1) bioelectric activity is generated by a set of distributed sources, (2) the dynamics of these sources can be modelled as random fluctuations about a small number of mesostates, (3) mesostates evolve in a temporally structured way and are functionally connected (i.e. influence each other), and (4) the number of mesostates engaged by a cognitive task is small (e.g. between one and a few). A Variational Bayesian learning scheme is described that furnishes the posterior density on the model's parameters and its evidence. Since the number of meso-sources specifies the model, the model evidence can be used to compare models and find the optimum number of meso-sources. In addition to estimating the dynamics at each cortical dipole, the mesostate-space model and its inversion provide a description of brain activity at the level of the mesostates (i.e. in terms of the dynamics of meso-sources that are distributed over dipoles). The inclusion of a mesostate level allows one to compute posterior probability maps of each dipole being active (i.e. belonging to an active mesostate). Critically, this model accommodates constraints on the number of meso-sources, while retaining the flexibility of distributed source models in explaining data. In short, it bridges the gap between standard distributed and equivalent current dipole models. Furthermore, because it is explicitly spatiotemporal, the model can embed any stochastic dynamical causal model (e.g. a neural mass model) as a Markov process prior on the mesostate dynamics. The approach is evaluated and compared to standard inverse EEG techniques, using synthetic data and real data. The results demonstrate the added value of the mesostate-space model and its variational inversion.
A tale of two modes: neutrino free-streaming in the early universe
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lancaster, Lachlan; Cyr-Racine, Francis-Yan; Knox, Lloyd
2017-07-01
We present updated constraints on the free-streaming nature of cosmological neutrinos from cosmic microwave background (CMB) temperature and polarization power spectra, baryonic acoustic oscillation data, and distance ladder measurements of the Hubble constant. Specifically, we consider a Fermi-like four-fermion interaction between massless neutrinos, characterized by an effective coupling constant G_eff, and resulting in a neutrino opacity τ̇_ν ∝ G_eff^2 T_ν^5. Using a conservative flat prior on the parameter log10(G_eff MeV^2), we find a bimodal posterior distribution with two clearly separated regions of high probability. The first of these modes is consistent with the standard ΛCDM cosmology and corresponds to neutrinos decoupling at redshift z_ν,dec > 1.3×10^5, that is, before the Fourier modes probed by the CMB damping tail enter the causal horizon. The other mode of the posterior, dubbed the 'interacting neutrino mode', corresponds to neutrino decoupling occurring within a narrow redshift window centered around z_ν,dec ∼ 8300. This mode is characterized by a high value of the effective neutrino coupling constant, log10(G_eff MeV^2) = −1.72 ± 0.10 (68% C.L.), together with a lower value of the scalar spectral index and amplitude of fluctuations, and a higher value of the Hubble parameter. Using both a maximum likelihood analysis and the ratio of the two modes' Bayesian evidences, we find the interacting neutrino mode to be statistically disfavored compared to the standard ΛCDM cosmology, and determine this result to be largely driven by the low-ℓ CMB temperature data. Interestingly, the addition of CMB polarization and direct Hubble constant measurements significantly raises the statistical significance of this secondary mode, indicating that new physics in the neutrino sector could help explain the difference between local measurements of H_0 and those inferred from CMB data. A robust consequence of our results is that neutrinos must be free streaming long before the epoch of matter-radiation equality in order to fit current cosmological data.
Toward accurate and precise estimates of lion density.
Elliot, Nicholas B; Gopalaswamy, Arjun M
2017-08-01
Reliable estimates of animal density are fundamental to understanding ecological processes and population dynamics. Furthermore, their accuracy is vital to conservation because wildlife authorities rely on estimates to make decisions. However, it is notoriously difficult to accurately estimate density for wide-ranging carnivores that occur at low densities. In recent years, significant progress has been made in density estimation of Asian carnivores, but the methods have not been widely adapted to African carnivores, such as lions (Panthera leo). Although abundance indices for lions may produce poor inferences, they continue to be used to estimate density and inform management and policy. We used sighting data from a 3-month survey and adapted a Bayesian spatially explicit capture-recapture (SECR) model to estimate spatial lion density in the Maasai Mara National Reserve and surrounding conservancies in Kenya. Our unstructured spatial capture-recapture sampling design incorporated search effort to explicitly estimate detection probability and density on a fine spatial scale, making our approach robust in the context of varying detection probabilities. Overall posterior mean lion density was estimated to be 17.08 (posterior SD 1.310) lions >1 year old/100 km², and the sex ratio was estimated at 2.2 females to 1 male. Our modeling framework and narrow posterior SD demonstrate that SECR methods can produce statistically rigorous and precise estimates of population parameters, and we argue that they should be favored over less reliable abundance indices. Furthermore, our approach is flexible enough to incorporate different data types, which enables robust population estimates over relatively short survey periods in a variety of systems. Trend analyses are essential to guide conservation decisions but are frequently based on surveys of differing reliability. We therefore call for a unified framework to assess lion numbers in key populations to improve management and policy decisions. © 2016 Society for Conservation Biology.
Posterior corneal astigmatism in refractive lens exchange surgery.
Rydström, Elin; Westin, Oscar; Koskela, Timo; Behndig, Anders
2016-05-01
To assess the anterior, posterior and total corneal spherical and astigmatic powers in patients undergoing refractive lens exchange (RLE) surgery. In 402 consecutive patients planned for RLE at Koskelas Eye Clinic, Luleå, Sweden, right eye data from pre- and postoperative subjective refraction, preoperative IOLMaster® biometry and Pentacam HR® measurements were collected. Postoperative Pentacam HR® data were collected for 54 of the patients. The spherical and astigmatic powers of the anterior and posterior corneal surfaces and for the total cornea were assessed and compared, and surgically induced astigmatism was calculated using vector analysis. The spherical power of the anterior corneal surface was 48.18 ± 1.69D with an astigmatic power of 0.83 ± 0.54D. The corresponding values for the posterior surface were -6.05 ± 2.52D and 0.26 ± 0.15D, respectively. The total corneal spherical power calculated with ray tracing was 42.47 ± 2.89D with a 0.72 ± 0.48D astigmatic power, and the corresponding figures obtained by estimating the posterior corneal surface were 43.25 ± 1.51D (p < 0.001) with a 0.75 ± 0.49D astigmatic power (p = 0.003). In eyes with anterior astigmatism with-the-rule, the total corneal astigmatism is overestimated if the posterior corneal surface is estimated; in eyes with against-the-rule astigmatism it is underestimated. Had the posterior corneal surface been measured in this material, 14.7% of the patients would have received a spherical instead of a toric IOL, or vice versa. Estimating the posterior corneal surface in RLE patients leads to systematic measurement errors that can be reduced by measuring the posterior surface. Such an approach can potentially increase the refractive outcome accuracy in RLE surgery. © 2016 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.
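The vector analysis referred to can be sketched as double-angle vector subtraction; a minimal illustration with made-up magnitudes and axes (not the study's data or its exact method):

```python
import numpy as np

def to_vec(mag, axis_deg):
    """Map an astigmatism (magnitude in D, axis in deg) to a double-angle vector."""
    a = np.deg2rad(2.0 * axis_deg)
    return np.array([mag * np.cos(a), mag * np.sin(a)])

# Hypothetical pre- and postoperative corneal astigmatisms.
pre, post = to_vec(0.83, 95.0), to_vec(0.60, 88.0)
sia = post - pre                                   # surgically induced astigmatism
mag = float(np.hypot(*sia))
axis = (np.degrees(np.arctan2(sia[1], sia[0])) / 2.0) % 180.0
print(f"SIA: {mag:.2f} D at {axis:.0f} deg")
```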
Mirouse, Guillaume; Nourissat, Geoffroy
2016-02-01
The open approach to the posterior shoulder during bone block procedures for posterior shoulder instability is challenging. An anatomical study was performed to identify landmarks of a portal, avoiding soft tissue damage, between the infraspinatus (IS) and teres minor (TM) muscles and distant from the supra-scapular nerve (SSN), for arthroscopic shoulder bone block. Eight fresh-frozen cadaveric shoulder specimens were used. The arthroscope was introduced through the soft point (SP). A guide wire was placed through the SP, in the direction of the rotator interval. A posterior open dissection exposed the split between the IS and TM. A new guide wire was placed into the split, parallel to the first wire, to locate the new posterior arthroscopic approach. Ten distances were measured to define the safe position. The mean values were: SP to split IS-TM: 2 ± 0.2 (2-2.8); spinal bone to split IS-TM: 5 ± 0.5 (3-6.2); split IS-TM to posterior glenoid at 6 o'clock: 1.3 ± 0.3 (0.6-1.6), 9 o'clock: 1.5 ± 0.3 (1-1.9), and 12 o'clock: 2 ± 0.1 (2.1-2.4); SSN to posterior glenoid at 6 o'clock: 2.4 ± 0.2 (2.1-2.6), 9 o'clock: 1.7 ± 0.1 (1.5-1.8), and 12 o'clock: 1.5 ± 0.3 (1.2-2.1); and SSN to split IS-TM: 2 ± 0.3 (1.2-2.1). This preliminary anatomical study described a posterior arthroscopic portal located 2 cm under the SP, parallel to the SP portal direction, and finishing between 7 and 8 o'clock at the posterior rim of the glenoid. For arthroscopic shoulder bone block, this portal can avoid muscle and SSN lesions.
Evaluating the Value of Information in the Presence of High Uncertainty
2013-06-01
in this hierarchy is subsumed in the Knowledge and Information layers. If information with high expected value is identified, it then passes up...be, the higher is its value. Based on the idea of expected utility of asking a question [36], Nelson [31] discusses different approaches for...18] formalizes the expected value of a sample of information using the concept of pre-posterior analysis as the expected increase in utility by
Kocbek, Lidija; Rakuša, Mateja
2018-01-01
The right bronchial artery usually arises from the descending thoracic aorta as a common trunk with the right intercostal artery and forms the right intercostobronchial trunk. Both the third right posterior intercostal artery and the right intercostobronchial trunk are described as the most constant vessels. The focus of the study was to determine the characteristics of the right intercostobronchial trunk with regard to the origins of the posterior intercostal arteries from the thoracic aorta. Posterior intercostal arteries and the right bronchial arteries were dissected in 43 human cadavers, preserved after Thiel's embalming method with intraarterial infusion of red colored latex. Postmortem examination gave valuable information on the right intercostobronchial trunk, which was present in 58% of cases. The right intercostobronchial trunk was mapped and a new classification regarding the origin of the posterior intercostal arteries (PIA) from the thoracic aorta was suggested: types A, B and C, with the latter two subdivided into subtypes 1 and 2. Type A was proportional to the origin level of the PIA and its corresponding intercostal space. The outer diameter at the origin did not indicate a right bronchial artery branch. In subtype 2 of type B, the proximal posterior intercostal artery that gave off the right bronchial artery was thicker than the distal one. The right bronchial artery originates from the second to the fifth posterior intercostal artery, forming the right intercostobronchial trunk. The various origins, types of origin, diameters and courses of the right intercostobronchial trunk described and analyzed in this study offer valuable information for procedures involving the right intercostobronchial trunk.
Position of the prosthesis and the incidence of dislocation following total hip replacement.
He, Rong-xin; Yan, Shi-gui; Wu, Li-dong; Wang, Xiang-hua; Dai, Xue-song
2007-07-05
Dislocation is the second most common complication of hip replacement surgery, and impingement of the prosthesis is believed to be the fundamental reason. The present study employed Solidworks 2003 and MSC-Nastran software to analyze three-dimensional variables in order to investigate how to prevent dislocation following hip replacement surgery. Computed tomography (CT) imaging was used to collect femoral outline data and Solidworks 2003 software was used to construct cup models with variable geometries. Nastran software was used to evaluate dislocation at different prosthesis positions and different geometrical shapes. Three-dimensional movement and results from the finite element method were analyzed and the values of the dislocation resistance index (DRI), range of motion to impingement (ROM-I), range of motion to dislocation (ROM-D) and peak resisting moment (PRM) were determined. Computer simulation was used to evaluate the range of motion of the hip joint at different prosthesis positions. Finite element analysis showed: (1) Increasing the head/neck ratio increased ROM-I values and moderately increased ROM-D and PRM values. Increasing the head size significantly increased PRM and, to some extent, ROM-I and ROM-D values, which suggested a lower likelihood of dislocation. (2) Increasing the anteversion angle increased the ROM-I, ROM-D, PRM, energy required for dislocation (Energy-D) and DRI values, which would increase the stability of the joint. (3) As the chamber angle was increased, ROM-I, ROM-D, PRM, Energy-D and DRI values increased, resulting in improved joint stability. Chamber angles exceeding 55° resulted in increases in ROM-I and ROM-D values, but decreases in PRM, Energy-D and DRI values, which, in turn, increased the likelihood of dislocation. (4) A cup with a posteriorly elevated (high) side reduced ROM-I values (by 2.1-5.3°) and increased the DRI value (by 0.073), suggesting that the posterior high side had the effect of a 10° anteversion angle. Increasing the head/neck ratio increases joint stability. The posterior high side reduced the range of motion of the joint but increased joint stability. Increasing the anteversion angle increases DRI values and thus improves joint stability. Increasing the chamber angle increases DRI values and improves joint stability; however, at angles exceeding 55°, further increases in the chamber angle result in decreased DRI values and reduce the stability of the joint.
Mean Field Variational Bayesian Data Assimilation
NASA Astrophysics Data System (ADS)
Vrettas, M.; Cornford, D.; Opper, M.
2012-04-01
Current data assimilation schemes propose a range of approximate solutions to the classical data assimilation problem, particularly state estimation. Broadly there are three main active research areas: ensemble Kalman filter methods, which rely on statistical linearization of the model evolution equations; particle filters, which provide a discrete point representation of the posterior filtering or smoothing distribution; and 4DVAR methods, which seek the most likely posterior smoothing solution. In this paper we present a recent extension to our variational Bayesian algorithm, which seeks the most probable posterior distribution over the states within the family of non-stationary Gaussian processes. Our original work on variational Bayesian approaches to data assimilation sought the best approximating time-varying Gaussian process to the posterior smoothing distribution for stochastic dynamical systems. This approach was based on minimising the Kullback-Leibler divergence between the true posterior over paths and our Gaussian process approximation. So long as the observation density was sufficiently high to bring the posterior smoothing density close to Gaussian, the algorithm proved very effective on lower-dimensional systems. However, for higher-dimensional systems the algorithm was computationally very demanding. We have been developing a mean field version of the algorithm which treats the state variables at a given time as being independent in the posterior approximation, but still accounts for their relationships with each other through the mean solution arising from the original dynamical system. In this work we present the new mean field variational Bayesian approach, illustrating its performance on a range of classical data assimilation problems, and we discuss the potential and limitations of the new approach. We emphasise that the variational Bayesian approach we adopt, in contrast to other variational approaches, provides a bound on the marginal likelihood of the observations given parameters in the model, which also allows inference of parameters such as observation errors and parameters in the model and model error representation, particularly if the model is written in a deterministic form with small additive noise. We stress that our approach can address very long time windows and weak constraint settings. Like traditional variational approaches, our Bayesian variational method has the benefit of being posed as an optimisation problem. We finish with a sketch of future directions for our approach.
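The mean field idea can be sketched on a toy posterior (a correlated bivariate Gaussian; the numbers below are ours): each state variable gets an independent Gaussian factor, and the factors are coupled only through their means by coordinate-ascent updates.

```python
import numpy as np

# Toy "posterior": bivariate Gaussian with mean mu and precision matrix Lam.
mu = np.array([1.0, -1.0])
Lam = np.array([[2.0, 0.9],
                [0.9, 1.5]])

# Mean-field VB: q(x) = q1(x1) q2(x2), each factor Gaussian with variance
# 1/Lam[i, i]; coordinate ascent couples the factor means.
m = np.zeros(2)
for _ in range(50):
    m[0] = mu[0] - Lam[0, 1] / Lam[0, 0] * (m[1] - mu[1])
    m[1] = mu[1] - Lam[1, 0] / Lam[1, 1] * (m[0] - mu[0])

print("mean-field means:    ", m)                           # converge to mu
print("mean-field variances:", 1.0 / np.diag(Lam))          # narrower than
print("true marginal vars:  ", np.diag(np.linalg.inv(Lam))) # the true marginals
```

The printout illustrates the known trade-off: the factorized approximation recovers the means but underestimates the marginal variances of a correlated posterior.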
NASA Astrophysics Data System (ADS)
Iskandar, I.
2018-03-01
The exponential distribution is the most widely used distribution in reliability analysis. It is very suitable for representing the lengths of life in many cases and is available in a simple statistical form. Its characteristic is a constant hazard rate, and it arises as a special case of the Weibull family of distributions. In this paper we introduce the basic notions that constitute an exponential competing risks model in reliability analysis using a Bayesian approach, and we present the associated analytic methods. The cases are limited to models with independent causes of failure. A non-informative prior distribution is used in our analysis. The model describes the likelihood function, followed by the description of the posterior function and the estimation of point and interval estimates, the hazard function, and reliability. The net probability of failure if only one specific risk is present, the crude probability of failure due to a specific risk in the presence of other causes, and partial crude probabilities are also included.
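For a single exponential risk with complete data, a standard non-informative (Jeffreys) prior leads to a closed-form Gamma posterior; a hedged sketch with made-up failure times (illustrative, not the paper's data or exact prior choice):

```python
import numpy as np
from scipy.stats import gamma

# Hypothetical complete sample of exponential failure times for one risk.
times = np.array([12.1, 3.4, 7.8, 22.0, 9.5, 15.2])
n, T = len(times), times.sum()

# With Jeffreys prior pi(lam) ~ 1/lam, the posterior is lam | data ~ Gamma(n, rate=T).
post = gamma(a=n, scale=1.0 / T)
print("posterior mean rate:", post.mean())
print("95% credible interval:", post.interval(0.95))

# Posterior-mean reliability at time t: E[exp(-lam*t)] = (T / (T + t))**n.
t = 10.0
print("posterior reliability R(10):", (T / (T + t)) ** n)
```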
Effective Online Bayesian Phylogenetics via Sequential Monte Carlo with Guided Proposals
Fourment, Mathieu; Claywell, Brian C; Dinh, Vu; McCoy, Connor; Matsen IV, Frederick A; Darling, Aaron E
2018-01-01
Modern infectious disease outbreak surveillance produces continuous streams of sequence data which require phylogenetic analysis as data arrives. Current software packages for Bayesian phylogenetic inference are unable to quickly incorporate new sequences as they become available, making them less useful for dynamically unfolding evolutionary stories. This limitation can be addressed by applying a class of Bayesian statistical inference algorithms called sequential Monte Carlo (SMC) to conduct online inference, wherein new data can be continuously incorporated to update the estimate of the posterior probability distribution. In this article, we describe and evaluate several different online phylogenetic sequential Monte Carlo (OPSMC) algorithms. We show that proposing new phylogenies with a density similar to the Bayesian prior suffers from poor performance, and we develop “guided” proposals that better match the proposal density to the posterior. Furthermore, we show that the simplest guided proposals can exhibit pathological behavior in some situations, leading to poor results, and that the situation can be resolved by heating the proposal density. The results demonstrate that relative to the widely used MCMC-based algorithm implemented in MrBayes, the total time required to compute a series of phylogenetic posteriors as sequences arrive can be significantly reduced by the use of OPSMC, without incurring a significant loss in accuracy. PMID:29186587
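The core online-SMC loop can be sketched for a toy static parameter; real OPSMC proposes moves in tree space, so everything below is an illustrative analogue, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 5000
theta = rng.normal(0.0, 5.0, N)      # particles drawn from a broad prior
w = np.full(N, 1.0 / N)              # uniform initial weights

for y in [1.2, 0.8, 1.5, 0.9]:       # observations arriving one at a time
    w *= np.exp(-0.5 * (y - theta) ** 2)   # reweight by likelihood N(y; theta, 1)
    w /= w.sum()
    if 1.0 / np.sum(w ** 2) < N / 2:       # effective sample size too low:
        idx = rng.choice(N, size=N, p=w)   # resample and jitter ("rejuvenate")
        theta = theta[idx] + rng.normal(0.0, 0.1, N)
        w = np.full(N, 1.0 / N)
    print(f"posterior mean after y={y}: {np.sum(w * theta):.3f}")
```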
Strauß, Johannes; Riesterer, Anja S; Lakes-Harlan, Reinhard
2016-01-01
The subgenual organ and associated scolopidial organs are well studied in Orthoptera and related taxa. In some insects, a small accessory organ or Nebenorgan is described posterior to the subgenual organ. In Tettigoniidae (Ensifera), the accessory organ has only been noted in one species, though tibial sensory organs are well studied for neuroanatomy and physiology. Here, we use axonal tracing to analyse the posterior subgenual organ innervated by the main motor nerve. Investigating seven species from different groups of Tettigoniidae, we describe a small group of scolopidial sensilla (5-9 sensory neurons) with features characteristic of the accessory organ: a posterior tibial position, innervation by the main leg nerve rather than by the tympanal nerve, orientation of dendrites in the proximal or ventro-proximal direction in the leg, and a common association with a single campaniform sensillum. The neuroanatomy is highly similar between leg pairs. We show differences in the innervation in two species of the genus Poecilimon as compared to the other species. In Poecilimon, the sensilla of the accessory organ are innervated by one nerve branch together with the subgenual organ. The results suggest that the accessory organ is part of the sensory bauplan in the leg of Tettigoniidae and probably Ensifera. Copyright © 2015 Elsevier Ltd. All rights reserved.
Ferrari, Ulisse
2016-08-01
Maximum entropy models provide the least constrained probability distributions that reproduce statistical properties of experimental datasets. In this work we characterize the learning dynamics that maximizes the log-likelihood in the case of large but finite datasets. We first show how the steepest descent dynamics is not optimal as it is slowed down by the inhomogeneous curvature of the model parameters' space. We then provide a way for rectifying this space which relies only on dataset properties and does not require large computational efforts. We conclude by solving the long-time limit of the parameters' dynamics including the randomness generated by the systematic use of Gibbs sampling. In this stochastic framework, rather than converging to a fixed point, the dynamics reaches a stationary distribution, which for the rectified dynamics reproduces the posterior distribution of the parameters. We sum up all these insights in a "rectified" data-driven algorithm that is fast and by sampling from the parameters' posterior avoids both under- and overfitting along all the directions of the parameters' space. Through the learning of pairwise Ising models from the recording of a large population of retina neurons, we show how our algorithm outperforms the steepest descent method.
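The gradient being rectified here is the classic moment-matching one: the log-likelihood gradient of a maximum entropy model is the gap between data moments and model moments. A small, fully enumerable sketch (our toy, with exact moments instead of the Gibbs sampling used for large populations):

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 3
states = np.array(list(itertools.product([-1, 1], repeat=n)))  # all 2^n spin states

def moments(J, h):
    """Exact first and second moments of p(s) ~ exp(h.s + 0.5 s'Js)."""
    E = states @ h + 0.5 * np.einsum('si,ij,sj->s', states, J, states)
    p = np.exp(E - E.max()); p /= p.sum()
    return p @ states, np.einsum('s,si,sj->ij', p, states, states)

# Synthetic "data" moments generated from a hidden true model.
J_true = rng.normal(0.0, 0.5, (n, n)); J_true = (J_true + J_true.T) / 2.0
np.fill_diagonal(J_true, 0.0)
h_true = rng.normal(0.0, 0.3, n)
m_data, C_data = moments(J_true, h_true)

# Plain steepest ascent: step along the data-minus-model moment gap.
J, h, eta = np.zeros((n, n)), np.zeros(n), 0.2
for _ in range(5000):
    m, C = moments(J, h)
    h += eta * (m_data - m)
    J += eta * (C_data - C); np.fill_diagonal(J, 0.0)

print("max |h - h_true|:", np.abs(h - h_true).max())
print("max |J - J_true|:", np.abs(J - J_true).max())
```

The paper's contribution sits on top of this baseline: rectifying the parameter space so the steepest descent is no longer slowed by its inhomogeneous curvature, and treating the Gibbs-sampling noise so the stationary distribution of the dynamics matches the parameters' posterior.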
Potential fields on the ventricular surface of the exposed dog heart during normal excitation.
Arisi, G; Macchi, E; Baruffi, S; Spaggiari, S; Taccardi, B
1983-06-01
We studied the normal spread of excitation on the anterior and posterior ventricular surface of open-chest dogs by recording unipolar electrograms from an array of 1124 electrodes spaced 2 mm apart. The array had the shape of the ventricular surface of the heart. The electrograms were processed by a computer and displayed as epicardial equipotential maps at 1-msec intervals. Isochrone maps also were drawn. Several new features of epicardial potential fields were identified: (1) a high number of breakthrough points; (2) the topography, apparent widths, velocities of the wavefronts and the related potential drop; (3) the topography of positive potential peaks in relation to the wavefronts. Fifteen to 24 breakthrough points were located on the anterior, and 10 to 13 on the posterior ventricular surface. Some were in previously described locations and many others in new locations. Specifically, 3 to 5 breakthrough points appeared close to the atrioventricular groove on the anterior right ventricle and 2 to 4 on the posterior heart aspect; these basal breakthrough points appeared when a large portion of ventricular surface was still unexcited. Due to the presence of numerous breakthrough points on the anterior and posterior aspect of the heart which had not previously been described, the spread of excitation on the ventricular surface was "mosaic-like," with activation wavefronts spreading in all directions, rather than radially from the two breakthrough points, as traditionally described. The positive potential peaks which lay ahead of the expanding wavefronts moved along preferential directions which were probably related to the myocardial fiber direction.
Posterior dental size reduction in hominids: the Atapuerca evidence.
Bermúdez de Castro, J M; Nicolas, M E
1995-04-01
In order to reassess previous hypotheses concerning dental size reduction of the posterior teeth during Pleistocene human evolution, current fossil dental evidence is examined. This evidence includes the large sample of hominid teeth found in recent excavations (1984-1993) in the Sima de los Huesos Middle Pleistocene cave site of the Sierra de Atapuerca (Burgos, Spain). The lower fourth premolars and molars of the Atapuerca hominids, probably older than 300 Kyr, have dimensions similar to those of modern humans. Further, these hominids share the derived state of other features of the posterior teeth with modern humans, such as a similar relative molar size and frequent absence of the hypoconulid, thus suggesting a possible case of parallelism. We believe that dietary changes allowed size reduction of the posterior teeth during the Middle Pleistocene, and the present evidence suggests that the selective pressures that operated on the size variability of these teeth were less restrictive than what is assumed by previous models of dental reduction. Thus, the causal relationship between tooth size decrease and changes in food-preparation techniques during the Pleistocene should be reconsidered. Moreover, the present evidence indicates that the differential reduction of the molars cannot be explained in terms of restriction of available growth space. The molar crown area measurements of a modern human sample were also investigated. The results of this study, as well as previous similar analyses, suggest that a decrease of the rate of cell proliferation, which affected the later-forming crown regions to a greater extent, may be the biological process responsible for the general and differential dental size reduction that occurred during human evolution.
The Cramér-Rao Bounds and Sensor Selection for Nonlinear Systems with Uncertain Observations.
Wang, Zhiguo; Shen, Xiaojing; Wang, Ping; Zhu, Yunmin
2018-04-05
This paper considers the problems of the posterior Cramér-Rao bound and sensor selection for multi-sensor nonlinear systems with uncertain observations. In order to effectively overcome the difficulties caused by uncertainty, we investigate two methods to derive the posterior Cramér-Rao bound. The first method is based on the recursive formula of the Cramér-Rao bound and the Gaussian mixture model. Nevertheless, it needs to compute a complex integral based on the joint probability density function of the sensor measurements and the target state. The computational burden of this method is relatively high, especially in large sensor networks. Inspired by the idea of the expectation maximization algorithm, the second method is to introduce some 0-1 latent variables to deal with the Gaussian mixture model. Since the regularity condition of the posterior Cramér-Rao bound is not satisfied for the discrete uncertain system, we use some continuous variables to approximate the discrete latent variables. Then, a new Cramér-Rao bound can be achieved by a limiting process of the Cramér-Rao bound of the continuous system. This avoids the complex integral, which reduces the computational burden. Based on the new posterior Cramér-Rao bound, the optimal solution of the sensor selection problem can be derived analytically. Thus, it can be used to deal with sensor selection in large-scale sensor networks. Two typical numerical examples verify the effectiveness of the proposed methods.
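For reference, the standard recursion for the posterior information matrix that such derivations typically start from (Tichavský et al., 1998) takes the form

$$J_{k+1} = D_k^{22} - D_k^{21}\left(J_k + D_k^{11}\right)^{-1} D_k^{12},$$

where \(D_k^{11} = \mathbb{E}[-\Delta_{x_k}^{x_k}\log p(x_{k+1}\mid x_k)]\), \(D_k^{12} = (D_k^{21})^{\top} = \mathbb{E}[-\Delta_{x_k}^{x_{k+1}}\log p(x_{k+1}\mid x_k)]\), and \(D_k^{22}\) combines the corresponding transition term with the measurement term \(\mathbb{E}[-\Delta_{x_{k+1}}^{x_{k+1}}\log p(z_{k+1}\mid x_{k+1})]\); the bound on the state estimation error covariance is then \(J_{k}^{-1}\). The paper's difficulty is that, with uncertain (possibly missing) observations, the measurement term becomes a Gaussian-mixture expectation, which motivates the two derivations summarized above.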
NASA Astrophysics Data System (ADS)
Piecuch, C. G.; Huybers, P. J.; Tingley, M.
2015-12-01
Tide gauge records of mean sea level are some of the most valuable instrumental time series of oceanic variability and change. Yet these time series sometimes have short record lengths and intermittently missing values. Such issues can limit the utility of the data, for example, precluding rigorous analyses of return periods of extreme mean sea level events and whether they are unprecedented. With a view to filling gaps in the tide gauge mean sea level time series, we describe a hierarchical Bayesian modeling approach. The model, which is predicated on the notion of conditional probabilities, comprises three levels: a process level, which casts mean sea level as a field with spatiotemporal covariance; a data level, which represents tide gauge observations as noisy, biased versions of the true process; and a prior level, which gives prior functional forms to model parameters. Using Bayes' rule, this technique gives estimates of the posterior probability of the process and the parameters given the observations. To demonstrate the approach, we apply it to 2,967 station-years of annual mean sea level observations over 1856-2013 from 70 tide gauges along the United States East Coast from Florida to Maine (i.e., 26.8% record completeness). The model overcomes the data paucity by sharing information across space and time. The result is an ensemble of realizations, each member of which is a possible history of sea level changes at these locations over this period, which is consistent with and equally likely given the tide gauge data and underlying model assumptions. Using the ensemble of histories furnished by the Bayesian model, we identify extreme events of mean sea level change in the tide gauge time series. Specifically, we use the model to address the particular hypothesis (with rigorous uncertainty quantification) that a recently reported interannual sea level rise during 2008-2010 was unprecedented in the instrumental record along the northeast coast of North America, and that it had a return period of 850 years. Preliminary analysis suggests that this event was likely unprecedented on the coast of Maine in the last century.
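Schematically (our notation, not the paper's exact specification), the three levels combine via conditional probabilities as

$$\underbrace{p(y \mid x, b, \sigma_y)}_{\text{data level}} \;\times\; \underbrace{p(x \mid \theta)}_{\text{process level}} \;\times\; \underbrace{p(\theta, b, \sigma_y)}_{\text{prior level}},$$

e.g. \(y_{i,t}\mid x_{i,t} \sim \mathcal{N}(x_{i,t}+b_i,\ \sigma_y^2)\) for gauge \(i\) with bias \(b_i\), and \(x\) a mean sea level field with spatiotemporal covariance governed by \(\theta\). Bayes' rule then gives the joint posterior \(p(x,\theta,b \mid y) \propto p(y\mid x,b,\sigma_y)\,p(x\mid\theta)\,p(\theta,b,\sigma_y)\), from which the ensemble of equally likely sea level histories is drawn.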
Mehdi, Syed K; Alentado, Vincent J; Lee, Bryan S; Mroz, Thomas E; Benzel, Edward C; Steinmetz, Michael P
2016-06-01
OBJECTIVE Ossification of the posterior longitudinal ligament (OPLL) is a pathological calcification or ossification of the PLL, predominantly occurring in the cervical spine. Although surgery is often necessary for patients with symptomatic neurological deterioration, there remains controversy with regard to the optimal surgical treatment. In this systematic review and meta-analysis, the authors identified differences in complications and outcomes after anterior or posterior decompression and fusion versus after decompression alone for the treatment of cervical myelopathy due to OPLL. METHODS A MEDLINE, SCOPUS, and Web of Science search was performed for studies reporting complications and outcomes after decompression and fusion or after decompression alone for patients with OPLL. A meta-analysis was performed to calculate effect summary mean values, 95% CIs, Q statistics, and I² values. Forest plots were constructed for each analysis group. RESULTS Of the 2630 retrieved articles, 32 met the inclusion criteria. There was no statistically significant difference in the incidence of excellent and good outcomes and of fair and poor outcomes between the decompression and fusion and the decompression-only cohorts. However, the decompression and fusion cohort had a statistically significantly higher recovery rate (63.2% vs 53.9%; p < 0.0001), a higher final Japanese Orthopaedic Association score (14.0 vs 13.5; p < 0.0001), and a lower incidence of OPLL progression (< 1% vs 6.3%; p < 0.0001) compared with the decompression-only cohort. There was no statistically significant difference in the incidence of complications between the 2 cohorts. CONCLUSIONS This study represents the only comprehensive review of outcomes and complications after decompression and fusion or after decompression alone for OPLL across a heterogeneous group of surgeons and patients. Based on these results, decompression and fusion is a superior surgical technique compared with posterior decompression alone in patients with OPLL. These results indicate that surgical decompression and fusion lead to a faster recovery, improved postoperative neurological functioning, and a lower incidence of OPLL progression compared with posterior decompression only. Furthermore, decompression and fusion did not lead to a greater incidence of complications compared with posterior decompression only.
Probabilistic parameter estimation of activated sludge processes using Markov Chain Monte Carlo.
Sharifi, Soroosh; Murthy, Sudhir; Takács, Imre; Massoudieh, Arash
2014-03-01
One of the most important challenges in making activated sludge models (ASMs) applicable to design problems is identifying the values of their many stoichiometric and kinetic parameters. When wastewater characteristics data from full-scale biological treatment systems are used for parameter estimation, several sources of uncertainty, including uncertainty in measured data, external forcing (e.g. influent characteristics), and model structural errors, influence the value of the estimated parameters. This paper presents a Bayesian hierarchical modeling framework for the probabilistic estimation of activated sludge process parameters. The method provides the joint probability density functions (JPDFs) of stoichiometric and kinetic parameters by updating prior information regarding the parameters obtained from expert knowledge and the literature. The method also provides the posterior correlations between the parameters, as well as a measure of sensitivity of the different constituents with respect to the parameters. This information can be used to design experiments that provide higher information content regarding certain parameters. The method is illustrated using the ASM1 model to describe synthetically generated data from a hypothetical biological treatment system. The results indicate that data from full-scale systems can narrow down the ranges of some parameters substantially, whereas the amount of information they provide regarding other parameters is small, due to either large correlations between some of the parameters or a lack of sensitivity with respect to the parameters. Copyright © 2013 Elsevier Ltd. All rights reserved.
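The MCMC machinery itself is generic; a minimal random-walk Metropolis sketch for one hypothetical parameter (illustrative only, not ASM1 and not the paper's hierarchical sampler):

```python
import numpy as np

rng = np.random.default_rng(0)
obs = np.array([2.3, 2.9, 2.6, 3.1])        # hypothetical measurements

def log_post(mu):
    """Gaussian prior N(2.5, 1) times Gaussian likelihood (unit variance)."""
    return -0.5 * (mu - 2.5) ** 2 - 0.5 * np.sum((obs - mu) ** 2)

mu, chain = 2.5, []
for _ in range(20000):
    prop = mu + rng.normal(0.0, 0.3)         # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(mu):
        mu = prop                            # Metropolis accept/reject
    chain.append(mu)

draws = np.array(chain[5000:])               # discard burn-in
print(f"posterior mean {draws.mean():.3f}, 95% CI "
      f"({np.quantile(draws, 0.025):.3f}, {np.quantile(draws, 0.975):.3f})")
```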
Improving the Fit of a Land-Surface Model to Data Using its Adjoint
NASA Astrophysics Data System (ADS)
Raoult, Nina; Jupp, Tim; Cox, Peter; Luke, Catherine
2016-04-01
Land-surface models (LSMs) are crucial components of the Earth System Models (ESMs) which are used to make coupled climate-carbon cycle projections for the 21st century. The Joint UK Land Environment Simulator (JULES) is the land-surface model used in the climate and weather forecast models of the UK Met Office. In this study, JULES is automatically differentiated using commercial software from FastOpt, resulting in an analytical gradient, or adjoint, of the model. Using this adjoint, the adJULES parameter estimation system has been developed to search for locally optimum parameter sets by calibrating against observations. We present an introduction to the adJULES system and demonstrate its ability to improve the model-data fit using eddy covariance measurements of gross primary production (GPP) and latent heat (LE) fluxes. adJULES also has the ability to calibrate over multiple sites simultaneously. This feature is used to define new optimised parameter values for the 5 Plant Functional Types (PFTs) in JULES. The optimised PFT-specific parameters improve the performance of JULES over 90% of the FLUXNET sites used in the study. These reductions in error are shown and compared to reductions found due to site-specific optimisations. Finally, we show that calculation of the second derivative of JULES allows us to produce posterior probability density functions of the parameters, showing how knowledge of parameter values is constrained by observations.
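The role of the adjoint can be sketched on a toy quadratic problem: the adjoint supplies the exact cost gradient for the optimiser, and the inverse Hessian at the optimum gives a Laplace approximation to the parameter posterior. All names and numbers below are ours, not adJULES:

```python
import numpy as np
from scipy.optimize import minimize

obs = np.array([1.0, 2.0, 1.5])              # hypothetical flux observations
Jac = np.array([[1.0, 0.0],                  # Jacobian of a linear toy "model"
                [1.0, 1.0],
                [0.5, 1.0]])

def cost(p):                                 # 0.5 * sum of squared residuals
    r = Jac @ p - obs
    return 0.5 * r @ r

def grad(p):                                 # what an adjoint code provides
    return Jac.T @ (Jac @ p - obs)

res = minimize(cost, x0=np.zeros(2), jac=grad, method='BFGS')
print("optimal parameters:", res.x)
# Laplace step: the inverse Hessian at the optimum approximates the posterior
# covariance, mirroring the use of second derivatives described above.
print("approximate posterior covariance:\n", res.hess_inv)
```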
Mechanisms of Neurofeedback: A Computation-theoretic Approach.
Davelaar, Eddy J
2018-05-15
Neurofeedback training is a form of brain training in which information about a neural measure is fed back to the trainee, who is instructed to increase or decrease the value of that particular measure. This paper focuses on electroencephalography (EEG) neurofeedback, in which the neural measures of interest are the brain oscillations. To date, the neural mechanisms that underlie successful neurofeedback training are still unexplained. Such an understanding would benefit researchers, funding agencies, clinicians, regulatory bodies, and insurance firms. Based on recent empirical work, an emerging theory couched firmly within computational neuroscience is proposed that advocates a critical role of the striatum in modulating EEG frequencies. The theory is implemented as a computer simulation of peak alpha upregulation, but in principle any frequency band at one or more electrode sites could be addressed. The simulation successfully learns to increase its peak alpha frequency and demonstrates the influence of threshold setting, that is, the threshold that determines whether positive or negative feedback is provided. Analyses of the model suggest that neurofeedback can be likened to a search process that uses importance sampling to estimate the posterior probability distribution over striatal representational space, with each representation being associated with a distribution of values of the target EEG band. The model provides an important proof of concept to address pertinent methodological questions about how to understand and improve EEG neurofeedback success. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.
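The importance-sampling computation the model attributes to the striatum can be sketched generically as a self-normalized estimate of a posterior mean over a one-dimensional "representation" value; the numbers are illustrative, not the paper's simulation:

```python
import numpy as np
from scipy.stats import norm

# Prior over represented values (e.g. candidate peak alpha frequencies, Hz).
prior = norm(10.0, 2.0)
feedback = norm(11.0, 1.0)      # reward signal peaking at a hypothetical target

x = prior.rvs(size=20000, random_state=0)   # sample from the prior
w = feedback.pdf(x)                         # weight by the feedback likelihood
w /= w.sum()                                # self-normalize the weights
print("posterior mean estimate:", np.sum(w * x))   # exact answer here: 10.8
```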
Joint search and sensor management for geosynchronous satellites
NASA Astrophysics Data System (ADS)
Zatezalo, A.; El-Fallah, A.; Mahler, R.; Mehra, R. K.; Pham, K.
2008-04-01
Joint search and sensor management for space situational awareness presents daunting scientific and practical challenges, as it requires a simultaneous search for new space objects and catalog updates of the current ones. We demonstrate a new approach to joint search and sensor management by utilizing the Posterior Expected Number of Targets (PENT) as the objective function, an observation model for a space-based EO/IR sensor, and a Probability Hypothesis Density Particle Filter (PHD-PF) tracker. Simulation and results using actual geosynchronous satellites are presented.
NASA Technical Reports Server (NTRS)
Freilich, M. H.; Pawka, S. S.
1987-01-01
The statistics of Sxy estimates derived from orthogonal-component measurements are examined. Based on results of Goodman (1957), the probability density function (pdf) for Sxy(f) estimates is derived, and a closed-form solution for arbitrary moments of the distribution is obtained. Characteristic functions are used to derive the exact pdf of Sxy(tot). In practice, a simple Gaussian approximation is found to be highly accurate even for relatively few degrees of freedom. Implications for experiment design are discussed, and a maximum-likelihood estimator for a posteriori estimation is outlined.
Dynamic variable selection in SNP genotype autocalling from APEX microarray data.
Podder, Mohua; Welch, William J; Zamar, Ruben H; Tebbutt, Scott J
2006-11-30
Single nucleotide polymorphisms (SNPs) are DNA sequence variations, occurring when a single nucleotide--adenine (A), thymine (T), cytosine (C) or guanine (G)--is altered. Arguably, SNPs account for more than 90% of human genetic variation. Our laboratory has developed a highly redundant SNP genotyping assay consisting of multiple probes with signals from multiple channels for a single SNP, based on arrayed primer extension (APEX). This mini-sequencing method is a powerful combination of a highly parallel microarray with distinctive Sanger-based dideoxy terminator sequencing chemistry. Using this microarray platform, our current genotype calling system (known as SNP Chart) is capable of calling single SNP genotypes by manual inspection of the APEX data, which is time-consuming and exposed to user subjectivity bias. Using a set of 32 Coriell DNA samples plus three negative PCR controls as a training data set, we have developed a fully-automated genotyping algorithm based on simple linear discriminant analysis (LDA) using dynamic variable selection. The algorithm combines separate analyses based on the multiple probe sets to give a final posterior probability for each candidate genotype. We have tested our algorithm on a completely independent data set of 270 DNA samples, with validated genotypes, from patients admitted to the intensive care unit (ICU) of St. Paul's Hospital (plus one negative PCR control sample). Our method achieves a concordance rate of 98.9% with a 99.6% call rate for a set of 96 SNPs. By adjusting the threshold value for the final posterior probability of the called genotype, the call rate reduces to 94.9% with a higher concordance rate of 99.6%. We also reversed the two independent data sets in their training and testing roles, achieving a concordance rate up to 99.8%. The strength of this APEX chemistry-based platform is its unique redundancy having multiple probes for a single SNP. Our model-based genotype calling algorithm captures the redundancy in the system considering all the underlying probe features of a particular SNP, automatically down-weighting any 'bad data' corresponding to image artifacts on the microarray slide or failure of a specific chemistry. In this regard, our method is able to automatically select the probes which work well and reduce the effect of other so-called bad performing probes in a sample-specific manner, for any number of SNPs.
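A rough sketch of the calling step using scikit-learn's LDA on synthetic probe intensities (the real system adds sample-specific dynamic variable selection over multiple probe sets, which is not shown here):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# Synthetic 4-channel probe intensities for three genotype classes.
X = np.vstack([rng.normal(m, 0.6, size=(60, 4)) for m in (0.0, 1.5, 3.0)])
y = np.repeat(['AA', 'AB', 'BB'], 60)

lda = LinearDiscriminantAnalysis().fit(X, y)
posterior = lda.predict_proba(X)            # posterior probability per genotype
confidence = posterior.max(axis=1)

# Raising the posterior threshold trades call rate for concordance,
# as in the paper's 98.9% / 99.6% figures.
for thr in (0.0, 0.9, 0.99):
    called = confidence >= thr
    acc = (lda.predict(X[called]) == y[called]).mean()
    print(f"threshold {thr:0.2f}: call rate {called.mean():.3f}, "
          f"concordance {acc:.3f}")
```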
Bite force measurement system using pressure-sensitive sheet and silicone impression material.
Ando, Katsuya; Fuwa, Yuji; Kurosawa, Masahiro; Kondo, Takamasa; Goto, Shigemi
2009-03-01
This study was conducted to reduce the bias in measured values caused by the thickness of materials used in occlusal examinations. To this end, a silicone impression material for bite force measurement and an experimental model of a simplified stomatognathic system were employed in this study. By means of this experimental model, results showed that the effect of bias toward the posterior arch could be reduced in the anterior-posterior distribution of bite forces and in the occlusal contact areas due to the thickness of the materials used in occlusal examinations.
López-Suárez, Carlos; Gonzalo, Esther; Peláez, Jesús; Rodríguez, Verónica
2015-01-01
Background: In recent years there have been improvements in zirconia ceramic materials for replacing missing posterior teeth. To date, few in vitro studies have been carried out on the fracture resistance of veneered zirconia posterior fixed dental prostheses. This study investigated the fracture resistance and the failure mode of 3-unit zirconia-based posterior fixed dental prostheses fabricated with two CAD/CAM systems. Material and Methods: Twenty posterior fixed dental prostheses were studied. Samples were randomly divided into two groups (n=10 each) according to the zirconia ceramic analyzed: Lava and Procera. Specimens were loaded until fracture under static load. Data were analyzed using Wilcoxon's rank sum test and Wilcoxon's signed-rank test (P<0.05). Results: Partial fracture of the veneering porcelain occurred in 100% of the samples. Within each group, significant differences were shown between the veneering and the framework fracture resistance (P=0.002). The failure occurred in the connector cervical area in 80% of the cases. Conclusions: All fracture load values of the zirconia frameworks could be considered clinically acceptable. The connector area is the weak point of the restorations. Key words: Fixed dental prostheses, zirconium-dioxide, zirconia, fracture resistance, failure mode. PMID:26155341
Prediction of road accidents: A Bayesian hierarchical approach.
Deublein, Markus; Schubert, Matthias; Adey, Bryan T; Köhler, Jochen; Faber, Michael H
2013-03-01
In this paper a novel methodology for the prediction of the occurrence of road accidents is presented. The methodology utilizes a combination of three statistical methods: (1) gamma-updating of the occurrence rates of injury accidents and injured road users, (2) hierarchical multivariate Poisson-lognormal regression analysis taking into account correlations amongst multiple dependent model response variables and effects of discrete accident count data, e.g. over-dispersion, and (3) Bayesian inference algorithms, which are applied by means of data mining techniques supported by Bayesian Probabilistic Networks in order to represent non-linearity between risk indicating and model response variables, as well as different types of uncertainties which might be present in the development of the specific models. Prior Bayesian Probabilistic Networks are first established by means of multivariate regression analysis of the observed frequencies of the model response variables, e.g. the occurrence of an accident, and observed values of the risk indicating variables, e.g. degree of road curvature. Subsequently, parameter learning is done using updating algorithms, to determine the posterior predictive probability distributions of the model response variables, conditional on the values of the risk indicating variables. The methodology is illustrated through a case study using data from the Austrian rural motorway network. In the case study, the methodology is used to produce a model that predicts, on randomly selected road segments, the expected number of accidents in which an injury has occurred and the expected number of light, severe and fatally injured road users. Additionally, the methodology is used for geo-referenced identification of road sections with increased occurrence probabilities of injury accident events on a road link between two Austrian cities. It is shown that the proposed methodology can be used to develop models to estimate the occurrence of road accidents for any road network, provided that the required data are available. Copyright © 2012 Elsevier Ltd. All rights reserved.
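Step (1), gamma-updating of an occurrence rate, is conjugate Gamma-Poisson updating; a small sketch with invented counts (the prior and data are ours, not the Austrian case study):

```python
from scipy.stats import gamma

# Prior for the injury-accident rate on a segment: Gamma(shape, rate),
# here mean 2.0 accidents/year with sd 1.0 (invented prior knowledge).
a0, b0 = 4.0, 2.0

# Observed: 9 injury accidents over 3 years on the segment.
k, t = 9, 3.0

a1, b1 = a0 + k, b0 + t        # conjugate update: the posterior is Gamma too
post = gamma(a=a1, scale=1.0 / b1)
print("posterior mean rate:", post.mean())          # (a0+k)/(b0+t) = 2.6
print("95% credible interval:", post.interval(0.95))
```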
Chen, Ling-Yun; Chen, Jin-Ming; Gituru, Robert Wahiti; Wang, Qing-Feng
2012-03-10
Hydrocharitaceae is a fully aquatic monocot family consisting of 18 genera with approximately 120 species. The family includes both freshwater and marine aquatics and exhibits great diversity in form and habit, including annual and perennial life histories; submersed, partially submersed and floating leaf habits; and linear to orbicular leaf shapes. The family has a cosmopolitan distribution and is well represented in the Tertiary fossil record in Europe. At present, the historical biogeography of the family is not well understood and the generic relationships remain controversial. In this study we investigated the phylogeny and biogeography of Hydrocharitaceae by integrating fossils and DNA sequences from eight genes. We also conducted ancestral state reconstruction for three morphological characters. Phylogenetic analyses produced a phylogeny with most branches strongly supported, with bootstrap values greater than 95% and Bayesian posterior probability values of 1.0. Stratiotes is the first diverging lineage, with the remaining genera in two clades: one clade consists of Lagarosiphon, Ottelia, Blyxa, Apalanthe, Elodea and Egeria; the other consists of Hydrocharis-Limnobium, Thalassia, Enhalus, Halophila, Najas, Hydrilla, Vallisneria, Nechamandra and Maidenia. Biogeographic analyses (DIVA, Mesquite) and divergence time estimates (BEAST) resolved the most recent common ancestor of Hydrocharitaceae as being in Asia during the Late Cretaceous and Palaeocene (54.7-72.6 Ma). Dispersals (including long-distance dispersal and migrations through the Tethys seaway and land bridges) probably played major roles in the intercontinental distribution of this family. Ancestral state reconstruction suggested that in Hydrocharitaceae the evolution of dioecy is bidirectional, viz., from dioecy to hermaphroditism and from hermaphroditism to dioecy, and that the aerial-submerged leaf habit and short-linear leaf shape are the ancestral states. Our study has shed light on the previously controversial generic phylogeny of Hydrocharitaceae. The study has resolved the historical biogeography of this family and supported dispersal as the most likely explanation for the intercontinental distribution. We have also provided valuable information for understanding the evolution of breeding system and leaf phenotype in aquatic monocots.
Evaluation of the 1077 keV γ-ray emission probability from 68Ga decay
NASA Astrophysics Data System (ADS)
Huang, Xiao-Long; Jiang, Li-Yang; Chen, Xiong-Jun; Chen, Guo-Chang
2014-04-01
68Ga decays to the excited states of 68Zn through the electron capture decay mode. The recommended values for the emission probability of the 1077 keV γ-ray given by the ENSDF and DDEP databases all use data from absolute measurements. In 2011, JIANG Li-Yang deduced a new value for the 1077 keV γ-ray emission probability by measuring the 69Ga(n,2n)68Ga reaction cross section. The new value is about 20% lower than values obtained from previous absolute measurements and evaluations. In this paper, the discrepancies among the measurements and evaluations are analyzed carefully and a new value is recommended. Our recommended value for the emission probability of the 1077 keV γ-ray is (2.72±0.16)%.
Page, Phillip A.; Lamberth, John; Abadie, Ben; Boling, Robert; Collins, Robert; Linton, Russell
1993-01-01
The deceleration phase of the pitching mechanism requires forceful eccentric contraction of the posterior rotator cuff. Because traditional isotonic strengthening may not be specific to this eccentric pattern, a more effective and functional means of strengthening the posterior rotator cuff is needed. Twelve collegiate baseball pitchers performed a moderate intensity isotonic dumbbell strengthening routine for 6 weeks. Six of the 12 subjects were randomly assigned to an experimental group and placed on a Theraband® Elastic Band strengthening routine in a functional-diagonal pattern to emphasize the eccentric contraction of the posterior rotator cuff, in addition to the isotonic routine. The control group (n = 6) performed only the isotonic exercises. Both groups were evaluated on a KIN-COM® isokinetic dynamometer in a functional diagonal pattern. Pretest and posttest average eccentric force production of the posterior rotator cuff was compared at two speeds, 60 and 180°/s. Data were analyzed with an analysis of covariance at the .05 level with significance at 60°/s. Values at 180°/s, however, were not significant. Eccentric force production at 60°/s increased more during training in the experimental group (+19.8%) than in the control group (-1.6%). There was no difference in the two groups at 180°/s; both decreased (8 to 15%). Theraband was effective at 60°/s in functional eccentric strengthening of the posterior rotator cuff in the pitching shoulder. PMID:16558251
Mayama, Michinori; Uno, Kaname; Tano, Sho; Yoshihara, Masato; Ukai, Mayu; Kishigami, Yasuyuki; Ito, Yasuhiro; Oguchi, Hidenori
2016-08-01
Posterior reversible encephalopathy syndrome is observed frequently in patients with eclampsia; however, it has also been reported in some patients with preeclampsia. The aim of this study was to determine the incidence of posterior reversible encephalopathy syndrome in patients with preeclampsia and eclampsia and to assess whether these 2 patient groups share similar pathophysiologic backgrounds by comparing clinical and radiologic characteristics. This was a retrospective cohort study of 4849 pregnant patients. A total of 49 patients with eclampsia and preeclampsia and with neurologic symptoms underwent magnetic resonance imaging and magnetic resonance angiography; 10 patients were excluded from further analysis because of a history of epilepsy or dissociative disorder. The age, parity, blood pressure, and routine laboratory data at the onset of symptoms were also recorded. Among 39 patients with neurologic symptoms, 12 of 13 patients with eclampsia (92.3%) and 5 of 26 patients with preeclampsia (19.2%) experienced the development of posterior reversible encephalopathy syndrome. Whereas age and blood pressure at onset were not significantly different between patients with and without encephalopathy, hematocrit, serum creatinine, aspartate transaminase, alanine transaminase, and lactate dehydrogenase values were significantly higher in patients with posterior reversible encephalopathy syndrome than in those without magnetic resonance imaging abnormalities. In contrast, patients with eclampsia with posterior reversible encephalopathy syndrome did not show any significant differences in clinical and laboratory data compared with patients with preeclampsia with posterior reversible encephalopathy syndrome. In addition to the parietooccipital regions, atypical regions (such as the frontal and temporal lobes) and the basal ganglia were also involved in patients with eclampsia and patients with preeclampsia with posterior reversible encephalopathy syndrome. Finally, intraparenchymal hemorrhage was detected in 1 patient with eclampsia, and subarachnoid hemorrhage was observed in 1 patient with preeclampsia. Although the incidence of posterior reversible encephalopathy syndrome was high in patients with eclampsia, nearly 20% of the patients with preeclampsia with neurologic symptoms also experienced posterior reversible encephalopathy syndrome. The similarities in clinical and radiologic findings of posterior reversible encephalopathy syndrome between the 2 groups support the hypothesis that these 2 patient groups have a shared pathophysiologic background. Thus, magnetic resonance imaging studies should be considered for patients with the recent onset of neurologic symptoms, regardless of the development of eclampsia. Copyright © 2016 Elsevier Inc. All rights reserved.
Bhadri, Prashant R; Rowley, Adrian P; Khurana, Rahul N; Deboer, Charles M; Kerns, Ralph M; Chong, Lawrence P; Humayun, Mark S
2007-05-01
To evaluate the effectiveness of a prototype stereoscopic camera-based viewing system (Digital Microsurgical Workstation, three-dimensional (3D) Vision Systems, Irvine, California, USA) for anterior and posterior segment ophthalmic surgery. Institutional-based prospective study. Anterior and posterior segment surgeons performed designated standardized tasks on porcine eyes after training on prosthetic plastic eyes. Both anterior and posterior segment surgeons were able to complete tasks requiring minimal or moderate stereoscopic viewing. The results indicate that the system provides improved ergonomics. Improvements in key viewing performance areas would further enhance its value over a conventional operating microscope. The performance of the prototype system is not on par with the planned commercial system. With continued development of this technology, the three-dimensional system may be a novel viewing system in ophthalmic surgery with improved ergonomics relative to traditional microscopic viewing.
A specific role for posterior dorsolateral striatum in human habit learning
Tricomi, Elizabeth; Balleine, Bernard W.; O’Doherty, John P.
2009-01-01
Habits are characterized by an insensitivity to their consequences and, as such, can be distinguished from goal-directed actions. The neural basis of the development of demonstrably outcome insensitive habitual actions in humans has not been previously characterized. In this experiment, we show that extensive training on a free-operant task reduces the sensitivity of participants’ behavior to a reduction in outcome value. Analysis of functional magnetic resonance imaging (fMRI) data acquired during training revealed a significant increase in task-related cue sensitivity in a right posterior putamen/globus pallidus region as training progressed. These results provide evidence for a shift from goal-directed to habit-based control of instrumental actions in humans, and suggest that cue-driven activation in a specific region of dorsolateral posterior putamen may contribute to the habitual control of behavior in humans. PMID:19490086
Nonmetric traits of permanent posterior teeth in Kerala population: A forensic overview
Baby, Tibin K; Sunil, S; Babu, Sharlene Sara
2017-01-01
Introduction: Dental morphology is a highly heritable characteristic which is stable with time and has a fairly high state of preservation. Nonmetric dental traits have a crucial role in ethnic classification of a population, which helps in forensic racial identification. Aims and Objectives: To determine the frequency and variability of possible nonmetric tooth traits using extracted permanent posterior teeth from the Kerala population for discerning racial ethnicity. Materials and Methods: This qualitative, cross-sectional study was carried out using 1743 extracted intact permanent posterior teeth collected from different dental clinics situated all over Kerala. Results: The more common features on premolars were multiple lingual cusps (31.21%), distal accessory ridges (16.28%) and Tomes' root (17.9%). In upper first molars, Carabelli trait expression was 17.78%, and other common features included the metaconule, cusp 5 and enamel extensions. Conclusion: Posterior tooth traits had variable expression in the study population. The low prevalence rate of the Carabelli trait in this study is characteristic of Asian populations. This research explored valuable tooth traits for understanding the racial ethnicity of the Kerala population. PMID:28932045
Glenohumeral internal rotation deficit in throwing athletes: current perspectives
Rose, Michael B; Noonan, Thomas
2018-01-01
Glenohumeral internal rotation deficit (GIRD) is an adaptive process in which the throwing shoulder experiences a loss of internal rotation (IR). GIRD has most commonly been defined by a loss of >20° of IR compared to the contralateral shoulder. Total rotational motion of the shoulder is the sum of internal and external rotation and may be more important than the absolute value of IR loss. Pathologic GIRD has been defined as a loss of IR combined with a loss of total rotational motion. The leading pathologic process in GIRD is posterior capsular and rotator-cuff tightness, due to the repetitive cocking that occurs with the overhead throwing motion. GIRD has been associated with numerous pathologic conditions, including posterior superior labral tears, partial articular-sided rotator-cuff tears, and superior labral anterior-to-posterior tears. The mainstay of treatment for patients with GIRD is posterior capsular stretching and strengthening to improve scapular mechanics. In patients who fail nonoperative therapy, shoulder arthroscopy can be performed. Arthroscopic surgery in the high-level throwing athlete should be to restore them to their functional baseline with the minimum amount of intervention possible. PMID:29593438
Shi, Xiaojun; Shen, Bin; Kang, Pengde; Yang, Jing; Zhou, Zongke; Pei, Fuxing
2013-12-01
To evaluate and quantify the effect of the tibial slope on the postoperative maximal knee flexion and stability in posterior-stabilized total knee arthroplasty (TKA). Fifty-six patients (65 knees) who had undergone TKA with posterior-stabilized prostheses were divided into 3 groups according to the measured tibial slopes: Group 1: ≤4°, Group 2: 4°-7° and Group 3: >7°. The preoperative range of motion, the change in the posterior condylar offset, the elevation of the joint line, the postoperative tibiofemoral angle and the preoperative and postoperative Hospital for Special Surgery (HSS) scores were recorded. The tibial anteroposterior translation was measured using the Kneelax 3 Arthrometer at both the 30° and the 90° flexion angles. The mean values of the postoperative maximal knee flexion were 101° (SD 5), 106° (SD 5) and 113° (SD 9) in Groups 1, 2 and 3, respectively. A significant difference was found in the postoperative maximal flexion between the 3 groups (P < 0.001). However, no significant differences were found between the 3 groups in the postoperative HSS scores, the changes in the posterior condylar offset, the elevation of the joint line or the tibial anteroposterior translation at either the 30° or the 90° flexion angle. A 1° increase in the tibial slope resulted in a 1.8° flexion increment (b = 1.8, R² = 0.463, P < 0.001). An increase in the posterior tibial slope can significantly increase the postoperative maximal knee flexion. A tibial slope with an appropriate flexion and extension gap balance during the operation does not affect joint stability.
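A small worked example of the reported regression, assuming the Group 1 mean can stand in for a baseline intercept (an illustrative assumption, not a model fitted in the paper): each additional degree of posterior tibial slope adds about 1.8° of maximal flexion.

SLOPE_COEFF = 1.8        # reported: degrees of flexion gained per degree of slope
BASELINE_FLEXION = 101.0 # Group 1 mean flexion (slope <= 4 deg) -- assumed intercept
BASELINE_SLOPE = 4.0

def predicted_flexion(tibial_slope_deg):
    return BASELINE_FLEXION + SLOPE_COEFF * (tibial_slope_deg - BASELINE_SLOPE)

for slope in (4, 7, 10):
    print(f"slope {slope} deg -> ~{predicted_flexion(slope):.0f} deg maximal flexion")

The predictions at 7° and 10° land near the reported Group 2 and Group 3 means (106° and 113°), consistent with the reported coefficient.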
Tykocki, Tomasz; Kostkiewicz, Bogusław
2014-09-01
Intracranial aneurysms (IAs) located in the posterior circulation are considered to have higher annual bleed rates than those in the anterior circulation. The aim of the study was to compare the morphometric factors differentiating between IAs located in the anterior and posterior cerebral circulation. A total of 254 IAs diagnosed between 2009 and 2012 were retrospectively analyzed. All patients underwent diagnostic three-dimensional rotational angiography. IAs were assigned to either the anterior or posterior cerebral circulation subset for the analysis. Means were compared with a t-test. Univariate and stepwise logistic regression analyses were used to determine the predictors of morphometric differences between the groups. For the defined predictors, receiver-operating characteristic (ROC) curves and interactive dot diagrams were calculated with the cutoff values of the morphometric factors. The number of anterior cerebral circulation IAs was 179 (70.5%); 141 (55.5%) aneurysms were ruptured. Significant differences between anterior and posterior circulation IAs were found for the parent artery size (5.08 ± 1.8 mm vs. 3.95 ± 1.5 mm; p < 0.05), size ratio (2.22 ± 0.9 vs. 3.19 ± 1.8; p < 0.045) and aspect ratio (AR) (1.91 ± 0.8 vs. 2.75 ± 1.8; p = 0.02). Predictive factors differentiating anterior and posterior circulation IAs were the AR (OR = 2.20; 95% CI 1.80-2.70) and parent artery size (OR = 0.44; 95% CI 0.38-0.54). The cutoff point in the ROC curve was 2.185 for the AR and 4.89 mm for parent artery size. Aspect ratio and parent artery size were found to be predictive morphometric factors in differentiating between anterior and posterior cerebral IAs.
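A minimal sketch applying the reported ROC cutoffs: given the group means above, AR > 2.185 and parent artery size < 4.89 mm both point toward the posterior circulation. Requiring the two cutoffs to agree is an illustrative decision rule, not the paper's logistic regression model.

AR_CUTOFF = 2.185        # aspect ratio cutoff from the ROC analysis
PARENT_CUTOFF_MM = 4.89  # parent artery size cutoff (mm)

def suggests_posterior_circulation(aspect_ratio, parent_artery_mm):
    """True if both reported cutoffs point toward a posterior-circulation IA."""
    high_ar = aspect_ratio > AR_CUTOFF
    small_parent = parent_artery_mm < PARENT_CUTOFF_MM
    return high_ar and small_parent

# Group means from the abstract: posterior IAs had higher AR, smaller parent arteries.
print(suggests_posterior_circulation(aspect_ratio=2.75, parent_artery_mm=3.95))  # True
print(suggests_posterior_circulation(aspect_ratio=1.91, parent_artery_mm=5.08))  # False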
Yeung, Carol K.L.; Tsai, Pi-Wen; Chesser, R. Terry; Lin, Rong-Chien; Yao, Cheng-Te; Tian, Xiu-Hua; Li, Shou-Hsien
2011-01-01
Although founder effect speciation has been a popular theoretical model for the speciation of geographically isolated taxa, its empirical importance has remained difficult to evaluate due to the intractability of past demography, which in a founder effect speciation scenario would involve a speciational bottleneck in the emergent species and the complete cessation of gene flow following divergence. Using regression-weighted approximate Bayesian computation, we tested the validity of these two fundamental conditions of founder effect speciation in a pair of sister species with disjunct distributions: the royal spoonbill Platalea regia in Australasia and the black-faced spoonbill Pl. minor in eastern Asia. When compared with genetic polymorphism observed at 20 nuclear loci in the two species, simulations showed that the founder effect speciation model had an extremely low posterior probability (1.55 × 10⁻⁸) of producing the extant genetic pattern. In contrast, speciation models that allowed for postdivergence gene flow were much more probable (posterior probabilities were 0.37 and 0.50 for the bottleneck with gene flow and the gene flow models, respectively) and postdivergence gene flow persisted for a considerable period of time (more than 80% of the divergence history in both models) following initial divergence (median = 197,000 generations, 95% credible interval [CI]: 50,000-478,000, for the bottleneck with gene flow model; and 186,000 generations, 95% CI: 45,000-477,000, for the gene flow model). Furthermore, the estimated population size reduction in Pl. regia to 7,000 individuals (median, 95% CI: 487-12,000, according to the bottleneck with gene flow model) was unlikely to have been severe enough to be considered a bottleneck. Therefore, these results do not support founder effect speciation in Pl. regia but indicate instead that the divergence between Pl. regia and Pl. minor was probably driven by selection despite continuous gene flow. In this light, we discuss the potential importance of evolutionarily labile traits with significant fitness consequences, such as migratory behavior and habitat preference, in facilitating divergence of the spoonbills.
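The model comparison described here can be sketched with rejection-sampling approximate Bayesian computation: simulate a summary statistic under each demographic model, keep simulations close to the observed value, and estimate posterior model probabilities from the acceptance counts. The toy one-dimensional simulators, their parameters, and the tolerance below are assumptions for illustration; the study used regression-weighted ABC on polymorphism at 20 nuclear loci.

import random

def simulate_summary(model):
    # Hypothetical one-dimensional summary statistic (e.g., mean diversity).
    if model == "founder_effect":        # bottleneck, no postdivergence gene flow
        return random.gauss(0.2, 0.05)
    if model == "bottleneck_gene_flow":  # bottleneck with gene flow
        return random.gauss(0.5, 0.1)
    return random.gauss(0.55, 0.1)       # gene flow, no severe bottleneck

observed, tolerance = 0.52, 0.05
models = ["founder_effect", "bottleneck_gene_flow", "gene_flow"]
accepted = {m: 0 for m in models}

for _ in range(100_000):
    m = random.choice(models)            # uniform prior over the three models
    if abs(simulate_summary(m) - observed) < tolerance:
        accepted[m] += 1

total = sum(accepted.values())
for m in models:
    print(m, accepted[m] / total)        # approximate posterior model probability

Under these toy settings the founder-effect model is essentially never accepted while the two gene-flow models split the posterior mass, mirroring the qualitative pattern the study reports.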
Assessing the Probability that a Finding Is Genuine for Large-Scale Genetic Association Studies
Kuo, Chia-Ling; Vsevolozhskaya, Olga A.; Zaykin, Dmitri V.
2015-01-01
Genetic association studies routinely involve massive numbers of statistical tests accompanied by P-values. Whole genome sequencing technologies increased the potential number of tested variants to tens of millions. The more tests are performed, the smaller the P-value required to be deemed significant. However, a small P-value is not equivalent to small chances of a spurious finding, and significance thresholds may fail to serve as efficient filters against false results. While the Bayesian approach can provide a direct assessment of the probability that a finding is spurious, its adoption in association studies has been slow, due in part to the ubiquity of P-values and the automated way they are, as a rule, produced by software packages. Attempts to design simple ways to convert an association P-value into the probability that a finding is spurious have been met with difficulties. The False Positive Report Probability (FPRP) method has gained increasing popularity. However, FPRP is not designed to estimate the probability for a particular finding, because it is defined for an entire region of hypothetical findings with P-values at least as small as the one observed for that finding. Here we propose a method that lets researchers extract the probability that a finding is spurious directly from a P-value. Considering the counterpart of that probability, we term this method POFIG: the Probability that a Finding is Genuine. Our approach shares FPRP's simplicity, but gives a valid probability that a finding is spurious given a P-value. In addition to its straightforward interpretation, POFIG has desirable statistical properties. The POFIG average across a set of tentative associations provides an estimated proportion of false discoveries in that set. POFIGs are easily combined across studies and are immune to multiple testing and selection bias. We illustrate an application of the POFIG method via analysis of GWAS associations with Crohn's disease. PMID:25955023
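The POFIG estimator itself is not reproduced in the abstract, so the sketch below shows the simpler FPRP-style calculation it is contrasted with: the probability that a finding is spurious, given the significance region P ≤ p, a prior probability that the association is real, and the study's power. All numerical inputs are illustrative.

def fprp(p_value, power, prior_true):
    """False Positive Report Probability for the significance region P <= p_value."""
    prior_null = 1.0 - prior_true
    # Bayes' theorem: P(null | P <= p), with P(P <= p | null) = p_value and
    # P(P <= p | alternative) approximated by the study's power.
    return (p_value * prior_null) / (p_value * prior_null + power * prior_true)

# Even a "genome-wide significant" P-value leaves a nonzero chance that the
# finding is spurious when true associations are rare a priori.
print(fprp(p_value=5e-8, power=0.8, prior_true=1e-4))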
Sakallioğlu, Oner; Düzer, Sertaç; Kapusuz, Zeliha
2014-01-01
The aim of our study was to investigate the efficacy of a suturation technique, applied after completing the tonsillectomy procedure, for posttonsillectomy pain control in adult patients. Between August 2010 and February 2011, 44 adult patients aged 16 to 41 years who underwent tonsillectomy at Elaziğ Training and Research Hospital Otorhinolaryngology Clinic were included in the study. After the tonsillectomy procedure, the anterior and posterior tonsillar arches were sutured to each other, thereby increasing the mucosa-covered area of the tonsillectomy beds. Twenty-two patients who received posttonsillectomy suturation served as the study group, and the remaining 22 patients who did not served as the control group. A visual analogue scale (VAS) was used to evaluate the degree of postoperative pain (0 no pain, 10 worst pain). A two-way ANOVA with repeated measures was used for statistical analysis of the VAS values. P < 0.05 was accepted as statistically significant. The effect of time (each postoperative day) on VAS values was significant. The differences in mean VAS values between the study and control groups on postoperative days 1, 3, 7 and 10 were statistically significant (P < 0.05). The severity of posttonsillectomy pain was less in study group patients than in control group patients. Suturation of the anterior and posterior tonsillar arches after tonsillectomy was found effective in alleviating posttonsillectomy pain in adult patients.
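A hedged sketch of the analysis described (repeated measures on VAS): the within-subject day effect can be checked with statsmodels' AnovaRM on one group at a time. The simulated scores, group size, and effect sizes below are placeholders rather than the study's data, and the full between-group comparison would require a mixed-design model.

import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(1)
days = [1, 3, 7, 10]                   # postoperative assessment days
rows = [{"patient": pid, "day": day,
         "vas": max(0.0, 8 - 0.6 * day + rng.normal(0, 1))}
        for pid in range(22)           # 22 patients, as in the study group
        for day in days]
df = pd.DataFrame(rows)

# Within-subject effect of postoperative day on VAS score.
print(AnovaRM(df, depvar="vas", subject="patient", within=["day"]).fit())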
Clustering and variable selection in the presence of mixed variable types and missing data.
Storlie, C B; Myers, S M; Katusic, S K; Weaver, A L; Voigt, R G; Croarkin, P E; Stoeckel, R E; Port, J D
2018-05-17
We consider the problem of model-based clustering in the presence of many correlated, mixed continuous, and discrete variables, some of which may have missing values. Discrete variables are treated with a latent continuous variable approach, and the Dirichlet process is used to construct a mixture model with an unknown number of components. Variable selection is also performed to identify the variables that are most influential for determining cluster membership. The work is motivated by the need to cluster patients thought to potentially have autism spectrum disorder on the basis of many cognitive and/or behavioral test scores. There are a modest number of patients (486) in the data set along with many (55) test score variables (many of which are discrete valued and/or missing). The goal of the work is to (1) cluster these patients into similar groups to help identify those with similar clinical presentation and (2) identify a sparse subset of tests that inform the clusters in order to eliminate unnecessary testing. The proposed approach compares very favorably with other methods via simulation of problems of this type. The results of the autism spectrum disorder analysis suggested 3 clusters to be most likely, while only 4 test scores had high (>0.5) posterior probability of being informative. This will result in much more efficient and informative testing. The need to cluster observations on the basis of many correlated, continuous/discrete variables with missing values is a common problem in the health sciences as well as in many other disciplines. Copyright © 2018 John Wiley & Sons, Ltd.
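A minimal sketch in the spirit of this approach: scikit-learn's BayesianGaussianMixture with a Dirichlet-process prior clusters continuous data with an effectively unknown number of components by shrinking the weights of unneeded ones. The latent-variable handling of discrete and missing values and the variable-selection step from the paper are not reproduced; the toy data and component cap are assumptions.

import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# Toy continuous data: three latent groups, five "test score" variables.
X = np.vstack([rng.normal(loc, 1.0, size=(150, 5)) for loc in (-3, 0, 3)])

dpgmm = BayesianGaussianMixture(
    n_components=10,  # generous upper bound; unneeded components get ~0 weight
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(X)

labels = dpgmm.predict(X)
print("non-empty components:", int(np.sum(dpgmm.weights_ > 0.01)))
print("cluster sizes:", np.bincount(labels))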
Regional cerebral blood flow in childhood headache
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roach, E.S.; Stump, D.A.
1989-06-01
Regional cerebral blood flow (rCBF) was measured in 16 cranial regions in 23 children and adolescents with frequent headaches using the non-invasive Xenon-133 inhalation technique. Blood flow response to 5% carbon dioxide (CO2) was also determined in 21 patients, while response to 50% oxygen was measured in the two patients with hemoglobinopathy. Included were 10 patients with a clinical diagnosis of migraine, 4 with musculoskeletal headaches, and 3 with features of both types. Also studied were 2 patients with primary thrombocythemia, 2 patients with hemoglobinopathy and headaches, 1 patient with polycythemia, and 1 with headaches following trauma. With two exceptions, rCBF determinations were done during an asymptomatic period. Baseline rCBF values tended to be higher in these young patients than in young adults studied in our laboratory. A localized reduction in the expected blood flow surge after CO2 inhalation, most often noted posteriorly, was seen in 8 of the 13 patients with vascular headaches, but in none of the musculoskeletal headache group. Both patients with primary thrombocythemia had normal baseline flow values and altered responsiveness to CO2 similar to that seen in migraineurs; thus, the frequently reported headaches and transient neurologic signs with primary thrombocythemia are probably not due to microvascular obstruction as previously suggested. These data support the concept of pediatric migraine as a disorder of vasomotor function and also add to our knowledge of normal rCBF values in younger patients. Demonstration of altered vasomotor reactivity to CO2 could prove helpful in children whose headache is atypical.