NASA Astrophysics Data System (ADS)
Morse, Brad S.; Pohll, Greg; Huntington, Justin; Rodriguez Castillo, Ramiro
2003-06-01
In 1992, Mexican researchers discovered concentrations of arsenic in excess of World Health Organization (WHO) standards in several municipal wells in the Zimapan Valley of Mexico. This study describes a method to delineate a capture zone for one of the most highly contaminated wells to aid in future well siting. A stochastic approach was used to model the capture zone because of the high level of uncertainty in several input parameters. Two stochastic techniques were performed and compared: "standard" Monte Carlo analysis and the generalized likelihood uncertainty estimator (GLUE) methodology. The GLUE procedure differs from standard Monte Carlo analysis in that it incorporates a goodness of fit (termed a likelihood measure) in evaluating the model. This allows more information (in this case, head data) to be used in the uncertainty analysis, resulting in smaller prediction uncertainty. Two likelihood measures are tested in this study to determine which is in better agreement with the observed heads. While the standard Monte Carlo approach does not aid in parameter estimation, the GLUE methodology indicates best-fit models when hydraulic conductivity is approximately 10^-6.5 m/s, with vertically isotropic conditions and large quantities of interbasin flow entering the basin. Probabilistic isochrones (capture zone boundaries) are then presented, and as predicted, the GLUE-derived capture zones are significantly smaller in area than those from the standard Monte Carlo approach.
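A minimal sketch of the contrast described above, under toy assumptions: a hypothetical one-parameter head response, a single observed head, and an illustrative Gaussian-type likelihood measure with a behavioural threshold (none of these are the study's actual choices).

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical forward model: observed head responds linearly to log10(K).
def simulate_head(log10K):
    return 100.0 - 12.0 * (log10K + 6.5)

obs_head, sigma = 100.3, 0.5            # one observed head (m) and its error

# Standard Monte Carlo: sample the prior; every draw counts equally.
log10K = rng.uniform(-8.0, -5.0, 20000)
pred = simulate_head(log10K)

# GLUE: score each draw with a likelihood measure and retain a
# "behavioural" subset above a threshold.
like = np.exp(-0.5 * ((pred - obs_head) / sigma) ** 2)
behavioural = like > 0.1 * like.max()

print("prior 90% interval of log10K:", np.percentile(log10K, [5, 95]))
print("GLUE  90% interval of log10K:",
      np.percentile(log10K[behavioural], [5, 95]))
```

Because the behavioural subset is conditioned on the head data, its spread (and hence any capture-zone statistic derived from it) is narrower than the unconditioned Monte Carlo spread, mirroring the abstract's conclusion.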
MCMC multilocus lod scores: application of a new approach.
George, Andrew W; Wijsman, Ellen M; Thompson, Elizabeth A
2005-01-01
On extended pedigrees with extensive missing data, the calculation of multilocus likelihoods for linkage analysis is often beyond the computational bounds of exact methods. Growing interest therefore surrounds the implementation of Monte Carlo estimation methods. In this paper, we demonstrate the speed and accuracy of a new Markov chain Monte Carlo method for the estimation of linkage likelihoods through an analysis of real data from a study of early-onset Alzheimer's disease. For those data sets where comparison with exact analysis is possible, we achieved up to a 100-fold increase in speed. Our approach is implemented in the program lm_bayes within the framework of the freely available MORGAN 2.6 package for Monte Carlo genetic analysis (http://www.stat.washington.edu/thompson/Genepi/MORGAN/Morgan.shtml).
A New Monte Carlo Method for Estimating Marginal Likelihoods.
Wang, Yu-Bo; Chen, Ming-Hui; Kuo, Lynn; Lewis, Paul O
2018-06-01
Evaluating the marginal likelihood in Bayesian analysis is essential for model selection. Estimators based on a single Markov chain Monte Carlo sample from the posterior distribution include the harmonic mean estimator and the inflated density ratio estimator. We propose a new class of Monte Carlo estimators based on this single Markov chain Monte Carlo sample. This class can be thought of as a generalization of the harmonic mean and inflated density ratio estimators using a partition weighted kernel (likelihood times prior). We show that our estimator is consistent and has better theoretical properties than the harmonic mean and inflated density ratio estimators. In addition, we provide guidelines on choosing optimal weights. Simulation studies were conducted to examine the empirical performance of the proposed estimator. We further demonstrate the desirable features of the proposed estimator with two real data sets: one is from a prostate cancer study using an ordinal probit regression model with latent variables; the other is for the power prior construction from two Eastern Cooperative Oncology Group phase III clinical trials using the cure rate survival model with similar objectives.
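For context, a sketch of the single-sample idea the paper generalizes: the harmonic mean estimator computed from posterior draws in a conjugate normal toy model, where the exact marginal likelihood can be checked by quadrature. All model settings are illustrative.

```python
import numpy as np
from scipy.special import logsumexp
from scipy.stats import norm

rng = np.random.default_rng(0)

# Conjugate model: y_i ~ N(theta, 1), theta ~ N(0, 4); illustrative data.
n, m0, t2 = 50, 0.0, 4.0
y = rng.normal(1.0, 1.0, n)

def loglik(theta):
    theta = np.atleast_1d(theta)
    return norm.logpdf(y[:, None], theta, 1.0).sum(axis=0)

# "Exact" log marginal likelihood by brute-force quadrature over theta.
grid = np.linspace(-5, 5, 20001)
exact = logsumexp(loglik(grid) + norm.logpdf(grid, m0, np.sqrt(t2))) \
        + np.log(grid[1] - grid[0])

# Exact posterior draws (conjugacy), then the harmonic mean estimator:
# 1/m(y) is estimated by the average of 1/L(theta_s) over posterior draws.
prec = n + 1.0 / t2
draws = rng.normal(y.sum() / prec, 1.0 / np.sqrt(prec), 100_000)
log_hme = -(logsumexp(-loglik(draws)) - np.log(draws.size))

print(f"exact log m(y): {exact:.3f}   harmonic mean estimate: {log_hme:.3f}")
# The harmonic mean estimator is notoriously unstable, one motivation for
# the inflated density ratio and partition-weighted kernel estimators.
```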
NASA Astrophysics Data System (ADS)
Núñez, M.; Robie, T.; Vlachos, D. G.
2017-10-01
Kinetic Monte Carlo (KMC) simulation provides insights into catalytic reactions unobtainable with either experiments or mean-field microkinetic models. Sensitivity analysis of KMC models assesses the robustness of the predictions to parametric perturbations and identifies rate determining steps in a chemical reaction network. Stiffness in the chemical reaction network, a ubiquitous feature, demands lengthy run times for KMC models and renders efficient sensitivity analysis based on the likelihood ratio method unusable. We address the challenge of efficiently conducting KMC simulations and performing accurate sensitivity analysis in systems with unknown time scales by employing two acceleration techniques: rate constant rescaling and parallel processing. We develop statistical criteria that ensure sufficient sampling of non-equilibrium steady state conditions. Our approach provides the twofold benefit of accelerating the simulation itself and enabling likelihood ratio sensitivity analysis, which provides further speedup relative to finite difference sensitivity analysis. As a result, the likelihood ratio method can be applied to real chemistry. We apply our methodology to the water-gas shift reaction on Pt(111).
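The likelihood ratio (score-function) trick the abstract relies on can be shown on a toy Poisson model; the output function and parameter values are arbitrary. Unlike finite differences, a single simulation set yields the derivative.

```python
import numpy as np

rng = np.random.default_rng(2)
theta = 3.0            # rate parameter of a toy stochastic model
S = 200_000

def f(n):              # output whose expected sensitivity we want
    return n ** 2      # E[N^2] = theta + theta^2, so d/dtheta = 1 + 2*theta

N = rng.poisson(theta, S)

# Likelihood ratio estimator: multiply the output by the score, since
# d/dtheta log p(N; theta) = N/theta - 1 for the Poisson distribution.
lr_est = np.mean(f(N) * (N / theta - 1.0))

# Finite differences require a second run at a perturbed parameter.
h = 0.05
fd_est = (np.mean(f(rng.poisson(theta + h, S))) - np.mean(f(N))) / h

print("exact derivative  :", 1 + 2 * theta)
print("likelihood ratio  :", lr_est)
print("finite difference :", fd_est)
```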
Population Synthesis of Radio and Gamma-ray Pulsars using the Maximum Likelihood Approach
NASA Astrophysics Data System (ADS)
Billman, Caleb; Gonthier, P. L.; Harding, A. K.
2012-01-01
We present the results of a pulsar population synthesis of normal pulsars from the Galactic disk using a maximum likelihood method. We seek to maximize the likelihood of a set of parameters in a Monte Carlo population statistics code to better understand their uncertainties and the confidence region of the model's parameter space. The maximum likelihood method allows for the use of more applicable Poisson statistics in the comparison of distributions of small numbers of detected gamma-ray and radio pulsars. Our code simulates pulsars at birth using Monte Carlo techniques and evolves them to the present assuming initial spatial, kick velocity, magnetic field, and period distributions. Pulsars are spun down to the present and given radio and gamma-ray emission characteristics. We select measured distributions of radio pulsars from the Parkes Multibeam survey and Fermi gamma-ray pulsars to perform a likelihood analysis of the assumed model parameters such as initial period and magnetic field, and radio luminosity. We present the results of a grid search of the parameter space as well as a search for the maximum likelihood using a Markov Chain Monte Carlo method. We express our gratitude for the generous support of the Michigan Space Grant Consortium, of the National Science Foundation (REU and RUI), the NASA Astrophysics Theory and Fundamental Program and the NASA Fermi Guest Investigator Program.
Exploring Neutrino Oscillation Parameter Space with a Monte Carlo Algorithm
NASA Astrophysics Data System (ADS)
Espejel, Hugo; Ernst, David; Cogswell, Bernadette; Latimer, David
2015-04-01
The χ2 (or likelihood) function for a global analysis of neutrino oscillation data is first calculated as a function of the neutrino mixing parameters. A computational challenge is to obtain the minima or the allowed regions for the mixing parameters. The conventional approach is to calculate the χ2 (or likelihood) function on a grid for a large number of points, and then marginalize over the likelihood function. As the number of parameters increases with the number of neutrinos, making the calculation numerically efficient becomes necessary. We implement a new Monte Carlo algorithm (D. Foreman-Mackey, D. W. Hogg, D. Lang and J. Goodman, Publications of the Astronomical Society of the Pacific, 125 306 (2013)) to determine its computational efficiency at finding the minima and allowed regions. We examine a realistic example to compare the historical and the new methods.
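The cited algorithm is the affine-invariant ensemble sampler implemented in the emcee package. Below is a minimal sketch on a hypothetical two-flavor survival-probability χ² with toy data and a flat prior box; it is not the global analysis described above, only an illustration of the sampler's use.

```python
import numpy as np
import emcee

rng = np.random.default_rng(3)

# Toy two-flavor oscillation model: survival probability on a grid of L/E.
true = np.array([0.85, 2.5e-3])          # (sin^2 2theta, dm^2 [eV^2]), toy values
LE = np.linspace(100.0, 1000.0, 30)      # L/E in km/GeV

def survival(p):
    s2, dm2 = p
    return 1.0 - s2 * np.sin(1.267 * dm2 * LE) ** 2

sigma = 0.02
data = survival(true) + rng.normal(0, sigma, LE.size)

def log_prob(p):
    s2, dm2 = p
    if not (0 < s2 < 1 and 0 < dm2 < 1e-2):   # flat prior box
        return -np.inf
    return -0.5 * np.sum((survival(p) - data) ** 2) / sigma ** 2

nwalkers, ndim = 32, 2
p0 = true * (1 + 1e-2 * rng.normal(size=(nwalkers, ndim)))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 3000, progress=False)
flat = sampler.get_chain(discard=500, flat=True)
print("68% regions:", np.percentile(flat, [16, 84], axis=0))
```

The walker ensemble explores the parameter space without a grid, which is the efficiency gain the abstract is after as the number of mixing parameters grows.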
Application of the Bootstrap Methods in Factor Analysis.
ERIC Educational Resources Information Center
Ichikawa, Masanori; Konishi, Sadanori
1995-01-01
A Monte Carlo experiment was conducted to investigate the performance of bootstrap methods in normal theory maximum likelihood factor analysis when the distributional assumption was satisfied or unsatisfied. Problems arising with the use of bootstrap methods are highlighted. (SLD)
ERIC Educational Resources Information Center
Molenaar, Peter C. M.; Nesselroade, John R.
1998-01-01
Pseudo-Maximum Likelihood (p-ML) and Asymptotically Distribution Free (ADF) estimation methods for estimating dynamic factor model parameters within a covariance structure framework were compared through a Monte Carlo simulation. Both methods appear to give consistent model parameter estimates, but only ADF gives standard errors and chi-square…
Model uncertainty estimation and risk assessment is essential to environmental management and informed decision making on pollution mitigation strategies. In this study, we apply a probabilistic methodology, which combines Bayesian Monte Carlo simulation and Maximum Likelihood e...
Hey, Jody; Nielsen, Rasmus
2007-01-01
In 1988, Felsenstein described a framework for assessing the likelihood of a genetic data set in which all of the possible genealogical histories of the data are considered, each in proportion to their probability. Although not analytically solvable, several approaches, including Markov chain Monte Carlo methods, have been developed to find approximate solutions. Here, we describe an approach in which Markov chain Monte Carlo simulations are used to integrate over the space of genealogies, whereas other parameters are integrated out analytically. The result is an approximation to the full joint posterior density of the model parameters. For many purposes, this function can be treated as a likelihood, thereby permitting likelihood-based analyses, including likelihood ratio tests of nested models. Several examples, including an application to the divergence of chimpanzee subspecies, are provided. PMID:17301231
Survivorship analysis when cure is a possibility: a Monte Carlo study.
Goldman, A I
1984-01-01
Parametric survivorship analyses of clinical trials commonly involve the assumption of a hazard function that is constant with time. When the empirical curve obviously levels off, one can modify the hazard function model by using a Gompertz or Weibull distribution with a hazard that decreases over time. Some cancer treatments are thought to cure some patients within a short time of initiation. Then, instead of all patients having the same hazard, decreasing over time, a biologically more appropriate model assumes that an unknown proportion (1 - pi) have a constant high risk whereas the remaining proportion (pi) have essentially no risk. This paper discusses the maximum likelihood estimation of pi and the power curves of the likelihood ratio test. Monte Carlo studies provide results for a variety of simulated trials; empirical data illustrate the methods.
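A sketch of maximum likelihood estimation in the cure-mixture model described above, assuming an exponential hazard for non-cured patients and administrative censoring; all parameter values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(4)

# Simulate a trial: a fraction pi is cured (no events); the rest fail at
# constant hazard lam. Administrative censoring at time tau.
pi_true, lam_true, tau, n = 0.3, 0.4, 8.0, 400
cured = rng.random(n) < pi_true
t = np.where(cured, np.inf, rng.exponential(1.0 / lam_true, n))
event = t <= tau
t = np.minimum(t, tau)

def negloglik(params):
    pi, lam = expit(params[0]), np.exp(params[1])   # map to (0,1) and (0,inf)
    # failures: density (1-pi)*lam*exp(-lam t); censored: survivor
    # pi + (1-pi)*exp(-lam t), which levels off at pi as t grows.
    ll_event = np.log1p(-pi) + np.log(lam) - lam * t[event]
    ll_cens = np.log(pi + (1 - pi) * np.exp(-lam * t[~event]))
    return -(ll_event.sum() + ll_cens.sum())

fit = minimize(negloglik, x0=[0.0, 0.0], method="Nelder-Mead")
pi_hat, lam_hat = expit(fit.x[0]), np.exp(fit.x[1])
print(f"pi_hat = {pi_hat:.3f} (true {pi_true}),"
      f" lam_hat = {lam_hat:.3f} (true {lam_true})")
```

Note that testing H0: pi = 0 places the parameter on the boundary of its space, one reason the paper studies the power of the likelihood ratio test by Monte Carlo rather than relying on standard asymptotics.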
ERIC Educational Resources Information Center
Kieftenbeld, Vincent; Natesan, Prathiba
2012-01-01
Markov chain Monte Carlo (MCMC) methods enable a fully Bayesian approach to parameter estimation of item response models. In this simulation study, the authors compared the recovery of graded response model parameters using marginal maximum likelihood (MML) and Gibbs sampling (MCMC) under various latent trait distributions, test lengths, and…
Methods and Tools for Evaluating Uncertainty in Ecological Models: A Survey
Poster presented at the Ecological Society of America Meeting. Ecologists are familiar with a variety of uncertainty techniques, particularly in the intersection of maximum likelihood parameter estimation and Monte Carlo analysis techniques, as well as a recent increase in Baye...
ERIC Educational Resources Information Center
Wollack, James A.; Bolt, Daniel M.; Cohen, Allan S.; Lee, Young-Sun
2002-01-01
Compared the quality of item parameter estimates for marginal maximum likelihood (MML) and Markov Chain Monte Carlo (MCMC) with the nominal response model using simulation. The quality of item parameter recovery was nearly identical for MML and MCMC, and both methods tended to produce good estimates. (SLD)
Markov Chain Monte Carlo: an introduction for epidemiologists
Hamra, Ghassan; MacLehose, Richard; Richardson, David
2013-01-01
Markov Chain Monte Carlo (MCMC) methods are increasingly popular among epidemiologists. The reason for this may in part be that MCMC offers an appealing approach to handling some difficult types of analyses. Additionally, MCMC methods are those most commonly used for Bayesian analysis. However, epidemiologists are still largely unfamiliar with MCMC. They may lack familiarity either with the implementation of MCMC or with the interpretation of the resultant output. As with tutorials outlining the calculus behind maximum likelihood in previous decades, a simple description of the machinery of MCMC is needed. We provide an introduction to conducting analyses with MCMC, and show that, given the same data and under certain model specifications, the results of an MCMC simulation match those of methods based on standard maximum-likelihood estimation (MLE). In addition, we highlight examples of instances in which MCMC approaches to data analysis provide a clear advantage over MLE. We hope that this brief tutorial will encourage epidemiologists to consider MCMC approaches as part of their analytic tool-kit. PMID:23569196
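A sketch of the MLE/MCMC agreement such tutorials demonstrate: a normal mean with known variance and a flat prior, sampled with random-walk Metropolis-Hastings. Settings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
y = rng.normal(2.0, 1.0, 100)            # data with known sigma = 1

def log_post(mu):                        # flat prior: posterior ∝ likelihood
    return -0.5 * np.sum((y - mu) ** 2)

# Random-walk Metropolis-Hastings.
draws, mu, lp = [], 0.0, log_post(0.0)
for _ in range(20000):
    prop = mu + rng.normal(0, 0.3)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:   # accept/reject step
        mu, lp = prop, lp_prop
    draws.append(mu)
post = np.array(draws[2000:])             # discard burn-in

print("MLE : mean %.3f, se %.3f" % (y.mean(), 1 / np.sqrt(len(y))))
print("MCMC: mean %.3f, sd %.3f" % (post.mean(), post.std()))
```

With a flat prior the posterior mean and standard deviation match the MLE and its standard error, the correspondence the abstract highlights.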
Comparison of statistical sampling methods with ScannerBit, the GAMBIT scanning module
NASA Astrophysics Data System (ADS)
Martinez, Gregory D.; McKay, James; Farmer, Ben; Scott, Pat; Roebber, Elinore; Putze, Antje; Conrad, Jan
2017-11-01
We introduce ScannerBit, the statistics and sampling module of the public, open-source global fitting framework GAMBIT. ScannerBit provides a standardised interface to different sampling algorithms, enabling the use and comparison of multiple computational methods for inferring profile likelihoods, Bayesian posteriors, and other statistical quantities. The current version offers random, grid, raster, nested sampling, differential evolution, Markov Chain Monte Carlo (MCMC) and ensemble Monte Carlo samplers. We also announce the release of a new standalone differential evolution sampler, Diver, and describe its design, usage and interface to ScannerBit. We subject Diver and three other samplers (the nested sampler MultiNest, the MCMC GreAT, and the native ScannerBit implementation of the ensemble Monte Carlo algorithm T-Walk) to a battery of statistical tests. For this we use a realistic physical likelihood function, based on the scalar singlet model of dark matter. We examine the performance of each sampler as a function of its adjustable settings, and the dimensionality of the sampling problem. We evaluate performance on four metrics: optimality of the best fit found, completeness in exploring the best-fit region, number of likelihood evaluations, and total runtime. For Bayesian posterior estimation at high resolution, T-Walk provides the most accurate and timely mapping of the full parameter space. For profile likelihood analysis in less than about ten dimensions, we find that Diver and MultiNest score similarly in terms of best fit and speed, outperforming GreAT and T-Walk; in ten or more dimensions, Diver substantially outperforms the other three samplers on all metrics.
ERIC Educational Resources Information Center
Can, Seda; van de Schoot, Rens; Hox, Joop
2015-01-01
Because variables may be correlated in the social and behavioral sciences, multicollinearity might be problematic. This study investigates the effect of collinearity manipulated in within and between levels of a two-level confirmatory factor analysis by Monte Carlo simulation. Furthermore, the influence of the size of the intraclass correlation…
Yang, Li; Wang, Guobao; Qi, Jinyi
2016-04-01
Detecting cancerous lesions is a major clinical application of emission tomography. In a previous work, we studied penalized maximum-likelihood (PML) image reconstruction for lesion detection in static PET. Here we extend our theoretical analysis of static PET reconstruction to dynamic PET. We study both the conventional indirect reconstruction and direct reconstruction for Patlak parametric image estimation. In indirect reconstruction, Patlak parametric images are generated by first reconstructing a sequence of dynamic PET images, and then performing Patlak analysis on the time activity curves (TACs) pixel-by-pixel. In direct reconstruction, Patlak parametric images are estimated directly from raw sinogram data by incorporating the Patlak model into the image reconstruction procedure. PML reconstruction is used in both the indirect and direct reconstruction methods. We use a channelized Hotelling observer (CHO) to assess lesion detectability in Patlak parametric images. Simplified expressions for evaluating the lesion detectability have been derived and applied to the selection of the regularization parameter value to maximize detection performance. The proposed method is validated using computer-based Monte Carlo simulations. Good agreements between the theoretical predictions and the Monte Carlo results are observed. Both theoretical predictions and Monte Carlo simulation results show the benefit of the indirect and direct methods under optimized regularization parameters in dynamic PET reconstruction for lesion detection, when compared with the conventional static PET reconstruction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beer, M.
1980-12-01
The maximum likelihood method for the multivariate normal distribution is applied to the case of several individual eigenvalues. Correlated Monte Carlo estimates of the eigenvalue are assumed to follow this prescription and aspects of the assumption are examined. Monte Carlo cell calculations using the SAM-CE and VIM codes for the TRX-1 and TRX-2 benchmark reactors, and SAM-CE full core results are analyzed with this method. Variance reductions of a few percent to a factor of 2 are obtained from maximum likelihood estimation as compared with the simple average and the minimum variance individual eigenvalue. The numerical results verify that the use of sample variances and correlation coefficients in place of the corresponding population statistics still leads to nearly minimum variance estimation for a sufficient number of histories and aggregates.
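The multivariate-normal maximum likelihood estimate of a common mean from correlated estimates reduces to a generalized least squares weighted average. A sketch with hypothetical eigenvalue estimates and covariance; as the abstract discusses, in practice the sample covariance stands in for the population one.

```python
import numpy as np

# Correlated Monte Carlo eigenvalue estimates x with covariance Sigma
# (hypothetical numbers; in practice Sigma is estimated from histories).
x = np.array([0.9912, 0.9938, 0.9925])
Sigma = np.array([[4.0, 1.5, 1.0],
                  [1.5, 3.0, 0.8],
                  [1.0, 0.8, 5.0]]) * 1e-6

# ML estimate under multivariate normality:
#   k_hat = (1' Sigma^-1 x) / (1' Sigma^-1 1),  var = 1 / (1' Sigma^-1 1)
ones = np.ones_like(x)
w = np.linalg.solve(Sigma, ones)
var_ml = 1.0 / (ones @ w)
k_ml = var_ml * (w @ x)

print(f"combined k = {k_ml:.5f} +/- {np.sqrt(var_ml):.5f}")
print("simple average:", x.mean())
```

The combined variance is never larger than that of the best individual estimate, which is the source of the variance reductions reported above.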
Maximum Likelihood Analysis in the PEN Experiment
NASA Astrophysics Data System (ADS)
Lehman, Martin
2013-10-01
The experimental determination of the π+ → e+ν(γ) decay branching ratio currently provides the most accurate test of lepton universality. The PEN experiment at PSI, Switzerland, aims to improve the present world average experimental precision of 3.3×10^-3 to 5×10^-4 using a stopped beam approach. During runs in 2008-10, PEN has acquired over 2×10^7 π_e2 events. The experiment includes active beam detectors (degrader, mini TPC, target), central MWPC tracking with plastic scintillator hodoscopes, and a spherical pure CsI electromagnetic shower calorimeter. The final branching ratio will be calculated using a maximum likelihood analysis. This analysis assigns each event a probability for 5 processes (π+ → e+ν, π+ → μ+ν, decay-in-flight, pile-up, and hadronic events) using Monte Carlo verified probability distribution functions of our observables (energies, times, etc.). A progress report on the PEN maximum likelihood analysis will be presented. Work supported by NSF grant PHY-0970013.
ERIC Educational Resources Information Center
Chung, Hwan; Anthony, James C.
2013-01-01
This article presents a multiple-group latent class-profile analysis (LCPA) by taking a Bayesian approach in which a Markov chain Monte Carlo simulation is employed to achieve more robust estimates for latent growth patterns. This article describes and addresses a label-switching problem that involves the LCPA likelihood function, which has…
Monte Carlo-based Reconstruction in Water Cherenkov Detectors using Chroma
NASA Astrophysics Data System (ADS)
Seibert, Stanley; Latorre, Anthony
2012-03-01
We demonstrate the feasibility of event reconstruction, including position, direction, energy and particle identification, in water Cherenkov detectors with a purely Monte Carlo-based method. Using a fast optical Monte Carlo package we have written, called Chroma, in combination with several variance reduction techniques, we can estimate the value of a likelihood function for an arbitrary event hypothesis. The likelihood can then be maximized over the parameter space of interest using a form of gradient descent designed for stochastic functions. Although slower than more traditional reconstruction algorithms, this completely Monte Carlo-based technique is universal and can be applied to a detector of any size or shape, which is a major advantage during the design phase of an experiment. As a specific example, we focus on reconstruction results from a simulation of the 200 kiloton water Cherenkov far detector option for LBNE.
Computational tools for exact conditional logistic regression.
Corcoran, C; Mehta, C; Patel, N; Senchaudhuri, P
Logistic regression analyses are often challenged by the inability of unconditional likelihood-based approximations to yield consistent, valid estimates and p-values for model parameters. This can be due to sparseness or separability in the data. Conditional logistic regression, though useful in such situations, can also be computationally infeasible when the sample size or number of explanatory covariates is large. We review recent developments that allow efficient approximate conditional inference, including Monte Carlo sampling and saddlepoint approximations. We demonstrate through real examples that these methods enable the analysis of significantly larger and more complex data sets. We find in this investigation that for these moderately large data sets Monte Carlo seems a better alternative, as it provides unbiased estimates of the exact results and can be executed in less CPU time than the single saddlepoint approximation. Moreover, the double saddlepoint approximation, while computationally the easiest to obtain, offers little practical advantage. It produces unreliable results and cannot be computed when a maximum likelihood solution does not exist. Copyright 2001 John Wiley & Sons, Ltd.
Assessment of parametric uncertainty for groundwater reactive transport modeling,
Shi, Xiaoqing; Ye, Ming; Curtis, Gary P.; Miller, Geoffery L.; Meyer, Philip D.; Kohler, Matthias; Yabusaki, Steve; Wu, Jichun
2014-01-01
The validity of using Gaussian assumptions for model residuals in uncertainty quantification of a groundwater reactive transport model was evaluated in this study. Least squares regression methods explicitly assume Gaussian residuals, and the assumption leads to Gaussian likelihood functions, model parameters, and model predictions. While the Bayesian methods do not explicitly require the Gaussian assumption, Gaussian residuals are widely used. This paper shows that the residuals of the reactive transport model are non-Gaussian, heteroscedastic, and correlated in time; characterizing them requires using a generalized likelihood function such as the formal generalized likelihood function developed by Schoups and Vrugt (2010). For the surface complexation model considered in this study for simulating uranium reactive transport in groundwater, parametric uncertainty is quantified using the least squares regression methods and Bayesian methods with both Gaussian and formal generalized likelihood functions. While the least squares methods and Bayesian methods with Gaussian likelihood function produce similar Gaussian parameter distributions, the parameter distributions of Bayesian uncertainty quantification using the formal generalized likelihood function are non-Gaussian. In addition, predictive performance of formal generalized likelihood function is superior to that of least squares regression and Bayesian methods with Gaussian likelihood function. The Bayesian uncertainty quantification is conducted using the differential evolution adaptive metropolis (DREAM(zs)) algorithm; as a Markov chain Monte Carlo (MCMC) method, it is a robust tool for quantifying uncertainty in groundwater reactive transport models. For the surface complexation model, the regression-based local sensitivity analysis and Morris- and DREAM(ZS)-based global sensitivity analysis yield almost identical ranking of parameter importance. The uncertainty analysis may help select appropriate likelihood functions, improve model calibration, and reduce predictive uncertainty in other groundwater reactive transport and environmental modeling.
Modelling maximum river flow by using Bayesian Markov Chain Monte Carlo
NASA Astrophysics Data System (ADS)
Cheong, R. Y.; Gabda, D.
2017-09-01
Analysis of flood trends is vital since flooding threatens human livelihoods in financial, environmental, and security terms. The annual maximum river flows in Sabah were fitted to the generalized extreme value (GEV) distribution. The maximum likelihood estimator (MLE) arises naturally when working with the GEV distribution. However, previous research has shown that MLE provides unstable results, especially for small sample sizes. In this study, we used different Bayesian Markov Chain Monte Carlo (MCMC) samplers based on the Metropolis-Hastings algorithm to estimate the GEV parameters. Bayesian MCMC is a statistical inference approach that estimates parameters from the posterior distribution obtained via Bayes' theorem. The Metropolis-Hastings algorithm is used to overcome the high-dimensional parameter space that plain Monte Carlo methods handle poorly. This approach also accounts for more of the uncertainty in parameter estimation, yielding better predictions of maximum river flow in Sabah.
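A sketch of GEV fitting by Metropolis-Hastings on synthetic annual maxima. Note scipy's shape convention (its c corresponds to -ξ in the usual GEV parameterization); the prior, step sizes, and data are illustrative.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(6)

# Synthetic annual maxima from a GEV distribution.
c_true, loc_true, scale_true = -0.1, 100.0, 15.0
amax = genextreme.rvs(c_true, loc=loc_true, scale=scale_true,
                      size=30, random_state=rng)

def log_post(theta):
    loc, log_scale, c = theta
    lp = genextreme.logpdf(amax, c, loc=loc, scale=np.exp(log_scale)).sum()
    lp += -0.5 * (c / 0.5) ** 2          # weak normal prior on the shape
    return lp

theta = np.array([amax.mean(), np.log(amax.std()), 0.0])
lp = log_post(theta)
step = np.array([2.0, 0.1, 0.05])        # random-walk proposal scales
chain = []
for _ in range(30000):
    prop = theta + step * rng.normal(size=3)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta.copy())
chain = np.array(chain[5000:])
print("posterior medians (loc, scale, shape c):",
      np.median(chain[:, 0]),
      np.median(np.exp(chain[:, 1])),
      np.median(chain[:, 2]))
```

Unlike the MLE point estimate, the posterior sample directly carries the parameter uncertainty into return-level predictions, the advantage the abstract emphasizes for small samples.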
Some Small Sample Results for Maximum Likelihood Estimation in Multidimensional Scaling.
ERIC Educational Resources Information Center
Ramsay, J. O.
1980-01-01
Some aspects of the small sample behavior of maximum likelihood estimates in multidimensional scaling are investigated with Monte Carlo techniques. In particular, the chi square test for dimensionality is examined and a correction for bias is proposed and evaluated. (Author/JKS)
COSMIC MICROWAVE BACKGROUND LIKELIHOOD APPROXIMATION FOR BANDED PROBABILITY DISTRIBUTIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gjerløw, E.; Mikkelsen, K.; Eriksen, H. K.
We investigate sets of random variables that can be arranged sequentially such that a given variable only depends conditionally on its immediate predecessor. For such sets, we show that the full joint probability distribution may be expressed exclusively in terms of uni- and bivariate marginals. Under the assumption that the cosmic microwave background (CMB) power spectrum likelihood only exhibits correlations within a banded multipole range, Δl_C, we apply this expression to two outstanding problems in CMB likelihood analysis. First, we derive a statistically well-defined hybrid likelihood estimator, merging two independent (e.g., low- and high-l) likelihoods into a single expression that properly accounts for correlations between the two. Applying this expression to the Wilkinson Microwave Anisotropy Probe (WMAP) likelihood, we verify that the effect of correlations in the transition region on cosmological parameters is negligible for WMAP; the largest relative shift seen for any parameter is 0.06σ. However, because this may not hold for other experimental setups (e.g., for different instrumental noise properties or analysis masks), but must rather be verified on a case-by-case basis, we recommend our new hybridization scheme for future experiments for statistical self-consistency reasons. Second, we use the same expression to improve the convergence rate of the Blackwell-Rao likelihood estimator, reducing the required number of Monte Carlo samples by several orders of magnitude, and thereby extend it to high-l applications.
Probabilistic treatment of the uncertainty from the finite size of weighted Monte Carlo data
NASA Astrophysics Data System (ADS)
Glüsenkamp, Thorsten
2018-06-01
Parameter estimation in HEP experiments often involves Monte Carlo simulation to model the experimental response function. Typical applications are forward-folding likelihood analyses with re-weighting, or time-consuming minimization schemes with a new simulation set for each parameter value. Problematically, the finite size of such Monte Carlo samples carries intrinsic uncertainty that can lead to a substantial bias in parameter estimation if it is neglected and the sample size is small. We introduce a probabilistic treatment of this problem by replacing the usual likelihood functions with novel generalized probability distributions that incorporate the finite statistics via suitable marginalization. These new PDFs are analytic, and can be used to replace the Poisson, multinomial, and sample-based unbinned likelihoods, which covers many use cases in high-energy physics. In the limit of infinite statistics, they reduce to the respective standard probability distributions. In the general case of arbitrary Monte Carlo weights, the expressions involve the fourth Lauricella function F_D, for which we find a new finite-sum representation in a certain parameter setting. The result also represents an exact form for Carlson's Dirichlet average R_n with n > 0, and thereby an efficient way to calculate the probability generating function of the Dirichlet-multinomial distribution, the extended divided difference of a monomial, or arbitrary moments of univariate B-splines. We demonstrate the bias reduction of our approach with a typical toy Monte Carlo problem, estimating the normalization of a peak in a falling energy spectrum, and compare the results with previously published methods from the literature.
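A hedged illustration of the underlying idea in the simplest, unweighted special case: marginalizing the Poisson mean over the uncertainty implied by k Monte Carlo events of common weight w (flat prior on the MC-true rate) gives a closed negative-binomial-type form that approaches the standard Poisson likelihood as the MC sample grows. The paper's general case with arbitrary weights involves F_D and is not reproduced here.

```python
import numpy as np
from scipy.stats import poisson
from scipy.special import gammaln

def log_marginal_poisson(n, k, w):
    """Log-probability of n observed events given k unweighted MC events of
    weight w, marginalizing the MC rate mu over its flat-prior posterior:
      integral Pois(n | w*mu) Gamma(mu; k+1, 1) dmu
        = C(n+k, n) * w^n / (1+w)^(n+k+1)."""
    return (gammaln(n + k + 1) - gammaln(n + 1) - gammaln(k + 1)
            + n * np.log(w) - (n + k + 1) * np.log1p(w))

n_obs = 12
for k in (5, 50, 5000):           # MC sample sizes
    w = 10.0 / k                  # weights tuned so the prediction is 10
    print(f"k={k:5d}: marginal {log_marginal_poisson(n_obs, k, w):.4f}"
          f"   plain Poisson {poisson.logpmf(n_obs, 10.0):.4f}")
```

For small k the marginal likelihood is visibly broader than the plain Poisson one, which is exactly the finite-MC effect whose neglect biases parameter estimates.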
Monte Carlo Shower Counter Studies
NASA Technical Reports Server (NTRS)
Snyder, H. David
1991-01-01
Activities and accomplishments related to the Monte Carlo shower counter studies are summarized. A tape of the VMS version of the GEANT software was obtained and installed on the central computer at Gallaudet University. Due to difficulties encountered in updating this VMS version, a decision was made to switch to the UNIX version of the package. This version was installed and used to generate the set of data files currently accessed by various analysis programs. The GEANT software was used to write files of data for positron and proton showers. Showers were simulated for a detector consisting of 50 alternating layers of lead and scintillator. Each file consisted of 1000 events at each of the following energies: 0.1, 0.5, 2.0, 10, 44, and 200 GeV. Data analysis activities related to clustering, chi square, and likelihood analyses are summarized. Source code for the GEANT user subprograms and data analysis programs are provided along with example data plots.
Unified framework to evaluate panmixia and migration direction among multiple sampling locations.
Beerli, Peter; Palczewski, Michal
2010-05-01
For many biological investigations, groups of individuals are genetically sampled from several geographic locations. These sampling locations often do not reflect the genetic population structure. We describe a framework using marginal likelihoods to compare and order structured population models, such as testing whether the sampling locations belong to the same randomly mating population or comparing unidirectional and multidirectional gene flow models. In the context of inferences employing Markov chain Monte Carlo methods, the accuracy of the marginal likelihoods depends heavily on the approximation method used to calculate the marginal likelihood. Two methods, modified thermodynamic integration and a stabilized harmonic mean estimator, are compared. With finite Markov chain Monte Carlo run lengths, the harmonic mean estimator may not be consistent. Thermodynamic integration, in contrast, delivers considerably better estimates of the marginal likelihood. The choice of prior distributions does not influence the order and choice of the better models when the marginal likelihood is estimated using thermodynamic integration, whereas with the harmonic mean estimator the influence of the prior is pronounced and the order of the models changes. The approximation of marginal likelihood using thermodynamic integration in MIGRATE allows the evaluation of complex population genetic models, not only of whether sampling locations belong to a single panmictic population, but also of competing complex structured population models.
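A sketch of thermodynamic integration (path sampling) in a conjugate normal toy model, where the power posteriors can be sampled exactly and the answer checked by quadrature. In MIGRATE the tempered expectations instead come from MCMC over genealogies; everything below is illustrative.

```python
import numpy as np
from scipy.stats import norm
from scipy.special import logsumexp

rng = np.random.default_rng(7)

# Conjugate toy: y ~ N(theta, 1), theta ~ N(0, 4). The power posterior
# p_beta(theta) ∝ L(theta)^beta * prior(theta) stays Gaussian.
n, t2 = 40, 4.0
y = rng.normal(0.8, 1.0, n)

def loglik(theta):
    return norm.logpdf(y[:, None], np.atleast_1d(theta), 1.0).sum(axis=0)

# log m(y) = integral over beta in [0,1] of E_beta[log L].
betas = np.linspace(0.0, 1.0, 33) ** 3   # denser grid near beta = 0
Elog = []
for b in betas:
    prec = b * n + 1.0 / t2              # tempered posterior precision
    mean = b * y.sum() / prec
    th = rng.normal(mean, 1.0 / np.sqrt(prec), 5000)
    Elog.append(loglik(th).mean())
ti = np.trapz(Elog, betas)

# Brute-force check by quadrature over theta.
grid = np.linspace(-4, 4, 8001)
exact = logsumexp(loglik(grid) + norm.logpdf(grid, 0, np.sqrt(t2))) \
        + np.log(grid[1] - grid[0])
print(f"thermodynamic integration: {ti:.3f}   exact: {exact:.3f}")
```

A harmonic mean estimate from the beta = 1 draws alone would be simpler but far noisier, which is the comparison the abstract draws.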
Vexler, Albert; Tanajian, Hovig; Hutson, Alan D
In practice, parametric likelihood-ratio techniques are powerful statistical tools. In this article, we propose and examine novel and simple distribution-free test statistics that efficiently approximate parametric likelihood ratios to analyze and compare distributions of K groups of observations. Using the density-based empirical likelihood methodology, we develop a Stata package that applies to a test for symmetry of data distributions and compares K-sample distributions. Recognizing that recent statistical software packages do not sufficiently address K-sample nonparametric comparisons of data distributions, we propose a new Stata command, vxdbel, to execute exact density-based empirical likelihood-ratio tests using K samples. To calculate p-values of the proposed tests, we use the following methods: 1) a classical technique based on Monte Carlo p-value evaluations; 2) an interpolation technique based on tabulated critical values; and 3) a new hybrid technique that combines methods 1 and 2. The third, cutting-edge method is shown to be very efficient in the context of exact-test p-value computations. This Bayesian-type method considers tabulated critical values as prior information and Monte Carlo generations of test statistic values as data used to depict the likelihood function. In this case, a nonparametric Bayesian method is proposed to compute critical values of exact tests.
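Method 1 above, the classical Monte Carlo p-value, shown in a toy symmetry test (sign flips about an assumed center of zero; the statistic and data are illustrative, not the density-based empirical likelihood statistic). The +1 correction keeps the randomized test valid at finite B.

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(8)
x = rng.gamma(4.0, 1.0, 80)       # skewed data, so H0 (symmetry) is false
x = x - np.median(x)              # toy: test symmetry about zero

T_obs = abs(skew(x))
B = 9999
# Under symmetry about 0, flipping signs leaves the distribution unchanged,
# so sign-flipped replicates sample the null distribution of the statistic.
flips = rng.choice([-1.0, 1.0], size=(B, x.size))
T_null = np.abs(skew(flips * x, axis=1))

# Classical Monte Carlo p-value with the +1 correction.
p = (1 + np.sum(T_null >= T_obs)) / (B + 1)
print("Monte Carlo p-value:", p)
```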
ERIC Educational Resources Information Center
Klein, Andreas G.; Muthen, Bengt O.
2007-01-01
In this article, a nonlinear structural equation model is introduced and a quasi-maximum likelihood method for simultaneous estimation and testing of multiple nonlinear effects is developed. The focus of the new methodology lies on efficiency, robustness, and computational practicability. Monte-Carlo studies indicate that the method is highly…
Comparison of IRT Likelihood Ratio Test and Logistic Regression DIF Detection Procedures
ERIC Educational Resources Information Center
Atar, Burcu; Kamata, Akihito
2011-01-01
The Type I error rates and the power of IRT likelihood ratio test and cumulative logit ordinal logistic regression procedures in detecting differential item functioning (DIF) for polytomously scored items were investigated in this Monte Carlo simulation study. For this purpose, 54 simulation conditions (combinations of 3 sample sizes, 2 sample…
Collinear Latent Variables in Multilevel Confirmatory Factor Analysis
van de Schoot, Rens; Hox, Joop
2014-01-01
Because variables may be correlated in the social and behavioral sciences, multicollinearity might be problematic. This study investigates the effect of collinearity manipulated in within and between levels of a two-level confirmatory factor analysis by Monte Carlo simulation. Furthermore, the influence of the size of the intraclass correlation coefficient (ICC) and estimation method; maximum likelihood estimation with robust chi-squares and standard errors and Bayesian estimation, on the convergence rate are investigated. The other variables of interest were rate of inadmissible solutions and the relative parameter and standard error bias on the between level. The results showed that inadmissible solutions were obtained when there was between level collinearity and the estimation method was maximum likelihood. In the within level multicollinearity condition, all of the solutions were admissible but the bias values were higher compared with the between level collinearity condition. Bayesian estimation appeared to be robust in obtaining admissible parameters but the relative bias was higher than for maximum likelihood estimation. Finally, as expected, high ICC produced less biased results compared to medium ICC conditions. PMID:29795827
Markov Chain Monte Carlo Estimation of Item Parameters for the Generalized Graded Unfolding Model
ERIC Educational Resources Information Center
de la Torre, Jimmy; Stark, Stephen; Chernyshenko, Oleksandr S.
2006-01-01
The authors present a Markov Chain Monte Carlo (MCMC) parameter estimation procedure for the generalized graded unfolding model (GGUM) and compare it to the marginal maximum likelihood (MML) approach implemented in the GGUM2000 computer program, using simulated and real personality data. In the simulation study, test length, number of response…
A Bootstrap Metropolis-Hastings Algorithm for Bayesian Analysis of Big Data.
Liang, Faming; Kim, Jinsu; Song, Qifan
2016-01-01
Markov chain Monte Carlo (MCMC) methods have proven to be a very powerful tool for analyzing data of complex structures. However, their computer-intensive nature, which typically requires a large number of iterations and a complete scan of the full dataset for each iteration, precludes their use for big data analysis. In this paper, we propose the so-called bootstrap Metropolis-Hastings (BMH) algorithm, which provides a general framework for how to tame powerful MCMC methods to be used for big data analysis; that is, to replace the full data log-likelihood by a Monte Carlo average of the log-likelihoods that are calculated in parallel from multiple bootstrap samples. The BMH algorithm possesses an embarrassingly parallel structure and avoids repeated scans of the full dataset in iterations, and is thus feasible for big data problems. Compared to the popular divide-and-combine method, BMH can be generally more efficient as it can asymptotically integrate the whole data information into a single simulation run. The BMH algorithm is very flexible. Like the Metropolis-Hastings algorithm, it can serve as a basic building block for developing advanced MCMC algorithms that are feasible for big data problems. This is illustrated in the paper by the tempering BMH algorithm, which can be viewed as a combination of parallel tempering and the BMH algorithm. BMH can also be used for model selection and optimization by combining with reversible jump MCMC and simulated annealing, respectively. PMID:29033469
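A caricature of the idea, heavily hedged: each Metropolis-Hastings step replaces the full-data log-likelihood with an average of rescaled log-likelihoods over k bootstrap subsamples (computed serially here, though the k terms are embarrassingly parallel). The subsample size b, the n/b rescaling, and the noisy-acceptance handling are illustrative simplifications; the published algorithm includes refinements not shown.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(9)
n, b, k = 1_000_000, 20_000, 5      # full data, subsample size, # subsamples
y = rng.normal(0.3, 1.0, n)

def approx_loglik(mu):
    # Monte Carlo average of rescaled bootstrap-subsample log-likelihoods,
    # standing in for the full-data log-likelihood.
    vals = []
    for _ in range(k):
        idx = rng.integers(0, n, b)          # bootstrap subsample
        vals.append((n / b) * norm.logpdf(y[idx], mu, 1.0).sum())
    return np.mean(vals)

mu, lp, chain = 0.0, approx_loglik(0.0), []
for _ in range(2000):
    prop = mu + rng.normal(0, 0.002)
    lp_prop = approx_loglik(prop)
    if np.log(rng.random()) < lp_prop - lp:
        mu, lp = prop, lp_prop
    chain.append(mu)
print("BMH-style posterior mean:", np.mean(chain[500:]),
      "  full-data MLE:", y.mean())
```

No iteration ever scans all n points, which is the property that makes the approach feasible when the full dataset cannot be traversed per step.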
Sustainability likelihood of remediation options for metal-contaminated soil/sediment.
Chen, Season S; Taylor, Jessica S; Baek, Kitae; Khan, Eakalak; Tsang, Daniel C W; Ok, Yong Sik
2017-05-01
Multi-criteria analysis and detailed impact analysis were carried out to assess the sustainability of four remedial alternatives for metal-contaminated soil/sediment at former timber treatment sites and harbour sediment with different scales. The sustainability was evaluated in the aspects of human health and safety, environment, stakeholder concern, and land use, under four different scenarios with varying weighting factors. The Monte Carlo simulation was performed to reveal the likelihood of accomplishing sustainable remediation with different treatment options at different sites. The results showed that in-situ remedial technologies were more sustainable than ex-situ ones, where in-situ containment demonstrated both the most sustainable result and the highest probability to achieve sustainability amongst the four remedial alternatives in this study, reflecting the lesser extent of off-site and on-site impacts. Concerns associated with ex-situ options were adverse impacts tied to all four aspects and caused by excavation, extraction, and off-site disposal. The results of this study suggested the importance of considering the uncertainties resulting from the remedial options (i.e., stochastic analysis) in addition to the overall sustainability scores (i.e., deterministic analysis). The developed framework and model simulation could serve as an assessment for the sustainability likelihood of remedial options to ensure sustainable remediation of contaminated sites. Copyright © 2017 Elsevier Ltd. All rights reserved.
Approximate likelihood calculation on a phylogeny for Bayesian estimation of divergence times.
dos Reis, Mario; Yang, Ziheng
2011-07-01
The molecular clock provides a powerful way to estimate species divergence times. If information on some species divergence times is available from the fossil or geological record, it can be used to calibrate a phylogeny and estimate divergence times for all nodes in the tree. The Bayesian method provides a natural framework to incorporate different sources of information concerning divergence times, such as information in the fossil and molecular data. Current models of sequence evolution are intractable in a Bayesian setting, and Markov chain Monte Carlo (MCMC) is used to generate the posterior distribution of divergence times and evolutionary rates. This method is computationally expensive, as it involves the repeated calculation of the likelihood function. Here, we explore the use of Taylor expansion to approximate the likelihood during MCMC iteration. The approximation is much faster than conventional likelihood calculation. However, the approximation is expected to be poor when the proposed parameters are far from the likelihood peak. We explore the use of parameter transforms (square root, logarithm, and arcsine) to improve the approximation to the likelihood curve. We found that the new methods, particularly the arcsine-based transform, provided very good approximations under relaxed clock models and also under the global clock model when the global clock is not seriously violated. The approximation is poorer for analysis under the global clock when the global clock is seriously wrong and should thus not be used. The results suggest that the approximate method may be useful for Bayesian dating analysis using large data sets.
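The effect of a parameter transform on the quadratic (Taylor) approximation can be seen in a one-parameter Poisson toy: expanding in sqrt(rate), which is variance-stabilizing, tracks the exact log-likelihood farther from the peak than expanding in the rate itself. The paper's square-root, log, and arcsine transforms play this role for the dating model; the numbers below are illustrative.

```python
import numpy as np

# Poisson log-likelihood for rate r from s events in t time units:
#   l(r) = s*log(r) - r*t (up to a constant); MLE r_hat = s/t.
s, t = 5.0, 10.0
r_hat = s / t

def l(r):
    return s * np.log(r) - r * t

# Quadratic expansion at the MLE in the original parameter:
# l''(r_hat) = -s / r_hat^2.
def l_taylor(r):
    return l(r_hat) - 0.5 * (s / r_hat**2) * (r - r_hat) ** 2

# Same expansion after g = sqrt(r): l(g) = 2s*log(g) - t*g^2, and the
# curvature at the mode is l''(g_hat) = -4t.
g_hat = np.sqrt(r_hat)
def l_taylor_sqrt(r):
    return l(r_hat) - 2.0 * t * (np.sqrt(r) - g_hat) ** 2

for r in (0.2, 0.5, 0.8, 1.2):
    print(f"r={r:.1f}  exact={l(r):+.3f}  "
          f"taylor={l_taylor(r):+.3f}  sqrt-taylor={l_taylor_sqrt(r):+.3f}")
```

Away from the peak the square-root expansion stays closer to the exact curve, which is why transform-then-expand gives a usable likelihood surrogate inside the MCMC.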
Pet-Armacost, J J; Sepulveda, J; Sakude, M
1999-12-01
The US Department of Transportation was interested in the risks associated with transporting Hydrazine in tanks with and without relief devices. Hydrazine is both highly toxic and flammable, as well as corrosive. Consequently, there was a conflict as to whether a relief device should be used or not. Data were not available on the impact of relief devices on release probabilities or the impact of Hydrazine on the likelihood of fires and explosions. In this paper, a Monte Carlo sensitivity analysis of the unknown parameters was used to assess the risks associated with highway transport of Hydrazine. To help determine whether or not relief devices should be used, fault trees and event trees were used to model the sequences of events that could lead to adverse consequences during transport of Hydrazine. The event probabilities in the event trees were derived as functions of the parameters whose effects were not known. The impacts of these parameters on the risk of toxic exposures, fires, and explosions were analyzed through a Monte Carlo sensitivity analysis and analyzed statistically through an analysis of variance. The analysis allowed the determination of which of the unknown parameters had a significant impact on the risks. It also provided the necessary support to a critical transportation decision even though the values of several key parameters were not known.
COSMOABC: Likelihood-free inference via Population Monte Carlo Approximate Bayesian Computation
NASA Astrophysics Data System (ADS)
Ishida, E. E. O.; Vitenti, S. D. P.; Penna-Lima, M.; Cisewski, J.; de Souza, R. S.; Trindade, A. M. M.; Cameron, E.; Busti, V. C.; COIN Collaboration
2015-11-01
Approximate Bayesian Computation (ABC) enables parameter inference for complex physical systems in cases where the true likelihood function is unknown, unavailable, or computationally too expensive. It relies on the forward simulation of mock data and comparison between observed and synthetic catalogues. Here we present COSMOABC, a Python ABC sampler featuring a Population Monte Carlo variation of the original ABC algorithm, which uses an adaptive importance sampling scheme. The code is very flexible and can be easily coupled to an external simulator, while allowing the incorporation of arbitrary distance and prior functions. As an example of practical application, we coupled COSMOABC with the NUMCOSMO library and demonstrate how it can be used to estimate posterior probability distributions over cosmological parameters based on measurements of galaxy cluster number counts without computing the likelihood function. COSMOABC is published under the GPLv3 license on PyPI and GitHub and documentation is available at http://goo.gl/SmB8EX.
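A minimal rejection-ABC sketch of the core idea, with a toy Gaussian simulator and the sample mean as summary statistic. COSMOABC's Population Monte Carlo variant instead evolves a weighted particle set with shrinking tolerance and an adaptive importance-sampling kernel; none of that machinery is reproduced here.

```python
import numpy as np

rng = np.random.default_rng(10)

# "Observed" catalogue whose likelihood we pretend is intractable.
obs = rng.normal(2.0, 1.0, 200)
s_obs = obs.mean()                       # summary statistic

# Rejection ABC: draw from the prior, simulate, keep draws whose simulated
# summary lands within eps of the observed one. For this toy the summary's
# sampling distribution is known, so we simulate it directly; in general
# each draw would run the full forward simulator.
n_draws, eps = 200_000, 0.02
theta = rng.uniform(-5.0, 5.0, n_draws)  # flat prior
s_sim = rng.normal(theta, 1.0 / np.sqrt(200))
accepted = theta[np.abs(s_sim - s_obs) < eps]

print(f"accepted {accepted.size} draws; "
      f"posterior mean {accepted.mean():.3f}, sd {accepted.std():.3f}")
```

The accepted draws approximate the posterior without a single likelihood evaluation; the Population Monte Carlo refinement exists because plain rejection wastes almost all simulations once eps is small.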
Kuselman, Ilya; Pennecchi, Francesca; Epstein, Malka; Fajgelj, Ales; Ellison, Stephen L R
2014-12-01
Monte Carlo simulation of expert judgments on human errors in a chemical analysis was used for determination of distributions of the error quantification scores (scores of likelihood and severity, and scores of effectiveness of a laboratory quality system in prevention of the errors). The simulation was based on modeling of an expert behavior: confident, reasonably doubting and irresolute expert judgments were taken into account by means of different probability mass functions (pmfs). As a case study, 36 scenarios of human errors which may occur in elemental analysis of geological samples by ICP-MS were examined. Characteristics of the score distributions for three pmfs of an expert behavior were compared. Variability of the scores, as standard deviation of the simulated score values from the distribution mean, was used for assessment of the score robustness. A range of the score values, calculated directly from elicited data and simulated by a Monte Carlo method for different pmfs, was also discussed from the robustness point of view. It was shown that robustness of the scores, obtained in the case study, can be assessed as satisfactory for the quality risk management and improvement of a laboratory quality system against human errors. Copyright © 2014 Elsevier B.V. All rights reserved.
Risk analysis of gravity dam instability using credibility theory Monte Carlo simulation model.
Xin, Cao; Chongshi, Gu
2016-01-01
Risk analysis of gravity dam stability involves complicated uncertainty in many design parameters and measured data. A stability failure risk ratio described jointly by probability and possibility falls short in characterizing the influence of fuzzy factors and in representing the likelihood of risk occurrence in practical engineering. In this article, credibility theory is applied to the stability failure risk analysis of gravity dams. Stability of a gravity dam is viewed as a hybrid event reflecting both fuzziness and randomness in the failure criterion, design parameters and measured data. A credibility distribution function is introduced as a novel way to represent the uncertainty of the factors influencing gravity dam stability. Combined with Monte Carlo simulation, the corresponding calculation method and procedure are proposed. Based on a dam section, a detailed application of the modeling approach to risk calculation for both the dam foundation and double sliding surfaces is provided. The results show that the present method is feasible for analyzing the stability failure risk of gravity dams. The resulting risk assessment reflects the influence of both sorts of uncertainty and is suitable as an index value.
NASA Astrophysics Data System (ADS)
Wang, Z.
2015-12-01
For decades, distributed and lumped hydrological models have furthered our understanding of hydrological systems. The development of large-scale, high-resolution hydrological simulation has refined spatial descriptions of hydrological behavior. This trend, however, is accompanied by greater model complexity and larger numbers of parameters, which brings new challenges for uncertainty quantification. Generalized Likelihood Uncertainty Estimation (GLUE), a Monte Carlo method coupled with Bayesian estimation, has been widely used in uncertainty analysis for hydrological models. However, the stochastic sampling of prior parameters adopted by GLUE is inefficient, especially in high-dimensional parameter spaces. Heuristic optimization algorithms based on iterative evolution show better convergence speed and optimality-searching performance. In light of these features, this study adopted the genetic algorithm, differential evolution, and the shuffled complex evolution algorithm to search the parameter space and obtain parameter sets of large likelihood. Based on this multi-algorithm sampling, hydrological model uncertainty analysis is conducted within the typical GLUE framework. To demonstrate the superiority of the new method, two hydrological models of different complexity are examined. The results show that the adaptive method tends to be efficient in sampling and effective in uncertainty analysis, providing an alternative path for uncertainty quantification.
The Joker: A custom Monte Carlo sampler for binary-star and exoplanet radial velocity data
NASA Astrophysics Data System (ADS)
Price-Whelan, Adrian M.; Hogg, David W.; Foreman-Mackey, Daniel; Rix, Hans-Walter
2017-01-01
Given sparse or low-quality radial-velocity measurements of a star, there are often many qualitatively different stellar or exoplanet companion orbit models that are consistent with the data. The consequent multimodality of the likelihood function leads to extremely challenging search, optimization, and MCMC posterior sampling over the orbital parameters. The Joker is a custom-built Monte Carlo sampler that can produce a posterior sampling for orbital parameters given sparse or noisy radial-velocity measurements, even when the likelihood function is poorly behaved. The method produces correct samplings in orbital parameters for data that include as few as three epochs. The Joker can therefore be used to produce proper samplings of multimodal pdfs, which are still highly informative and can be used in hierarchical (population) modeling.
Markov chain Monte Carlo estimation of quantum states
NASA Astrophysics Data System (ADS)
Diguglielmo, James; Messenger, Chris; Fiurášek, Jaromír; Hage, Boris; Samblowski, Aiko; Schmidt, Tabea; Schnabel, Roman
2009-03-01
We apply a Bayesian data analysis scheme known as the Markov chain Monte Carlo to the tomographic reconstruction of quantum states. This method yields a vector, known as the Markov chain, which contains the full statistical information concerning all reconstruction parameters including their statistical correlations with no a priori assumptions as to the form of the distribution from which it has been obtained. From this vector we can derive, e.g., the marginal distributions and uncertainties of all model parameters, and also of other quantities such as the purity of the reconstructed state. We demonstrate the utility of this scheme by reconstructing the Wigner function of phase-diffused squeezed states. These states possess non-Gaussian statistics and therefore represent a nontrivial case of tomographic reconstruction. We compare our results to those obtained through pure maximum-likelihood and Fisher information approaches.
NASA Astrophysics Data System (ADS)
Wang, Hongrui; Wang, Cheng; Wang, Ying; Gao, Xiong; Yu, Chen
2017-06-01
This paper presents a Bayesian approach using the Metropolis-Hastings Markov Chain Monte Carlo algorithm and applies this method to daily river flow rate forecasting and uncertainty quantification for the Zhujiachuan River, using data collected from Qiaotoubao Gage Station and 13 other gage stations in the Zhujiachuan watershed in China. The proposed method is also compared with conventional maximum likelihood estimation (MLE) for parameter estimation and quantification of the associated uncertainties. While the Bayesian method performs similarly in estimating the mean value of daily flow rate, it outperforms the conventional MLE method in uncertainty quantification, providing a narrower reliable interval than the MLE confidence interval and thus more precise estimation by using the related information from regional gage stations. The Bayesian MCMC method might be more favorable for uncertainty analysis and risk management.
MC3: Multi-core Markov-chain Monte Carlo code
NASA Astrophysics Data System (ADS)
Cubillos, Patricio; Harrington, Joseph; Lust, Nate; Foster, AJ; Stemm, Madison; Loredo, Tom; Stevenson, Kevin; Campo, Chris; Hardin, Matt; Hardy, Ryan
2016-10-01
MC3 (Multi-core Markov-chain Monte Carlo) is a Bayesian statistics tool that can be executed from the shell prompt or interactively through the Python interpreter with single- or multiple-CPU parallel computing. It offers Markov-chain Monte Carlo (MCMC) posterior-distribution sampling for several algorithms, Levenberg-Marquardt least-squares optimization, and uniform non-informative, Jeffreys non-informative, or Gaussian-informative priors. MC3 can share the same value among multiple parameters and fix the value of parameters to constant values, and offers Gelman-Rubin convergence testing and correlated-noise estimation with time-averaging or wavelet-based likelihood estimation methods.
Link, William A; Barker, Richard J
2005-03-01
We present a hierarchical extension of the Cormack-Jolly-Seber (CJS) model for open population capture-recapture data. In addition to recaptures of marked animals, we model first captures of animals and losses on capture. The parameter set includes capture probabilities, survival rates, and birth rates. The survival rates and birth rates are treated as a random sample from a bivariate distribution, thus the model explicitly incorporates correlation in these demographic rates. A key feature of the model is that the likelihood function, which includes a CJS model factor, is expressed entirely in terms of identifiable parameters; losses on capture can be factored out of the model. Since the computational complexity of classical likelihood methods is prohibitive, we use Markov chain Monte Carlo in a Bayesian analysis. We describe an efficient candidate-generation scheme for Metropolis-Hastings sampling of CJS models and extensions. The procedure is illustrated using mark-recapture data for the moth Gonodontis bidentata.
Link, William A.; Barker, Richard J.
2005-01-01
We present a hierarchical extension of the Cormack–Jolly–Seber (CJS) model for open population capture–recapture data. In addition to recaptures of marked animals, we model first captures of animals and losses on capture. The parameter set includes capture probabilities, survival rates, and birth rates. The survival rates and birth rates are treated as a random sample from a bivariate distribution, thus the model explicitly incorporates correlation in these demographic rates. A key feature of the model is that the likelihood function, which includes a CJS model factor, is expressed entirely in terms of identifiable parameters; losses on capture can be factored out of the model. Since the computational complexity of classical likelihood methods is prohibitive, we use Markov chain Monte Carlo in a Bayesian analysis. We describe an efficient candidate-generation scheme for Metropolis–Hastings sampling of CJS models and extensions. The procedure is illustrated using mark-recapture data for the moth Gonodontis bidentata.
NASA Astrophysics Data System (ADS)
Cheng, Qin-Bo; Chen, Xi; Xu, Chong-Yu; Reinhardt-Imjela, Christian; Schulte, Achim
2014-11-01
In this study, the likelihood functions for uncertainty analysis of hydrological models are compared and improved through the following steps: (1) the equivalent relationship between the Nash-Sutcliffe Efficiency coefficient (NSE) and the likelihood function with Gaussian independent and identically distributed residuals is proved; (2) a new estimation method of the Box-Cox transformation (BC) parameter is developed to improve the effective elimination of the heteroscedasticity of model residuals; and (3) three likelihood functions-NSE, Generalized Error Distribution with BC (BC-GED) and Skew Generalized Error Distribution with BC (BC-SGED)-are applied for SWAT-WB-VSA (Soil and Water Assessment Tool - Water Balance - Variable Source Area) model calibration in the Baocun watershed, Eastern China. Performances of the calibrated models are compared using the observed river discharges and groundwater levels. The results show that the minimum variance constraint can effectively estimate the BC parameter. The form of the likelihood function significantly impacts the calibrated parameters and the simulated high and low flow components. SWAT-WB-VSA with the NSE approach simulates floods well but baseflow poorly, owing to the assumption of a Gaussian error distribution, under which large errors have low probability while small errors around zero are nearly equiprobable. By contrast, SWAT-WB-VSA with the BC-GED or BC-SGED approach mimics baseflow well, as confirmed by the groundwater level simulation. The assumption of skewness of the error distribution may be unnecessary, because all the results of the BC-SGED approach are nearly the same as those of the BC-GED approach.
A Monte Carlo study of Weibull reliability analysis for space shuttle main engine components
NASA Technical Reports Server (NTRS)
Abernethy, K.
1986-01-01
The incorporation of a number of additional capabilities into an existing Weibull analysis computer program and the results of a Monte Carlo simulation study evaluating the usefulness of the Weibull methods for samples with a very small number of failures and extensive censoring are discussed. Since the censoring mechanism inherent in the Space Shuttle Main Engine (SSME) data is hard to analyze, it was decided to use a random censoring model, generating censoring times from a uniform probability distribution. Some of the statistical techniques and computer programs used in the SSME Weibull analysis are described. The previously documented methods were supplemented by adding computer calculations of approximate confidence intervals (using iterative methods) for several parameters of interest. These calculations are based on a likelihood ratio statistic which is asymptotically a chi-squared statistic with one degree of freedom. The assumptions built into the computer simulations are described, as are the simulation program and the techniques used in it. Simulation results are tabulated for various combinations of Weibull shape parameters and numbers of failures in the samples.
Little, Mark P; Kwon, Deukwoo; Zablotska, Lydia B; Brenner, Alina V; Cahoon, Elizabeth K; Rozhko, Alexander V; Polyanskaya, Olga N; Minenko, Victor F; Golovanov, Ivan; Bouville, André; Drozdovitch, Vladimir
2015-01-01
The excess incidence of thyroid cancer in Ukraine and Belarus observed a few years after the Chernobyl accident is considered to be largely the result of 131I released from the reactor. Although the Belarus thyroid cancer prevalence data have been previously analyzed, no account was taken of dose measurement error. We examined dose-response patterns in a thyroid screening prevalence cohort of 11,732 persons aged under 18 at the time of the accident, diagnosed during 1996-2004, who had direct thyroid 131I activity measurement, and were resident in the most radioactively contaminated regions of Belarus. Three methods of dose-error correction (regression calibration, Monte Carlo maximum likelihood, Bayesian Markov Chain Monte Carlo) were applied. There was a statistically significant (p<0.001) increasing dose-response for prevalent thyroid cancer, irrespective of the regression-adjustment method used. Without adjustment for dose errors the excess odds ratio was 1.51 Gy⁻¹ (95% CI 0.53, 3.86), which was reduced by 13% when regression-calibration adjustment was used, to 1.31 Gy⁻¹ (95% CI 0.47, 3.31). A Monte Carlo maximum likelihood method yielded an excess odds ratio of 1.48 Gy⁻¹ (95% CI 0.53, 3.87), about 2% lower than the unadjusted analysis. The Bayesian method yielded a maximum posterior excess odds ratio of 1.16 Gy⁻¹ (95% BCI 0.20, 4.32), 23% lower than the unadjusted analysis. There were borderline significant (p = 0.053-0.078) indications of downward curvature in the dose response, depending on the adjustment method used. There were also borderline significant (p = 0.102) modifying effects of gender on the radiation dose trend, but no significant modification by age at the time of the accident or age at screening (p>0.2). In summary, the relatively small contribution of unshared classical dose error in the current study results in comparatively modest effects on the regression parameters.
A Bayesian approach to parameter and reliability estimation in the Poisson distribution.
NASA Technical Reports Server (NTRS)
Canavos, G. C.
1972-01-01
For life testing procedures, a Bayesian analysis is developed with respect to a random intensity parameter in the Poisson distribution. Bayes estimators are derived for the Poisson parameter and the reliability function based on uniform and gamma prior distributions of that parameter. A Monte Carlo procedure is implemented to make possible an empirical mean-squared error comparison between Bayes and existing minimum variance unbiased, as well as maximum likelihood, estimators. As expected, the Bayes estimators have mean-squared errors that are appreciably smaller than those of the other two.
Wang, Hongrui; Wang, Cheng; Wang, Ying; ...
2017-04-05
This paper presents a Bayesian approach using the Metropolis-Hastings Markov chain Monte Carlo algorithm and applies this method to daily river flow rate forecasting and uncertainty quantification for the Zhujiachuan River, using data collected from Qiaotoubao Gage Station and 13 other gage stations in the Zhujiachuan watershed in China. The proposed method is also compared with conventional maximum likelihood estimation (MLE) for parameter estimation and quantification of the associated uncertainties. While the Bayesian method performs similarly in estimating the mean value of daily flow rate, it outperforms the conventional MLE method in uncertainty quantification, providing a relatively narrower reliable interval than the MLE confidence interval and thus a more precise estimate by using the related information from regional gage stations. As a result, the Bayesian MCMC method might be more favorable in uncertainty analysis and risk management.
On Correlated-noise Analyses Applied to Exoplanet Light Curves
NASA Astrophysics Data System (ADS)
Cubillos, Patricio; Harrington, Joseph; Loredo, Thomas J.; Lust, Nate B.; Blecic, Jasmina; Stemm, Madison
2017-01-01
Time-correlated noise is a significant source of uncertainty when modeling exoplanet light-curve data. A correct assessment of correlated noise is fundamental to determine the true statistical significance of our findings. Here, we review three of the most widely used correlated-noise estimators in the exoplanet field: the time-averaging, residual-permutation, and wavelet-likelihood methods. We argue that the residual-permutation method is unsound for estimating the uncertainty of parameter estimates, and we thus recommend refraining from this method altogether. We characterize the behavior of the time averaging's rms-versus-bin-size curves at bin sizes similar to the total observation duration, which may lead to underestimated uncertainties. For the wavelet-likelihood method, we note errors in the published equations and provide a list of corrections. We further assess the performance of these techniques by injecting and retrieving eclipse signals into synthetic and real Spitzer light curves, analyzing the results in terms of the relative-accuracy and coverage-fraction statistics. Both the time-averaging and wavelet-likelihood methods significantly improve the estimate of the eclipse depth over a white-noise analysis (a Markov-chain Monte Carlo exploration assuming uncorrelated noise). However, the corrections are not perfect when retrieving the eclipse depth from Spitzer data sets: these methods covered the true (injected) depth within the 68% credible region in only ˜45%-65% of the trials. Lastly, we present our open-source model-fitting tool, Multi-Core Markov-Chain Monte Carlo (MC3). This package uses Bayesian statistics to estimate the best-fitting values and the credible regions for the parameters of a (user-provided) model. MC3 is a Python/C code, available at https://github.com/pcubillos/MCcubed.
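A hedged sketch of the time-averaging diagnostic discussed above: bin the residuals at increasing bin sizes and compare the binned rms against the white-noise expectation, whose excess indicates time-correlated noise. This is an illustration, not the MC3 implementation; the finite-sample correction follows the common sqrt(M/(M-1)) convention:

import numpy as np

def rms_vs_binsize(residuals, max_binsize=None):
    # rms of time-binned residuals versus bin size; for pure white
    # noise the rms should fall as sigma / sqrt(binsize).
    n = len(residuals)
    sigma = residuals.std(ddof=1)
    sizes, rms, white = [], [], []
    for size in range(1, (max_binsize or n // 10) + 1):
        nbins = n // size
        binned = residuals[:nbins * size].reshape(nbins, size).mean(axis=1)
        sizes.append(size)
        rms.append(np.sqrt(np.mean(binned**2)))
        white.append(sigma / np.sqrt(size) * np.sqrt(nbins / (nbins - 1.0)))
    return np.array(sizes), np.array(rms), np.array(white)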
On the use of Bayesian Monte-Carlo in evaluation of nuclear data
NASA Astrophysics Data System (ADS)
De Saint Jean, Cyrille; Archier, Pascal; Privas, Edwin; Noguere, Gilles
2017-09-01
As model parameters, necessary ingredients of theoretical models, are not always predicted by theory, a formal mathematical framework associated with the evaluation work is needed to obtain the best set of parameters (resonance parameters, optical models, fission barriers, average widths, multigroup cross sections) with Bayesian statistical inference by comparing theory to experiment. The formal rule of this methodology is to estimate the posterior probability density function of a set of parameters by solving an equation of the following type: pdf(posterior) ∝ pdf(prior) × likelihood. A fitting procedure can thus be seen as an estimation of the posterior probability density of a set of parameters x, given prior information on these parameters and a likelihood that gives the probability density of observing a data set given x. To solve this problem, two major paths could be taken: add approximations and hypotheses and obtain an equation to be solved numerically (minimization of a cost function, the Generalized Least Squares method, referred to as GLS), or use Monte-Carlo sampling of all prior distributions and estimate the final posterior distribution. Monte Carlo methods are a natural solution for Bayesian inference problems. They avoid the approximations present in traditional adjustment procedures based on chi-square minimization and offer alternatives in the choice of probability density distributions for priors and likelihoods. This paper proposes the use of what we call Bayesian Monte Carlo (referred to as BMC in the rest of the manuscript) over the whole energy range, from the thermal and resonance ranges to the continuum, for all nuclear reaction models at these energies. Algorithms based on Monte-Carlo sampling and Markov chains are presented. The objectives of BMC are to provide a reference calculation for validating the GLS calculations and approximations, to test the effects of probability density distributions, and to provide a framework for finding the global minimum if several local minima exist. Applications to resolved resonance, unresolved resonance, and continuum evaluation, as well as multigroup cross section data assimilation, are presented.
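In its simplest non-Markov-chain form, the BMC idea can be sketched as prior sampling plus likelihood weighting; everything below (the toy model, measurement, and prior) is an assumption made for illustration:

import numpy as np

rng = np.random.default_rng(1)
y_obs, sigma = 2.3, 0.5                     # toy measurement and uncertainty

def model(theta):                           # stand-in for a nuclear reaction model
    return theta**2

theta = rng.normal(1.0, 0.5, size=100_000)  # sample the prior pdf
loglike = -0.5 * ((model(theta) - y_obs) / sigma)**2
w = np.exp(loglike - loglike.max())         # likelihood weights
w /= w.sum()
post_mean = np.sum(w * theta)               # posterior mean estimate
post_var = np.sum(w * (theta - post_mean)**2)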
Gaussianization for fast and accurate inference from cosmological data
NASA Astrophysics Data System (ADS)
Schuhmann, Robert L.; Joachimi, Benjamin; Peiris, Hiranya V.
2016-06-01
We present a method to transform multivariate unimodal non-Gaussian posterior probability densities into approximately Gaussian ones via non-linear mappings, such as Box-Cox transformations and generalizations thereof. This permits an analytical reconstruction of the posterior from a point sample, like a Markov chain, and simplifies the subsequent joint analysis with other experiments. This way, a multivariate posterior density can be reported efficiently, by compressing the information contained in Markov Chain Monte Carlo samples. Further, the model evidence integral (i.e., the marginal likelihood) can be computed analytically. This method is analogous to the search for normal parameters in the cosmic microwave background, but is more general. The search for the optimally Gaussianizing transformation is performed computationally through a maximum-likelihood formalism; its quality can be judged by how well the credible regions of the posterior are reproduced. We demonstrate that our method outperforms kernel density estimates in this objective. Further, we select marginal posterior samples from Planck data with several distinct strongly non-Gaussian features, and verify the reproduction of the marginal contours. To demonstrate evidence computation, we Gaussianize the joint distribution of data from weak lensing and baryon acoustic oscillations, for different cosmological models, and find a preference for flat Λ cold dark matter. Comparing with values computed via the Savage-Dickey density ratio and Population Monte Carlo, we find good agreement of our method within the spread of the other two.
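The one-dimensional building block of such Gaussianization is readily available: SciPy fits a Box-Cox transformation by maximum likelihood. This is only a stand-in for the multivariate generalizations the paper develops, with a made-up skewed sample:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sample = rng.lognormal(mean=0.0, sigma=0.6, size=5000)  # skewed toy "posterior" sample

# Maximum-likelihood Box-Cox transform toward Gaussianity
transformed, lam = stats.boxcox(sample)
print(f"optimal lambda = {lam:.3f}")
print("skewness before/after:", stats.skew(sample), stats.skew(transformed))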
Liu, Peigui; Elshall, Ahmed S.; Ye, Ming; ...
2016-02-05
Evaluating the marginal likelihood is the most critical and computationally expensive task when conducting Bayesian model averaging to quantify parametric and model uncertainties. The evaluation is commonly done by using Laplace approximations to evaluate semianalytical expressions of the marginal likelihood, or by using Monte Carlo (MC) methods to evaluate the arithmetic or harmonic mean of a joint likelihood function. This study introduces a new MC method, thermodynamic integration, which has not previously been attempted in environmental modeling. Instead of using samples only from the prior parameter space (as in arithmetic mean evaluation) or the posterior parameter space (as in harmonic mean evaluation), the thermodynamic integration method uses samples generated gradually from the prior to the posterior parameter space. This is done through a path sampling that conducts Markov chain Monte Carlo simulations with different power coefficient values applied to the joint likelihood function. The thermodynamic integration method is evaluated using three analytical functions by comparing it with two variants of the Laplace approximation method and three MC methods, including the nested sampling method recently introduced into environmental modeling. The thermodynamic integration method outperforms the other methods in terms of accuracy, convergence, and consistency. It is also applied to a synthetic case of groundwater modeling with four alternative models. The application shows that model probabilities obtained using the thermodynamic integration method improve the predictive performance of Bayesian model averaging. As a result, the thermodynamic integration method is mathematically rigorous, and its MC implementation is computationally general for a wide range of environmental problems.
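A sketch of the path-sampling estimate on a conjugate Gaussian toy problem, where each power posterior can be sampled exactly and the marginal likelihood is known analytically for checking; the ladder of power coefficients and the model are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(2)
y, sigma, tau = 1.7, 1.0, 2.0                 # datum, noise sd, prior sd

def sample_power_posterior(beta, n=20000):
    # Exact draw from prior(theta) * likelihood(theta)**beta for this toy model.
    prec = 1.0 / tau**2 + beta / sigma**2
    mean = (beta * y / sigma**2) / prec
    return rng.normal(mean, 1.0 / np.sqrt(prec), size=n)

def loglike(theta):
    return -0.5 * np.log(2 * np.pi * sigma**2) - 0.5 * ((y - theta) / sigma)**2

betas = np.linspace(0.0, 1.0, 21)             # power-coefficient ladder
means = np.array([loglike(sample_power_posterior(b)).mean() for b in betas])
log_evidence = np.sum(np.diff(betas) * (means[:-1] + means[1:]) / 2)  # trapezoid rule

# Analytic check: marginally y ~ N(0, sigma**2 + tau**2)
s2 = sigma**2 + tau**2
print(log_evidence, -0.5 * np.log(2 * np.pi * s2) - 0.5 * y**2 / s2)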
NASA Astrophysics Data System (ADS)
Jennings, E.; Madigan, M.
2017-04-01
Given the complexity of modern cosmological parameter inference, where we are faced with non-Gaussian data and noise, correlated systematics, and multi-probe correlated data sets, the Approximate Bayesian Computation (ABC) method is a promising alternative to traditional Markov Chain Monte Carlo approaches in the case where the Likelihood is intractable or unknown. The ABC method is called "Likelihood free" as it avoids explicit evaluation of the Likelihood by using a forward model simulation of the data which can include systematics. We introduce astroABC, an open source ABC Sequential Monte Carlo (SMC) sampler for parameter estimation. A key challenge in astrophysics is the efficient use of large multi-probe datasets to constrain high dimensional, possibly correlated parameter spaces. With this in mind astroABC allows for massive parallelization using MPI, a framework that handles spawning of processes across multiple nodes. A key new feature of astroABC is the ability to create MPI groups with different communicators, one for the sampler and several others for the forward model simulation, which speeds up sampling time considerably. For smaller jobs the Python multiprocessing option is also available. Other key features of this new sampler include: a Sequential Monte Carlo sampler; a method for iteratively adapting tolerance levels; local covariance estimates using scikit-learn's KDTree; modules for specifying an optimal covariance matrix for a component-wise or multivariate normal perturbation kernel and a weighted covariance metric; restart files output frequently so an interrupted sampling run can be resumed at any iteration; output and restart files backed up at every iteration; user defined distance metric and simulation methods; a module for specifying heterogeneous parameter priors including non-standard prior PDFs; a module for specifying a constant, linear, log or exponential tolerance level; and well-documented examples and sample scripts. This code is hosted online at https://github.com/EliseJ/astroABC.
The Joker: A Custom Monte Carlo Sampler for Binary-star and Exoplanet Radial Velocity Data
NASA Astrophysics Data System (ADS)
Price-Whelan, Adrian M.; Hogg, David W.; Foreman-Mackey, Daniel; Rix, Hans-Walter
2017-03-01
Given sparse or low-quality radial velocity measurements of a star, there are often many qualitatively different stellar or exoplanet companion orbit models that are consistent with the data. The consequent multimodality of the likelihood function leads to extremely challenging search, optimization, and Markov chain Monte Carlo (MCMC) posterior sampling over the orbital parameters. Here we create a custom Monte Carlo sampler for sparse or noisy radial velocity measurements of two-body systems that can produce posterior samples for orbital parameters even when the likelihood function is poorly behaved. The six standard orbital parameters for a binary system can be split into four nonlinear parameters (period, eccentricity, argument of pericenter, phase) and two linear parameters (velocity amplitude, barycenter velocity). We capitalize on this by building a sampling method in which we densely sample the prior probability density function (pdf) in the nonlinear parameters and perform rejection sampling using a likelihood function marginalized over the linear parameters. With sparse or uninformative data, the sampling obtained by this rejection sampling is generally multimodal and dense. With informative data, the sampling becomes effectively unimodal but too sparse: in these cases we follow the rejection sampling with standard MCMC. The method produces correct samplings in orbital parameters for data that include as few as three epochs. The Joker can therefore be used to produce proper samplings of multimodal pdfs, which are still informative and can be used in hierarchical (population) modeling. We give some examples that show how the posterior pdf depends sensitively on the number and time coverage of the observations and their uncertainties.
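A stripped-down sketch of the rejection-sampling step described above, for a generic marginalized likelihood; the prior-sampling and likelihood callables are assumptions of the sketch, not The Joker's implementation:

import numpy as np

def rejection_sample(log_marg_like, prior_draw, n_prior=1_000_000, seed=3):
    # Densely sample the prior over the nonlinear parameters and keep each
    # draw with probability proportional to its marginalized likelihood.
    rng = np.random.default_rng(seed)
    samples = prior_draw(rng, n_prior)        # shape (n_prior, n_params)
    ll = log_marg_like(samples)
    keep = np.log(rng.uniform(size=n_prior)) < ll - ll.max()
    return samples[keep]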
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jennings, E.; Madigan, M.
Given the complexity of modern cosmological parameter inference where we are faced with non-Gaussian data and noise, correlated systematics and multi-probe correlated data sets, the Approximate Bayesian Computation (ABC) method is a promising alternative to traditional Markov Chain Monte Carlo approaches in the case where the Likelihood is intractable or unknown. The ABC method is called "Likelihood free" as it avoids explicit evaluation of the Likelihood by using a forward model simulation of the data which can include systematics. We introduce astroABC, an open source ABC Sequential Monte Carlo (SMC) sampler for parameter estimation. A key challenge in astrophysics is the efficient use of large multi-probe datasets to constrain high dimensional, possibly correlated parameter spaces. With this in mind astroABC allows for massive parallelization using MPI, a framework that handles spawning of jobs across multiple nodes. A key new feature of astroABC is the ability to create MPI groups with different communicators, one for the sampler and several others for the forward model simulation, which speeds up sampling time considerably. For smaller jobs the Python multiprocessing option is also available. Other key features include: a Sequential Monte Carlo sampler; a method for iteratively adapting tolerance levels; local covariance estimates using scikit-learn's KDTree; modules for specifying an optimal covariance matrix for a component-wise or multivariate normal perturbation kernel; output and restart files backed up every iteration; user defined metric and simulation methods; a module for specifying heterogeneous parameter priors including non-standard prior PDFs; a module for specifying a constant, linear, log or exponential tolerance level; and well-documented examples and sample scripts. This code is hosted online at https://github.com/EliseJ/astroABC
NASA Technical Reports Server (NTRS)
Bonamente, Massimiliano; Joy, Marshall K.; Carlstrom, John E.; Reese, Erik D.; LaRoque, Samuel J.
2004-01-01
X-ray and Sunyaev-Zel'dovich effect data can be combined to determine the distance to galaxy clusters. High-resolution X-ray data are now available from Chandra, which provides both spatial and spectral information, and Sunyaev-Zel'dovich effect data were obtained from the BIMA and Owens Valley Radio Observatory (OVRO) arrays. We introduce a Markov Chain Monte Carlo procedure for the joint analysis of X-ray and Sunyaev-Zel'dovich effect data. The advantages of this method are its high computational efficiency and the ability to measure simultaneously the probability distribution of all parameters of interest, such as the spatial and spectral properties of the cluster gas, as well as derived quantities such as the distance to the cluster. We demonstrate this technique by applying it to the Chandra X-ray data and the OVRO radio data for the galaxy cluster A611. Comparisons with traditional likelihood ratio methods reveal the robustness of the method. This method will be used in a follow-up paper to determine the distances to a large sample of galaxy clusters.
A Bayesian Alternative for Multi-objective Ecohydrological Model Specification
NASA Astrophysics Data System (ADS)
Tang, Y.; Marshall, L. A.; Sharma, A.; Ajami, H.
2015-12-01
Process-based ecohydrological models combine the study of hydrological, physical, biogeochemical and ecological processes of catchments, and are usually more complex and more heavily parameterized than conceptual hydrological models. Thus, appropriate calibration objectives and model uncertainty analysis are essential for ecohydrological modeling. In recent years, Bayesian inference has become one of the most popular tools for quantifying the uncertainties in hydrological modeling with the development of Markov Chain Monte Carlo (MCMC) techniques. Our study aims to develop appropriate prior distributions and likelihood functions that minimize the model uncertainties and bias within a Bayesian ecohydrological framework. In our study, a formal Bayesian approach is implemented in an ecohydrological model which combines a hydrological model (HyMOD) and a dynamic vegetation model (DVM). Simulations focused on single-objective likelihoods (streamflow or LAI) and multi-objective likelihoods (streamflow and LAI) with different weights are compared. Uniform, weakly informative and strongly informative prior distributions are used in different simulations. The Kullback-Leibler divergence (KLD) is used to measure the (dis)similarity between different priors and the corresponding posterior distributions to examine parameter sensitivity. Results show that different prior distributions can strongly influence posterior distributions for parameters, especially when the available data are limited or parameters are insensitive to the available data. We demonstrate differences in optimized parameters and uncertainty limits in different cases based on multi-objective likelihoods vs. single-objective likelihoods. We also demonstrate the importance of appropriately defining the weights of objectives in multi-objective calibration according to different data types.
Peter, Emanuel K; Pivkin, Igor V; Shea, Joan-Emma
2015-04-14
In Monte-Carlo simulations of protein folding, pathways and folding times depend on the appropriate choice of the Monte-Carlo move or process path. We developed a generalized set of process paths for a hybrid kinetic Monte Carlo-molecular dynamics algorithm, which makes use of a novel constant time-update and allows formation of α-helical and β-stranded secondary structures. We apply our new algorithm to the folding of three different proteins: TrpCage, GB1, and TrpZip4. All three systems are seen to fold within the range of the experimental folding times. For the β-hairpins, we observe that loop formation is the rate-determining process, followed by collapse and formation of the native core. Cluster analysis of both peptides reveals that GB1 folds with equal likelihood along a zipper or a hydrophobic collapse mechanism, while TrpZip4 follows primarily a zipper pathway. The difference observed in the folding behavior of the two proteins can be attributed to the different arrangements of their hydrophobic cores: strongly packed and dry in the case of TrpZip4, and partially hydrated in the case of GB1.
Bayesian analysis of physiologically based toxicokinetic and toxicodynamic models.
Hack, C Eric
2006-04-17
Physiologically based toxicokinetic (PBTK) and toxicodynamic (TD) models of bromate in animals and humans would improve our ability to accurately estimate toxic doses in humans based on available animal studies. These mathematical models are often highly parameterized and must be calibrated in order for the model predictions of internal dose to adequately fit the experimentally measured doses. Highly parameterized models are difficult to calibrate, and it is difficult to obtain accurate estimates of uncertainty or variability in model parameters with commonly used frequentist calibration methods, such as maximum likelihood estimation (MLE) or least-squares error approaches. The Bayesian approach called Markov chain Monte Carlo (MCMC) analysis can be used to successfully calibrate these complex models. Prior knowledge about the biological system and associated model parameters is easily incorporated in this approach in the form of prior parameter distributions, and the distributions are refined or updated using experimental data to generate posterior distributions of parameter estimates. The goal of this paper is to give the non-mathematician a brief description of the Bayesian approach and Markov chain Monte Carlo analysis, how this technique is used in risk assessment, and the issues associated with this approach.
cosmoabc: Likelihood-free inference for cosmology
NASA Astrophysics Data System (ADS)
Ishida, Emille E. O.; Vitenti, Sandro D. P.; Penna-Lima, Mariana; Trindade, Arlindo M.; Cisewski, Jessi; de Souza, Rafael; Cameron, Ewan; Busti, Vinicius C.
2015-05-01
Approximate Bayesian Computation (ABC) enables parameter inference for complex physical systems in cases where the true likelihood function is unknown, unavailable, or computationally too expensive. It relies on the forward simulation of mock data and comparison between observed and synthetic catalogs. cosmoabc is a Python ABC sampler featuring a Population Monte Carlo variation of the original ABC algorithm, which uses an adaptive importance sampling scheme. The code can be coupled to an external simulator to allow incorporation of arbitrary distance and prior functions. When coupled with the numcosmo library, it has been used to estimate posterior probability distributions over cosmological parameters based on measurements of galaxy cluster number counts without computing the likelihood function.
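The core loop that samplers like cosmoabc and astroABC elaborate is short; a minimal rejection-ABC sketch, in which the forward simulator, distance function, prior, and tolerance are all user-supplied assumptions:

import numpy as np

def abc_rejection(observed, simulate, distance, prior_draw,
                  epsilon, n_draws=100_000, seed=4):
    # Keep prior draws whose simulated mock data lie within epsilon of
    # the observation; the kept draws approximate the posterior.
    rng = np.random.default_rng(seed)
    accepted = []
    for _ in range(n_draws):
        theta = prior_draw(rng)
        if distance(simulate(theta, rng), observed) < epsilon:
            accepted.append(theta)
    return np.array(accepted)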
Estimating Model Probabilities using Thermodynamic Markov Chain Monte Carlo Methods
NASA Astrophysics Data System (ADS)
Ye, M.; Liu, P.; Beerli, P.; Lu, D.; Hill, M. C.
2014-12-01
Markov chain Monte Carlo (MCMC) methods are widely used to evaluate model probabilities for quantifying model uncertainty. In the general procedure, MCMC simulations are first conducted for each individual model, and the MCMC parameter samples are then used to approximate the marginal likelihood of the model by calculating the geometric mean of the joint likelihood of the model and its parameters. It has been found that this geometric-mean method suffers from a numerical problem of low convergence rate. A simple test case shows that even millions of MCMC samples are insufficient to yield an accurate estimate of the marginal likelihood. To resolve this problem, a thermodynamic method is used that makes multiple MCMC runs with different values of a heating coefficient between zero and one. When the heating coefficient is zero, the MCMC run is equivalent to a random walk MC in the prior parameter space; when the heating coefficient is one, the MCMC run is the conventional one. For a simple case with an analytical form of the marginal likelihood, the thermodynamic method yields a more accurate estimate than the geometric-mean method. This is also demonstrated for a case of groundwater modeling with four alternative models postulated based on different conceptualizations of a confining layer. This groundwater example shows that model probabilities estimated using the thermodynamic method are more reasonable than those obtained using the geometric method. The thermodynamic method is general and can be used for a wide range of environmental problems for model uncertainty quantification.
Bayesian Framework for Water Quality Model Uncertainty Estimation and Risk Management
A formal Bayesian methodology is presented for integrated model calibration and risk-based water quality management using Bayesian Monte Carlo simulation and maximum likelihood estimation (BMCML). The primary focus is on lucid integration of model calibration with risk-based wat...
LMC: Logarithmantic Monte Carlo
NASA Astrophysics Data System (ADS)
Mantz, Adam B.
2017-06-01
LMC is a Markov Chain Monte Carlo engine in Python that implements adaptive Metropolis-Hastings and slice sampling, as well as the affine-invariant method of Goodman & Weare, in a flexible framework. It can be used for simple problems, but the main use case is problems where expensive likelihood evaluations are provided by less flexible third-party software, which benefit from parallelization across many nodes at the sampling level. The parallel/adaptive methods use communication through MPI, or alternatively by writing/reading files, and mostly follow the approaches pioneered by CosmoMC (ascl:1106.025).
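One of the samplers LMC implements, univariate slice sampling with stepping out (Neal 2003), reduces to a few lines; this is a hedged generic sketch, not LMC's code, and it assumes a well-behaved unimodal log density:

import numpy as np

def slice_sample(log_f, x0, n_steps, w=1.0, seed=5):
    # Univariate slice sampler with stepping-out and shrinkage.
    rng = np.random.default_rng(seed)
    x = x0
    out = np.empty(n_steps)
    for i in range(n_steps):
        log_y = log_f(x) + np.log(rng.uniform())  # slice level under the density
        lo = x - w * rng.uniform()                # randomly placed initial bracket
        hi = lo + w
        while log_f(lo) > log_y:                  # step out until outside the slice
            lo -= w
        while log_f(hi) > log_y:
            hi += w
        while True:                               # sample the bracket, shrink on reject
            x_new = rng.uniform(lo, hi)
            if log_f(x_new) > log_y:
                x = x_new
                break
            if x_new < x:
                lo = x_new
            else:
                hi = x_new
        out[i] = x
    return out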
Bayesian inference based on dual generalized order statistics from the exponentiated Weibull model
NASA Astrophysics Data System (ADS)
Al Sobhi, Mashail M.
2015-02-01
Bayesian estimates of the two parameters and the reliability function of the exponentiated Weibull model are obtained based on dual generalized order statistics (DGOS). Bayesian prediction bounds for future DGOS from the exponentiated Weibull model are also obtained. Symmetric and asymmetric loss functions are considered for the Bayesian computations. Markov chain Monte Carlo (MCMC) methods are used for computing the Bayes estimates and prediction bounds. The results are specialized to the lower record values. Comparisons are made between Bayesian and maximum likelihood estimators via Monte Carlo simulation.
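For the maximum likelihood side of such comparisons, SciPy's exponweib distribution is the exponentiated Weibull family; a hedged illustration with simulated complete data, not the paper's DGOS-based computation:

import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
data = stats.exponweib.rvs(a=1.5, c=2.0, size=500, random_state=rng)

# Maximum likelihood estimates of the two shape parameters,
# with location and scale frozen for a two-parameter fit
a_hat, c_hat, loc, scale = stats.exponweib.fit(data, floc=0, fscale=1)
print(a_hat, c_hat)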
Bayesian Monte Carlo and Maximum Likelihood Approach for ...
Model uncertainty estimation and risk assessment are essential to environmental management and informed decision making on pollution mitigation strategies. In this study, we apply a probabilistic methodology which combines Bayesian Monte Carlo simulation and Maximum Likelihood estimation (BMCML) to calibrate a lake oxygen recovery model. We first derive an analytical solution of the differential equation governing lake-averaged oxygen dynamics as a function of time-variable wind speed. Statistical inferences on model parameters and predictive uncertainty are then drawn by Bayesian conditioning of the analytical solution on observed daily wind speed and oxygen concentration data obtained from an earlier study during two recovery periods on a eutrophic lake in upstate New York. The model is calibrated using oxygen recovery data for one year, and the statistical inferences are validated using recovery data for another year. Compared with an essentially two-step regression-and-optimization approach, the BMCML results are more comprehensive and perform relatively better in predicting the observed temporal dissolved oxygen levels (DO) in the lake. BMCML also produced calibration and validation results comparable with those obtained using the popular Markov Chain Monte Carlo technique (MCMC) and is computationally simpler and easier to implement than MCMC. Next, using the calibrated model, we derive an optimal relationship between liquid film-transfer coefficien
Regression estimators for generic health-related quality of life and quality-adjusted life years.
Basu, Anirban; Manca, Andrea
2012-01-01
To develop regression models for outcomes with truncated supports, such as health-related quality of life (HRQoL) data, and to account for typical features of such data: a skewed distribution, spikes at 1 or 0, and heteroskedasticity. Regression estimators are based on features of the Beta distribution. First, both a single-equation and a two-part model are presented, along with estimation algorithms based on maximum-likelihood, quasi-likelihood, and Bayesian Markov-chain Monte Carlo methods. A novel Bayesian quasi-likelihood estimator is proposed. Second, a simulation exercise is presented to assess the performance of the proposed estimators against ordinary least squares (OLS) regression for a variety of HRQoL distributions encountered in practice. Finally, the performance of the proposed estimators is assessed by using them to quantify the treatment effect on QALYs in the EVALUATE hysterectomy trial. Overall model fit is studied using several goodness-of-fit tests, such as Pearson's correlation test, link and reset tests, and a modified Hosmer-Lemeshow test. The simulation results indicate that the proposed methods are more robust in estimating covariate effects than OLS, especially when the effects are large or the HRQoL distribution has a large spike at 1. Quasi-likelihood techniques are more robust than maximum likelihood estimators. When applied to the EVALUATE trial, all but the maximum likelihood estimators produce unbiased estimates of the treatment effect. One- and two-part Beta regression models provide flexible approaches for regressing outcomes with truncated supports, such as HRQoL, on covariates, after accounting for many idiosyncratic features of the outcome distribution. This work will provide applied researchers with a practical set of tools to model outcomes in cost-effectiveness analysis.
NASA Technical Reports Server (NTRS)
Pierson, W. J.
1982-01-01
The scatterometer on the National Oceanic Satellite System (NOSS) is studied by means of Monte Carlo techniques to determine the effect of two additional antennas for alias (or ambiguity) removal by means of an objective criterion technique and a normalized maximum likelihood estimator. Cells nominally 10 km by 10 km, 10 km by 50 km, and 50 km by 50 km are simulated for winds of 4, 8, 12 and 24 m/s and incidence angles of 29, 39, 47, and 53.5 deg for 15 deg changes in direction. The normalized maximum likelihood estimate (MLE) is correct a large part of the time, but the objective criterion technique is recommended as a reserve, and more quickly computed, procedure. Both methods for alias removal depend on the differences in the present model function at upwind and downwind. For 10 km by 10 km cells, it is found that the MLE method introduces a correlation between wind speed errors and aspect angle (wind direction) errors that can be as high as 0.8 or 0.9, and that the wind direction errors are unacceptably large compared to those obtained for the SASS under similar assumptions.
NASA Astrophysics Data System (ADS)
Lee, Taewoong; Lee, Hyounggun; Lee, Wonho
2015-10-01
This study evaluated the use of Compton imaging technology to monitor prompt gamma rays emitted by 10B in boron neutron capture therapy (BNCT) applied to a computerized human phantom. The Monte Carlo method, including particle-tracking techniques, was used for the simulation. The distribution of prompt gamma rays emitted by the phantom during irradiation with neutron beams is closely associated with the distribution of boron in the phantom. The maximum likelihood expectation maximization (MLEM) method was applied to the information obtained from the detected prompt gamma rays to reconstruct the distribution of the tumor, including the boron uptake regions (BURs). The reconstructed Compton images of the prompt gamma rays were combined with the cross-sectional images of the human phantom. Quantitative analysis of the intensity curves showed that all combined images matched the predetermined conditions of the simulation. The tumors, including the BURs, were distinguishable if they were more than 2 cm apart.
Analysis of capture-recapture models with individual covariates using data augmentation
Royle, J. Andrew
2009-01-01
I consider the analysis of capture-recapture models with individual covariates that influence detection probability. Bayesian analysis of the joint likelihood is carried out using a flexible data augmentation scheme that facilitates analysis by Markov chain Monte Carlo methods, and a simple and straightforward implementation in freely available software. This approach is applied to a study of meadow voles (Microtus pennsylvanicus) in which auxiliary data on a continuous covariate (body mass) are recorded, and it is thought that detection probability is related to body mass. In a second example, the model is applied to an aerial waterfowl survey in which a double-observer protocol is used. The fundamental unit of observation is the cluster of individual birds, and the size of the cluster (a discrete covariate) is used as a covariate on detection probability.
Particle identification algorithms for the PANDA Endcap Disc DIRC
NASA Astrophysics Data System (ADS)
Schmidt, M.; Ali, A.; Belias, A.; Dzhygadlo, R.; Gerhardt, A.; Götzen, K.; Kalicy, G.; Krebs, M.; Lehmann, D.; Nerling, F.; Patsyuk, M.; Peters, K.; Schepers, G.; Schmitt, L.; Schwarz, C.; Schwiening, J.; Traxler, M.; Böhm, M.; Eyrich, W.; Lehmann, A.; Pfaffinger, M.; Uhlig, F.; Düren, M.; Etzelmüller, E.; Föhl, K.; Hayrapetyan, A.; Kreutzfeld, K.; Merle, O.; Rieke, J.; Wasem, T.; Achenbach, P.; Cardinali, M.; Hoek, M.; Lauth, W.; Schlimme, S.; Sfienti, C.; Thiel, M.
2017-12-01
The Endcap Disc DIRC has been developed to provide excellent particle identification for the future PANDA experiment by separating pions and kaons up to a momentum of 4 GeV/c with a separation power of 3 standard deviations in the polar angle region from 5° to 22°. This goal will be achieved using dedicated particle identification algorithms based on likelihood methods, applied in offline analysis and online event filtering. This paper evaluates the resulting PID performance using Monte-Carlo simulations to study basic single-track PID as well as the analysis of complex physics channels. The online reconstruction algorithm has been tested with a Virtex-4 FPGA card and optimized with regard to the resulting constraints.
Avoiding overstating the strength of forensic evidence: Shrunk likelihood ratios/Bayes factors.
Morrison, Geoffrey Stewart; Poh, Norman
2018-05-01
When strength of forensic evidence is quantified using sample data and statistical models, a concern may be raised as to whether the output of a model overestimates the strength of evidence. This is particularly the case when the amount of sample data is small, and hence sampling variability is high. This concern is related to concern about precision. This paper describes, explores, and tests three procedures which shrink the value of the likelihood ratio or Bayes factor toward the neutral value of one. The procedures are: (1) a Bayesian procedure with uninformative priors, (2) use of empirical lower and upper bounds (ELUB), and (3) a novel form of regularized logistic regression. As a benchmark, they are compared with linear discriminant analysis, and in some instances with non-regularized logistic regression. The behaviours of the procedures are explored using Monte Carlo simulated data, and tested on real data from comparisons of voice recordings, face images, and glass fragments.
Liu, Xiang; Peng, Yingwei; Tu, Dongsheng; Liang, Hua
2012-10-30
Survival data with a sizable cure fraction are commonly encountered in cancer research. The semiparametric proportional hazards cure model has been recently used to analyze such data. As seen in the analysis of data from a breast cancer study, a variable selection approach is needed to identify important factors in predicting the cure status and risk of breast cancer recurrence. However, no specific variable selection method for the cure model is available. In this paper, we present a variable selection approach with penalized likelihood for the cure model. The estimation can be implemented easily by combining the computational methods for penalized logistic regression and the penalized Cox proportional hazards models with the expectation-maximization algorithm. We illustrate the proposed approach on data from a breast cancer study. We conducted Monte Carlo simulations to evaluate the performance of the proposed method. We used and compared different penalty functions in the simulation studies.
Spectral likelihood expansions for Bayesian inference
NASA Astrophysics Data System (ADS)
Nagel, Joseph B.; Sudret, Bruno
2016-03-01
A spectral approach to Bayesian inference is presented. It pursues the emulation of the posterior probability density. The starting point is a series expansion of the likelihood function in terms of orthogonal polynomials. From this spectral likelihood expansion all statistical quantities of interest can be calculated semi-analytically. The posterior is formally represented as the product of a reference density and a linear combination of polynomial basis functions. Both the model evidence and the posterior moments are related to the expansion coefficients. This formulation avoids Markov chain Monte Carlo simulation and allows one to make use of linear least squares instead. The pros and cons of spectral Bayesian inference are discussed and demonstrated on the basis of simple applications from classical statistics and inverse modeling.
Profile-likelihood Confidence Intervals in Item Response Theory Models.
Chalmers, R Philip; Pek, Jolynn; Liu, Yang
2017-01-01
Confidence intervals (CIs) are fundamental inferential devices which quantify the sampling variability of parameter estimates. In item response theory, CIs have been primarily obtained from large-sample Wald-type approaches based on standard error estimates, derived from the observed or expected information matrix, after parameters have been estimated via maximum likelihood. An alternative approach to constructing CIs is to quantify sampling variability directly from the likelihood function with a technique known as profile-likelihood confidence intervals (PL CIs). In this article, we introduce PL CIs for item response theory models, compare PL CIs to classical large-sample Wald-type CIs, and demonstrate important distinctions among these CIs. CIs are then constructed for parameters directly estimated in the specified model and for transformed parameters which are often obtained post-estimation. Monte Carlo simulation results suggest that PL CIs perform consistently better than Wald-type CIs for both non-transformed and transformed parameters.
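A generic sketch of the PL CI computation for a one-parameter toy model (so the re-maximization over nuisance parameters is trivial); the data and bracket widths are assumptions, not the authors' implementation:

import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(7)
x = rng.normal(0.4, 1.0, size=50)        # toy data with known unit variance

def loglike(mu):
    return -0.5 * np.sum((x - mu)**2)

mu_hat = x.mean()                        # maximum likelihood estimate
cut = loglike(mu_hat) - stats.chi2.ppf(0.95, df=1) / 2

# The 95% PL CI endpoints are where the log-likelihood crosses the cutoff
lo = optimize.brentq(lambda m: loglike(m) - cut, mu_hat - 5, mu_hat)
hi = optimize.brentq(lambda m: loglike(m) - cut, mu_hat, mu_hat + 5)
print(lo, hi)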
Exact likelihood evaluations and foreground marginalization in low resolution WMAP data
NASA Astrophysics Data System (ADS)
Slosar, Anže; Seljak, Uroš; Makarov, Alexey
2004-06-01
The large scale anisotropies of Wilkinson Microwave Anisotropy Probe (WMAP) data have attracted a lot of attention and have been a source of controversy, with many favorite cosmological models being apparently disfavored by the power spectrum estimates at low l. All the existing analyses of theoretical models are based on approximations for the likelihood function, which are likely to be inaccurate on large scales. Here we present exact evaluations of the likelihood of the low multipoles by direct inversion of the theoretical covariance matrix for low resolution WMAP maps. We project out the unwanted galactic contaminants using the WMAP derived maps of these foregrounds. This improves over the template based foreground subtraction used in the original analysis, which can remove some of the cosmological signal and may lead to a suppression of power. As a result we find an increase in power at low multipoles. For the quadrupole the maximum likelihood values are rather uncertain and vary between 140 and 220 μK². On the other hand, the probability distribution away from the peak is robust and, assuming a uniform prior between 0 and 2000 μK², the probability of having the true value above 1200 μK² (as predicted by the simplest cold dark matter model with a cosmological constant) is 10%, a factor of 2.5 higher than predicted by the WMAP likelihood code. We do not find the correlation function to be unusual beyond the low quadrupole value. We develop a fast likelihood evaluation routine that can be used instead of WMAP routines for low l values. We apply it to the Markov chain Monte Carlo analysis to compare the cosmological parameters between the two cases. The new analysis of WMAP either alone or jointly with the Sloan Digital Sky Survey (SDSS) and the Very Small Array (VSA) data reduces the evidence for running to less than 1σ, giving αs=-0.022±0.033 for the combined case. The new analysis prefers about a 1σ lower value of Ωm, a consequence of an increased integrated Sachs-Wolfe (ISW) effect contribution required by the increase in the spectrum at low l. These results suggest that the details of foreground removal and full likelihood analysis are important for parameter estimation from the WMAP data. They are robust in the sense that they do not change significantly with frequency, mask, or details of foreground template marginalization. The marginalization approach presented here is the most conservative method to remove the foregrounds and should be particularly useful in the analysis of polarization, where foreground contamination may be much more severe.
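The central operation, an exact Gaussian likelihood evaluated against a dense pixel-space covariance, fits in a few lines; a schematic for a generic zero-mean map, not the WMAP pipeline:

import numpy as np

def exact_log_likelihood(data, cov):
    # log N(data | 0, cov) via Cholesky factorization; feasible when a
    # low-resolution map keeps cov to a modest dense matrix.
    L = np.linalg.cholesky(cov)
    z = np.linalg.solve(L, data)                 # whitened data
    log_det = 2.0 * np.sum(np.log(np.diag(L)))
    n = data.size
    return -0.5 * (z @ z + log_det + n * np.log(2 * np.pi))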
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daniel, Scott F.; Linder, Eric V.; Lawrence Berkeley National Laboratory, Berkeley, California
Deviations from general relativity, such as could be responsible for the cosmic acceleration, would influence the growth of large-scale structure and the deflection of light by that structure. We clarify the relations between several different model-independent approaches to deviations from general relativity appearing in the literature, devising a translation table. We examine current constraints on such deviations, using weak gravitational lensing data of the CFHTLS and COSMOS surveys, cosmic microwave background radiation data of WMAP5, and supernova distance data of Union2. A Markov chain Monte Carlo likelihood analysis of the parameters over various redshift ranges yields consistency with general relativity at the 95% confidence level.
Comparing Three Estimation Methods for the Three-Parameter Logistic IRT Model
ERIC Educational Resources Information Center
Lamsal, Sunil
2015-01-01
Different estimation procedures have been developed for the unidimensional three-parameter item response theory (IRT) model. These techniques include the marginal maximum likelihood estimation, the fully Bayesian estimation using Markov chain Monte Carlo simulation techniques, and the Metropolis-Hastings Robbin-Monro estimation. With each…
Maximum Likelihood and Minimum Distance Applied to Univariate Mixture Distributions.
ERIC Educational Resources Information Center
Wang, Yuh-Yin Wu; Schafer, William D.
This Monte-Carlo study compared modified Newton (NW), expectation-maximization algorithm (EM), and minimum Cramer-von Mises distance (MD), used to estimate parameters of univariate mixtures of two components. Data sets were fixed at size 160 and manipulated by mean separation, variance ratio, component proportion, and non-normality. Results…
Estimation Methods for One-Parameter Testlet Models
ERIC Educational Resources Information Center
Jiao, Hong; Wang, Shudong; He, Wei
2013-01-01
This study demonstrated the equivalence between the Rasch testlet model and the three-level one-parameter testlet model and explored the Markov Chain Monte Carlo (MCMC) method for model parameter estimation in WINBUGS. The estimation accuracy from the MCMC method was compared with those from the marginalized maximum likelihood estimation (MMLE)…
Markov Chain Monte Carlo Used in Parameter Inference of Magnetic Resonance Spectra
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hock, Kiel; Earle, Keith
2016-02-06
In this paper, we use Boltzmann statistics and the maximum likelihood distribution derived from Bayes' Theorem to infer parameter values for a Pake Doublet Spectrum, a lineshape of historical significance and contemporary relevance for determining distances between interacting magnetic dipoles. A Metropolis-Hastings Markov Chain Monte Carlo algorithm is implemented and designed to find the optimum parameter set and to estimate parameter uncertainties. The posterior distribution allows us to define a metric on parameter space that induces a geometry with negative curvature, which affects the parameter uncertainty estimates, particularly for spectra with a low signal-to-noise ratio.
Zhu, Xiang; Stephens, Matthew
2017-01-01
Bayesian methods for large-scale multiple regression provide attractive approaches to the analysis of genome-wide association studies (GWAS). For example, they can estimate heritability of complex traits, allowing for both polygenic and sparse models; and by incorporating external genomic data into the priors, they can increase power and yield new biological insights. However, these methods require access to individual genotypes and phenotypes, which are often not easily available. Here we provide a framework for performing these analyses without individual-level data. Specifically, we introduce a “Regression with Summary Statistics” (RSS) likelihood, which relates the multiple regression coefficients to univariate regression results that are often easily available. The RSS likelihood requires estimates of correlations among covariates (SNPs), which can also be obtained from public databases. We perform Bayesian multiple regression analysis by combining the RSS likelihood with previously proposed prior distributions, sampling posteriors by Markov chain Monte Carlo. In a wide range of simulations RSS performs similarly to analyses using the individual data, both for estimating heritability and detecting associations. We apply RSS to a GWAS of human height that contains 253,288 individuals typed at 1.06 million SNPs, for which analyses of individual-level data are practically impossible. Estimates of heritability (52%) are consistent with, but more precise than, previous results using subsets of these data. We also identify many previously unreported loci that show evidence for association with height in our analyses. Software is available at https://github.com/stephenslab/rss. PMID:29399241
Extreme data compression for the CMB
NASA Astrophysics Data System (ADS)
Zablocki, Alan; Dodelson, Scott
2016-04-01
We apply Karhunen-Loève methods to cosmic microwave background (CMB) data sets, and show that we can recover the input cosmology and obtain the marginalized likelihoods in Λ cold dark matter cosmologies in under a minute, much faster than Markov chain Monte Carlo methods. This is achieved by forming a linear combination of the power spectra at each multipole l, and solving a system of simultaneous equations such that the Fisher matrix is locally unchanged. Instead of carrying out a full likelihood evaluation over the whole parameter space, we need evaluate the likelihood only for the parameter of interest, with the data compression effectively marginalizing over all other parameters. The weighting vectors contain insight into the physical effects of the parameters on the CMB anisotropy power spectrum Cl. The shape and amplitude of these vectors give an intuitive feel for the physics of the CMB, the sensitivity of the observed spectrum to cosmological parameters, and the relative sensitivity of different experiments to cosmological parameters. We test this method on exact theory Cl as well as on a Wilkinson Microwave Anisotropy Probe (WMAP)-like CMB data set generated from a random realization of a fiducial cosmology, comparing the compression results to those from a full likelihood analysis using CosmoMC. After showing that the method works, we apply it to the temperature power spectrum from the WMAP seven-year data release, and discuss the successes and limitations of our method as applied to a real data set.
Estimation of submarine mass failure probability from a sequence of deposits with age dates
Geist, Eric L.; Chaytor, Jason D.; Parsons, Thomas E.; ten Brink, Uri S.
2013-01-01
The empirical probability of submarine mass failure is quantified from a sequence of dated mass-transport deposits. Several different techniques are described to estimate the parameters for a suite of candidate probability models. The techniques, previously developed for analyzing paleoseismic data, include maximum likelihood and Type II (Bayesian) maximum likelihood methods derived from renewal process theory and Monte Carlo methods. The estimated mean return time from these methods, unlike estimates from a simple arithmetic mean of the center age dates and standard likelihood methods, includes the effects of age-dating uncertainty and of open time intervals before the first and after the last event. The likelihood techniques are evaluated using Akaike’s Information Criterion (AIC) and Akaike’s Bayesian Information Criterion (ABIC) to select the optimal model. The techniques are applied to mass transport deposits recorded in two Integrated Ocean Drilling Program (IODP) drill sites located in the Ursa Basin, northern Gulf of Mexico. Dates of the deposits were constrained by regional bio- and magnetostratigraphy from a previous study. Results of the analysis indicate that submarine mass failures in this location occur primarily according to a Poisson process in which failures are independent and return times follow an exponential distribution. However, some of the model results suggest that submarine mass failures may occur quasiperiodically at one of the sites (U1324). The suite of techniques described in this study provides quantitative probability estimates of submarine mass failure occurrence, for any number of deposits and age uncertainty distributions.
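For the exponential (Poisson-process) case, the likelihood with open intervals is simple enough to sketch directly; the deposit inter-event times below are invented for illustration, and the age-dating uncertainty treated by Monte Carlo in the study is omitted:

import numpy as np
from scipy import optimize

closed = np.array([12.0, 8.5, 15.2, 9.8])  # inter-event times between dated deposits (kyr, made up)
open_ints = np.array([6.0, 4.5])           # open intervals before first / after last event (kyr)

def neg_loglike(lam):
    # Closed intervals contribute density terms, open intervals survival terms.
    if lam <= 0:
        return np.inf
    return -(len(closed) * np.log(lam) - lam * (closed.sum() + open_ints.sum()))

res = optimize.minimize_scalar(neg_loglike, bounds=(1e-6, 10.0), method="bounded")
print("rate:", res.x, "mean return time (kyr):", 1.0 / res.x)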
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walter, Thorsten
2005-06-17
In this thesis two searches for electroweak single top quark production with the CDF experiment have been presented, a cut-based search and an iterated discriminant analysis. Both searches find no significant evidence for electroweak single top production using a data set corresponding to an integrated luminosity of 162 pb⁻¹ collected with CDF. Therefore, limits on s- and t-channel single top production are determined using a likelihood technique. For the cut-based search, a likelihood function based on lepton charge times pseudorapidity of the non-bottom jet was used if exactly one bottom jet was identified in the event. In case of two identified bottom jets, a likelihood function based on the total number of observed events was used. The systematic uncertainties have been treated in a Bayesian approach; all sources of systematic uncertainty have been integrated out. An improved signal modeling using the MadEvent Monte Carlo program matched to NLO calculations has been used. The obtained limits for the s- and t-channel single top production cross sections are 13.6 pb and 10.1 pb, respectively. To date, these are the most stringent published limits for the s- and t-channel single top quark production modes.
Maximum likelihood estimation for life distributions with competing failure modes
NASA Technical Reports Server (NTRS)
Sidik, S. M.
1979-01-01
Systems that are placed on test at time zero, function for a period, and die at some random time were studied. Failure may be due to one of several causes or modes. The parameters of the life distribution may depend upon the levels of the various stress variables to which the item is subjected. Maximum likelihood estimation methods are discussed. Specific methods are reported for the smallest extreme-value distributions of life. Monte Carlo results indicate the methods to be promising. Under appropriate conditions, the location parameters are nearly unbiased, the scale parameter is slightly biased, and the asymptotic covariances are rapidly approached.
Bayesian experimental design for models with intractable likelihoods.
Drovandi, Christopher C; Pettitt, Anthony N
2013-12-01
In this paper we present a methodology for designing experiments for efficiently estimating the parameters of models with computationally intractable likelihoods. The approach combines a commonly used methodology for robust experimental design, based on Markov chain Monte Carlo sampling, with approximate Bayesian computation (ABC) to ensure that no likelihood evaluations are required. The utility function considered for precise parameter estimation is based upon the precision of the ABC posterior distribution, which we form efficiently via the ABC rejection algorithm based on pre-computed model simulations. Our focus is on stochastic models and, in particular, we investigate the methodology for Markov process models of epidemics and macroparasite population evolution. The macroparasite example involves a multivariate process and we assess the loss of information from not observing all variables.
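The ABC rejection step at the heart of this design approach is simple to sketch; below, a toy Poisson model stands in for the epidemic and macroparasite models of the paper:

    import numpy as np

    rng = np.random.default_rng(1)
    obs = rng.poisson(4.0, size=50)                  # stand-in observed data
    s_obs = np.array([obs.mean(), obs.var()])        # summary statistics

    eps, accepted = 0.5, []
    for _ in range(20000):
        theta = rng.uniform(0.1, 10.0)               # draw from the prior
        sim = rng.poisson(theta, size=50)            # simulate; no likelihood needed
        s_sim = np.array([sim.mean(), sim.var()])
        if np.linalg.norm(s_sim - s_obs) < eps:      # keep draws whose summaries match
            accepted.append(theta)

    post = np.array(accepted)                        # approximate ABC posterior
    print(post.mean(), post.std())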
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dandini, Vincent John; Duran, Felicia Angelica; Wyss, Gregory Dane
2003-09-01
This article describes how features of event tree analysis and Monte Carlo-based discrete event simulation can be combined with concepts from object-oriented analysis to develop a new risk assessment methodology, with some of the best features of each. The resultant object-based event scenario tree (OBEST) methodology enables an analyst to rapidly construct realistic models for scenarios for which an a priori discovery of event ordering is either cumbersome or impossible. Each scenario produced by OBEST is automatically associated with a likelihood estimate because probabilistic branching is integral to the object model definition. The OBEST methodology is then applied to an aviation safety problem that considers mechanisms by which an aircraft might become involved in a runway incursion incident. The resulting OBEST model demonstrates how a close link between human reliability analysis and probabilistic risk assessment methods can provide important insights into aviation safety phenomenology.
NASA Astrophysics Data System (ADS)
Yoon, Ilsang; Weinberg, Martin D.; Katz, Neal
2011-06-01
We introduce a new galaxy image decomposition tool, GALPHAT (GALaxy PHotometric ATtributes), which is a front-end application of the Bayesian Inference Engine (BIE), a parallel Markov chain Monte Carlo package, to provide full posterior probability distributions and reliable confidence intervals for all model parameters. The BIE relies on GALPHAT to compute the likelihood function. GALPHAT generates scale-free cumulative image tables for the desired model family with precise error control. Interpolation of this table yields accurate pixellated images with any centre, scale and inclination angle. GALPHAT then rotates the image by position angle using a Fourier shift theorem, yielding high-speed, accurate likelihood computation. We benchmark this approach using an ensemble of simulated Sérsic model galaxies over a wide range of observational conditions: the signal-to-noise ratio S/N, the ratio of galaxy size to the point spread function (PSF) and the image size, and errors in the assumed PSF; and a range of structural parameters: the half-light radius re and the Sérsic index n. We characterize the strength of parameter covariance in the Sérsic model, which increases with S/N and n, and the results strongly motivate the need for the full posterior probability distribution in galaxy morphology analyses and later inferences. The test results for simulated galaxies successfully demonstrate that, with a careful choice of Markov chain Monte Carlo algorithms and fast model image generation, GALPHAT is a powerful analysis tool for reliably inferring morphological parameters from a large ensemble of galaxies over a wide range of different observational conditions.
Effects of Missing Data Methods in Structural Equation Modeling with Nonnormal Longitudinal Data
ERIC Educational Resources Information Center
Shin, Tacksoo; Davison, Mark L.; Long, Jeffrey D.
2009-01-01
The purpose of this study is to investigate the effects of missing data techniques in longitudinal studies under diverse conditions. A Monte Carlo simulation examined the performance of 3 missing data methods in latent growth modeling: listwise deletion (LD), maximum likelihood estimation using the expectation and maximization algorithm with a…
Monte Carlo Simulation to Estimate Likelihood of Direct Lightning Strikes
NASA Technical Reports Server (NTRS)
Mata, Carlos; Medelius, Pedro
2008-01-01
A software tool has been designed to quantify the lightning exposure of the launch vehicle stack at the pads under different launch-site configurations. In order to predict lightning strikes to generic structures, this model uses leaders whose origins (in the x-y plane) are obtained from a 2D random, normal distribution.
NASA Astrophysics Data System (ADS)
Silva, F. E. O. E.; Naghettini, M. D. C.; Fernandes, W.
2014-12-01
This paper evaluated the uncertainties associated with the estimation of the parameters of a conceptual rainfall-runoff model, through the use of Bayesian inference techniques by Monte Carlo simulation. The Pará River sub-basin, located in the upper São Francisco river basin, in southeastern Brazil, was selected for developing the studies. In this paper, we used the Rio Grande conceptual hydrologic model (EHR/UFMG, 2001) and the Markov Chain Monte Carlo simulation method named DREAM (VRUGT, 2008a). Two probabilistic models for the residuals were analyzed: (i) the classic [Normal likelihood, r ~ N(0, σ²)]; and (ii) a generalized likelihood (SCHOUPS & VRUGT, 2010), in which it is assumed that the differences between observed and simulated flows are correlated, non-stationary, and distributed as a Skew Exponential Power density. The assumptions made for both models were checked to ensure that the estimation of uncertainties in the parameters was not biased. The results showed that the Bayesian approach proved to be adequate for the proposed objectives, reinforcing the importance of assessing the uncertainties associated with hydrological modeling.
NASA Astrophysics Data System (ADS)
Sadegh, M.; Vrugt, J. A.
2013-12-01
The ever-increasing pace of computational power, along with continued advances in measurement technologies and improvements in process understanding, has stimulated the development of increasingly complex hydrologic models that simulate soil moisture flow, groundwater recharge, surface runoff, root water uptake, and river discharge at increasingly finer spatial and temporal scales. Reconciling these system models with field and remote sensing data is a difficult task, particularly because average measures of model/data similarity inherently lack the power to provide a meaningful comparative evaluation of the consistency in model form and function. The very construction of the likelihood function - as a summary variable of the (usually averaged) properties of the error residuals - dilutes and mixes the available information into an index having little remaining correspondence to specific behaviors of the system (Gupta et al., 2008). The quest for a more powerful method for model evaluation has inspired Vrugt and Sadegh [2013] to introduce "likelihood-free" inference as a vehicle for diagnostic model evaluation. This class of methods is also referred to as Approximate Bayesian Computation (ABC) and relaxes the need for an explicit likelihood function in favor of one or multiple different summary statistics rooted in hydrologic theory that together have a much stronger and compelling diagnostic power than some aggregated measure of the size of the error residuals. Here, we will introduce an efficient ABC sampling method that is orders of magnitude faster in exploring the posterior parameter distribution than commonly used rejection and Population Monte Carlo (PMC) samplers. Our methodology uses Markov Chain Monte Carlo simulation with DREAM, and takes advantage of a simple computational trick to resolve discontinuity problems with the application of set-theoretic summary statistics. We will also demonstrate a set of summary statistics that are rather insensitive to errors in the forcing data. This enhances prospects of detecting model structural deficiencies.
Models and analysis for multivariate failure time data
NASA Astrophysics Data System (ADS)
Shih, Joanna Huang
The goal of this research is to develop and investigate models and analytic methods for multivariate failure time data. We compare models in terms of direct modeling of the margins, flexibility of dependency structure, local vs. global measures of association, and ease of implementation. In particular, we study copula models, and models produced by right neutral cumulative hazard functions and right neutral hazard functions. We examine the changes of association over time for families of bivariate distributions induced from these models by displaying their density contour plots, conditional density plots, correlation curves of Doksum et al., and local cross ratios of Oakes. We know that bivariate distributions with the same margins might exhibit quite different dependency structures. In addition to modeling, we study estimation procedures. For copula models, we investigate three estimation procedures. The first procedure is full maximum likelihood. The second procedure is two-stage maximum likelihood. At stage 1, we estimate the parameters in the margins by maximizing the marginal likelihood. At stage 2, we estimate the dependency structure by fixing the margins at the estimated ones. The third procedure is two-stage partially parametric maximum likelihood. It is similar to the second procedure, but we estimate the margins by the Kaplan-Meier estimate. We derive asymptotic properties for these three estimation procedures and compare their efficiency by Monte Carlo simulations and direct computations. For models produced by right neutral cumulative hazards and right neutral hazards, we derive the likelihood and investigate the properties of the maximum likelihood estimates. Finally, we develop goodness of fit tests for the dependency structure in the copula models. We derive a test statistic and its asymptotic properties based on the test of homogeneity of Zelterman and Chen (1988), and a graphical diagnostic procedure based on the empirical Bayes approach. We study the performance of these two methods using actual and computer-generated data.
NASA Astrophysics Data System (ADS)
Degaudenzi, Riccardo; Vanghi, Vieri
1994-02-01
An all-digital Trellis-Coded 8PSK (TC-8PSK) demodulator well suited for VLSI implementation, including maximum likelihood estimation decision-directed (MLE-DD) carrier phase and clock timing recovery, is introduced and analyzed. By simply removing the trellis decoder, the demodulator can efficiently cope with uncoded 8PSK signals. The proposed MLE-DD synchronization algorithm requires one sample per symbol for the phase loop and two samples per symbol for the timing loop. The joint phase and timing discriminator characteristics are analytically derived and numerical results checked by means of computer simulations. An approximate expression for steady-state carrier phase and clock timing mean square error has been derived and successfully checked against simulation findings. Synchronizer deviation from the Cramér-Rao bound is also discussed. Mean acquisition time for the digital synchronizer has also been computed and checked using the Monte Carlo simulation technique. Finally, TC-8PSK digital demodulator performance in terms of bit error rate and mean time to lose lock, including digital interpolators and synchronization loops, is presented.
EVALUATION OF A NEW MEAN SCALED AND MOMENT ADJUSTED TEST STATISTIC FOR SEM.
Tong, Xiaoxiao; Bentler, Peter M
2013-01-01
Recently a new mean scaled and skewness adjusted test statistic was developed for evaluating structural equation models in small samples and with potentially nonnormal data, but this statistic has received only limited evaluation. The performance of this statistic is compared to normal theory maximum likelihood and two well-known robust test statistics. A modification to the Satorra-Bentler scaled statistic is developed for the condition that sample size is smaller than degrees of freedom. The behavior of the four test statistics is evaluated with a Monte Carlo confirmatory factor analysis study that varies seven sample sizes and three distributional conditions obtained using Headrick's fifth-order transformation to nonnormality. The new statistic performs badly in most conditions except under the normal distribution. The goodness-of-fit χ² test based on maximum-likelihood estimation performed well under normal distributions as well as under a condition of asymptotic robustness. The Satorra-Bentler scaled test statistic performed best overall, while the mean scaled and variance adjusted test statistic outperformed the others at small and moderate sample sizes under certain distributional conditions.
Quantifying (dis)agreement between direct detection experiments in a halo-independent way
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feldstein, Brian; Kahlhoefer, Felix, E-mail: brian.feldstein@physics.ox.ac.uk, E-mail: felix.kahlhoefer@physics.ox.ac.uk
We propose an improved method to study recent and near-future dark matter direct detection experiments with small numbers of observed events. Our method determines in a quantitative and halo-independent way whether the experiments point towards a consistent dark matter signal and identifies the best-fit dark matter parameters. To achieve true halo independence, we apply a recently developed method based on finding the velocity distribution that best describes a given set of data. For a quantitative global analysis we construct a likelihood function suitable for small numbers of events, which allows us to determine the best-fit particle physics properties of dark matter considering all experiments simultaneously. Based on this likelihood function we propose a new test statistic that quantifies how well the proposed model fits the data and how large the tension between different direct detection experiments is. We perform Monte Carlo simulations in order to determine the probability distribution function of this test statistic and to calculate the p-value for both the dark matter hypothesis and the background-only hypothesis.
Parsons, Tom
2008-01-01
Paleoearthquake observations often lack enough events at a given site to directly define a probability density function (PDF) for earthquake recurrence. Sites with fewer than 10-15 intervals do not provide enough information to reliably determine the shape of the PDF using standard maximum-likelihood techniques [e.g., Ellsworth et al., 1999]. In this paper I present a method that attempts to fit wide ranges of distribution parameters to short paleoseismic series. From repeated Monte Carlo draws, it becomes possible to quantitatively estimate most likely recurrence PDF parameters, and a ranked distribution of parameters is returned that can be used to assess uncertainties in hazard calculations. In tests on short synthetic earthquake series, the method gives results that cluster around the mean of the input distribution, whereas maximum likelihood methods return the sample means [e.g., NIST/SEMATECH, 2006]. For short series (fewer than 10 intervals), sample means tend to reflect the median of an asymmetric recurrence distribution, possibly leading to an overestimate of the hazard should they be used in probability calculations. Therefore a Monte Carlo approach may be useful for assessing recurrence from limited paleoearthquake records. Further, the degree of functional dependence among parameters like mean recurrence interval and coefficient of variation can be established. The method is described for use with time-independent and time-dependent PDFs, and results from 19 paleoseismic sequences on strike-slip faults throughout the state of California are given.
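A hedged sketch of the Monte Carlo fitting idea for a short interval series, using a lognormal recurrence model; the intervals and parameter ranges are invented, and the paper's full treatment (dating uncertainty, multiple candidate PDFs) is omitted:

    import numpy as np
    from scipy.stats import lognorm

    rng = np.random.default_rng(2)
    intervals = np.array([95.0, 140.0, 70.0, 210.0, 120.0])   # synthetic series (yr)

    draws = []
    for _ in range(20000):
        mu = rng.uniform(50.0, 400.0)           # mean recurrence interval (yr)
        cov = rng.uniform(0.1, 1.5)             # coefficient of variation
        sigma = np.sqrt(np.log(1.0 + cov**2))   # lognormal shape for this CV
        scale = mu / np.exp(sigma**2 / 2.0)     # lognormal scale giving mean mu
        ll = lognorm.logpdf(intervals, s=sigma, scale=scale).sum()
        draws.append((ll, mu, cov))

    draws.sort(reverse=True)                    # ranked distribution of parameters
    print(draws[0])                             # most likely (log-lik, mu, cov)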
On analyzing ordinal data when responses and covariates are both missing at random.
Rana, Subrata; Roy, Surupa; Das, Kalyan
2016-08-01
On many occasions, particularly in biomedical studies, data are unavailable for some responses and covariates. This leads to biased inference when a substantial proportion of responses, covariates, or both are missing. Except in a few situations, methods for missing data have been developed either for missing responses or for missing covariates; comparatively little attention has been directed to accounting for both missing responses and missing covariates, which is partly attributable to the complexity of modeling and computation. This matters because the precise impact of substantial missing data also depends on the association between the two missing-data processes. The real difficulty arises when the responses are ordinal by nature. We develop a joint model that simultaneously takes into account the association between the ordinal response variable and the covariates and that between the missing-data indicators. Such a complex model has been analyzed here using the Markov chain Monte Carlo approach and also the Monte Carlo relative likelihood approach. Their performance in estimating the model parameters in finite samples has been examined. We illustrate the application of these two methods using data from an orthodontic study. Analysis of such data provides some interesting information on human habits.
ERIC Educational Resources Information Center
Gold, Michael S.; Bentler, Peter M.; Kim, Kevin H.
2003-01-01
This article describes a Monte Carlo study of 2 methods for treating incomplete nonnormal data. Skewed, kurtotic data sets conforming to a single structured model, but varying in sample size, percentage of data missing, and missing-data mechanism, were produced. An asymptotically distribution-free available-case (ADFAC) method and structured-model…
Marginal Maximum A Posteriori Item Parameter Estimation for the Generalized Graded Unfolding Model
ERIC Educational Resources Information Center
Roberts, James S.; Thompson, Vanessa M.
2011-01-01
A marginal maximum a posteriori (MMAP) procedure was implemented to estimate item parameters in the generalized graded unfolding model (GGUM). Estimates from the MMAP method were compared with those derived from marginal maximum likelihood (MML) and Markov chain Monte Carlo (MCMC) procedures in a recovery simulation that varied sample size,…
ERIC Educational Resources Information Center
Martin-Fernandez, Manuel; Revuelta, Javier
2017-01-01
This study compares the performance of two recently introduced estimation algorithms, the Metropolis-Hastings Robbins-Monro (MHRM) and the Hamiltonian MCMC (HMC), with two established algorithms in the psychometric literature, marginal maximum likelihood via the EM algorithm (MML-EM) and Markov chain Monte Carlo (MCMC), in the estimation of multidimensional…
ERIC Educational Resources Information Center
Lee, Soo; Suh, Youngsuk
2018-01-01
Lord's Wald test for differential item functioning (DIF) has not been studied extensively in the context of the multidimensional item response theory (MIRT) framework. In this article, Lord's Wald test was implemented using two estimation approaches, marginal maximum likelihood estimation and Bayesian Markov chain Monte Carlo estimation, to detect…
Kim, Hyun Suk; Choi, Hong Yeop; Lee, Gyemin; Ye, Sung-Joon; Smith, Martin B; Kim, Geehyun
2018-03-01
The aim of this work is to develop a gamma-ray/neutron dual-particle imager, based on rotational modulation collimators (RMCs) and pulse shape discrimination (PSD)-capable scintillators, for possible applications in radioactivity monitoring as well as nuclear security and safeguards. A Monte Carlo simulation study was performed to design an RMC system for dual-particle imaging, and modulation patterns were obtained for gamma-ray and neutron sources in various configurations. We applied an image reconstruction algorithm, utilizing the maximum-likelihood expectation-maximization method based on analytical modeling of source-detector configurations, to the Monte Carlo simulation results. Both gamma-ray and neutron source distributions were reconstructed and evaluated in terms of signal-to-noise ratio, showing the viability of developing an RMC-based gamma-ray/neutron dual-particle imager using PSD-capable scintillators.
Many-body optimization using an ab initio Monte Carlo method.
Haubein, Ned C; McMillan, Scott A; Broadbelt, Linda J
2003-01-01
Advances in computing power have made it possible to study solvated molecules using ab initio quantum chemistry. Inclusion of discrete solvent molecules is required to determine geometric information about solute/solvent clusters. Monte Carlo methods are well suited to finding minima in many-body systems, and ab initio methods are applicable to the widest range of systems. A first principles Monte Carlo (FPMC) method was developed to find minima in many-body systems, and emphasis was placed on implementing moves that increase the likelihood of finding minimum energy structures. Partial optimization and molecular interchange moves aid in finding minima and overcome the incomplete sampling that is unavoidable when using ab initio methods. FPMC was validated by studying the boron trifluoride-water system, and then the method was used to examine the methyl carbenium ion in water to demonstrate its application to solvation problems.
Two new methods to fit models for network meta-analysis with random inconsistency effects.
Law, Martin; Jackson, Dan; Turner, Rebecca; Rhodes, Kirsty; Viechtbauer, Wolfgang
2016-07-28
Meta-analysis is a valuable tool for combining evidence from multiple studies. Network meta-analysis is becoming more widely used as a means to compare multiple treatments in the same analysis. However, a network meta-analysis may exhibit inconsistency, whereby the treatment effect estimates do not agree across all trial designs, even after taking between-study heterogeneity into account. We propose two new estimation methods for network meta-analysis models with random inconsistency effects. The model we consider is an extension of the conventional random-effects model for meta-analysis to the network meta-analysis setting and allows for potential inconsistency using random inconsistency effects. Our first new estimation method uses a Bayesian framework with empirically-based prior distributions for both the heterogeneity and the inconsistency variances. We fit the model using importance sampling and thereby avoid some of the difficulties that might be associated with using Markov Chain Monte Carlo (MCMC). However, we confirm the accuracy of our importance sampling method by comparing the results to those obtained using MCMC as the gold standard. The second new estimation method we describe uses a likelihood-based approach, implemented in the metafor package, which can be used to obtain (restricted) maximum-likelihood estimates of the model parameters and profile likelihood confidence intervals of the variance components. We illustrate the application of the methods using two contrasting examples. The first uses all-cause mortality as an outcome, and shows little evidence of between-study heterogeneity or inconsistency. The second uses "ear discharge" as an outcome, and exhibits substantial between-study heterogeneity and inconsistency. Both new estimation methods give results similar to those obtained using MCMC. The extent of heterogeneity and inconsistency should be assessed and reported in any network meta-analysis. Our two new methods can be used to fit models for network meta-analysis with random inconsistency effects. They are easily implemented using the accompanying R code in the Additional file 1. Using these estimation methods, the extent of inconsistency can be assessed and reported.
Bayesian Analysis of Biogeography when the Number of Areas is Large
Landis, Michael J.; Matzke, Nicholas J.; Moore, Brian R.; Huelsenbeck, John P.
2013-01-01
Historical biogeography is increasingly studied from an explicitly statistical perspective, using stochastic models to describe the evolution of species range as a continuous-time Markov process of dispersal between and extinction within a set of discrete geographic areas. The main constraint of these methods is the computational limit on the number of areas that can be specified. We propose a Bayesian approach for inferring biogeographic history that extends the application of biogeographic models to the analysis of more realistic problems that involve a large number of areas. Our solution is based on a “data-augmentation” approach, in which we first populate the tree with a history of biogeographic events that is consistent with the observed species ranges at the tips of the tree. We then calculate the likelihood of a given history by adopting a mechanistic interpretation of the instantaneous-rate matrix, which specifies both the exponential waiting times between biogeographic events and the relative probabilities of each biogeographic change. We develop this approach in a Bayesian framework, marginalizing over all possible biogeographic histories using Markov chain Monte Carlo (MCMC). Besides dramatically increasing the number of areas that can be accommodated in a biogeographic analysis, our method allows the parameters of a given biogeographic model to be estimated and different biogeographic models to be objectively compared. Our approach is implemented in the program BayArea. [ancestral area analysis; Bayesian biogeographic inference; data augmentation; historical biogeography; Markov chain Monte Carlo.] PMID:23736102
Riviere, Marie-Karelle; Ueckert, Sebastian; Mentré, France
2016-10-01
Non-linear mixed effect models (NLMEMs) are widely used for the analysis of longitudinal data. To design these studies, optimal design based on the expected Fisher information matrix (FIM) can be used instead of performing time-consuming clinical trial simulations. In recent years, estimation algorithms for NLMEMs have transitioned from linearization toward more exact higher-order methods. Optimal design, on the other hand, has mainly relied on first-order (FO) linearization to calculate the FIM. Although efficient in general, FO cannot be applied to complex non-linear models and is difficult to use in studies with discrete data. We propose an approach to evaluate the expected FIM in NLMEMs for both discrete and continuous outcomes. We used Markov Chain Monte Carlo (MCMC) to integrate the derivatives of the log-likelihood over the random effects, and Monte Carlo to evaluate its expectation w.r.t. the observations. Our method was implemented in R using Stan, which efficiently draws MCMC samples and calculates partial derivatives of the log-likelihood. Evaluated on several examples, our approach showed good performance, with relative standard errors (RSEs) close to those obtained by simulations. We studied the influence of the number of MC and MCMC samples and computed the uncertainty of the FIM evaluation. We also compared our approach to Adaptive Gaussian Quadrature, Laplace approximation, and FO. Our method is available in the R package MIXFIM and can be used to evaluate the FIM, its determinant with confidence intervals (CIs), and RSEs with CIs.
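The outer Monte Carlo loop of such an FIM evaluation is easy to illustrate: the expected FIM is E[∇logL ∇logLᵀ], approximated by averaging score outer products over simulated datasets. A hedged toy version with a plain normal model (the paper's inner MCMC integration over random effects is omitted):

    import numpy as np

    rng = np.random.default_rng(3)
    mu, sigma, n = 1.0, 0.5, 20

    def score(y):
        # gradient of the N(mu, sigma^2) log-likelihood w.r.t. (mu, sigma)
        return np.array([np.sum(y - mu) / sigma**2,
                         -len(y) / sigma + np.sum((y - mu)**2) / sigma**3])

    n_mc = 5000
    F = np.zeros((2, 2))
    for _ in range(n_mc):
        y = rng.normal(mu, sigma, size=n)   # simulate one dataset
        s = score(y)
        F += np.outer(s, s)
    F /= n_mc                               # MC estimate of the expected FIM

    se = np.sqrt(np.diag(np.linalg.inv(F)))
    print(100 * se / np.array([mu, sigma])) # RSEs (%) of the parameters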
Cosmological parameter estimation using Particle Swarm Optimization
NASA Astrophysics Data System (ADS)
Prasad, J.; Souradeep, T.
2014-03-01
Constraining parameters of a theoretical model from observational data is an important exercise in cosmology. There are many theoretically motivated models, which demand a greater number of cosmological parameters than the standard model of cosmology uses, making the problem of parameter estimation challenging. It is common practice to employ a Bayesian formalism for parameter estimation, for which, in general, the likelihood surface is probed. For the standard cosmological model with six parameters, the likelihood surface is quite smooth and does not have local maxima, and sampling-based methods like the Markov Chain Monte Carlo (MCMC) method are quite successful. However, when there are a large number of parameters or the likelihood surface is not smooth, other methods may be more effective. In this paper, we demonstrate the application of another method, inspired by artificial intelligence, called Particle Swarm Optimization (PSO), for estimating cosmological parameters from Cosmic Microwave Background (CMB) data taken from the WMAP satellite.
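A minimal PSO sketch with standard inertia and acceleration constants (not necessarily the paper's settings); a toy quadratic stands in for the CMB likelihood surface:

    import numpy as np

    rng = np.random.default_rng(4)

    def neg_log_like(theta):                         # toy likelihood surface
        return np.sum((theta - np.array([0.3, 0.7]))**2)

    n_p, w, c1, c2 = 30, 0.7, 1.5, 1.5
    x = rng.uniform(-5, 5, (n_p, 2))                 # particle positions
    v = np.zeros_like(x)                             # particle velocities
    pbest = x.copy()
    pbest_f = np.apply_along_axis(neg_log_like, 1, x)
    g = pbest[np.argmin(pbest_f)]                    # global best

    for _ in range(200):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        f = np.apply_along_axis(neg_log_like, 1, x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[np.argmin(pbest_f)]

    print(g)                                         # swarm's best-fit parameters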
Krishnamoorthy, K; Oral, Evrim
2017-12-01
A standardized likelihood ratio test (SLRT) for testing the equality of means of several log-normal distributions is proposed. The properties of the SLRT, an available modified likelihood ratio test (MLRT), and a generalized variable (GV) test are evaluated by Monte Carlo simulation and compared. Evaluation studies indicate that the SLRT is accurate even for small samples, whereas the MLRT could be quite liberal for some parameter values, and the GV test is in general conservative and less powerful than the SLRT. Furthermore, a closed-form approximate confidence interval for the common mean of several log-normal distributions is developed using the method of variance estimate recovery, and compared with the generalized confidence interval with respect to coverage probabilities and precision. Simulation studies indicate that the proposed confidence interval is accurate and better than the generalized confidence interval in terms of coverage probabilities. The methods are illustrated using two examples.
NASA Astrophysics Data System (ADS)
Elshall, A. S.; Ye, M.; Niu, G. Y.; Barron-Gafford, G.
2016-12-01
Bayesian multimodel inference is increasingly being used in hydrology. Estimating Bayesian model evidence (BME) is of central importance in many Bayesian multimodel analyses such as Bayesian model averaging and model selection. BME is the overall probability of the model in reproducing the data, accounting for the trade-off between goodness-of-fit and model complexity. Yet estimating BME is challenging, especially for high-dimensional problems with complex sampling spaces. Estimating BME using Monte Carlo numerical methods is preferred, as these methods yield higher accuracy than semi-analytical solutions (e.g., Laplace approximations, BIC, KIC, etc.). However, numerical methods are prone to numerical demons arising from underflow and round-off errors. Although a few studies have alluded to this issue, to our knowledge this is the first study that illustrates these numerical demons. We show that finite-precision arithmetic effectively imposes a threshold on likelihood values and on the Metropolis acceptance ratio, which results in trimming parameter regions (when the likelihood function falls below the smallest floating-point number a computer can represent) and in corrupting the empirical measures of the random states of the MCMC sampler (when using the log-likelihood function). We consider two of the most powerful numerical estimators of BME, the path sampling method of thermodynamic integration (TI) and the importance sampling method of steppingstone sampling (SS). We also consider the two most widely used numerical estimators, the prior-sampling arithmetic mean (AM) and the posterior-sampling harmonic mean (HM). We investigate the vulnerability of these four estimators to the numerical demons. Interestingly, the most biased estimator, namely the HM, turned out to be the least vulnerable. While it is generally assumed that AM is a bias-free estimator that will approximate the true BME given sufficient computational effort, we show that arithmetic underflow can hamper AM, resulting in severe underestimation of BME. TI turned out to be the most vulnerable, resulting in BME overestimation. Finally, we show how SS can be made largely invariant to rounding errors, yielding the most accurate and computationally efficient results. These results are useful for Monte Carlo simulations that estimate Bayesian model evidence.
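The underflow problem and its standard remedy can be demonstrated in a few lines: naively averaging likelihoods collapses to zero once log-likelihoods are strongly negative, while working in log space via log-sum-exp keeps the arithmetic-mean estimator finite. A hedged sketch with invented log-likelihood values:

    import numpy as np
    from scipy.special import logsumexp

    rng = np.random.default_rng(5)
    # log-likelihoods of prior samples; magnitudes like these are routine
    # in high-dimensional problems
    ll = rng.normal(-800.0, 10.0, size=10000)

    naive = np.mean(np.exp(ll))                  # underflows to exactly 0.0
    log_bme = logsumexp(ll) - np.log(ll.size)    # stable log of the AM estimate
    print(naive, log_bme)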
Grummer, Jared A; Bryson, Robert W; Reeder, Tod W
2014-03-01
Current molecular methods of species delimitation are limited by the types of species delimitation models and scenarios that can be tested. Bayes factors allow for more flexibility in testing non-nested species delimitation models and hypotheses of individual assignment to alternative lineages. Here, we examined the efficacy of Bayes factors in delimiting species through simulations and empirical data from the Sceloporus scalaris species group. Marginal-likelihood scores of competing species delimitation models, from which Bayes factor values were compared, were estimated with four different methods: harmonic mean estimation (HME), smoothed harmonic mean estimation (sHME), path-sampling/thermodynamic integration (PS), and stepping-stone (SS) analysis. We also performed model selection using a posterior simulation-based analog of the Akaike information criterion through Markov chain Monte Carlo analysis (AICM). Bayes factor species delimitation results from the empirical data were then compared with results from the reversible-jump MCMC (rjMCMC) coalescent-based species delimitation method Bayesian Phylogenetics and Phylogeography (BP&P). Simulation results show that HME and sHME perform poorly compared with PS and SS marginal-likelihood estimators when identifying the true species delimitation model. Furthermore, Bayes factor delimitation (BFD) of species showed improved performance when species limits are tested by reassigning individuals between species, as opposed to either lumping or splitting lineages. In the empirical data, BFD through PS and SS analyses, as well as the rjMCMC method, each provide support for the recognition of all scalaris group taxa as independent evolutionary lineages. Bayes factor species delimitation and BP&P also support the recognition of three previously undescribed lineages. In both simulated and empirical data sets, harmonic and smoothed harmonic mean marginal-likelihood estimators provided much higher marginal-likelihood estimates than PS and SS estimators. The AICM displayed poor repeatability in both simulated and empirical data sets, and produced inconsistent model rankings across replicate runs with the empirical data. Our results suggest that species delimitation through the use of Bayes factors with marginal-likelihood estimates via PS or SS analyses provide a useful and complementary alternative to existing species delimitation methods.
Candel, Math J J M; Van Breukelen, Gerard J P
2010-06-30
Adjustments of sample size formulas are given for varying cluster sizes in cluster randomized trials with a binary outcome when testing the treatment effect with mixed effects logistic regression using second-order penalized quasi-likelihood estimation (PQL). Starting from first-order marginal quasi-likelihood (MQL) estimation of the treatment effect, the asymptotic relative efficiency of unequal versus equal cluster sizes is derived. A Monte Carlo simulation study shows this asymptotic relative efficiency to be rather accurate for realistic sample sizes, when employing second-order PQL. An approximate, simpler formula is presented to estimate the efficiency loss due to varying cluster sizes when planning a trial. In many cases sampling 14 per cent more clusters is sufficient to repair the efficiency loss due to varying cluster sizes. Since current closed-form formulas for sample size calculation are based on first-order MQL, planning a trial also requires a conversion factor to obtain the variance of the second-order PQL estimator. In a second Monte Carlo study, this conversion factor turned out to be 1.25 at most.
NASA Technical Reports Server (NTRS)
Pierson, W. J., Jr.
1984-01-01
Backscatter measurements at upwind and crosswind are simulated for five incidence angles by means of the SASS-1 model function. The effects of communication noise and attitude errors are simulated by Monte Carlo methods, and the winds are recovered by both the Sum of Squares (SOS) algorithm and a Maximum Likelihood Estimator (MLE). The SOS algorithm is shown to fail for light enough winds at all incidence angles and to fail to show areas of calm because backscatter estimates that were negative or that produced incorrect values of Kp greater than one were discarded. The MLE performs well for all input backscatter estimates and returns calm when both are negative. The use of the SOS algorithm is shown to have introduced errors in the SASS-1 model function that, in part, cancel out the errors that result from using it, but that also cause disagreement with other data sources such as the AAFE circle flight data at light winds. Implications for future scatterometer systems are given.
Light-quark and gluon jet discrimination in pp collisions at √s = 7 TeV with the ATLAS detector
Aad, G.; Abbott, B.; Abdallah, J.; ...
2014-08-21
A likelihood-based discriminant for the identification of quark- and gluon-initiated jets is built and validated using 4.7 fb⁻¹ of proton–proton collision data at √s = 7 TeV collected with the ATLAS detector at the LHC. Data samples with enriched quark or gluon content are used in the construction and validation of templates of jet properties that are the input to the likelihood-based discriminant. The discriminating power of the jet tagger is established in both data and Monte Carlo samples within a systematic uncertainty of ≈ 10–20%. In data, light-quark jets can be tagged with an efficiency of ≈ 50% while achieving a gluon-jet mis-tag rate of ≈ 25% in a pT range between 40 GeV and 360 GeV for jets in the acceptance of the tracker. The rejection of gluon-jets found in the data is significantly below what is attainable using a Pythia 6 Monte Carlo simulation, where gluon-jet mis-tag rates of 10% can be reached for a 50% selection efficiency of light-quark jets using the same jet properties.
Teleradiology from the provider's perspective-cost analysis for a mid-size university hospital.
Rosenberg, Christian; Kroos, Kristin; Rosenberg, Britta; Hosten, Norbert; Flessa, Steffen
2013-08-01
Real costs of teleradiology services have not been systematically calculated, and pricing policies are not evidence-based. This study aims to prove the feasibility of performing an original cost analysis for teleradiology services and to show break-even points for cost-effective practice. Based on the teleradiology services provided by the Greifswald University Hospital in northeastern Germany, a detailed process analysis and an activity-based costing model revealed costs per service unit according to eight examination categories. The Monte Carlo method was used to simulate the cost amplitude and identify pricing thresholds. Twenty-two sub-processes and four staff categories were identified. The average working time for one unit was 55 min (x-ray) to 72 min (whole-body CT). Personnel costs were dominant (up to 68%), representing lower limit costs. The Monte Carlo method showed the cost distribution per category according to the deficiency risk. Avoiding deficient pricing with a likelihood of 90% increased the cost of a cranial CT almost twofold as compared with the lower limit cost. Original cost analysis is possible when providing teleradiology services with complex statutory requirements in place. Methodology and results provide useful data to help enhance efficiency in hospital management as well as implement realistic reimbursement fees. • Analysis of original costs of teleradiology is possible for a providing hospital • Results discriminate pricing thresholds and lower limit costs to perform cost-effective practice • The study methods represent a managing tool to enhance efficiency in providing facilities • The data are useful to help represent telemedicine services in regular medical fee schedules.
Bayesian Computation for Log-Gaussian Cox Processes: A Comparative Analysis of Methods
Teng, Ming; Nathoo, Farouk S.; Johnson, Timothy D.
2017-01-01
The Log-Gaussian Cox Process is a commonly used model for the analysis of spatial point pattern data. Fitting this model is difficult because of its doubly-stochastic property, i.e., it is a hierarchical combination of a Poisson process at the first level and a Gaussian process at the second level. Various methods have been proposed to estimate such a process, including traditional likelihood-based approaches as well as Bayesian methods. We focus here on Bayesian methods and several approaches that have been considered for model fitting within this framework, including Hamiltonian Monte Carlo, the integrated nested Laplace approximation, and variational Bayes. We consider these approaches and make comparisons with respect to statistical and computational efficiency. These comparisons are made through several simulation studies as well as through two applications, the first examining ecological data and the second involving neuroimaging data. PMID:29200537
SPOTting model parameters using a ready-made Python package
NASA Astrophysics Data System (ADS)
Houska, Tobias; Kraft, Philipp; Breuer, Lutz
2015-04-01
The selection and parameterization of reliable process descriptions in ecological modelling is driven by several uncertainties. The procedure is highly dependent on various criteria, such as the algorithm used, the likelihood function selected, and the definition of the prior parameter distributions. A wide variety of tools have been developed in the past decades to optimize parameters. Some of the tools are closed source. Due to this, the choice of a specific parameter estimation method is sometimes more dependent on its availability than on its performance. A toolbox with a large set of methods can support users in deciding about the most suitable method. Further, it enables testing and comparing different methods. We developed SPOT (Statistical Parameter Optimization Tool), an open source Python package containing a comprehensive set of modules to analyze and optimize parameters of (environmental) models. SPOT comes with a selected set of algorithms for parameter optimization and uncertainty analyses (Monte Carlo, MC; Latin Hypercube Sampling, LHS; Maximum Likelihood, MLE; Markov Chain Monte Carlo, MCMC; Shuffled Complex Evolution, SCE-UA; Differential Evolution Markov Chain, DE-MCZ), together with several likelihood functions (Bias, (log-) Nash-Sutcliffe model efficiency, Correlation Coefficient, Coefficient of Determination, Covariance, (Decomposed-, Relative-, Root-) Mean Squared Error, Mean Absolute Error, Agreement Index) and prior distributions (Binomial, Chi-Square, Dirichlet, Exponential, Laplace, (log-, multivariate-) Normal, Pareto, Poisson, Cauchy, Uniform, Weibull) to sample from. The model-independent structure makes it suitable for analyzing a wide range of applications. We apply all algorithms of the SPOT package in three different case studies. Firstly, we investigate the response of the Rosenbrock function, where the MLE algorithm shows its strengths. Secondly, we study the Griewank function, which has a challenging response surface for optimization methods. Here we see simple algorithms like MCMC struggling to find the global optimum of the function, while algorithms like SCE-UA and DE-MCZ show their strengths. Thirdly, we apply an uncertainty analysis to a one-dimensional physically based hydrological model built with the Catchment Modelling Framework (CMF). The model is driven by meteorological and groundwater data from a Free Air Carbon Enrichment (FACE) experiment in Linden (Hesse, Germany). Simulation results are evaluated with measured soil moisture data. We search for optimal parameter sets of the van Genuchten-Mualem function and find different, equally optimal solutions with some of the algorithms. The case studies reveal that the implemented SPOT methods work sufficiently well. They further show the benefit of having one tool at hand that includes a number of parameter search methods, likelihood functions and a priori parameter distributions within one platform-independent package.
Stan : A Probabilistic Programming Language
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carpenter, Bob; Gelman, Andrew; Hoffman, Matthew D.
Stan is a probabilistic programming language for specifying statistical models. A Stan program imperatively defines a log probability function over parameters conditioned on specified data and constants. As of version 2.14.0, Stan provides full Bayesian inference for continuous-variable models through Markov chain Monte Carlo methods such as the No-U-Turn sampler, an adaptive form of Hamiltonian Monte Carlo sampling. Penalized maximum likelihood estimates are calculated using optimization methods such as the limited memory Broyden-Fletcher-Goldfarb-Shanno algorithm. Stan is also a platform for computing log densities and their gradients and Hessians, which can be used in alternative algorithms such as variational Bayes, expectation propagation, and marginal inference using approximate integration. To this end, Stan is set up so that the densities, gradients, and Hessians, along with intermediate quantities of the algorithm such as acceptance probabilities, are easily accessible. Stan can also be called from the command line using the cmdstan package, through R using the rstan package, and through Python using the pystan package. All three interfaces support sampling and optimization-based inference with diagnostics and posterior analysis. rstan and pystan also provide access to log probabilities, gradients, Hessians, parameter transforms, and specialized plotting.
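A hedged minimal example of the Python route mentioned above, using the pystan 2.x-style interface (the interface differs in later PyStan versions):

    import pystan

    model_code = """
    data { int<lower=0> N; vector[N] y; }
    parameters { real mu; real<lower=0> sigma; }
    model { y ~ normal(mu, sigma); }
    """

    sm = pystan.StanModel(model_code=model_code)     # compile the Stan program
    fit = sm.sampling(data={"N": 5, "y": [1.2, 0.4, 2.1, 0.9, 1.5]},
                      iter=2000, chains=4)           # NUTS sampling
    print(fit)                                       # posterior summary table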
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, B; Georgia Institute of Technology, Atlanta, GA; Wang, C
Purpose: To correlate the damage produced by particles of different types and qualities to cell survival on the basis of nanodosimetric analysis and advanced DNA structures in the cell nucleus. Methods: A Monte Carlo code was developed to simulate subnuclear DNA chromatin fibers (CFs) of 30 nm utilizing a mean-free-path approach common to radiation transport. The cell nucleus was modeled as a spherical region containing 6000 chromatin-dense domains (CDs) of 400 nm diameter, with additional CFs modeled in a sparser interchromatin region. The Geant4-DNA code was utilized to produce a particle track database representing various particles at different energies and dose quantities. These tracks were used to stochastically position the DNA structures based on their mean free path to interaction with CFs. Excitation and ionization events intersecting CFs were analyzed using the DBSCAN clustering algorithm for assessment of the likelihood of producing DSBs. Simulated DSBs were then assessed based on their proximity to one another for a probability of inducing cell death. Results: Variations in energy deposition to chromatin fibers match expectations based on differences in particle track structure. The quality of damage to CFs based on different particle types indicates more severe damage by high-LET radiation than by low-LET radiation of the same particle type. In addition, the model indicates more severe damage by protons than by alpha particles of the same LET, which is consistent with differences in their track structure. Cell survival curves have been produced showing the L-Q behavior of sparsely ionizing radiation. Conclusion: Initial results indicate the feasibility of producing cell survival curves based on the Monte Carlo cell nucleus method. Accurate correlation of simulated DNA damage to cell survival on the basis of nanodosimetric analysis can provide insight into the biological responses to various radiation types. Current efforts are directed at producing cell survival curves for high-LET radiation.
NASA Astrophysics Data System (ADS)
Sadegh, Mojtaba; Ragno, Elisa; AghaKouchak, Amir
2017-06-01
We present a newly developed Multivariate Copula Analysis Toolbox (MvCAT) which includes a wide range of copula families with different levels of complexity. MvCAT employs a Bayesian framework with a residual-based Gaussian likelihood function for inferring copula parameters and estimating the underlying uncertainties. The contribution of this paper is threefold: (a) providing a Bayesian framework to approximate the predictive uncertainties of fitted copulas, (b) introducing a hybrid-evolution Markov Chain Monte Carlo (MCMC) approach designed for numerical estimation of the posterior distribution of copula parameters, and (c) enabling the community to explore a wide range of copulas and evaluate them relative to the fitting uncertainties. We show that the commonly used local optimization methods for copula parameter estimation often get trapped in local minima. The proposed method, however, addresses this limitation and improves describing the dependence structure. MvCAT also enables evaluation of uncertainties relative to the length of record, which is fundamental to a wide range of applications such as multivariate frequency analysis.
Chen, Shuhang; Liu, Huafeng; Shi, Pengcheng; Chen, Yunmei
2015-01-21
Accurate and robust reconstruction of the radioactivity concentration is of great importance in positron emission tomography (PET) imaging. Given the Poisson nature of photon-counting measurements, we present a reconstruction framework that integrates a sparsity penalty on a dictionary into a maximum likelihood estimator. Patch sparsity on a dictionary provides the regularization, and iterative procedures are used to solve the maximum likelihood function formulated on Poisson statistics. Specifically, in our formulation, a dictionary could be trained on CT images to provide intrinsic anatomical structures for the reconstructed images, or adaptively learned from the noisy measurements of PET. The accuracy of the strategy is demonstrated with very promising results from Monte Carlo simulations and real data.
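The maximum-likelihood backbone here is the classic ML-EM update for Poisson data; a hedged numpy sketch with a random stand-in system matrix and without the paper's dictionary-sparsity penalty:

    import numpy as np

    rng = np.random.default_rng(6)
    n_pix, n_det = 64, 256
    A = rng.random((n_det, n_pix))                   # stand-in system matrix
    x_true = 10.0 * rng.random(n_pix)
    y = rng.poisson(A @ x_true)                      # Poisson count data

    x = np.ones(n_pix)
    sens = A.T @ np.ones(n_det)                      # sensitivity image A^T 1
    for _ in range(50):
        ratio = y / np.maximum(A @ x, 1e-12)
        x *= (A.T @ ratio) / sens                    # multiplicative ML-EM update

    print(np.corrcoef(x, x_true)[0, 1])              # rough agreement check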
What are hierarchical models and how do we analyze them?
Royle, Andy
2016-01-01
In this chapter we provide a basic definition of hierarchical models and introduce the two canonical hierarchical models in this book: site occupancy and N-mixture models. The former is a hierarchical extension of logistic regression and the latter is a hierarchical extension of Poisson regression. We introduce basic concepts of probability modeling and statistical inference including likelihood and Bayesian perspectives. We go through the mechanics of maximizing the likelihood and characterizing the posterior distribution by Markov chain Monte Carlo (MCMC) methods. We give a general perspective on topics such as model selection and assessment of model fit, although we demonstrate these topics in practice in later chapters (especially Chapters 5, 6, 7, and 10).
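A sketch of maximizing the marginal likelihood of the site-occupancy model named above, on simulated data; the logit parameterization and the Nelder-Mead optimizer are implementation choices for illustration, not the book's code.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n_sites, J = 300, 4
psi_true, p_true = 0.6, 0.4
z = rng.binomial(1, psi_true, n_sites)        # latent occupancy state
y = rng.binomial(J, p_true * z)               # detections per site over J visits

def neg_log_like(params):
    psi, p = 1.0 / (1.0 + np.exp(-params))    # logit scale -> probabilities
    # Marginal likelihood per site (binomial coefficient dropped: it is
    # constant in the parameters): occupied plus never-occupied components.
    lik = psi * p ** y * (1 - p) ** (J - y) + (1 - psi) * (y == 0)
    return -np.sum(np.log(lik))

fit = minimize(neg_log_like, x0=np.zeros(2), method="Nelder-Mead")
psi_hat, p_hat = 1.0 / (1.0 + np.exp(-fit.x))
print(f"psi_hat = {psi_hat:.2f}, p_hat = {p_hat:.2f}")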
NASA Astrophysics Data System (ADS)
Heavens, A. F.; Seikel, M.; Nord, B. D.; Aich, M.; Bouffanais, Y.; Bassett, B. A.; Hobson, M. P.
2014-12-01
The Fisher Information Matrix formalism (Fisher 1935) is extended to cases where the data are divided into two parts (X, Y), where the expectation value of Y depends on X according to some theoretical model, and X and Y both have errors with arbitrary covariance. In the simplest case, (X, Y) represent data pairs of abscissa and ordinate, in which case the analysis deals with the case of data pairs with errors in both coordinates, but X can be any measured quantities on which Y depends. The analysis applies for arbitrary covariance, provided all errors are Gaussian, and provided the errors in X are small, both in comparison with the scale over which the expected signal Y changes, and with the width of the prior distribution. This generalizes the Fisher Matrix approach, which normally only considers errors in the `ordinate' Y. In this work, we include errors in X by marginalizing over latent variables, effectively employing a Bayesian hierarchical model, and deriving the Fisher Matrix for this more general case. The methods here also extend to likelihood surfaces which are not Gaussian in the parameter space, and so techniques such as DALI (Derivative Approximation for Likelihoods) can be generalized straightforwardly to include arbitrary Gaussian data error covariances. For simple mock data and theoretical models, we compare to Markov Chain Monte Carlo experiments, illustrating the method with cosmological supernova data. We also include the new method in the FISHER4CAST software.
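A sketch of a Fisher forecast for data pairs with Gaussian errors in both coordinates, in the small-x-error limit the abstract assumes, using the simple "effective variance" shortcut for a straight-line model. This illustrates the setting only; it is not the paper's full latent-variable derivation.

import numpy as np

x = np.linspace(0.0, 10.0, 25)
sig_x, sig_y = 0.1, 0.5
m_fid, b_fid = 2.0, 1.0                 # fiducial slope and intercept

# Effective variance: small x errors propagate through the model slope
# dY/dX = m, inflating the y-error budget.
sig_eff2 = sig_y ** 2 + (m_fid * sig_x) ** 2

# Derivatives of the model Y = m x + b with respect to (m, b).
dY = np.stack([x, np.ones_like(x)])     # shape (2, n)

F = (dY / sig_eff2) @ dY.T              # Fisher matrix, diagonal covariance
cov = np.linalg.inv(F)                  # forecast parameter covariance
print("sigma(m) =", np.sqrt(cov[0, 0]), " sigma(b) =", np.sqrt(cov[1, 1]))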
Piecewise exponential survival times and analysis of case-cohort data.
Li, Yan; Gail, Mitchell H; Preston, Dale L; Graubard, Barry I; Lubin, Jay H
2012-06-15
Case-cohort designs select a random sample of a cohort to be used as controls, with cases arising from follow-up of the cohort. Analyses of case-cohort studies with time-varying exposures that use Cox partial likelihood methods can be computationally intensive. We propose a piecewise-exponential approach in which Poisson regression model parameters are estimated from a pseudolikelihood and the corresponding variances are derived by applying Taylor linearization methods that are used in survey research. The proposed approach is evaluated using Monte Carlo simulations. An illustration is provided using data from the Alpha-Tocopherol, Beta-Carotene Cancer Prevention Study of male smokers in Finland, in which a case-cohort study of serum glucose level and pancreatic cancer was analyzed. Copyright © 2012 John Wiley & Sons, Ltd.
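A sketch of the piecewise-exponential idea underlying the proposed approach: split follow-up time into intervals and estimate a constant hazard per interval as events over person-time, which coincides with the Poisson-regression MLE when only interval effects are present. The event times, censoring time, and cut points are invented.

import numpy as np

rng = np.random.default_rng(4)
n = 2000
t = rng.exponential(scale=5.0, size=n)         # toy event times (true hazard 0.2)
event = t < 8.0                                # administrative censoring at t = 8
t = np.minimum(t, 8.0)

cuts = np.array([0.0, 2.0, 4.0, 6.0, 8.0])     # interval boundaries
for k in range(len(cuts) - 1):
    lo, hi = cuts[k], cuts[k + 1]
    # Person-time each subject contributes to this interval.
    pt = np.clip(t, lo, hi) - lo
    # Events occurring inside the interval.
    d = np.sum(event & (t > lo) & (t <= hi))
    print(f"({lo},{hi}]: hazard = {d / pt.sum():.3f}  (true 0.200)")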
Applying Multivariate Discrete Distributions to Genetically Informative Count Data.
Kirkpatrick, Robert M; Neale, Michael C
2016-03-01
We present a novel method of conducting biometric analysis of twin data when the phenotypes are integer-valued counts, which often show an L-shaped distribution. Monte Carlo simulation is used to compare five likelihood-based approaches to modeling: our multivariate discrete method, both when its distributional assumptions are correct and when they are incorrect, and three other methods in common use. With data simulated from a skewed discrete distribution, recovery of twin correlations and of the proportions of additive genetic and common environmental variance was generally poor for the Normal, Lognormal and Ordinal models, but good for the two discrete models. Sex-separate applications to substance-use data from twins in the Minnesota Twin Family Study showed superior performance of the two discrete models. The new methods are implemented using R and OpenMx and are freely available.
Flood Frequency Analysis With Historical and Paleoflood Information
NASA Astrophysics Data System (ADS)
Stedinger, Jery R.; Cohn, Timothy A.
1986-05-01
An investigation is made of flood quantile estimators which can employ "historical" and paleoflood information in flood frequency analyses. Two categories of historical information are considered: "censored" data, where the magnitudes of historical flood peaks are known; and "binomial" data, where only threshold exceedance information is available. A Monte Carlo study employing the two-parameter lognormal distribution shows that maximum likelihood estimators (MLEs) can extract the equivalent of an additional 10-30 years of gage record from a 50-year period of historical observation. The MLE routines are shown to be substantially better than an adjusted-moment estimator similar to the one recommended in Bulletin 17B of the United States Water Resources Council Hydrology Committee (1982). The MLE methods performed well even when floods were drawn from other than the assumed lognormal distribution.
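A sketch of a lognormal MLE augmented with "binomial" historical information of the kind described above: a gaged record is combined with the count of threshold exceedances in h historical years. The record length, threshold, and optimizer are illustrative choices, not the study's setup.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(5)
mu, sigma = 5.0, 0.6                        # log-space parameters of the floods
gage = rng.lognormal(mu, sigma, size=50)    # 50-year systematic record
h = 100                                     # historical period, years
T = np.exp(mu + 2.0 * sigma)                # high perception threshold
k = rng.binomial(h, 1 - norm.cdf(2.0))      # exceedances in the historical period

def neg_log_like(params):
    m, s = params
    if s <= 0:
        return np.inf
    # Lognormal log-density of the gaged record.
    ll = np.sum(norm.logpdf(np.log(gage), m, s) - np.log(gage))
    # Binomial contribution of the historical exceedance count.
    p = np.clip(1 - norm.cdf((np.log(T) - m) / s), 1e-12, 1 - 1e-12)
    ll += k * np.log(p) + (h - k) * np.log(1 - p)
    return -ll

fit = minimize(neg_log_like, x0=[np.log(gage).mean(), np.log(gage).std()],
               method="Nelder-Mead")
print("mu_hat, sigma_hat =", fit.x)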
Reyes-Valdés, M H; Stelly, D M
1995-01-01
Frequencies of meiotic configurations in cytogenetic stocks are dependent on chiasma frequencies in segments defined by centromeres, breakpoints, and telomeres. The expectation maximization algorithm is proposed as a general method to perform maximum likelihood estimations of the chiasma frequencies in the intervals between such locations. The estimates can be translated via mapping functions into genetic maps of cytogenetic landmarks. One set of observational data was analyzed to exemplify application of these methods, results of which were largely concordant with other comparable data. The method was also tested by Monte Carlo simulation of frequencies of meiotic configurations from a monotelodisomic translocation heterozygote, assuming six different sample sizes. The estimate averages were always close to the values given initially to the parameters. The maximum likelihood estimation procedures can be extended readily to other kinds of cytogenetic stocks and allow the pooling of diverse cytogenetic data to collectively estimate lengths of segments, arms, and chromosomes.
Precision Parameter Estimation and Machine Learning
NASA Astrophysics Data System (ADS)
Wandelt, Benjamin D.
2008-12-01
I discuss the strategy of "Acceleration by Parallel Precomputation and Learning" (APPLe), which can vastly accelerate parameter estimation in high-dimensional parameter spaces with costly likelihood functions by using trivially parallel computing to speed up sequential exploration of parameter space. This strategy efficiently combines the power of distributed computing with machine learning and Markov chain Monte Carlo techniques to explore a likelihood function, posterior distribution or χ2-surface. It is particularly successful in cases where computing the likelihood is costly and the number of parameters is moderate or large. We apply this technique to two central problems in cosmology: the solution of the cosmological parameter estimation problem with sufficient accuracy for the Planck data using PICo; and the detailed calculation of cosmological helium and hydrogen recombination with RICO. Since the APPLe approach is designed to use massively parallel resources to speed up problems that are inherently serial, we can bring the power of distributed computing to bear on parameter estimation problems. We have demonstrated this with the Cosmology@Home project.
Aad, G; Abbott, B; Abdallah, J; Abdel Khalek, S; Abdinov, O; Aben, R; Abi, B; Abolins, M; AbouZeid, O S; Abramowicz, H; Abreu, H; Abreu, R; Abulaiti, Y; Acharya, B S; Adamczyk, L; Adams, D L; Adelman, J; Adomeit, S; Adye, T; Agatonovic-Jovin, T; Aguilar-Saavedra, J A; Agustoni, M; Ahlen, S P; Ahmad, A; Ahmadov, F; Aielli, G; Åkesson, T P A; Akimoto, G; Akimov, A V; Albert, J; Albrand, S; Alconada Verzini, M J; Aleksa, M; Aleksandrov, I N; Alexa, C; Alexander, G; Alexandre, G; Alexopoulos, T; Alhroob, M; Alimonti, G; Alio, L; Alison, J; Allbrooke, B M M; Allison, L J; Allport, P P; Allwood-Spiers, S E; Almond, J; Aloisio, A; Alonso, A; Alonso, F; Alpigiani, C; Altheimer, A; Alvarez Gonzalez, B; Alviggi, M G; Amako, K; Amaral Coutinho, Y; Amelung, C; Amidei, D; Amor Dos Santos, S P; Amorim, A; Amoroso, S; Amram, N; Amundsen, G; Anastopoulos, C; Ancu, L S; Andari, N; Andeen, T; Anders, C F; Anders, G; Anderson, K J; Andreazza, A; Andrei, V; Anduaga, X S; Angelidakis, S; Angelozzi, I; Anger, P; Angerami, A; Anghinolfi, F; Anisenkov, A V; Anjos, N; Annovi, A; Antonaki, A; Antonelli, M; Antonov, A; Antos, J; Anulli, F; Aoki, M; Aperio Bella, L; Apolle, R; Arabidze, G; Aracena, I; Arai, Y; Araque, J P; Arce, A T H; Arguin, J-F; Argyropoulos, S; Arik, M; Armbruster, A J; Arnaez, O; Arnal, V; Arnold, H; Arslan, O; Artamonov, A; Artoni, G; Asai, S; Asbah, N; Ashkenazi, A; Ask, S; Åsman, B; Asquith, L; Assamagan, K; Astalos, R; Atkinson, M; Atlay, N B; Auerbach, B; Augsten, K; Aurousseau, M; Avolio, G; Azuelos, G; Azuma, Y; Baak, M A; Bacci, C; Bachacou, H; Bachas, K; Backes, M; Backhaus, M; Backus Mayes, J; Badescu, E; Bagiacchi, P; Bagnaia, P; Bai, Y; Bain, T; Baines, J T; Baker, O K; Baker, S; Balek, P; Balli, F; Banas, E; Banerjee, Sw; Banfi, D; Bangert, A; Bannoura, A A E; Bansal, V; Bansil, H S; Barak, L; Baranov, S P; Barberio, E L; Barberis, D; Barbero, M; Barillari, T; Barisonzi, M; Barklow, T; Barlow, N; Barnett, B M; Barnett, R M; Barnovska, Z; Baroncelli, A; Barone, G; Barr, A J; Barreiro, F; Barreiro Guimarães da Costa, J; Bartoldus, R; Barton, A E; Bartos, P; Bartsch, V; Bassalat, A; Basye, A; Bates, R L; Batkova, L; Batley, J R; Battistin, M; Bauer, F; Bawa, H S; Beau, T; Beauchemin, P H; Beccherle, R; Bechtle, P; Beck, H P; Becker, K; Becker, S; Beckingham, M; Becot, C; Beddall, A J; Beddall, A; Bedikian, S; Bednyakov, V A; Bee, C P; Beemster, L J; Beermann, T A; Begel, M; Behr, K; Belanger-Champagne, C; Bell, P J; Bell, W H; Bella, G; Bellagamba, L; Bellerive, A; Bellomo, M; Belloni, A; Belotskiy, K; Beltramello, O; Benary, O; Benchekroun, D; Bendtz, K; Benekos, N; Benhammou, Y; Benhar Noccioli, E; Benitez Garcia, J A; Benjamin, D P; Bensinger, J R; Benslama, K; Bentvelsen, S; Berge, D; Bergeaas Kuutmann, E; Berger, N; Berghaus, F; Berglund, E; Beringer, J; Bernard, C; Bernat, P; Bernius, C; Bernlochner, F U; Berry, T; Berta, P; Bertella, C; Bertolucci, F; Besana, M I; Besjes, G J; Bessidskaia, O; Besson, N; Betancourt, C; Bethke, S; Bhimji, W; Bianchi, R M; Bianchini, L; Bianco, M; Biebel, O; Bieniek, S P; Bierwagen, K; Biesiada, J; Biglietti, M; Bilbao De Mendizabal, J; Bilokon, H; Bindi, M; Binet, S; Bingul, A; Bini, C; Black, C W; Black, J E; Black, K M; Blackburn, D; Blair, R E; Blanchard, J-B; Blazek, T; Bloch, I; Blocker, C; Blum, W; Blumenschein, U; Bobbink, G J; Bobrovnikov, V S; Bocchetta, S S; Bocci, A; Boddy, C R; Boehler, M; Boek, J; Boek, T T; Bogaerts, J A; Bogdanchikov, A G; Bogouch, A; Bohm, C; Bohm, J; Boisvert, V; Bold, T; Boldea, V; Boldyrev, A S; Bomben, M; Bona, 
M; Boonekamp, M; Borisov, A; Borissov, G; Borri, M; Borroni, S; Bortfeldt, J; Bortolotto, V; Bos, K; Boscherini, D; Bosman, M; Boterenbrood, H; Boudreau, J; Bouffard, J; Bouhova-Thacker, E V; Boumediene, D; Bourdarios, C; Bousson, N; Boutouil, S; Boveia, A; Boyd, J; Boyko, I R; Bozovic-Jelisavcic, I; Bracinik, J; Branchini, P; Brandt, A; Brandt, G; Brandt, O; Bratzler, U; Brau, B; Brau, J E; Braun, H M; Brazzale, S F; Brelier, B; Brendlinger, K; Brennan, A J; Brenner, R; Bressler, S; Bristow, K; Bristow, T M; Britton, D; Brochu, F M; Brock, I; Brock, R; Bromberg, C; Bronner, J; Brooijmans, G; Brooks, T; Brooks, W K; Brosamer, J; Brost, E; Brown, G; Brown, J; Bruckman de Renstrom, P A; Bruncko, D; Bruneliere, R; Brunet, S; Bruni, A; Bruni, G; Bruschi, M; Bryngemark, L; Buanes, T; Buat, Q; Bucci, F; Buchholz, P; Buckingham, R M; Buckley, A G; Buda, S I; Budagov, I A; Buehrer, F; Bugge, L; Bugge, M K; Bulekov, O; Bundock, A C; Burckhart, H; Burdin, S; Burghgrave, B; Burke, S; Burmeister, I; Busato, E; Büscher, D; Büscher, V; Bussey, P; Buszello, C P; Butler, B; Butler, J M; Butt, A I; Buttar, C M; Butterworth, J M; Butti, P; Buttinger, W; Buzatu, A; Byszewski, M; Cabrera Urbán, S; Caforio, D; Cakir, O; Calafiura, P; Calandri, A; Calderini, G; Calfayan, P; Calkins, R; Caloba, L P; Calvet, D; Calvet, S; Camacho Toro, R; Camarda, S; Cameron, D; Caminada, L M; Caminal Armadans, R; Campana, S; Campanelli, M; Campoverde, A; Canale, V; Canepa, A; Cantero, J; Cantrill, R; Cao, T; Capeans Garrido, M D M; Caprini, I; Caprini, M; Capua, M; Caputo, R; Cardarelli, R; Carli, T; Carlino, G; Carminati, L; Caron, S; Carquin, E; Carrillo-Montoya, G D; Carter, A A; Carter, J R; Carvalho, J; Casadei, D; Casado, M P; Castaneda-Miranda, E; Castelli, A; Castillo Gimenez, V; Castro, N F; Catastini, P; Catinaccio, A; Catmore, J R; Cattai, A; Cattani, G; Caughron, S; Cavaliere, V; Cavalli, D; Cavalli-Sforza, M; Cavasinni, V; Ceradini, F; Cerio, B; Cerny, K; Cerqueira, A S; Cerri, A; Cerrito, L; Cerutti, F; Cerv, M; Cervelli, A; Cetin, S A; Chafaq, A; Chakraborty, D; Chalupkova, I; Chan, K; Chang, P; Chapleau, B; Chapman, J D; Charfeddine, D; Charlton, D G; Chau, C C; Chavez Barajas, C A; Cheatham, S; Chegwidden, A; Chekanov, S; Chekulaev, S V; Chelkov, G A; Chelstowska, M A; Chen, C; Chen, H; Chen, K; Chen, L; Chen, S; Chen, X; Chen, Y; Cheng, H C; Cheng, Y; Cheplakov, A; Cherkaoui El Moursli, R; Chernyatin, V; Cheu, E; Chevalier, L; Chiarella, V; Chiefari, G; Childers, J T; Chilingarov, A; Chiodini, G; Chisholm, A S; Chislett, R T; Chitan, A; Chizhov, M V; Chouridou, S; Chow, B K B; Christidi, I A; Chromek-Burckhart, D; Chu, M L; Chudoba, J; Chwastowski, J J; Chytka, L; Ciapetti, G; Ciftci, A K; Ciftci, R; Cinca, D; Cindro, V; Ciocio, A; Cirkovic, P; Citron, Z H; Citterio, M; Ciubancan, M; Clark, A; Clark, P J; Clarke, R N; Cleland, W; Clemens, J C; Clement, C; Coadou, Y; Cobal, M; Coccaro, A; Cochran, J; Coffey, L; Cogan, J G; Coggeshall, J; Cole, B; Cole, S; Colijn, A P; Collins-Tooth, C; Collot, J; Colombo, T; Colon, G; Compostella, G; Conde Muiño, P; Coniavitis, E; Conidi, M C; Connell, S H; Connelly, I A; Consonni, S M; Consorti, V; Constantinescu, S; Conta, C; Conti, G; Conventi, F; Cooke, M; Cooper, B D; Cooper-Sarkar, A M; Cooper-Smith, N J; Copic, K; Cornelissen, T; Corradi, M; Corriveau, F; Corso-Radu, A; Cortes-Gonzalez, A; Cortiana, G; Costa, G; Costa, M J; Costanzo, D; Côté, D; Cottin, G; Cowan, G; Cox, B E; Cranmer, K; Cree, G; Crépé-Renaudin, S; Crescioli, F; Crispin Ortuzar, M; Cristinziani, M; Croft, 
V; Crosetti, G; Cuciuc, C-M; Almenar, C Cuenca; Cuhadar Donszelmann, T; Cummings, J; Curatolo, M; Cuthbert, C; Czirr, H; Czodrowski, P; Czyczula, Z; D'Auria, S; D'Onofrio, M; Cunha Sargedas De Sousa, M J Da; Via, C Da; Dabrowski, W; Dafinca, A; Dai, T; Dale, O; Dallaire, F; Dallapiccola, C; Dam, M; Daniells, A C; Dano Hoffmann, M; Dao, V; Darbo, G; Darlea, G L; Darmora, S; Dassoulas, J A; Dattagupta, A; Davey, W; David, C; Davidek, T; Davies, E; Davies, M; Davignon, O; Davison, A R; Davison, P; Davygora, Y; Dawe, E; Dawson, I; Daya-Ishmukhametova, R K; De, K; de Asmundis, R; De Castro, S; De Cecco, S; de Graat, J; De Groot, N; de Jong, P; De la Torre, H; De Lorenzi, F; De Nooij, L; De Pedis, D; De Salvo, A; De Sanctis, U; De Santo, A; De Vivie De Regie, J B; De Zorzi, G; Dearnaley, W J; Debbe, R; Debenedetti, C; Dechenaux, B; Dedovich, D V; Degenhardt, J; Deigaard, I; Del Peso, J; Del Prete, T; Deliot, F; Delitzsch, C M; Deliyergiyev, M; Dell'Acqua, A; Dell'Asta, L; Dell'Orso, M; Della Pietra, M; Della Volpe, D; Delmastro, M; Delsart, P A; Deluca, C; Demers, S; Demichev, M; Demilly, A; Denisov, S P; Derendarz, D; Derkaoui, J E; Derue, F; Dervan, P; Desch, K; Deterre, C; Deviveiros, P O; Dewhurst, A; Dhaliwal, S; Di Ciaccio, A; Di Ciaccio, L; Di Domenico, A; Di Donato, C; Di Girolamo, A; Di Girolamo, B; Di Mattia, A; Di Micco, B; Di Nardo, R; Di Simone, A; Di Sipio, R; Di Valentino, D; Diaz, M A; Diehl, E B; Dietrich, J; Dietzsch, T A; Diglio, S; Dimitrievska, A; Dingfelder, J; Dionisi, C; Dita, P; Dita, S; Dittus, F; Djama, F; Djobava, T; do Vale, M A B; Do Valle Wemans, A; Doan, T K O; Dobos, D; Dobson, E; Doglioni, C; Doherty, T; Dohmae, T; Dolejsi, J; Dolezal, Z; Dolgoshein, B A; Donadelli, M; Donati, S; Dondero, P; Donini, J; Dopke, J; Doria, A; Anjos, A Dos; Dova, M T; Doyle, A T; Dris, M; Dubbert, J; Dube, S; Dubreuil, E; Duchovni, E; Duckeck, G; Ducu, O A; Duda, D; Dudarev, A; Dudziak, F; Duflot, L; Duguid, L; Dührssen, M; Dunford, M; Duran Yildiz, H; Düren, M; Durglishvili, A; Dwuznik, M; Dyndal, M; Ebke, J; Edson, W; Edwards, N C; Ehrenfeld, W; Eifert, T; Eigen, G; Einsweiler, K; Ekelof, T; El Kacimi, M; Ellert, M; Elles, S; Ellinghaus, F; Ellis, N; Elmsheuser, J; Elsing, M; Emeliyanov, D; Enari, Y; Endner, O C; Endo, M; Engelmann, R; Erdmann, J; Ereditato, A; Eriksson, D; Ernis, G; Ernst, J; Ernst, M; Ernwein, J; Errede, D; Errede, S; Ertel, E; Escalier, M; Esch, H; Escobar, C; Esposito, B; Etienvre, A I; Etzion, E; Evans, H; Fabbri, L; Facini, G; Fakhrutdinov, R M; Falciano, S; Faltova, J; Fang, Y; Fanti, M; Farbin, A; Farilla, A; Farooque, T; Farrell, S; Farrington, S M; Farthouat, P; Fassi, F; Fassnacht, P; Fassouliotis, D; Favareto, A; Fayard, L; Federic, P; Fedin, O L; Fedorko, W; Fehling-Kaschek, M; Feigl, S; Feligioni, L; Feng, C; Feng, E J; Feng, H; Fenyuk, A B; Fernandez Perez, S; Ferrag, S; Ferrando, J; Ferrari, A; Ferrari, P; Ferrari, R; Ferreira de Lima, D E; Ferrer, A; Ferrere, D; Ferretti, C; Ferretto Parodi, A; Fiascaris, M; Fiedler, F; Filipčič, A; Filipuzzi, M; Filthaut, F; Fincke-Keeler, M; Finelli, K D; Fiolhais, M C N; Fiorini, L; Firan, A; Fischer, J; Fisher, W C; Fitzgerald, E A; Flechl, M; Fleck, I; Fleischmann, P; Fleischmann, S; Fletcher, G T; Fletcher, G; Flick, T; Floderus, A; Flores Castillo, L R; Florez Bustos, A C; Flowerdew, M J; Formica, A; Forti, A; Fortin, D; Fournier, D; Fox, H; Fracchia, S; Francavilla, P; Franchini, M; Franchino, S; Francis, D; Franklin, M; Franz, S; Fraternali, M; French, S T; Friedrich, C; Friedrich, F; Froidevaux, D; Frost, 
J A; Fukunaga, C; Fullana Torregrosa, E; Fulsom, B G; Fuster, J; Gabaldon, C; Gabizon, O; Gabrielli, A; Gabrielli, A; Gadatsch, S; Gadomski, S; Gagliardi, G; Gagnon, P; Galea, C; Galhardo, B; Gallas, E J; Gallo, V; Gallop, B J; Gallus, P; Galster, G; Gan, K K; Gandrajula, R P; Gao, J; Gao, Y S; Garay Walls, F M; Garberson, F; García, C; García Navarro, J E; Garcia-Sciveres, M; Gardner, R W; Garelli, N; Garonne, V; Gatti, C; Gaudio, G; Gaur, B; Gauthier, L; Gauzzi, P; Gavrilenko, I L; Gay, C; Gaycken, G; Gazis, E N; Ge, P; Gecse, Z; Gee, C N P; Geerts, D A A; Geich-Gimbel, Ch; Gellerstedt, K; Gemme, C; Gemmell, A; Genest, M H; Gentile, S; George, M; George, S; Gerbaudo, D; Gershon, A; Ghazlane, H; Ghodbane, N; Giacobbe, B; Giagu, S; Giangiobbe, V; Giannetti, P; Gianotti, F; Gibbard, B; Gibson, S M; Gilchriese, M; Gillam, T P S; Gillberg, D; Gilles, G; Gingrich, D M; Giokaris, N; Giordani, M P; Giordano, R; Giorgi, F M; Giraud, P F; Giugni, D; Giuliani, C; Giulini, M; Gjelsten, B K; Gkialas, I; Gladilin, L K; Glasman, C; Glatzer, J; Glaysher, P C F; Glazov, A; Glonti, G L; Goblirsch-Kolb, M; Goddard, J R; Godfrey, J; Godlewski, J; Goeringer, C; Goldfarb, S; Golling, T; Golubkov, D; Gomes, A; Gomez Fajardo, L S; Gonçalo, R; Goncalves Pinto Firmino Da Costa, J; Gonella, L; González de la Hoz, S; Gonzalez Parra, G; Gonzalez Silva, M L; Gonzalez-Sevilla, S; Goossens, L; Gorbounov, P A; Gordon, H A; Gorelov, I; Gorfine, G; Gorini, B; Gorini, E; Gorišek, A; Gornicki, E; Goshaw, A T; Gössling, C; Gostkin, M I; Gouighri, M; Goujdami, D; Goulette, M P; Goussiou, A G; Goy, C; Gozpinar, S; Grabas, H M X; Graber, L; Grabowska-Bold, I; Grafström, P; Grahn, K-J; Gramling, J; Gramstad, E; Grancagnolo, S; Grassi, V; Gratchev, V; Gray, H M; Graziani, E; Grebenyuk, O G; Greenwood, Z D; Gregersen, K; Gregor, I M; Grenier, P; Griffiths, J; Grigalashvili, N; Grillo, A A; Grimm, K; Grinstein, S; Gris, Ph; Grishkevich, Y V; Grivaz, J-F; Grohs, J P; Grohsjean, A; Gross, E; Grosse-Knetter, J; Grossi, G C; Groth-Jensen, J; Grout, Z J; Grybel, K; Guan, L; Guescini, F; Guest, D; Gueta, O; Guicheney, C; Guido, E; Guillemin, T; Guindon, S; Gul, U; Gumpert, C; Gunther, J; Guo, J; Gupta, S; Gutierrez, P; Gutierrez Ortiz, N G; Gutschow, C; Guttman, N; Guyot, C; Gwenlan, C; Gwilliam, C B; Haas, A; Haber, C; Hadavand, H K; Haddad, N; Haefner, P; Hageboeck, S; Hajduk, Z; Hakobyan, H; Haleem, M; Hall, D; Halladjian, G; Hamacher, K; Hamal, P; Hamano, K; Hamer, M; Hamilton, A; Hamilton, S; Hamnett, P G; Han, L; Hanagaki, K; Hanawa, K; Hance, M; Hanke, P; Hansen, J B; Hansen, J D; Hansen, P H; Hara, K; Hard, A S; Harenberg, T; Harkusha, S; Harper, D; Harrington, R D; Harris, O M; Harrison, P F; Hartjes, F; Hasegawa, S; Hasegawa, Y; Hasib, A; Hassani, S; Haug, S; Hauschild, M; Hauser, R; Havranek, M; Hawkes, C M; Hawkings, R J; Hawkins, A D; Hayashi, T; Hayden, D; Hays, C P; Hayward, H S; Haywood, S J; Head, S J; Heck, T; Hedberg, V; Heelan, L; Heim, S; Heim, T; Heinemann, B; Heinrich, L; Heisterkamp, S; Hejbal, J; Helary, L; Heller, C; Heller, M; Hellman, S; Hellmich, D; Helsens, C; Henderson, J; Henderson, R C W; Hengler, C; Henrichs, A; Henriques Correia, A M; Henrot-Versille, S; Hensel, C; Herbert, G H; Hernández Jiménez, Y; Herrberg-Schubert, R; Herten, G; Hertenberger, R; Hervas, L; Hesketh, G G; Hessey, N P; Hickling, R; Higón-Rodriguez, E; Hill, J C; Hiller, K H; Hillert, S; Hillier, S J; Hinchliffe, I; Hines, E; Hirose, M; Hirschbuehl, D; Hobbs, J; Hod, N; Hodgkinson, M C; Hodgson, P; Hoecker, A; Hoeferkamp, M R; Hoffman, 
J; Hoffmann, D; Hofmann, J I; Hohlfeld, M; Holmes, T R; Hong, T M; Hooft van Huysduynen, L; Hostachy, J-Y; Hou, S; Hoummada, A; Howard, J; Howarth, J; Hrabovsky, M; Hristova, I; Hrivnac, J; Hryn'ova, T; Hsu, P J; Hsu, S-C; Hu, D; Hu, X; Huang, Y; Hubacek, Z; Hubaut, F; Huegging, F; Huffman, T B; Hughes, E W; Hughes, G; Huhtinen, M; Hülsing, T A; Hurwitz, M; Huseynov, N; Huston, J; Huth, J; Iacobucci, G; Iakovidis, G; Ibragimov, I; Iconomidou-Fayard, L; Idarraga, J; Ideal, E; Iengo, P; Igonkina, O; Iizawa, T; Ikegami, Y; Ikematsu, K; Ikeno, M; Iliadis, D; Ilic, N; Inamaru, Y; Ince, T; Ioannou, P; Iodice, M; Iordanidou, K; Ippolito, V; Irles Quiles, A; Isaksson, C; Ishino, M; Ishitsuka, M; Ishmukhametov, R; Issever, C; Istin, S; Iturbe Ponce, J M; Ivarsson, J; Ivashin, A V; Iwanski, W; Iwasaki, H; Izen, J M; Izzo, V; Jackson, B; Jackson, J N; Jackson, M; Jackson, P; Jaekel, M R; Jain, V; Jakobs, K; Jakobsen, S; Jakoubek, T; Jakubek, J; Jamin, D O; Jana, D K; Jansen, E; Jansen, H; Janssen, J; Janus, M; Jarlskog, G; Javadov, N; Javůrek, T; Jeanty, L; Jeng, G-Y; Jennens, D; Jenni, P; Jentzsch, J; Jeske, C; Jézéquel, S; Ji, H; Ji, W; Jia, J; Jiang, Y; Jimenez Belenguer, M; Jin, S; Jinaru, A; Jinnouchi, O; Joergensen, M D; Johansson, K E; Johansson, P; Johns, K A; Jon-And, K; Jones, G; Jones, R W L; Jones, T J; Jongmanns, J; Jorge, P M; Joshi, K D; Jovicevic, J; Ju, X; Jung, C A; Jungst, R M; Jussel, P; Juste Rozas, A; Kaci, M; Kaczmarska, A; Kado, M; Kagan, H; Kagan, M; Kajomovitz, E; Kama, S; Kanaya, N; Kaneda, M; Kaneti, S; Kanno, T; Kantserov, V A; Kanzaki, J; Kaplan, B; Kapliy, A; Kar, D; Karakostas, K; Karastathis, N; Karnevskiy, M; Karpov, S N; Karthik, K; Kartvelishvili, V; Karyukhin, A N; Kashif, L; Kasieczka, G; Kass, R D; Kastanas, A; Kataoka, Y; Katre, A; Katzy, J; Kaushik, V; Kawagoe, K; Kawamoto, T; Kawamura, G; Kazama, S; Kazanin, V F; Kazarinov, M Y; Keeler, R; Keener, P T; Kehoe, R; Keil, M; Keller, J S; Keoshkerian, H; Kepka, O; Kerševan, B P; Kersten, S; Kessoku, K; Keung, J; Khalil-Zada, F; Khandanyan, H; Khanov, A; Khodinov, A; Khomich, A; Khoo, T J; Khoriauli, G; Khoroshilov, A; Khovanskiy, V; Khramov, E; Khubua, J; Kim, H Y; Kim, H; Kim, S H; Kimura, N; Kind, O; King, B T; King, M; King, R S B; King, S B; Kirk, J; Kiryunin, A E; Kishimoto, T; Kisielewska, D; Kiss, F; Kitamura, T; Kittelmann, T; Kiuchi, K; Kladiva, E; Klein, M; Klein, U; Kleinknecht, K; Klimek, P; Klimentov, A; Klingenberg, R; Klinger, J A; Klioutchnikova, T; Klok, P F; Kluge, E-E; Kluit, P; Kluth, S; Kneringer, E; Knoops, E B F G; Knue, A; Kobayashi, T; Kobel, M; Kocian, M; Kodys, P; Koevesarki, P; Koffas, T; Koffeman, E; Kogan, L A; Kohlmann, S; Kohout, Z; Kohriki, T; Koi, T; Kolanoski, H; Koletsou, I; Koll, J; Komar, A A; Komori, Y; Kondo, T; Kondrashova, N; Köneke, K; König, A C; König, S; Kono, T; Konoplich, R; Konstantinidis, N; Kopeliansky, R; Koperny, S; Köpke, L; Kopp, A K; Korcyl, K; Kordas, K; Korn, A; Korol, A A; Korolkov, I; Korolkova, E V; Korotkov, V A; Kortner, O; Kortner, S; Kostyukhin, V V; Kotov, S; Kotov, V M; Kotwal, A; Kourkoumelis, C; Kouskoura, V; Koutsman, A; Kowalewski, R; Kowalski, T Z; Kozanecki, W; Kozhin, A S; Kral, V; Kramarenko, V A; Kramberger, G; Krasnopevtsev, D; Krasny, M W; Krasznahorkay, A; Kraus, J K; Kravchenko, A; Kreiss, S; Kretz, M; Kretzschmar, J; Kreutzfeldt, K; Krieger, P; Kroeninger, K; Kroha, H; Kroll, J; Kroseberg, J; Krstic, J; Kruchonak, U; Krüger, H; Kruker, T; Krumnack, N; Krumshteyn, Z V; Kruse, A; Kruse, M C; Kruskal, M; Kubota, T; Kuday, S; Kuehn, S; 
Kugel, A; Kuhl, A; Kuhl, T; Kukhtin, V; Kulchitsky, Y; Kuleshov, S; Kuna, M; Kunkle, J; Kupco, A; Kurashige, H; Kurochkin, Y A; Kurumida, R; Kus, V; Kuwertz, E S; Kuze, M; Kvita, J; La Rosa, A; La Rotonda, L; Lacasta, C; Lacava, F; Lacey, J; Lacker, H; Lacour, D; Lacuesta, V R; Ladygin, E; Lafaye, R; Laforge, B; Lagouri, T; Lai, S; Laier, H; Lambourne, L; Lammers, S; Lampen, C L; Lampl, W; Lançon, E; Landgraf, U; Landon, M P J; Lang, V S; Lange, C; Lankford, A J; Lanni, F; Lantzsch, K; Laplace, S; Lapoire, C; Laporte, J F; Lari, T; Lassnig, M; Laurelli, P; Lavrijsen, W; Law, A T; Laycock, P; Le, B T; Le Dortz, O; Le Guirriec, E; Le Menedeu, E; LeCompte, T; Ledroit-Guillon, F; Lee, C A; Lee, H; Lee, J S H; Lee, S C; Lee, L; Lefebvre, G; Lefebvre, M; Legger, F; Leggett, C; Lehan, A; Lehmacher, M; Lehmann Miotto, G; Lei, X; Leister, A G; Leite, M A L; Leitner, R; Lellouch, D; Lemmer, B; Leney, K J C; Lenz, T; Lenzen, G; Lenzi, B; Leone, R; Leonhardt, K; Leontsinis, S; Leroy, C; Lester, C G; Lester, C M; Levchenko, M; Levêque, J; Levin, D; Levinson, L J; Levy, M; Lewis, A; Lewis, G H; Leyko, A M; Leyton, M; Li, B; Li, B; Li, H; Li, H L; Li, L; Li, S; Li, Y; Liang, Z; Liao, H; Liberti, B; Lichard, P; Lie, K; Liebal, J; Liebig, W; Limbach, C; Limosani, A; Limper, M; Lin, S C; Linde, F; Lindquist, B E; Linnemann, J T; Lipeles, E; Lipniacka, A; Lisovyi, M; Liss, T M; Lissauer, D; Lister, A; Litke, A M; Liu, B; Liu, D; Liu, J B; Liu, K; Liu, L; Liu, M; Liu, M; Liu, Y; Livan, M; Livermore, S S A; Lleres, A; Llorente Merino, J; Lloyd, S L; Lo Sterzo, F; Lobodzinska, E; Loch, P; Lockman, W S; Loddenkoetter, T; Loebinger, F K; Loevschall-Jensen, A E; Loginov, A; Loh, C W; Lohse, T; Lohwasser, K; Lokajicek, M; Lombardo, V P; Long, B A; Long, J D; Long, R E; Lopes, L; Lopez Mateos, D; Lopez Paredes, B; Lorenz, J; Lorenzo Martinez, N; Losada, M; Loscutoff, P; Lou, X; Lounis, A; Love, J; Love, P A; Lowe, A J; Lu, F; Lubatti, H J; Luci, C; Lucotte, A; Luehring, F; Lukas, W; Luminari, L; Lundberg, O; Lund-Jensen, B; Lungwitz, M; Lynn, D; Lysak, R; Lytken, E; Ma, H; Ma, L L; Maccarrone, G; Macchiolo, A; Machado Miguens, J; Macina, D; Madaffari, D; Madar, R; Maddocks, H J; Mader, W F; Madsen, A; Maeno, M; Maeno, T; Magradze, E; Mahboubi, K; Mahlstedt, J; Mahmoud, S; Maiani, C; Maidantchik, C; Maio, A; Majewski, S; Makida, Y; Makovec, N; Mal, P; Malaescu, B; Malecki, Pa; Maleev, V P; Malek, F; Mallik, U; Malon, D; Malone, C; Maltezos, S; Malyshev, V M; Malyukov, S; Mamuzic, J; Mandelli, B; Mandelli, L; Mandić, I; Mandrysch, R; Maneira, J; Manfredini, A; Manhaes de Andrade Filho, L; Manjarres Ramos, J A; Mann, A; Manning, P M; Manousakis-Katsikakis, A; Mansoulie, B; Mantifel, R; Mapelli, L; March, L; Marchand, J F; Marchiori, G; Marcisovsky, M; Marino, C P; Marques, C N; Marroquim, F; Marsden, S P; Marshall, Z; Marti, L F; Marti-Garcia, S; Martin, B; Martin, B; Martin, J P; Martin, T A; Martin, V J; Martin Dit Latour, B; Martinez, H; Martinez, M; Martin-Haugh, S; Martyniuk, A C; Marx, M; Marzano, F; Marzin, A; Masetti, L; Mashimo, T; Mashinistov, R; Masik, J; Maslennikov, A L; Massa, I; Massol, N; Mastrandrea, P; Mastroberardino, A; Masubuchi, T; Matricon, P; Matsunaga, H; Matsushita, T; Mättig, P; Mättig, S; Mattmann, J; Maurer, J; Maxfield, S J; Maximov, D A; Mazini, R; Mazzaferro, L; Mc Goldrick, G; Mc Kee, S P; McCarn, A; McCarthy, R L; McCarthy, T G; McCubbin, N A; McFarlane, K W; Mcfayden, J A; Mchedlidze, G; Mclaughlan, T; McMahon, S J; McPherson, R A; Meade, A; Mechnich, J; Medinnis, M; Meehan, S; 
Mehlhase, S; Mehta, A; Meier, K; Meineck, C; Meirose, B; Melachrinos, C; Mellado Garcia, B R; Meloni, F; Mengarelli, A; Menke, S; Meoni, E; Mercurio, K M; Mergelmeyer, S; Meric, N; Mermod, P; Merola, L; Meroni, C; Merritt, F S; Merritt, H; Messina, A; Metcalfe, J; Mete, A S; Meyer, C; Meyer, C; Meyer, J-P; Meyer, J; Middleton, R P; Migas, S; Mijović, L; Mikenberg, G; Mikestikova, M; Mikuž, M; Miller, D W; Mills, C; Milov, A; Milstead, D A; Milstein, D; Minaenko, A A; Moya, M Miñano; Minashvili, I A; Mincer, A I; Mindur, B; Mineev, M; Ming, Y; Mir, L M; Mirabelli, G; Mitani, T; Mitrevski, J; Mitsou, V A; Mitsui, S; Miucci, A; Miyagawa, P S; Mjörnmark, J U; Moa, T; Mochizuki, K; Moeller, V; Mohapatra, S; Mohr, W; Molander, S; Moles-Valls, R; Mönig, K; Monini, C; Monk, J; Monnier, E; Montejo Berlingen, J; Monticelli, F; Monzani, S; Moore, R W; Moraes, A; Morange, N; Morel, J; Moreno, D; Moreno Llácer, M; Morettini, P; Morgenstern, M; Morii, M; Moritz, S; Morley, A K; Mornacchi, G; Morris, J D; Morvaj, L; Moser, H G; Mosidze, M; Moss, J; Mount, R; Mountricha, E; Mouraviev, S V; Moyse, E J W; Muanza, S; Mudd, R D; Mueller, F; Mueller, J; Mueller, K; Mueller, T; Mueller, T; Muenstermann, D; Munwes, Y; Murillo Quijada, J A; Murray, W J; Musheghyan, H; Musto, E; Myagkov, A G; Myska, M; Nackenhorst, O; Nadal, J; Nagai, K; Nagai, R; Nagai, Y; Nagano, K; Nagarkar, A; Nagasaka, Y; Nagel, M; Nairz, A M; Nakahama, Y; Nakamura, K; Nakamura, T; Nakano, I; Namasivayam, H; Nanava, G; Narayan, R; Nattermann, T; Naumann, T; Navarro, G; Nayyar, R; Neal, H A; Nechaeva, P Yu; Neep, T J; Negri, A; Negri, G; Negrini, M; Nektarijevic, S; Nelson, A; Nelson, T K; Nemecek, S; Nemethy, P; Nepomuceno, A A; Nessi, M; Neubauer, M S; Neumann, M; Neves, R M; Nevski, P; Newcomer, F M; Newman, P R; Nguyen, D H; Nickerson, R B; Nicolaidou, R; Nicquevert, B; Nielsen, J; Nikiforou, N; Nikiforov, A; Nikolaenko, V; Nikolic-Audit, I; Nikolics, K; Nikolopoulos, K; Nilsson, P; Ninomiya, Y; Nisati, A; Nisius, R; Nobe, T; Nodulman, L; Nomachi, M; Nomidis, I; Norberg, S; Nordberg, M; Nowak, S; Nozaki, M; Nozka, L; Ntekas, K; Nunes Hanninger, G; Nunnemann, T; Nurse, E; Nuti, F; O'Brien, B J; O'grady, F; O'Neil, D C; O'Shea, V; Oakham, F G; Oberlack, H; Obermann, T; Ocariz, J; Ochi, A; Ochoa, M I; Oda, S; Odaka, S; Ogren, H; Oh, A; Oh, S H; Ohm, C C; Ohman, H; Ohshima, T; Okamura, W; Okawa, H; Okumura, Y; Okuyama, T; Olariu, A; Olchevski, A G; Olivares Pino, S A; Oliveira Damazio, D; Oliver Garcia, E; Olszewski, A; Olszowska, J; Onofre, A; Onyisi, P U E; Oram, C J; Oreglia, M J; Oren, Y; Orestano, D; Orlando, N; Oropeza Barrera, C; Orr, R S; Osculati, B; Ospanov, R; Otero Y Garzon, G; Otono, H; Ouchrif, M; Ouellette, E A; Ould-Saada, F; Ouraou, A; Oussoren, K P; Ouyang, Q; Ovcharova, A; Owen, M; Ozcan, V E; Ozturk, N; Pachal, K; Pacheco Pages, A; Padilla Aranda, C; Pagáčová, M; Pagan Griso, S; Paganis, E; Pahl, C; Paige, F; Pais, P; Pajchel, K; Palacino, G; Palestini, S; Pallin, D; Palma, A; Palmer, J D; Pan, Y B; Panagiotopoulou, E; Panduro Vazquez, J G; Pani, P; Panikashvili, N; Panitkin, S; Pantea, D; Paolozzi, L; Papadopoulou, Th D; Papageorgiou, K; Paramonov, A; Paredes Hernandez, D; Parker, M A; Parodi, F; Parsons, J A; Parzefall, U; Pasqualucci, E; Passaggio, S; Passeri, A; Pastore, F; Pastore, Fr; Pásztor, G; Pataraia, S; Patel, N D; Pater, J R; Patricelli, S; Pauly, T; Pearce, J; Pedersen, M; Pedraza Lopez, S; Pedro, R; Peleganchuk, S V; Pelikan, D; Peng, H; Penning, B; Penwell, J; Perepelitsa, D V; Perez Codina, E; Pérez 
García-Estañ, M T; Perez Reale, V; Perini, L; Pernegger, H; Perrino, R; Peschke, R; Peshekhonov, V D; Peters, K; Peters, R F Y; Petersen, B A; Petersen, J; Petersen, T C; Petit, E; Petridis, A; Petridou, C; Petrolo, E; Petrucci, F; Petteni, M; Pettersson, N E; Pezoa, R; Phillips, P W; Piacquadio, G; Pianori, E; Picazio, A; Piccaro, E; Piccinini, M; Piegaia, R; Pignotti, D T; Pilcher, J E; Pilkington, A D; Pina, J; Pinamonti, M; Pinder, A; Pinfold, J L; Pingel, A; Pinto, B; Pires, S; Pitt, M; Pizio, C; Pleier, M-A; Pleskot, V; Plotnikova, E; Plucinski, P; Poddar, S; Podlyski, F; Poettgen, R; Poggioli, L; Pohl, D; Pohl, M; Polesello, G; Policicchio, A; Polifka, R; Polini, A; Pollard, C S; Polychronakos, V; Pommès, K; Pontecorvo, L; Pope, B G; Popeneciu, G A; Popovic, D S; Poppleton, A; Portell Bueso, X; Pospelov, G E; Pospisil, S; Potamianos, K; Potrap, I N; Potter, C J; Potter, C T; Poulard, G; Poveda, J; Pozdnyakov, V; Pralavorio, P; Pranko, A; Prasad, S; Pravahan, R; Prell, S; Price, D; Price, J; Price, L E; Prieur, D; Primavera, M; Proissl, M; Prokofiev, K; Prokoshin, F; Protopapadaki, E; Protopopescu, S; Proudfoot, J; Przybycien, M; Przysiezniak, H; Ptacek, E; Pueschel, E; Puldon, D; Purohit, M; Puzo, P; Qian, J; Qin, G; Qin, Y; Quadt, A; Quarrie, D R; Quayle, W B; Quilty, D; Qureshi, A; Radeka, V; Radescu, V; Radhakrishnan, S K; Radloff, P; Rados, P; Ragusa, F; Rahal, G; Rajagopalan, S; Rammensee, M; Randle-Conde, A S; Rangel-Smith, C; Rao, K; Rauscher, F; Rave, T C; Ravenscroft, T; Raymond, M; Read, A L; Rebuzzi, D M; Redelbach, A; Redlinger, G; Reece, R; Reeves, K; Rehnisch, L; Reinsch, A; Reisin, H; Relich, M; Rembser, C; Ren, Z L; Renaud, A; Rescigno, M; Resconi, S; Resende, B; Rezanova, O L; Reznicek, P; Rezvani, R; Richter, R; Ridel, M; Rieck, P; Rijssenbeek, M; Rimoldi, A; Rinaldi, L; Ritsch, E; Riu, I; Rizatdinova, F; Rizvi, E; Robertson, S H; Robichaud-Veronneau, A; Robinson, D; Robinson, J E M; Robson, A; Roda, C; Rodrigues, L; Roe, S; Røhne, O; Rolli, S; Romaniouk, A; Romano, M; Romeo, G; Romero Adam, E; Rompotis, N; Roos, L; Ros, E; Rosati, S; Rosbach, K; Rose, M; Rosendahl, P L; Rosenthal, O; Rossetti, V; Rossi, E; Rossi, L P; Rosten, R; Rotaru, M; Roth, I; Rothberg, J; Rousseau, D; Royon, C R; Rozanov, A; Rozen, Y; Ruan, X; Rubbo, F; Rubinskiy, I; Rud, V I; Rudolph, C; Rudolph, M S; Rühr, F; Ruiz-Martinez, A; Rurikova, Z; Rusakovich, N A; Ruschke, A; Rutherfoord, J P; Ruthmann, N; Ryabov, Y F; Rybar, M; Rybkin, G; Ryder, N C; Saavedra, A F; Sacerdoti, S; Saddique, A; Sadeh, I; Sadrozinski, H F-W; Sadykov, R; Safai Tehrani, F; Sakamoto, H; Sakurai, Y; Salamanna, G; Salamon, A; Saleem, M; Salek, D; Sales De Bruin, P H; Salihagic, D; Salnikov, A; Salt, J; Salvachua Ferrando, B M; Salvatore, D; Salvatore, F; Salvucci, A; Salzburger, A; Sampsonidis, D; Sanchez, A; Sánchez, J; Sanchez Martinez, V; Sandaker, H; Sandbach, R L; Sander, H G; Sanders, M P; Sandhoff, M; Sandoval, T; Sandoval, C; Sandstroem, R; Sankey, D P C; Sansoni, A; Santoni, C; Santonico, R; Santos, H; Santoyo Castillo, I; Sapp, K; Sapronov, A; Saraiva, J G; Sarrazin, B; Sartisohn, G; Sasaki, O; Sasaki, Y; Satsounkevitch, I; Sauvage, G; Sauvan, E; Savard, P; Savu, D O; Sawyer, C; Sawyer, L; Saxon, J; Sbarra, C; Sbrizzi, A; Scanlon, T; Scannicchio, D A; Scarcella, M; Schaarschmidt, J; Schacht, P; Schaefer, D; Schaefer, R; Schaepe, S; Schaetzel, S; Schäfer, U; Schaffer, A C; Schaile, D; Schamberger, R D; Scharf, V; Schegelsky, V A; Scheirich, D; Schernau, M; Scherzer, M I; Schiavi, C; Schieck, J; Schillo, C; 
Schioppa, M; Schlenker, S; Schmidt, E; Schmieden, K; Schmitt, C; Schmitt, C; Schmitt, S; Schneider, B; Schnellbach, Y J; Schnoor, U; Schoeffel, L; Schoening, A; Schoenrock, B D; Schorlemmer, A L S; Schott, M; Schouten, D; Schovancova, J; Schram, M; Schramm, S; Schreyer, M; Schroeder, C; Schuh, N; Schultens, M J; Schultz-Coulon, H-C; Schulz, H; Schumacher, M; Schumm, B A; Schune, Ph; Schwartzman, A; Schwegler, Ph; Schwemling, Ph; Schwienhorst, R; Schwindling, J; Schwindt, T; Schwoerer, M; Sciacca, F G; Scifo, E; Sciolla, G; Scott, W G; Scuri, F; Scutti, F; Searcy, J; Sedov, G; Sedykh, E; Seidel, S C; Seiden, A; Seifert, F; Seixas, J M; Sekhniaidze, G; Sekula, S J; Selbach, K E; Seliverstov, D M; Sellers, G; Semprini-Cesari, N; Serfon, C; Serin, L; Serkin, L; Serre, T; Seuster, R; Severini, H; Sforza, F; Sfyrla, A; Shabalina, E; Shamim, M; Shan, L Y; Shank, J T; Shao, Q T; Shapiro, M; Shatalov, P B; Shaw, K; Sherwood, P; Shimizu, S; Shimmin, C O; Shimojima, M; Shiyakova, M; Shmeleva, A; Shochet, M J; Short, D; Shrestha, S; Shulga, E; Shupe, M A; Shushkevich, S; Sicho, P; Sidorov, D; Sidoti, A; Siegert, F; Sijacki, Dj; Silbert, O; Silva, J; Silver, Y; Silverstein, D; Silverstein, S B; Simak, V; Simard, O; Simic, Lj; Simion, S; Simioni, E; Simmons, B; Simoniello, R; Simonyan, M; Sinervo, P; Sinev, N B; Sipica, V; Siragusa, G; Sircar, A; Sisakyan, A N; Sivoklokov, S Yu; Sjölin, J; Sjursen, T B; Skottowe, H P; Skovpen, K Yu; Skubic, P; Slater, M; Slavicek, T; Sliwa, K; Smakhtin, V; Smart, B H; Smestad, L; Smirnov, S Yu; Smirnov, Y; Smirnova, L N; Smirnova, O; Smizanska, M; Smolek, K; Snesarev, A A; Snidero, G; Snow, J; Snyder, S; Sobie, R; Socher, F; Sodomka, J; Soffer, A; Soh, D A; Solans, C A; Solar, M; Solc, J; Soldatov, E Yu; Soldevila, U; Solfaroli Camillocci, E; Solodkov, A A; Solovyanov, O V; Solovyev, V; Sommer, P; Song, H Y; Soni, N; Sood, A; Sopczak, A; Sopko, V; Sopko, B; Sorin, V; Sosebee, M; Soualah, R; Soueid, P; Soukharev, A M; South, D; Spagnolo, S; Spanò, F; Spearman, W R; Spighi, R; Spigo, G; Spousta, M; Spreitzer, T; Spurlock, B; Denis, R D St; Staerz, S; Stahlman, J; Stamen, R; Stanecka, E; Stanek, R W; Stanescu, C; Stanescu-Bellu, M; Stanitzki, M M; Stapnes, S; Starchenko, E A; Stark, J; Staroba, P; Starovoitov, P; Staszewski, R; Stavina, P; Steele, G; Steinberg, P; Stekl, I; Stelzer, B; Stelzer, H J; Stelzer-Chilton, O; Stenzel, H; Stern, S; Stewart, G A; Stillings, J A; Stockton, M C; Stoebe, M; Stoicea, G; Stolte, P; Stonjek, S; Stradling, A R; Straessner, A; Stramaglia, M E; Strandberg, J; Strandberg, S; Strandlie, A; Strauss, E; Strauss, M; Strizenec, P; Ströhmer, R; Strom, D M; Stroynowski, R; Stucci, S A; Stugu, B; Styles, N A; Su, D; Su, J; Subramania, Hs; Subramaniam, R; Succurro, A; Sugaya, Y; Suhr, C; Suk, M; Sulin, V V; Sultansoy, S; Sumida, T; Sun, X; Sundermann, J E; Suruliz, K; Susinno, G; Sutton, M R; Suzuki, Y; Svatos, M; Swedish, S; Swiatlowski, M; Sykora, I; Sykora, T; Ta, D; Tackmann, K; Taenzer, J; Taffard, A; Tafirout, R; Taiblum, N; Takahashi, Y; Takai, H; Takashima, R; Takeda, H; Takeshita, T; Takubo, Y; Talby, M; Talyshev, A A; Tam, J Y C; Tamsett, M C; Tan, K G; Tanaka, J; Tanaka, R; Tanaka, S; Tanaka, S; Tanasijczuk, A J; Tani, K; Tannoury, N; Tapprogge, S; Tarem, S; Tarrade, F; Tartarelli, G F; Tas, P; Tasevsky, M; Tashiro, T; Tassi, E; Tavares Delgado, A; Tayalati, Y; Taylor, F E; Taylor, G N; Taylor, W; Teischinger, F A; Teixeira Dias Castanheira, M; Teixeira-Dias, P; Temming, K K; Ten Kate, H; Teng, P K; Terada, S; Terashi, K; Terron, J; Terzo, 
S; Testa, M; Teuscher, R J; Therhaag, J; Theveneaux-Pelzer, T; Thoma, S; Thomas, J P; Thomas-Wilsker, J; Thompson, E N; Thompson, P D; Thompson, P D; Thompson, A S; Thomsen, L A; Thomson, E; Thomson, M; Thong, W M; Thun, R P; Tian, F; Tibbetts, M J; Tikhomirov, V O; Tikhonov, Yu A; Timoshenko, S; Tiouchichine, E; Tipton, P; Tisserant, S; Todorov, T; Todorova-Nova, S; Toggerson, B; Tojo, J; Tokár, S; Tokushuku, K; Tollefson, K; Tomlinson, L; Tomoto, M; Tompkins, L; Toms, K; Topilin, N D; Torrence, E; Torres, H; Torró Pastor, E; Toth, J; Touchard, F; Tovey, D R; Tran, H L; Trefzger, T; Tremblet, L; Tricoli, A; Trigger, I M; Trincaz-Duvoid, S; Tripiana, M F; Triplett, N; Trischuk, W; Trocmé, B; Troncon, C; Trottier-McDonald, M; Trovatelli, M; True, P; Trzebinski, M; Trzupek, A; Tsarouchas, C; Tseng, J C-L; Tsiareshka, P V; Tsionou, D; Tsipolitis, G; Tsirintanis, N; Tsiskaridze, S; Tsiskaridze, V; Tskhadadze, E G; Tsukerman, I I; Tsulaia, V; Tsuno, S; Tsybychev, D; Tudorache, A; Tudorache, V; Tuna, A N; Tupputi, S A; Turchikhin, S; Turecek, D; Turk Cakir, I; Turra, R; Tuts, P M; Tykhonov, A; Tylmad, M; Tyndel, M; Uchida, K; Ueda, I; Ueno, R; Ughetto, M; Ugland, M; Uhlenbrock, M; Ukegawa, F; Unal, G; Undrus, A; Unel, G; Ungaro, F C; Unno, Y; Urbaniec, D; Urquijo, P; Usai, G; Usanova, A; Vacavant, L; Vacek, V; Vachon, B; Valencic, N; Valentinetti, S; Valero, A; Valery, L; Valkar, S; Valladolid Gallego, E; Vallecorsa, S; Valls Ferrer, J A; Van Berg, R; Van Der Deijl, P C; van der Geer, R; van der Graaf, H; Van Der Leeuw, R; van der Ster, D; van Eldik, N; van Gemmeren, P; Van Nieuwkoop, J; van Vulpen, I; van Woerden, M C; Vanadia, M; Vandelli, W; Vanguri, R; Vaniachine, A; Vankov, P; Vannucci, F; Vardanyan, G; Vari, R; Varnes, E W; Varol, T; Varouchas, D; Vartapetian, A; Varvell, K E; Vazeille, F; Vazquez Schroeder, T; Veatch, J; Veloso, F; Veneziano, S; Ventura, A; Ventura, D; Venturi, M; Venturi, N; Venturini, A; Vercesi, V; Verducci, M; Verkerke, W; Vermeulen, J C; Vest, A; Vetterli, M C; Viazlo, O; Vichou, I; Vickey, T; Vickey Boeriu, O E; Viehhauser, G H A; Viel, S; Vigne, R; Villa, M; Villaplana Perez, M; Vilucchi, E; Vincter, M G; Vinogradov, V B; Virzi, J; Vivarelli, I; Vives Vaque, F; Vlachos, S; Vladoiu, D; Vlasak, M; Vogel, A; Vokac, P; Volpi, G; Volpi, M; von der Schmitt, H; von Radziewski, H; von Toerne, E; Vorobel, V; Vorobev, K; Vos, M; Voss, R; Vossebeld, J H; Vranjes, N; Vranjes Milosavljevic, M; Vrba, V; Vreeswijk, M; Vu Anh, T; Vuillermet, R; Vukotic, I; Vykydal, Z; Wagner, W; Wagner, P; Wahrmund, S; Wakabayashi, J; Walder, J; Walker, R; Walkowiak, W; Wall, R; Waller, P; Walsh, B; Wang, C; Wang, C; Wang, F; Wang, H; Wang, H; Wang, J; Wang, J; Wang, K; Wang, R; Wang, S M; Wang, T; Wang, X; Wanotayaroj, C; Warburton, A; Ward, C P; Wardrope, D R; Warsinsky, M; Washbrook, A; Wasicki, C; Watanabe, I; Watkins, P M; Watson, A T; Watson, I J; Watson, M F; Watts, G; Watts, S; Waugh, B M; Webb, S; Weber, M S; Weber, S W; Webster, J S; Weidberg, A R; Weigell, P; Weinert, B; Weingarten, J; Weiser, C; Weits, H; Wells, P S; Wenaus, T; Wendland, D; Weng, Z; Wengler, T; Wenig, S; Wermes, N; Werner, M; Werner, P; Wessels, M; Wetter, J; Whalen, K; White, A; White, M J; White, R; White, S; Whiteson, D; Wicke, D; Wickens, F J; Wiedenmann, W; Wielers, M; Wienemann, P; Wiglesworth, C; Wiik-Fuchs, L A M; Wijeratne, P A; Wildauer, A; Wildt, M A; Wilkens, H G; Will, J Z; Williams, H H; Williams, S; Willis, C; Willocq, S; Wilson, J A; Wilson, A; Wingerter-Seez, I; Winklmeier, F; Wittgen, M; Wittig, T; 
Wittkowski, J; Wollstadt, S J; Wolter, M W; Wolters, H; Wosiek, B K; Wotschack, J; Woudstra, M J; Wozniak, K W; Wright, M; Wu, M; Wu, S L; Wu, X; Wu, Y; Wulf, E; Wyatt, T R; Wynne, B M; Xella, S; Xiao, M; Xu, D; Xu, L; Yabsley, B; Yacoob, S; Yamada, M; Yamaguchi, H; Yamaguchi, Y; Yamamoto, A; Yamamoto, K; Yamamoto, S; Yamamura, T; Yamanaka, T; Yamauchi, K; Yamazaki, Y; Yan, Z; Yang, H; Yang, H; Yang, U K; Yang, Y; Yanush, S; Yao, L; Yao, W-M; Yasu, Y; Yatsenko, E; Yau Wong, K H; Ye, J; Ye, S; Yen, A L; Yildirim, E; Yilmaz, M; Yoosoofmiya, R; Yorita, K; Yoshida, R; Yoshihara, K; Young, C; Young, C J S; Youssef, S; Yu, D R; Yu, J; Yu, J M; Yu, J; Yuan, L; Yurkewicz, A; Zabinski, B; Zaidan, R; Zaitsev, A M; Zaman, A; Zambito, S; Zanello, L; Zanzi, D; Zaytsev, A; Zeitnitz, C; Zeman, M; Zemla, A; Zengel, K; Zenin, O; Ženiš, T; Zerwas, D; Zevi Della Porta, G; Zhang, D; Zhang, F; Zhang, H; Zhang, J; Zhang, L; Zhang, X; Zhang, Z; Zhao, Z; Zhemchugov, A; Zhong, J; Zhou, B; Zhou, L; Zhou, N; Zhu, C G; Zhu, H; Zhu, J; Zhu, Y; Zhuang, X; Zibell, A; Zieminska, D; Zimine, N I; Zimmermann, C; Zimmermann, R; Zimmermann, S; Zimmermann, S; Zinonos, Z; Ziolkowski, M; Zobernig, G; Zoccoli, A; Zur Nedden, M; Zurzolo, G; Zutshi, V; Zwalinski, L
A likelihood-based discriminant for the identification of quark- and gluon-initiated jets is built and validated using 4.7 fb⁻¹ of proton-proton collision data at √s = 7 TeV collected with the ATLAS detector at the LHC. Data samples with enriched quark or gluon content are used in the construction and validation of templates of jet properties that are the input to the likelihood-based discriminant. The discriminating power of the jet tagger is established in both data and Monte Carlo samples within a systematic uncertainty of approximately 10-20 %. In data, light-quark jets can be tagged with an efficiency of approximately 50 % while achieving a gluon-jet mis-tag rate of approximately 25 % in a pT range between 40 GeV and 360 GeV for jets in the acceptance of the tracker. The rejection of gluon-jets found in the data is significantly below what is attainable using a Pythia 6 Monte Carlo simulation, where gluon-jet mis-tag rates of 10 % can be reached for a 50 % selection efficiency of light-quark jets using the same jet properties.
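A sketch of a template-based likelihood discriminant in the spirit of the tagger described above, with an invented "track multiplicity" variable standing in for the ATLAS jet properties; all distributions and the working point are toy assumptions.

import numpy as np

rng = np.random.default_rng(6)
# Toy truth: gluon jets tend to have more constituents than quark jets.
quark = rng.poisson(6, size=20000)
gluon = rng.poisson(10, size=20000)

bins = np.arange(0, 31)
q_tmpl, _ = np.histogram(quark, bins=bins, density=True)
g_tmpl, _ = np.histogram(gluon, bins=bins, density=True)

def quark_likelihood(n_trk):
    # L_q / (L_q + L_g) evaluated from the binned templates.
    i = np.clip(n_trk, 0, len(q_tmpl) - 1)
    q, g = q_tmpl[i] + 1e-12, g_tmpl[i] + 1e-12
    return q / (q + g)

cut = 0.5                                            # illustrative working point
eff_q = np.mean(quark_likelihood(quark) > cut)       # quark-tag efficiency
mistag_g = np.mean(quark_likelihood(gluon) > cut)    # gluon mis-tag rate
print(f"quark efficiency {eff_q:.2f}, gluon mis-tag {mistag_g:.2f}")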
The X-43A Six Degree of Freedom Monte Carlo Analysis
NASA Technical Reports Server (NTRS)
Baumann, Ethan; Bahm, Catherine; Strovers, Brian; Beck, Roger
2008-01-01
This report provides an overview of the Hyper-X research vehicle Monte Carlo analysis conducted with the six-degree-of-freedom simulation. The methodology and model uncertainties used for the Monte Carlo analysis are presented as permitted. In addition, the process used to select hardware validation test cases from the Monte Carlo data is described. The preflight Monte Carlo analysis indicated that the X-43A control system was robust to the preflight uncertainties and provided the Hyper-X project an important indication that the vehicle would likely be successful in accomplishing the mission objectives. The X-43A in-flight performance is compared to the preflight Monte Carlo predictions and shown to exceed the Monte Carlo bounds in several instances. Possible modeling shortfalls are presented that may account for these discrepancies. The flight control laws and guidance algorithms were robust enough as a result of the preflight Monte Carlo analysis that the unexpected in-flight performance did not have undue consequences. Modeling and Monte Carlo analysis lessons learned are presented.
Ratmann, Oliver; Andrieu, Christophe; Wiuf, Carsten; Richardson, Sylvia
2009-06-30
Mathematical models are an important tool for explaining and comprehending complex phenomena, and unparalleled computational advances enable us to explore them easily, even with little or no understanding of their global properties. In fact, the likelihood of the data under complex stochastic models is often analytically or numerically intractable in many areas of science. This makes it even more important to simultaneously investigate the adequacy of these models (in absolute terms, against the data, rather than relative to the performance of other models), but no such procedure has been formally discussed when the likelihood is intractable. We provide a statistical interpretation to current developments in likelihood-free Bayesian inference that explicitly accounts for discrepancies between the model and the data, termed Approximate Bayesian Computation under model uncertainty (ABCμ). We augment the likelihood of the data with unknown error terms that correspond to freely chosen checking functions, and provide Monte Carlo strategies for sampling from the associated joint posterior distribution without the need to evaluate the likelihood. We discuss the benefit of incorporating model diagnostics within an ABC framework, and demonstrate how this method diagnoses model mismatch and guides model refinement by contrasting three qualitative models of protein network evolution to the protein interaction datasets of Helicobacter pylori and Treponema pallidum. Our results make a number of model deficiencies explicit, and suggest that the T. pallidum network topology is inconsistent with evolution dominated by link turnover or lateral gene transfer alone.
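A sketch of plain ABC rejection, the likelihood-free core on which ABCμ builds (the paper's explicit error terms and checking functions are not reproduced here); the model, summary statistic, and tolerance are illustrative.

import numpy as np

rng = np.random.default_rng(7)
data = rng.poisson(4.0, size=100)          # "observed" data, true rate 4
s_obs = data.mean()                        # summary statistic

accepted = []
for _ in range(20000):
    lam = rng.uniform(0.0, 10.0)           # draw a rate from the prior
    sim = rng.poisson(lam, size=100)       # simulate: no likelihood evaluation
    if abs(sim.mean() - s_obs) < 0.1:      # keep if summaries are close
        accepted.append(lam)

post = np.array(accepted)
print(f"{post.size} draws, posterior mean {post.mean():.2f}")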
Rhodes, Kirsty M; Turner, Rebecca M; White, Ian R; Jackson, Dan; Spiegelhalter, David J; Higgins, Julian P T
2016-12-20
Many meta-analyses combine results from only a small number of studies, a situation in which the between-study variance is imprecisely estimated when standard methods are applied. Bayesian meta-analysis allows incorporation of external evidence on heterogeneity, providing the potential for more robust inference on the effect size of interest. We present a method for performing Bayesian meta-analysis using data augmentation, in which we represent an informative conjugate prior for between-study variance by pseudo data and use meta-regression for estimation. To assist in this, we derive predictive inverse-gamma distributions for the between-study variance expected in future meta-analyses. These may serve as priors for heterogeneity in new meta-analyses. In a simulation study, we compare approximate Bayesian methods using meta-regression and pseudo data against fully Bayesian approaches based on importance sampling techniques and Markov chain Monte Carlo (MCMC). We compare the frequentist properties of these Bayesian methods with those of the commonly used frequentist DerSimonian and Laird procedure. The method is implemented in standard statistical software and provides a less complex alternative to standard MCMC approaches. An importance sampling approach produces almost identical results to standard MCMC approaches, and results obtained through meta-regression and pseudo data are very similar. On average, data augmentation provides closer results to MCMC, if implemented using restricted maximum likelihood estimation rather than DerSimonian and Laird or maximum likelihood estimation. The methods are applied to real datasets, and an extension to network meta-analysis is described. The proposed method facilitates Bayesian meta-analysis in a way that is accessible to applied researchers. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
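A sketch of MCMC for the normal-normal random-effects meta-analysis with an informative inverse-gamma prior on the between-study variance, the kind of external evidence the paper encodes as pseudo data; the prior constants and study data are illustrative, and the sampler is generic rather than the paper's meta-regression device.

import numpy as np

rng = np.random.default_rng(8)
y = np.array([0.3, 0.1, 0.5, 0.4, 0.2])          # study effect estimates
v = np.array([0.04, 0.09, 0.05, 0.08, 0.06])     # within-study variances
a0, b0 = 3.0, 0.06                               # inverse-gamma(a0, b0) prior on tau^2

def log_post(mu, tau2):
    w = v + tau2
    ll = -0.5 * np.sum(np.log(w) + (y - mu) ** 2 / w)   # marginal normal likelihood
    lp = -(a0 + 1) * np.log(tau2) - b0 / tau2           # inverse-gamma log prior
    return ll + lp                                      # flat prior on mu

# Random-walk Metropolis on (mu, log tau^2); the +ltau term is the Jacobian
# of the log transform, so the chain targets the (mu, tau^2) posterior.
mu, ltau = y.mean(), np.log(0.05)
cur = log_post(mu, np.exp(ltau)) + ltau
draws = []
for _ in range(20000):
    mu_p = mu + 0.1 * rng.normal()
    ltau_p = ltau + 0.3 * rng.normal()
    cand = log_post(mu_p, np.exp(ltau_p)) + ltau_p
    if np.log(rng.uniform()) < cand - cur:
        mu, ltau, cur = mu_p, ltau_p, cand
    draws.append((mu, np.exp(ltau)))
mu_s, tau2_s = np.array(draws[5000:]).T
print(f"mu = {mu_s.mean():.2f}, tau^2 = {tau2_s.mean():.3f}")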
Bayesian multiple-source localization in an uncertain ocean environment.
Dosso, Stan E; Wilmut, Michael J
2011-06-01
This paper considers simultaneous localization of multiple acoustic sources when properties of the ocean environment (water column and seabed) are poorly known. A Bayesian formulation is developed in which the environmental parameters, noise statistics, and locations and complex strengths (amplitudes and phases) of multiple sources are considered to be unknown random variables constrained by acoustic data and prior information. Two approaches are considered for estimating source parameters. Focalization maximizes the posterior probability density (PPD) over all parameters using adaptive hybrid optimization. Marginalization integrates the PPD using efficient Markov-chain Monte Carlo methods to produce joint marginal probability distributions for source ranges and depths, from which source locations are obtained. This approach also provides quantitative uncertainty analysis for all parameters, which can aid in understanding of the inverse problem and may be of practical interest (e.g., source-strength probability distributions). In both approaches, closed-form maximum-likelihood expressions for source strengths and noise variance at each frequency allow these parameters to be sampled implicitly, substantially reducing the dimensionality and difficulty of the inversion. Examples are presented of both approaches applied to single- and multi-frequency localization of multiple sources in an uncertain shallow-water environment, and a Monte Carlo performance evaluation study is carried out. © 2011 Acoustical Society of America
Prototype Conflict Alerting Logic for Free Flight
NASA Technical Reports Server (NTRS)
Yang, Lee C.; Kuchar, James K.
1997-01-01
This paper discusses the development of a prototype alerting system for a conceptual Free Flight environment. The concept assumes that datalink between aircraft is available and that conflicts are primarily resolved on the flight deck. Four alert stages are generated depending on the likelihood of a conflict. If the conflict is not resolved by the flight crews, Air Traffic Control is notified to take over separation authority. The alerting logic is based on probabilistic analysis through modeling of aircraft sensor and trajectory uncertainties. Monte Carlo simulations were used over a range of encounter situations to determine conflict probability. The four alert stages were then defined based on probability of conflict and on the number of avoidance maneuvers available to the flight crew. Preliminary results from numerical evaluations and from a piloted simulator study at NASA Ames Research Center are summarized.
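A sketch of estimating conflict probability by Monte Carlo over trajectory uncertainty, the quantity the four alert stages are keyed to; the encounter geometry, uncertainty levels, and the 5 nmi separation threshold are illustrative, not the study's models.

import numpy as np

rng = np.random.default_rng(9)
n_mc, dt, horizon = 10000, 1.0, 300.0        # trials, step (s), lookahead (s)
steps = int(horizon / dt)

# Own ship flies east at 220 m/s; intruder converges from the south-east.
own_v = np.array([220.0, 0.0])
int_p0 = np.array([60000.0, -40000.0])       # intruder position (m)
int_v0 = np.array([-60.0, 140.0])            # intruder velocity (m/s)
sep_min = 9260.0                             # ~5 nmi in metres

conflicts = 0
for _ in range(n_mc):
    # Perturb the intruder state with (illustrative) sensor/intent uncertainty.
    p = int_p0 + rng.normal(0.0, 500.0, 2)
    v = int_v0 + rng.normal(0.0, 5.0, 2)
    rel_p, rel_v = p, v - own_v
    t = np.arange(steps) * dt
    d = np.linalg.norm(rel_p + np.outer(t, rel_v), axis=1)
    conflicts += d.min() < sep_min           # loss of separation within horizon

print(f"P(conflict) ~ {conflicts / n_mc:.3f}")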
Model averaging in linkage analysis.
Matthysse, Steven
2006-06-05
Methods for genetic linkage analysis are traditionally divided into "model-dependent" and "model-independent," but there may be a useful place for an intermediate class, in which a broad range of possible models is considered as a parametric family. It is possible to average over model space with an empirical Bayes prior that weights models according to their goodness of fit to epidemiologic data, such as the frequency of the disease in the population and in first-degree relatives (and correlations with other traits in the pleiotropic case). For averaging over high-dimensional spaces, Markov chain Monte Carlo (MCMC) has great appeal, but it has a near-fatal flaw: it is not possible, in most cases, to provide rigorous sufficient conditions to permit the user safely to conclude that the chain has converged. A way of overcoming the convergence problem, if not of solving it, rests on a simple application of the principle of detailed balance. If the starting point of the chain has the equilibrium distribution, so will every subsequent point. The first point is chosen according to the target distribution by rejection sampling, and subsequent points by an MCMC process that has the target distribution as its equilibrium distribution. Model averaging with an empirical Bayes prior requires rapid estimation of likelihoods at many points in parameter space. Symbolic polynomials are constructed before the random walk over parameter space begins, to make the actual likelihood computations at each step of the random walk very fast. Power analysis in an illustrative case is described. © 2006 Wiley-Liss, Inc.
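A sketch of the "start in equilibrium" device described above: draw the chain's first state exactly from the target by rejection sampling, then apply a Metropolis kernel that leaves the same target invariant, so every state has the equilibrium distribution. The 1-D bimodal target is an illustrative stand-in for a likelihood surface.

import numpy as np

rng = np.random.default_rng(10)

def target(x):                       # unnormalized bimodal target density
    return np.exp(-0.5 * (x - 2) ** 2) + np.exp(-0.5 * (x + 2) ** 2)

# Rejection sampling with a uniform envelope on [-8, 8]; the target's mass
# outside this range is negligible. M bounds the target on the envelope.
M = 1.1
while True:
    x = rng.uniform(-8.0, 8.0)
    if rng.uniform() < target(x) / M:
        break                        # x now has exactly the target law

# Metropolis steps preserve the target, so every subsequent state is also
# in equilibrium: no burn-in argument is needed.
chain = [x]
for _ in range(10000):
    prop = x + rng.normal()
    if rng.uniform() < target(prop) / target(x):
        x = prop
    chain.append(x)
print("mean:", np.mean(chain), "(target mean 0)")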
Andrew D. Richardson; David Y. Hollinger
2005-01-01
Whether the goal is to fill gaps in the flux record, or to extract physiological parameters from eddy covariance data, researchers are frequently interested in fitting simple models of ecosystem physiology to measured data. Presently, there is no consensus on the best models to use, or the ideal optimization criteria. We demonstrate that, given our estimates of the...
Estimation of parameters of dose volume models and their confidence limits
NASA Astrophysics Data System (ADS)
van Luijk, P.; Delvigne, T. C.; Schilstra, C.; Schippers, J. M.
2003-07-01
Predictions of the normal-tissue complication probability (NTCP) for the ranking of treatment plans are based on fits of dose-volume models to clinical and/or experimental data. In the literature several different fit methods are used. In this work, frequently used methods and techniques for fitting NTCP models to dose-response data to establish dose-volume effects are discussed. The techniques are tested for their usability with dose-volume data and NTCP models. Different methods to estimate the confidence intervals of the model parameters are part of this study. From a critical-volume (CV) model with biologically realistic parameters, a primary dataset was generated, serving as the reference for this study and describable by the NTCP model. The CV model was fitted to this dataset. From the resulting parameters and the CV model, 1000 secondary datasets were generated by Monte Carlo simulation. All secondary datasets were fitted to obtain 1000 parameter sets of the CV model. Thus the 'real' spread in fit results due to statistical spreading in the data is obtained and has been compared with estimates of the confidence intervals obtained by different methods applied to the primary dataset. The confidence limits of the parameters of one dataset were estimated in several ways: employing the covariance matrix, using the jackknife method, and directly from the likelihood landscape. These results were compared with the spread of the parameters obtained from the secondary parameter sets. For the estimation of confidence intervals on NTCP predictions, three methods were tested. Firstly, propagation of errors using the covariance matrix was used. Secondly, the meaning of the width of a bundle of curves resulting from parameters within the one-standard-deviation region in likelihood space was investigated. Thirdly, many parameter sets and their likelihoods were used to create a likelihood-weighted probability distribution of the NTCP. It is concluded that for the type of dose-response data used here, only a full likelihood analysis will produce reliable results. The often-used approximations, such as the usage of the covariance matrix, produce inconsistent confidence limits on both the parameter sets and the resulting NTCP values.
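The core Monte Carlo procedure, refitting many secondary datasets generated from the primary fit, is a parametric bootstrap. A minimal sketch follows, using a simple logistic dose-response curve as a stand-in for the critical-volume NTCP model; the model form, sample size, and parameter values are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

# Stand-in dose-response model: logistic NTCP(d) = expit((d - d50)/m).
def neg_log_lik(params, dose, response):
    d50, m = params
    p = np.clip(expit((dose - d50) / m), 1e-9, 1 - 1e-9)
    return -np.sum(response * np.log(p) + (1 - response) * np.log(1 - p))

def fit(dose, response, start=(50.0, 5.0)):
    return minimize(neg_log_lik, start, args=(dose, response),
                    method="Nelder-Mead").x

rng = np.random.default_rng(0)
dose = rng.uniform(20.0, 80.0, size=200)            # primary dataset
response = rng.binomial(1, expit((dose - 55.0) / 6.0))
d50_hat, m_hat = fit(dose, response)

# Generate and refit 1000 secondary datasets from the fitted model; the
# spread of the refitted parameters is the Monte Carlo reference against
# which covariance-, jackknife-, and likelihood-based intervals are judged.
refits = np.array([fit(dose,
                       rng.binomial(1, expit((dose - d50_hat) / m_hat)),
                       start=(d50_hat, m_hat))
                   for _ in range(1000)])
ci_d50 = np.percentile(refits[:, 0], [2.5, 97.5])   # 95% MC interval
```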
Fan, Ming; Thongsri, Tepwitoon; Axe, Lisa; Tyson, Trevor A
2005-06-01
A probabilistic approach was applied in an ecological risk assessment (ERA) to characterize risk and address uncertainty, employing Monte Carlo simulations to assess parameter and risk probabilistic distributions. This simulation tool (ERA) includes a Windows-based interface, an interactive and modifiable database management system (DBMS) that addresses a food web at trophic levels, and a comprehensive evaluation of exposure pathways. To illustrate this model, ecological risks from depleted uranium (DU) exposure at the US Army Yuma Proving Ground (YPG) and Aberdeen Proving Ground (APG) were assessed and characterized. Probabilistic distributions showed that at YPG, a reduction in plant root weight is considered likely to occur (98% likelihood) from exposure to DU; for most terrestrial animals, the likelihood of adverse reproduction effects ranges from 0.1% to 44%. However, for the lesser long-nosed bat, the effects are expected to occur (>99% likelihood) through the reduction in size and weight of offspring. Based on available DU data for the firing range at APG, DU uptake will not likely affect survival of aquatic plants and animals (<0.1% likelihood). Based on field and laboratory studies conducted at APG and YPG on pocket mice, kangaroo rat, white-throated woodrat, deer, and milfoil, the body burden concentrations observed fall within the distributions simulated at both sites.
Zhao, Xing; Zhou, Xiao-Hua; Feng, Zijian; Guo, Pengfei; He, Hongyan; Zhang, Tao; Duan, Lei; Li, Xiaosong
2013-01-01
As a useful tool for geographical cluster detection of events, the spatial scan statistic is widely applied in many fields and plays an increasingly important role. The classic version of the spatial scan statistic for binary outcomes was developed by Kulldorff, based on the Bernoulli or the Poisson probability model. In this paper, we apply the Hypergeometric probability model to construct the likelihood function under the null hypothesis. Compared with existing methods, the likelihood function under the null hypothesis is an alternative and indirect means of identifying the potential cluster, and the test statistic is the extreme value of the likelihood function. As in Kulldorff's methods, we adopt a Monte Carlo test of significance. Both methods are applied to detect spatial clusters of Japanese encephalitis in Sichuan province, China, in 2009, and the detected clusters are identical. A simulation with independent benchmark data indicates that the test statistic based on the Hypergeometric model outperforms Kulldorff's statistics for clusters of high population density or large size; otherwise, Kulldorff's statistics are superior.
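The Monte Carlo test of significance common to both approaches can be sketched as follows: compute the scan statistic on the observed counts, recompute it on datasets simulated under the null, and rank the observed value among the replicates. The Poisson-based likelihood ratio below is illustrative; the paper's Hypergeometric construction would replace it.

```python
import numpy as np

def scan_statistic(cases, population, windows):
    """Max log-likelihood ratio over candidate windows (Poisson model);
    each window is a boolean mask selecting a set of regions."""
    C, N = cases.sum(), population.sum()
    best = 0.0
    for w in windows:
        c, n = cases[w].sum(), population[w].sum()
        e = C * n / N                       # expected cases inside window
        if c > e:                           # scan for high-rate clusters only
            llr = c * np.log(c / e)
            c_out = C - c
            if c_out > 0:
                llr += c_out * np.log(c_out / (C - e))
            best = max(best, llr)
    return best

def monte_carlo_pvalue(cases, population, windows, n_sim=999, seed=1):
    """Rank the observed statistic among replicates simulated under the
    null hypothesis (cases distributed proportionally to population)."""
    rng = np.random.default_rng(seed)
    obs = scan_statistic(cases, population, windows)
    p_null = population / population.sum()
    null = [scan_statistic(rng.multinomial(cases.sum(), p_null),
                           population, windows)
            for _ in range(n_sim)]
    return (1 + sum(t >= obs for t in null)) / (n_sim + 1)
```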
NASA Astrophysics Data System (ADS)
Barkley, Brett E.
A cooperative detection and tracking algorithm for multiple targets constrained to a road network is presented for fixed-wing Unmanned Air Vehicles (UAVs) with a finite field of view. Road networks of interest are formed into graphs with nodes that indicate the target likelihood ratio (before detection) and position probability (after detection). A Bayesian likelihood ratio tracker recursively assimilates target observations until the cumulative observations at a particular location pass a detection criterion. At this point, a target is considered detected and a position probability is generated for the target on the graph. Data association is subsequently used to route future measurements, either to update the likelihood ratio tracker (for undetected targets) or to update a position probability (for previously detected targets). Three strategies for motion planning of UAVs are proposed to balance searching for new targets with tracking known targets for a variety of scenarios. Performance was tested in Monte Carlo simulations for a variety of mission parameters, including tracking on road networks of varying complexity and using UAVs at various altitudes.
Kinetic analysis of single molecule FRET transitions without trajectories
NASA Astrophysics Data System (ADS)
Schrangl, Lukas; Göhring, Janett; Schütz, Gerhard J.
2018-03-01
Single molecule Förster resonance energy transfer (smFRET) is a popular tool to study biological systems that undergo topological transitions on the nanometer scale. smFRET experiments typically require recording of long smFRET trajectories and subsequent statistical analysis to extract parameters such as the states' lifetimes. Alternatively, analysis of probability distributions exploits the shapes of smFRET distributions at well-chosen exposure times and hence works without the acquisition of time traces. Here, we describe a variant that utilizes statistical tests to compare experimental datasets with Monte Carlo simulations. For a given model, parameters are varied to cover the full realistic parameter space. As output, the method yields p-values which quantify the likelihood for each parameter setting to be consistent with the experimental data. The method provides suitable results even if the actual lifetimes differ by an order of magnitude. We also demonstrate the robustness of the method to inaccurately determined input parameters. As proof of concept, the new method was applied to the determination of transition rate constants for Holliday junctions.
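A minimal sketch of this simulation-versus-experiment testing scheme is given below, assuming a hypothetical two-state system with fixed state efficiencies and Gaussian measurement noise; the exposure time, efficiency values, and choice of test statistic (here a two-sample KS test) are placeholders rather than the authors' choices.

```python
import numpy as np
from scipy.stats import ks_2samp

def simulate_fret(tau1, tau2, exposure, n=1000, rng=None):
    """Time-averaged apparent FRET efficiencies for a two-state system
    with mean dwell times tau1/tau2, observed over a finite exposure."""
    rng = rng or np.random.default_rng()
    e1, e2, noise = 0.2, 0.8, 0.05          # assumed states + noise level
    eff = np.empty(n)
    for i in range(n):
        t, t1 = 0.0, 0.0
        in1 = rng.random() < tau1 / (tau1 + tau2)   # equilibrium start
        while t < exposure:
            dwell = min(rng.exponential(tau1 if in1 else tau2),
                        exposure - t)
            if in1:
                t1 += dwell
            t += dwell
            in1 = not in1
        frac1 = t1 / exposure               # occupancy of state 1
        eff[i] = frac1 * e1 + (1 - frac1) * e2
    return eff + rng.normal(0.0, noise, size=n)

def pvalue_map(data, tau_grid, exposure):
    """p-value per parameter setting: high p = consistent with data."""
    return {(t1, t2): ks_2samp(data, simulate_fret(t1, t2, exposure)).pvalue
            for t1 in tau_grid for t2 in tau_grid}

rng = np.random.default_rng(1)
data = simulate_fret(2.0, 5.0, exposure=10.0, rng=rng)  # pretend "measured"
pmap = pvalue_map(data, tau_grid=[1.0, 2.0, 5.0, 10.0], exposure=10.0)
```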
Image-Based Multi-Target Tracking through Multi-Bernoulli Filtering with Interactive Likelihoods.
Hoak, Anthony; Medeiros, Henry; Povinelli, Richard J
2017-03-03
We develop an interactive likelihood (ILH) for sequential Monte Carlo (SMC) methods for image-based multiple target tracking applications. The purpose of the ILH is to improve tracking accuracy by reducing the need for data association. In addition, we integrate a recently developed deep neural network for pedestrian detection along with the ILH with a multi-Bernoulli filter. We evaluate the performance of the multi-Bernoulli filter with the ILH and the pedestrian detector in a number of publicly available datasets (2003 PETS INMOVE, Australian Rules Football League (AFL) and TUD-Stadtmitte) using standard, well-known multi-target tracking metrics (optimal sub-pattern assignment (OSPA) and classification of events, activities and relationships for multi-object trackers (CLEAR MOT)). In all datasets, the ILH term increases the tracking accuracy of the multi-Bernoulli filter.
Diffuse prior monotonic likelihood ratio test for evaluation of fused image quality measures.
Wei, Chuanming; Kaplan, Lance M; Burks, Stephen D; Blum, Rick S
2011-02-01
This paper introduces a novel method to score how well proposed fused image quality measures (FIQMs) indicate the effectiveness of humans to detect targets in fused imagery. The human detection performance is measured via human perception experiments. A good FIQM should relate to perception results in a monotonic fashion. The method computes a new diffuse prior monotonic likelihood ratio (DPMLR) to facilitate the comparison of the H(1) hypothesis that the intrinsic human detection performance is related to the FIQM via a monotonic function against the null hypothesis that the detection and image quality relationship is random. The paper discusses many interesting properties of the DPMLR and demonstrates the effectiveness of the DPMLR test via Monte Carlo simulations. Finally, the DPMLR is used to score FIQMs with test cases considering over 35 scenes and various image fusion algorithms.
Bayesian structural equation modeling: a more flexible representation of substantive theory.
Muthén, Bengt; Asparouhov, Tihomir
2012-09-01
This article proposes a new approach to factor analysis and structural equation modeling using Bayesian analysis. The new approach replaces parameter specifications of exact zeros with approximate zeros based on informative, small-variance priors. It is argued that this produces an analysis that better reflects substantive theories. The proposed Bayesian approach is particularly beneficial in applications where parameters are added to a conventional model such that a nonidentified model is obtained if maximum-likelihood estimation is applied. This approach is useful for measurement aspects of latent variable modeling, such as with confirmatory factor analysis, and the measurement part of structural equation modeling. Two application areas are studied, cross-loadings and residual correlations in confirmatory factor analysis. An example using a full structural equation model is also presented, showing an efficient way to find model misspecification. The approach encompasses 3 elements: model testing using posterior predictive checking, model estimation, and model modification. Monte Carlo simulations and real data are analyzed using Mplus. The real-data analyses use data from Holzinger and Swineford's (1939) classic mental abilities study, Big Five personality factor data from a British survey, and science achievement data from the National Educational Longitudinal Study of 1988.
Multiple-hit parameter estimation in monolithic detectors.
Hunter, William C J; Barrett, Harrison H; Lewellen, Tom K; Miyaoka, Robert S
2013-02-01
We examine a maximum-a-posteriori method for estimating the primary interaction position of gamma rays with multiple interaction sites (hits) in a monolithic detector. In assessing the performance of a multiple-hit estimator over that of a conventional one-hit estimator, we consider a few different detector and readout configurations of a 50-mm-wide square cerium-doped lutetium oxyorthosilicate block. For this study, we use simulated data from SCOUT, a Monte-Carlo tool for photon tracking and modeling scintillation- camera output. With this tool, we determine estimate bias and variance for a multiple-hit estimator and compare these with similar metrics for a one-hit maximum-likelihood estimator, which assumes full energy deposition in one hit. We also examine the effect of event filtering on these metrics; for this purpose, we use a likelihood threshold to reject signals that are not likely to have been produced under the assumed likelihood model. Depending on detector design, we observe a 1%-12% improvement of intrinsic resolution for a 1-or-2-hit estimator as compared with a 1-hit estimator. We also observe improved differentiation of photopeak events using a 1-or-2-hit estimator as compared with the 1-hit estimator; more than 6% of photopeak events that were rejected by likelihood filtering for the 1-hit estimator were accurately identified as photopeak events and positioned without loss of resolution by a 1-or-2-hit estimator; for PET, this equates to at least a 12% improvement in coincidence-detection efficiency with likelihood filtering applied.
MontePython 3: Parameter inference code for cosmology
NASA Astrophysics Data System (ADS)
Brinckmann, Thejs; Lesgourgues, Julien; Audren, Benjamin; Benabed, Karim; Prunet, Simon
2018-05-01
MontePython 3 provides numerous ways to explore parameter space using Markov chain Monte Carlo (MCMC) sampling, including Metropolis-Hastings, Nested Sampling, Cosmo Hammer, and a Fisher sampling method. This improved version of the Monte Python (ascl:1307.002) parameter inference code for cosmology offers new ingredients that improve the performance of Metropolis-Hastings sampling, speeding up convergence and offering significant time savings in difficult runs. Additional likelihoods and plotting options are available, as are post-processing algorithms such as importance sampling and adding derived parameters.
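At its core, the Metropolis-Hastings sampling that such codes build on is compact. The sketch below is a generic implementation, not MontePython's actual code: a covariance-shaped Gaussian random-walk proposal, an accept/reject step on the log-posterior, and a toy two-parameter posterior in place of a cosmological likelihood.

```python
import numpy as np

def metropolis_hastings(log_post, theta0, cov, n_steps, seed=0):
    """Random-walk Metropolis-Hastings with a covariance-shaped Gaussian
    proposal (e.g., from a Fisher-matrix estimate). Illustrative only,
    not MontePython's actual implementation."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, float)
    lp = log_post(theta)
    L = np.linalg.cholesky(cov)
    chain, accepted = [theta.copy()], 0
    for _ in range(n_steps):
        prop = theta + L @ rng.standard_normal(theta.size)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject step
            theta, lp = prop, lp_prop
            accepted += 1
        chain.append(theta.copy())
    return np.array(chain), accepted / n_steps

# Toy two-parameter Gaussian posterior in place of a cosmological one:
chain, acc_rate = metropolis_hastings(
    lambda th: -0.5 * np.sum(th**2), [0.1, -0.2], 0.3 * np.eye(2), 20_000)
```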
pytc: Open-Source Python Software for Global Analyses of Isothermal Titration Calorimetry Data.
Duvvuri, Hiranmayi; Wheeler, Lucas C; Harms, Michael J
2018-05-08
Here we describe pytc, an open-source Python package for global fits of thermodynamic models to multiple isothermal titration calorimetry experiments. Key features include simplicity, the ability to implement new thermodynamic models, a robust maximum likelihood fitter, a fast Bayesian Markov-chain Monte Carlo sampler, rigorous implementation, extensive documentation, and full cross-platform compatibility. pytc fitting can be done using an application program interface or via a graphical user interface. It is available for download at https://github.com/harmslab/pytc.
1981-03-03
Government Agencies. The views and conclusions contained in this document are those of the contractor and should not be interpreted as necessarily... resolving closely spaced optical point targets are compared using Monte Carlo simulation results for three different examples. It is found that the MEM is... although no direct comparison was given. The objective of this report is to compare the capabilities of the MLE and the MEM in resolving two optical CSOs.
Transformations based on continuous piecewise-affine velocity fields
Freifeld, Oren; Hauberg, Soren; Batmanghelich, Kayhan; ...
2017-01-11
Here, we propose novel finite-dimensional spaces of well-behaved ℝn → ℝn transformations. The latter are obtained by (fast and highly-accurate) integration of continuous piecewise-affine velocity fields. The proposed method is simple yet highly expressive, effortlessly handles optional constraints (e.g., volume preservation and/or boundary conditions), and supports convenient modeling choices such as smoothing priors and coarse-to-fine analysis. Importantly, the proposed approach, partly due to its rapid likelihood evaluations and partly due to its other properties, facilitates tractable inference over rich transformation spaces, including using Markov-Chain Monte-Carlo methods. Its applications include, but are not limited to: monotonic regression (more generally, optimization over monotonic functions); modeling cumulative distribution functions or histograms; time-warping; image warping; image registration; real-time diffeomorphic image editing; data augmentation for image classifiers. Our GPU-based code is publicly available.
Diallel analysis for sex-linked and maternal effects.
Zhu, J; Weir, B S
1996-01-01
Genetic models including sex-linked and maternal effects as well as autosomal gene effects are described. Monte Carlo simulations were conducted to compare efficiencies of estimation by minimum norm quadratic unbiased estimation (MINQUE) and restricted maximum likelihood (REML) methods. MINQUE(1), which has 1 for all prior values, has a similar efficiency to MINQUE(θ), which requires prior estimates of parameter values. MINQUE(1) has the advantage over REML of unbiased estimation and convenient computation. An adjusted unbiased prediction (AUP) method is developed for predicting random genetic effects. AUP is desirable for its easy computation and unbiasedness of both mean and variance of predictors. The jackknife procedure is appropriate for estimating the sampling variances of estimated variances (or covariances) and of predicted genetic effects. A t-test based on jackknife variances is applicable for detecting significance of variation. Worked examples from mice and silkworm data are given in order to demonstrate variance and covariance estimation and genetic effect prediction.
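The jackknife variance estimate used for the t-tests is simple to implement: delete one observation at a time, recompute the estimator, and scale the spread of the leave-one-out values. A minimal sketch follows, with the sample variance standing in for a MINQUE variance component or an AUP-predicted genetic effect.

```python
import numpy as np

def jackknife_variance(data, estimator):
    """Delete-one jackknife estimate of the sampling variance of
    `estimator` applied to a 1-D sample."""
    n = len(data)
    theta_full = estimator(data)
    loo = np.array([estimator(np.delete(data, i)) for i in range(n)])
    var_jack = (n - 1) / n * np.sum((loo - loo.mean())**2)
    return theta_full, var_jack

rng = np.random.default_rng(0)
x = rng.normal(size=40)
est, var = jackknife_variance(x, np.var)
t_stat = est / np.sqrt(var)     # t-test for significance of variation
```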
Spatial generalised linear mixed models based on distances.
Melo, Oscar O; Mateu, Jorge; Melo, Carlos E
2016-10-01
Risk models derived from environmental data have been widely shown to be effective in delineating geographical areas of risk because they are intuitively easy to understand. We present a new method based on distances, which allows the modelling of continuous and non-continuous random variables through distance-based spatial generalised linear mixed models. The parameters are estimated using Markov chain Monte Carlo maximum likelihood, which is a feasible and useful technique. The proposed method depends on a detrending step built from continuous or categorical explanatory variables, or a mixture of them, by using an appropriate Euclidean distance. The method is illustrated through the analysis of the variation in the prevalence of Loa loa among a sample of village residents in Cameroon, where the explanatory variables included elevation, together with the maximum normalised-difference vegetation index and the standard deviation of the normalised-difference vegetation index calculated from repeated satellite scans over time. © The Author(s) 2013.
Adaptive Stress Testing of Airborne Collision Avoidance Systems
NASA Technical Reports Server (NTRS)
Lee, Ritchie; Kochenderfer, Mykel J.; Mengshoel, Ole J.; Brat, Guillaume P.; Owen, Michael P.
2015-01-01
This paper presents a scalable method to efficiently search for the most likely state trajectory leading to an event given only a simulator of a system. Our approach uses a reinforcement learning formulation and solves it using Monte Carlo Tree Search (MCTS). The approach places very few requirements on the underlying system, requiring only that the simulator provide some basic controls, the ability to evaluate certain conditions, and a mechanism to control the stochasticity in the system. Access to the system state is not required, allowing the method to support systems with hidden state. The method is applied to stress test a prototype aircraft collision avoidance system to identify trajectories that are likely to lead to near mid-air collisions. We present results for both single and multi-threat encounters and discuss their relevance. Compared with direct Monte Carlo search, this MCTS method performs significantly better both in finding events and in maximizing their likelihood.
Statistical inference for noisy nonlinear ecological dynamic systems.
Wood, Simon N
2010-08-26
Chaotic ecological dynamic systems defy conventional statistical analysis. Systems with near-chaotic dynamics are little better. Such systems are almost invariably driven by endogenous dynamic processes plus demographic and environmental process noise, and are only observable with error. Their sensitivity to history means that minute changes in the driving noise realization, or the system parameters, will cause drastic changes in the system trajectory. This sensitivity is inherited and amplified by the joint probability density of the observable data and the process noise, rendering it useless as the basis for obtaining measures of statistical fit. Because the joint density is the basis for the fit measures used by all conventional statistical methods, this is a major theoretical shortcoming. The inability to make well-founded statistical inferences about biological dynamic models in the chaotic and near-chaotic regimes, other than on an ad hoc basis, leaves dynamic theory without the methods of quantitative validation that are essential tools in the rest of biological science. Here I show that this impasse can be resolved in a simple and general manner, using a method that requires only the ability to simulate the observed data on a system from the dynamic model about which inferences are required. The raw data series are reduced to phase-insensitive summary statistics, quantifying local dynamic structure and the distribution of observations. Simulation is used to obtain the mean and the covariance matrix of the statistics, given model parameters, allowing the construction of a 'synthetic likelihood' that assesses model fit. This likelihood can be explored using a straightforward Markov chain Monte Carlo sampler, but one further post-processing step returns pure likelihood-based inference. I apply the method to establish the dynamic nature of the fluctuations in Nicholson's classic blowfly experiments.
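A compact sketch of the synthetic-likelihood construction is given below: reduce each simulated series to summary statistics, fit a multivariate Gaussian to the simulated statistics, and evaluate the observed statistics under that Gaussian. The summary set and the noisy Ricker-map simulator are illustrative choices, not necessarily those used for the blowfly analysis.

```python
import numpy as np

def summaries(y):
    """Phase-insensitive summary statistics (an illustrative set: mean,
    variance, lag-1/2 autocovariances, third central moment)."""
    y = np.asarray(y, float)
    d = y - y.mean()
    return np.array([y.mean(), d.var(),
                     np.mean(d[1:] * d[:-1]), np.mean(d[2:] * d[:-2]),
                     np.mean(d**3)])

def synthetic_loglik(theta, s_obs, simulate, n_rep=200, seed=0):
    """log N(s_obs; mu, Sigma), with mu and Sigma estimated from n_rep
    simulations of the dynamic model at parameter vector theta."""
    rng = np.random.default_rng(seed)
    S = np.array([summaries(simulate(theta, rng)) for _ in range(n_rep)])
    mu, cov = S.mean(axis=0), np.cov(S, rowvar=False)
    cov += 1e-8 * np.eye(len(mu))          # numerical regularization
    diff = s_obs - mu
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (diff @ np.linalg.solve(cov, diff)
                   + logdet + len(mu) * np.log(2.0 * np.pi))

# Illustrative simulator: the noisy Ricker map, a standard test case.
def ricker(theta, rng, n=100):
    log_r, sigma, phi = theta
    N, out = 1.0, np.empty(n)
    for t in range(n):
        N *= np.exp(log_r - N + sigma * rng.standard_normal())
        out[t] = rng.poisson(phi * N)
    return out

s_obs = summaries(ricker((3.8, 0.3, 10.0), np.random.default_rng(42)))
ll = synthetic_loglik((3.8, 0.3, 10.0), s_obs, ricker)
```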
Uncertainties in ozone concentrations predicted with a Lagrangian photochemical air quality model have been estimated using Bayesian Monte Carlo (BMC) analysis. Bayesian Monte Carlo analysis provides a means of combining subjective "prior" uncertainty estimates developed ...
Prompt gamma ray imaging for verification of proton boron fusion therapy: A Monte Carlo study.
Shin, Han-Back; Yoon, Do-Kun; Jung, Joo-Young; Kim, Moo-Sub; Suh, Tae Suk
2016-10-01
The purpose of this study was to verify the feasibility of acquiring a single photon emission computed tomography image using prompt gamma rays for proton boron fusion therapy (PBFT) and to confirm the enhanced therapeutic effect of PBFT by comparison with conventional proton therapy without boron. A Monte Carlo simulation was performed to acquire the reconstructed image during PBFT. We acquired the percentage depth dose (PDD) of the proton beams in a water phantom, the energy spectrum of the prompt gamma rays, and tomographic images including the boron uptake region (BUR; target). The prompt gamma ray image was reconstructed using maximum likelihood expectation maximisation (MLEM) with raw data from 64 projections. To verify the reconstructed image, both an image profile and a contrast analysis as a function of the iteration number were conducted. In addition, the physical distance between two BURs in the region of interest of each BUR was measured. The PDD of the proton beam in the water phantom including the BURs shows a more efficient dose delivery to the tumour region than that of conventional proton therapy. A 719 keV prompt gamma ray peak was clearly observed in the prompt gamma ray energy spectrum. The prompt gamma ray image was reconstructed successfully using 64 projections. Different image profiles including two BURs were acquired from the reconstructed image according to the iteration number. We confirmed successful acquisition of a prompt gamma ray image during PBFT. In addition, the quantitative image analysis results showed relatively good performance for further study. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
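The MLEM reconstruction step can be sketched generically as below; this is the standard multiplicative update for emission data applied to a system matrix and a vector of projection counts, not the authors' simulation code.

```python
import numpy as np

def mlem(system_matrix, projections, n_iter=50):
    """Standard multiplicative MLEM update for emission data:
    A[i, j] = probability that a photon emitted in voxel j is recorded
    in projection bin i; y = measured counts per bin."""
    A = np.asarray(system_matrix, float)
    y = np.asarray(projections, float)
    x = np.ones(A.shape[1])                 # uniform initial image
    sens = np.maximum(A.sum(axis=0), 1e-12) # per-voxel sensitivity
    for _ in range(n_iter):
        expected = A @ x                    # forward projection
        ratio = np.divide(y, expected, out=np.zeros_like(y),
                          where=expected > 0)
        x *= (A.T @ ratio) / sens           # back-project and update
    return x

# Tiny usage example with a random system matrix and noiseless data:
rng = np.random.default_rng(0)
A = rng.random((128, 64))
truth = rng.random(64)
image = mlem(A, A @ truth)
```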
NASA Astrophysics Data System (ADS)
Lu, Dan; Ricciuto, Daniel; Walker, Anthony; Safta, Cosmin; Munger, William
2017-09-01
Calibration of terrestrial ecosystem models is important but challenging. Bayesian inference implemented by Markov chain Monte Carlo (MCMC) sampling provides a comprehensive framework to estimate model parameters and associated uncertainties using their posterior distributions. The effectiveness and efficiency of the method strongly depend on the MCMC algorithm used. In this work, a differential evolution adaptive Metropolis (DREAM) algorithm is used to estimate posterior distributions of 21 parameters for the data assimilation linked ecosystem carbon (DALEC) model using 14 years of daily net ecosystem exchange data collected at the Harvard Forest Environmental Measurement Site eddy-flux tower. The calibration of DREAM results in a better model fit and predictive performance compared to the popular adaptive Metropolis (AM) scheme. Moreover, DREAM indicates that two parameters controlling autumn phenology have multiple modes in their posterior distributions, while AM only identifies one mode. The application suggests that DREAM is well suited to calibrating complex terrestrial ecosystem models, where the number of uncertain parameters is usually large and the existence of local optima is always a concern. In addition, residual analysis is used to justify the assumptions of the error model employed in the Bayesian calibration. The results indicate that a heteroscedastic, correlated, Gaussian error model is appropriate for the problem, and the likelihood function constructed accordingly can alleviate the underestimation of parameter uncertainty that is usually caused by using uncorrelated error models.
First observations of separated atmospheric νμ and ν¯μ events in the MINOS detector
NASA Astrophysics Data System (ADS)
Adamson, P.; Alexopoulos, T.; Allison, W. W. M.; Alner, G. J.; Anderson, K.; Andreopoulos, C.; Andrews, M.; Andrews, R.; Arroyo, C.; Avvakumov, S.; Ayres, D. S.; Baller, B.; Barish, B.; Barker, M. A.; Barnes, P. D., Jr.; Barr, G.; Barrett, W. L.; Beall, E.; Becker, B. R.; Belias, A.; Bergfeld, T.; Bernstein, R. H.; Bhattacharya, D.; Bishai, M.; Blake, A.; Bocean, V.; Bock, B.; Bock, G. J.; Boehm, J.; Boehnlein, D. J.; Bogert, D.; Border, P. M.; Bower, C.; Boyd, S.; Buckley-Geer, E.; Byon-Wagner, A.; Cabrera, A.; Chapman, J. D.; Chase, T. R.; Chernichenko, S. K.; Childress, S.; Choudhary, B. C.; Cobb, J. H.; Cossairt, J. D.; Courant, H.; Crane, D. A.; Culling, A. J.; Dawson, J. W.; Demuth, D. M.; de Santo, A.; Dierckxsens, M.; Diwan, M. V.; Dorman, M.; Drake, G.; Ducar, R.; Durkin, T.; Erwin, A. R.; Escobar, C. O.; Evans, J.; Fackler, O. D.; Harris, E. Falk; Feldman, G. J.; Felt, N.; Fields, T. H.; Ford, R.; Frohne, M. V.; Gallagher, H. R.; Gebhard, M.; Godley, A.; Gogos, J.; Goodman, M. C.; Gornushkin, Yu.; Gouffon, P.; Grashorn, E.; Grossman, N.; Grudzinski, J. J.; Grzelak, K.; Guarino, V.; Habig, A.; Halsall, R.; Hanson, J.; Harris, D.; Harris, P. G.; Hartnell, J.; Hartouni, E. P.; Hatcher, R.; Heller, K.; Hill, N.; Ho, Y.; Howcroft, C.; Hylen, J.; Ignatenko, M.; Indurthy, D.; Irwin, G. M.; James, C.; Jenner, L.; Jensen, D.; Joffe-Minor, T.; Kafka, T.; Kang, H. J.; Kasahara, S. M. S.; Kilmer, J.; Kim, H.; Koizumi, G.; Kopp, S.; Kordosky, M.; Koskinen, D. J.; Kostin, M.; Krakauer, D. A.; Kumaratunga, S.; Ladran, A. S.; Lang, K.; Laughton, C.; Lebedev, A.; Lee, R.; Lee, W. Y.; Libkind, M. A.; Liu, J.; Litchfield, P. J.; Litchfield, R. P.; Longley, N. P.; Lucas, P.; Luebke, W.; Madani, S.; Maher, E.; Makeev, V.; Mann, W. A.; Marchionni, A.; Marino, A. D.; Marshak, M. L.; Marshall, J. S.; McDonald, J.; McGowan, A.; Meier, J. R.; Merzon, G. I.; Messier, M. D.; Michael, D. G.; Milburn, R. H.; Miller, J. L.; Miller, W. H.; Mishra, S. R.; Miyagawa, P. S.; Moore, C.; Morfín, J.; Morse, R.; Mualem, L.; Mufson, S.; Murgia, S.; Murtagh, M. J.; Musser, J.; Naples, D.; Nelson, C.; Nelson, J. K.; Newman, H. B.; Nezrick, F.; Nichol, R. J.; Nicholls, T. C.; Ochoa-Ricoux, J. P.; Oliver, J.; Oliver, W. P.; Onuchin, V. A.; Osiecki, T.; Ospanov, R.; Paley, J.; Paolone, V.; Para, A.; Patzak, T.; Pavlovich, Z.; Pearce, G. F.; Pearson, N.; Peck, C. W.; Perry, C.; Peterson, E. A.; Petyt, D. A.; Ping, H.; Piteira, R.; Pla-Dalmau, A.; Plunkett, R. K.; Price, L. E.; Proga, M.; Pushka, D. R.; Rahman, D.; Rameika, R. A.; Raufer, T. M.; Read, A. L.; Rebel, B.; Reyna, D. E.; Rosenfeld, C.; Rubin, H. A.; Ruddick, K.; Ryabov, V. A.; Saakyan, R.; Sanchez, M. C.; Saoulidou, N.; Schneps, J.; Schoessow, P. V.; Schreiner, P.; Schwienhorst, R.; Semenov, V. K.; Seun, S.-M.; Shanahan, P.; Shield, P. D.; Smart, W.; Smirnitsky, V.; Smith, C.; Smith, P. N.; Sousa, A.; Speakman, B.; Stamoulis, P.; Stefanik, A.; Sullivan, P.; Swan, J. M.; Symes, P. A.; Tagg, N.; Talaga, R. L.; Tetteh-Lartey, E.; Thomas, J.; Thompson, J.; Thomson, M. A.; Thron, J. L.; Trendler, R.; Trevor, J.; Trostin, I.; Tsarev, V. A.; Tzanakos, G.; Urheim, J.; Vahle, P.; Vakili, M.; Vaziri, K.; Velissaris, C.; Verebryusov, V.; Viren, B.; Wai, L.; Ward, C. P.; Ward, D. R.; Watabe, M.; Weber, A.; Webb, R. C.; Wehmann, A.; West, N.; White, C.; White, R. F.; Wojcicki, S. G.; Wright, D. M.; Wu, Q. K.; Yan, W. G.; Yang, T.; Yumiceva, F. X.; Yun, J. C.; Zheng, H.; Zois, M.; Zwaska, R.
2006-04-01
The complete 5.4 kton MINOS far detector has been taking data since the beginning of August 2003 at a depth of 2070 meters water-equivalent in the Soudan mine, Minnesota. This paper presents the first MINOS observations of νμ and ν̄μ charged-current atmospheric neutrino interactions based on an exposure of 418 days. The ratio of upward- to downward-going events in the data is compared to the Monte Carlo expectation in the absence of neutrino oscillations, giving R^{data}_{up/down}/R^{MC}_{up/down} = 0.62^{+0.19}_{-0.14}(stat.) ± 0.02(sys.). An extended maximum likelihood analysis of the observed L/E distributions excludes the null hypothesis of no neutrino oscillations at the 98% confidence level. Using the curvature of the observed muons in the 1.3 T MINOS magnetic field, νμ and ν̄μ interactions are separated. The ratio of ν̄μ to νμ events in the data is compared to the Monte Carlo expectation assuming neutrinos and antineutrinos oscillate in the same manner, giving R^{data}_{ν̄μ/νμ}/R^{MC}_{ν̄μ/νμ} = 0.96^{+0.38}_{-0.27}(stat.) ± 0.15(sys.), where the errors are the statistical and systematic uncertainties. Although the statistics are limited, this is the first direct observation of atmospheric neutrino interactions separately for νμ and ν̄μ.
NASA Astrophysics Data System (ADS)
Stark, C. P.; Rudd, S.; Lall, U.; Hovius, N.; Dadson, S.; Chen, M.-C.
Off-axis DOAS measurements of naturally scattered light, based upon the well-known DOAS technique, allow the sensitivity of the technique to the trace gas profile in question to be optimized by strongly increasing the light's path through the relevant atmospheric layers. Multi-Axis (MAX) DOAS instruments probe several directions simultaneously or sequentially to increase the spatial resolution. Several devices (ground-based, airborne and ship-based) are operated by our group in the framework of the SCIAMACHY validation. Radiative transfer models are an essential requirement for the interpretation of these measurements and their conversion into detailed profile data. Apart from some existing Monte Carlo models, most codes use analytical algorithms to solve the radiative transfer equation for given atmospheric conditions. For specific circumstances, e.g. photon scattering within clouds, these approaches are not efficient enough to provide sufficient accuracy. Horizontal gradients in atmospheric parameters also have to be taken into account. To meet the needs of measurement situations for all kinds of scattered-light DOAS platforms, a three-dimensional fully spherical Monte Carlo model was devised. Here we present air mass factors (AMF) used to calculate vertical column densities (VCD) from measured slant column densities (SCD). Sensitivity studies on the influence of the wavelength and telescope direction used, the altitude of profile layers, albedo, refraction, and basic aerosols are shown. Modelled intensity series are also compared with radiometer data.
NASA Astrophysics Data System (ADS)
Krestyannikov, E.; Tohka, J.; Ruotsalainen, U.
2008-06-01
This paper presents a novel statistical approach for joint estimation of regions-of-interest (ROIs) and the corresponding time-activity curves (TACs) from dynamic positron emission tomography (PET) brain projection data. It is based on optimizing the joint objective function that consists of a data log-likelihood term and two penalty terms reflecting the available a priori information about the human brain anatomy. The developed local optimization strategy iteratively updates both the ROI and TAC parameters and is guaranteed to monotonically increase the objective function. The quantitative evaluation of the algorithm is performed with numerically and Monte Carlo-simulated dynamic PET brain data of the 11C-Raclopride and 18F-FDG tracers. The results demonstrate that the method outperforms the existing sequential ROI quantification approaches in terms of accuracy, and can noticeably reduce the errors in TACs arising due to the finite spatial resolution and ROI delineation.
M-dwarf exoplanet surface density distribution. A log-normal fit from 0.07 to 400 AU
NASA Astrophysics Data System (ADS)
Meyer, Michael R.; Amara, Adam; Reggiani, Maddalena; Quanz, Sascha P.
2018-04-01
Aims: We fit a log-normal function to the M-dwarf orbital surface density distribution of gas giant planets, over the mass range 1-10 times that of Jupiter, from 0.07 to 400 AU. Methods: We used a Markov chain Monte Carlo approach to explore the likelihoods of various parameter values consistent with point estimates of the data given our assumed functional form. Results: This fit is consistent with radial velocity, microlensing, and direct-imaging observations, is well-motivated from theoretical and phenomenological points of view, and predicts results of future surveys. We present probability distributions for each parameter and a maximum likelihood estimate solution. Conclusions: We suggest that this function makes more physical sense than other widely used functions, and we explore the implications of our results on the design of future exoplanet surveys.
Improving z-tracking accuracy in the two-photon single-particle tracking microscope.
Liu, C; Liu, Y-L; Perillo, E P; Jiang, N; Dunn, A K; Yeh, H-C
2015-10-12
Here, we present a method that can improve the z-tracking accuracy of the recently invented TSUNAMI (Tracking of Single particles Using Nonlinear And Multiplexed Illumination) microscope. This method utilizes a maximum likelihood estimator (MLE) to determine the particle's 3D position that maximizes the likelihood of the observed time-correlated photon count distribution. Our Monte Carlo simulations show that the MLE-based tracking scheme can improve the z-tracking accuracy of the TSUNAMI microscope by 1.7-fold. In addition, the MLE is found to reduce the temporal correlation of the z-tracking error. Taking advantage of the smaller and less temporally correlated z-tracking error, we have precisely recovered the hybridization-melting kinetics of a DNA model system from thousands of short single-particle trajectories in silico. Our method can be generally applied to other 3D single-particle tracking techniques.
Maulidiani; Rudiyanto; Abas, Faridah; Ismail, Intan Safinar; Lajis, Nordin H
2018-06-01
Optimization is an important aspect of natural product extraction. Herein, an alternative approach to optimization of the extraction is proposed, namely, Generalized Likelihood Uncertainty Estimation (GLUE). The approach combines Latin hypercube sampling, the feasible ranges of the independent variables, Monte Carlo simulation, and threshold criteria for the response variables. The GLUE method is tested on three different techniques, including ultrasound-, microwave-, and supercritical CO2-assisted extraction, utilizing data from previously published reports. The study found that this method can: provide more information on the combined effects of the independent variables on the response variables in the dotty plots; deal with an unlimited number of independent and response variables; consider multiple combined threshold criteria, which are subjective and depend on the target of the investigation; and provide a range of values, with their distribution, for the optimization. Copyright © 2018 Elsevier Ltd. All rights reserved.
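A minimal sketch of this GLUE-style screening is shown below: Latin hypercube samples over the feasible ranges, Monte Carlo evaluation of a response model, and retention of the parameter sets that satisfy all thresholds. The toy extraction model and the threshold values are invented for illustration.

```python
import numpy as np
from scipy.stats import qmc

def glue_screen(response_model, bounds, thresholds, n_samples=10_000, seed=0):
    """Latin hypercube sample over feasible ranges, Monte Carlo runs of
    the response model, and retention of 'behavioural' parameter sets
    meeting every threshold (threshold order must match the model's
    response vector)."""
    low, high = map(np.asarray, bounds)
    X = qmc.scale(qmc.LatinHypercube(d=len(low), seed=seed).random(n_samples),
                  low, high)
    R = np.array([response_model(x) for x in X])
    keep = np.all(R >= np.asarray(thresholds), axis=1)
    return X[keep], R[keep]

# Toy stand-in for, e.g., (yield, activity) as a function of
# (temperature, time, solvent fraction); all numbers invented.
def toy_extraction(x):
    T, t, s = x
    y = np.exp(-((T - 60) / 20)**2 - ((t - 30) / 15)**2) * (0.5 + s)
    a = np.exp(-((T - 50) / 25)**2) * (1.2 - s)
    return np.array([y, a])

sets, resp = glue_screen(toy_extraction,
                         ([40.0, 10.0, 0.2], [80.0, 60.0, 0.8]),
                         thresholds=[0.6, 0.5])
```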
NASA Technical Reports Server (NTRS)
Johnson, T. J.; Harding, A. K.; Venter, C.
2012-01-01
Pulsed gamma rays have been detected with the Fermi Large Area Telescope (LAT) from more than 20 millisecond pulsars (MSPs), some of which were discovered in radio observations of bright, unassociated LAT sources. We have fit the radio and gamma-ray light curves of 19 LAT-detected MSPs in the context of geometric, outer-magnetospheric emission models assuming the retarded vacuum dipole magnetic field, using a Markov chain Monte Carlo maximum likelihood technique. We find that, in many cases, the models are able to reproduce the observed light curves well and provide constraints on the viewing geometries that are in agreement with those from radio polarization measurements. Additionally, for some MSPs we constrain the altitudes of both the gamma-ray and radio emission regions. The best-fit magnetic inclination angles are found to cover a broader range than those of non-recycled gamma-ray pulsars.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aaltonen, T.; Brucken, E.; Devoto, F.
A precision measurement of the top quark mass m_t is obtained using a sample of tt̄ events from pp̄ collisions at the Fermilab Tevatron with the CDF II detector. Selected events require an electron or muon, large missing transverse energy, and exactly four high-energy jets, at least one of which is tagged as coming from a b quark. A likelihood is calculated using a matrix element method with quasi-Monte Carlo integration taking into account finite detector resolution and jet mass effects. The event likelihood is a function of m_t and a parameter Δ_JES used to calibrate the jet energy scale in situ. Using a total of 1087 events in 5.6 fb⁻¹ of integrated luminosity, a value of m_t = 173.0 ± 1.2 GeV/c² is measured.
Approximate Bayesian computation in large-scale structure: constraining the galaxy-halo connection
NASA Astrophysics Data System (ADS)
Hahn, ChangHoon; Vakili, Mohammadjavad; Walsh, Kilian; Hearin, Andrew P.; Hogg, David W.; Campbell, Duncan
2017-08-01
Standard approaches to Bayesian parameter inference in large-scale structure assume a Gaussian functional form (chi-squared form) for the likelihood. This assumption, in detail, cannot be correct. Likelihood-free inference methods such as approximate Bayesian computation (ABC) relax these restrictions and make inference possible without making any assumptions about the likelihood. Instead, ABC relies on a forward generative model of the data and a metric for measuring the distance between the model and the data. In this work, we demonstrate that ABC is feasible for LSS parameter inference by using it to constrain parameters of the halo occupation distribution (HOD) model for populating dark matter haloes with galaxies. Using a specific implementation of ABC supplemented with population Monte Carlo importance sampling, a generative forward model using the HOD, and a distance metric based on galaxy number density, the two-point correlation function, and the galaxy group multiplicity function, we constrain the HOD parameters of a mock observation generated from selected 'true' HOD parameters. The parameter constraints we obtain from ABC are consistent with the 'true' HOD parameters, demonstrating that ABC can be reliably used for parameter inference in LSS. Furthermore, we compare our ABC constraints to constraints we obtain using a pseudo-likelihood function of Gaussian form with MCMC and find consistent HOD parameter constraints. Ultimately, our results suggest that ABC can and should be applied in parameter inference for LSS analyses.
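The acceptance logic at the heart of ABC can be sketched in a few lines. The rejection version below keeps the prior draws whose simulated summaries fall within an adaptive tolerance of the observed summaries; the paper uses a more efficient population Monte Carlo variant, and the Poisson toy model merely stands in for an HOD forward model.

```python
import numpy as np

def abc_rejection(simulate, distance, s_obs, prior_sample,
                  n_draws=100_000, quantile=0.001, seed=0):
    """Basic ABC rejection: keep prior draws whose simulated summaries
    fall closest to the observed ones (adaptive tolerance = a small
    quantile of the distances)."""
    rng = np.random.default_rng(seed)
    thetas = np.array([prior_sample(rng) for _ in range(n_draws)])
    dists = np.array([distance(simulate(th, rng), s_obs) for th in thetas])
    eps = np.quantile(dists, quantile)
    return thetas[dists <= eps]

# Toy stand-in: infer a Poisson rate from an observed mean count with a
# flat prior; a real HOD forward model would populate haloes and return
# number density, two-point function, and group multiplicity summaries.
posterior = abc_rejection(
    simulate=lambda th, rng: rng.poisson(th, size=50).mean(),
    distance=lambda s, s0: abs(s - s0),
    s_obs=4.2,
    prior_sample=lambda rng: rng.uniform(0.0, 20.0))
```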
A likelihood ratio anomaly detector for identifying within-perimeter computer network attacks
Grana, Justin; Wolpert, David; Neil, Joshua; ...
2016-03-11
The rapid detection of attackers within firewalls of enterprise computer networks is of paramount importance. Anomaly detectors address this problem by quantifying deviations from baseline statistical models of normal network behavior and signaling an intrusion when the observed data deviates significantly from the baseline model. But, many anomaly detectors do not take into account plausible attacker behavior. As a result, anomaly detectors are prone to a large number of false positives due to unusual but benign activity. Our paper first introduces a stochastic model of attacker behavior which is motivated by real world attacker traversal. Then, we develop a likelihood ratio detector that compares the probability of observed network behavior under normal conditions against the case when an attacker has possibly compromised a subset of hosts within the network. Since the likelihood ratio detector requires integrating over the time each host becomes compromised, we illustrate how to use Monte Carlo methods to compute the requisite integral. We then present Receiver Operating Characteristic (ROC) curves for various network parameterizations that show for any rate of true positives, the rate of false positives for the likelihood ratio detector is no higher than that of a simple anomaly detector and is often lower. Finally, we demonstrate the superiority of the proposed likelihood ratio detector when the network topologies and parameterizations are extracted from real-world networks.
Model averaging techniques for quantifying conceptual model uncertainty.
Singh, Abhishek; Mishra, Srikanta; Ruskauff, Greg
2010-01-01
In recent years a growing understanding has emerged regarding the need to expand the modeling paradigm to include conceptual model uncertainty for groundwater models. Conceptual model uncertainty is typically addressed by formulating alternative model conceptualizations and assessing their relative likelihoods using statistical model averaging approaches. Several model averaging techniques and likelihood measures have been proposed in the recent literature for this purpose with two broad categories--Monte Carlo-based techniques such as Generalized Likelihood Uncertainty Estimation or GLUE (Beven and Binley 1992) and criterion-based techniques that use metrics such as the Bayesian and Kashyap Information Criteria (e.g., the Maximum Likelihood Bayesian Model Averaging or MLBMA approach proposed by Neuman 2003) and Akaike Information Criterion-based model averaging (AICMA) (Poeter and Anderson 2005). These different techniques can often lead to significantly different relative model weights and ranks because of differences in the underlying statistical assumptions about the nature of model uncertainty. This paper provides a comparative assessment of the four model averaging techniques (GLUE, MLBMA with KIC, MLBMA with BIC, and AIC-based model averaging) mentioned above for the purpose of quantifying the impacts of model uncertainty on groundwater model predictions. Pros and cons of each model averaging technique are examined from a practitioner's perspective using two groundwater modeling case studies. Recommendations are provided regarding the use of these techniques in groundwater modeling practice.
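For the criterion-based techniques, the model weights follow directly from the information-criterion values. A minimal sketch with hypothetical AIC values and predictions is shown below; GLUE-style weighting would instead assign weights from likelihood measures computed over Monte Carlo runs.

```python
import numpy as np

def ic_weights(ic_values):
    """Convert information-criterion values (AIC, BIC, or KIC) for
    alternative conceptual models into relative model weights."""
    delta = np.asarray(ic_values, float) - np.min(ic_values)
    w = np.exp(-0.5 * delta)               # relative evidence per model
    return w / w.sum()

aic = [1012.4, 1015.1, 1013.0, 1021.7]     # hypothetical AICs, 4 models
preds = np.array([2.3, 3.1, 2.6, 4.0])     # each model's prediction (e.g., head in m)
w = ic_weights(aic)
averaged = w @ preds                       # model-averaged prediction
between_var = w @ (preds - averaged)**2    # between-model variance term
```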
Hollenbeak, Christopher S
2005-10-15
While risk-adjusted outcomes are often used to compare the performance of hospitals and physicians, the most appropriate functional form for the risk adjustment process is not always obvious for continuous outcomes such as costs. Semi-log models are used most often to correct skewness in cost data, but there has been limited research to determine whether the log transformation is sufficient or whether another transformation is more appropriate. This study explores the most appropriate functional form for risk-adjusting the cost of coronary artery bypass graft (CABG) surgery. Data included patients undergoing CABG surgery at four hospitals in the Midwest and were fit to a Box-Cox model with random coefficients (BCRC) using Markov chain Monte Carlo methods. Marginal likelihoods and Bayes factors were computed to perform model comparison of alternative model specifications. Rankings of hospital performance were created from the simulation output, and the rankings produced by the Bayesian estimates were compared to rankings produced by standard models fit using classical methods. Results suggest that, for these data, the most appropriate functional form is not logarithmic but corresponds to a Box-Cox transformation of -1. Furthermore, Bayes factors overwhelmingly rejected the natural log transformation. However, the hospital ranking induced by the BCRC model was not different from the ranking produced by maximum likelihood estimates of either the linear or semi-log model. Copyright (c) 2005 John Wiley & Sons, Ltd.
Arcuti, Simona; Pollice, Alessio; Ribecco, Nunziata; D'Onghia, Gianfranco
2016-03-01
We evaluate the spatiotemporal changes in the density of a particular species of crustacean known as deep-water rose shrimp, Parapenaeus longirostris, based on biological sample data collected during trawl surveys carried out from 1995 to 2006 as part of the international project MEDITS (MEDiterranean International Trawl Surveys). As is the case for many biological variables, density data are continuous and characterized by unusually large amounts of zeros, accompanied by a skewed distribution of the remaining values. Here we analyze the normalized density data by a Bayesian delta-normal semiparametric additive model including the effects of covariates, using penalized regression with low-rank thin-plate splines for nonlinear spatial and temporal effects. Modeling the zero and nonzero values by two joint processes, as we propose in this work, allows to obtain great flexibility and easily handling of complex likelihood functions, avoiding inaccurate statistical inferences due to misclassification of the high proportion of exact zeros in the model. Bayesian model estimation is obtained by Markov chain Monte Carlo simulations, suitably specifying the complex likelihood function of the zero-inflated density data. The study highlights relevant nonlinear spatial and temporal effects and the influence of the annual Mediterranean oscillations index and of the sea surface temperature on the distribution of the deep-water rose shrimp density. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Statistical Symbolic Execution with Informed Sampling
NASA Technical Reports Server (NTRS)
Filieri, Antonio; Pasareanu, Corina S.; Visser, Willem; Geldenhuys, Jaco
2014-01-01
Symbolic execution techniques have been proposed recently for the probabilistic analysis of programs. These techniques seek to quantify the likelihood of reaching program events of interest, e.g., assert violations. They have many promising applications but have scalability issues due to high computational demand. To address this challenge, we propose a statistical symbolic execution technique that performs Monte Carlo sampling of the symbolic program paths and uses the obtained information for Bayesian estimation and hypothesis testing with respect to the probability of reaching the target events. To speed up the convergence of the statistical analysis, we propose informed sampling, an iterative symbolic execution that first explores the paths that have high statistical significance, prunes them from the state space, and guides the execution towards less likely paths. The technique combines Bayesian estimation with a partial exact analysis for the pruned paths, leading to provably improved convergence of the statistical analysis. We have implemented statistical symbolic execution with informed sampling in the Symbolic PathFinder tool. We show experimentally that informed sampling obtains more precise results and converges faster than a purely statistical analysis, and may also be more efficient than an exact symbolic analysis. When the latter does not terminate, symbolic execution with informed sampling can give meaningful results under the same time and memory limits.
Fitting and Modeling in the ASC Data Analysis Environment
NASA Astrophysics Data System (ADS)
Doe, S.; Siemiginowska, A.; Joye, W.; McDowell, J.
As part of the AXAF Science Center (ASC) Data Analysis Environment, we will provide to the astronomical community a Fitting Application. We present a design of the application in this paper. Our design goal is to give the user the flexibility to use a variety of optimization techniques (Levenberg-Marquardt, maximum entropy, Monte Carlo, Powell, downhill simplex, CERN-Minuit, and simulated annealing) and fit statistics (χ², Cash, variance, and maximum likelihood); our modular design allows the user to easily add their own optimization techniques and/or fit statistics. We also present a comparison of the optimization techniques to be provided by the Application. The high spatial and spectral resolutions that will be obtained with AXAF instruments require a sophisticated data modeling capability. We will provide not only a suite of astronomical spatial and spectral source models, but also the capability of combining these models into source models of up to four data dimensions (i.e., into source functions f(E,x,y,t)). We will also provide tools to create instrument response models appropriate for each observation.
The costs of production of alternative jet fuel: A harmonized stochastic assessment.
Bann, Seamus J; Malina, Robert; Staples, Mark D; Suresh, Pooja; Pearlson, Matthew; Tyner, Wallace E; Hileman, James I; Barrett, Steven
2017-03-01
This study quantifies and compares the costs of production for six alternative jet fuel pathways using consistent financial and technical assumptions. Uncertainty was propagated through the analysis using Monte Carlo simulations. The six processes assessed were HEFA, advanced fermentation, Fischer-Tropsch, aqueous phase processing, hydrothermal liquefaction, and fast pyrolysis. The results indicate that none of the six processes would be profitable in the absence of government incentives, with HEFA using yellow grease, HEFA using tallow, and FT revealing the lowest mean jet fuel prices at $0.91/liter ($0.66/liter-$1.24/liter), $1.06/liter ($0.79/liter-$1.42/liter), and $1.15/liter ($0.95/liter-$1.39/liter), respectively. This study also quantifies plant performance in the United States with a Renewable Fuel Standard policy analysis. Results indicate that some pathways could achieve positive NPV with relatively high likelihood under existing policy supports, with HEFA and FPH revealing the highest probability of positive NPV at 94.9% and 99.7%, respectively, in the best-case scenario. Copyright © 2016 Elsevier Ltd. All rights reserved.
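A hedged sketch of the stochastic-assessment pattern: propagate assumed input distributions to a net-present-value distribution by Monte Carlo. Every number below (capital cost, prices, plant output, discount rate) is illustrative, not a value from the paper's techno-economic inputs.

```python
import numpy as np

# Monte Carlo propagation of cost uncertainty to an NPV distribution.
# All distributions and constants are invented for illustration.

rng = np.random.default_rng(3)
N = 100_000

capex = rng.triangular(300e6, 400e6, 550e6, N)   # plant cost, $
fuel_price = rng.normal(0.95, 0.15, N)           # $/liter
prod_cost = rng.normal(1.10, 0.12, N)            # $/liter
output = 150e6                                   # liters/year
years, r = 20, 0.10
annuity = (1 - (1 + r) ** -years) / r            # present value of $1/yr

npv = annuity * output * (fuel_price - prod_cost) - capex
print(f"mean NPV = ${npv.mean()/1e6:.0f}M, P(NPV > 0) = {(npv > 0).mean():.1%}")
```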
Markov Chain Monte Carlo in the Analysis of Single-Molecule Experimental Data
NASA Astrophysics Data System (ADS)
Kou, S. C.; Xie, X. Sunney; Liu, Jun S.
2003-11-01
This article provides a Bayesian analysis of the single-molecule fluorescence lifetime experiment designed to probe the conformational dynamics of a single DNA hairpin molecule. The DNA hairpin's conformational change is initially modeled as a two-state Markov chain, which is not observable and has to be indirectly inferred. The Brownian diffusion of the single molecule, in addition to the hidden Markov structure, further complicates the matter. We show that the analytical form of the likelihood function can be obtained in the simplest case and a Metropolis-Hastings algorithm can be designed to sample from the posterior distribution of the parameters of interest and to compute desired estimates. To cope with the molecular diffusion process and the potentially oscillating energy barrier between the two states of the DNA hairpin, we introduce a data augmentation technique to handle both the Brownian diffusion and the hidden Ornstein-Uhlenbeck process associated with the fluctuating energy barrier, and design a more sophisticated Metropolis-type algorithm. Our method not only increases the estimating resolution severalfold but also proves to be successful for model discrimination.
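A bare-bones Metropolis-Hastings sketch for a two-state Markov chain. For brevity the chain is treated as directly observed, so the likelihood reduces to transition counts; the paper's chain is hidden and further blurred by diffusion, which this snippet does not attempt.

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate_chain(p01, p10, n=2000):
    """Two-state Markov chain with switching probabilities p01, p10."""
    s, states = 0, np.empty(n, dtype=int)
    for t in range(n):
        states[t] = s
        if rng.random() < (p01 if s == 0 else p10):
            s = 1 - s
    return states

def loglik(p, counts):
    p01, p10 = p
    return (counts[0, 1] * np.log(p01) + counts[0, 0] * np.log(1 - p01)
            + counts[1, 0] * np.log(p10) + counts[1, 1] * np.log(1 - p10))

states = simulate_chain(0.05, 0.12)
counts = np.zeros((2, 2))
for a, b in zip(states[:-1], states[1:]):
    counts[a, b] += 1

# Random-walk Metropolis-Hastings with uniform(0, 1) priors.
cur = np.array([0.5, 0.5])
cur_ll, draws = loglik(cur, counts), []
for _ in range(20_000):
    prop = cur + rng.normal(0, 0.02, 2)
    if np.all((prop > 0) & (prop < 1)):
        prop_ll = loglik(prop, counts)
        if np.log(rng.random()) < prop_ll - cur_ll:
            cur, cur_ll = prop, prop_ll
    draws.append(cur.copy())
print("posterior means (p01, p10):", np.array(draws[5000:]).mean(axis=0))
```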
NASA Astrophysics Data System (ADS)
Svensson, Andreas; Schön, Thomas B.; Lindsten, Fredrik
2018-05-01
Probabilistic (or Bayesian) modeling and learning offers interesting possibilities for systematic representation of uncertainty using probability theory. However, probabilistic learning often leads to computationally challenging problems. Some problems of this type that were previously intractable can now be solved on standard personal computers thanks to recent advances in Monte Carlo methods. In particular, for learning of unknown parameters in nonlinear state-space models, methods based on the particle filter (a Monte Carlo method) have proven very useful. A notoriously challenging problem, however, still occurs when the observations in the state-space model are highly informative, i.e. when there is very little or no measurement noise present, relative to the amount of process noise. The particle filter will then struggle in estimating one of the basic components for probabilistic learning, namely the likelihood p(data | parameters). To this end we suggest an algorithm which initially assumes that there is a substantial amount of artificial measurement noise present. The variance of this noise is sequentially decreased in an adaptive fashion such that we, in the end, recover the original problem or possibly a very close approximation of it. The main component in our algorithm is a sequential Monte Carlo (SMC) sampler, which gives our proposed method a clear resemblance to the SMC2 method. Another natural link is also made to the ideas underlying the approximate Bayesian computation (ABC). We illustrate the method with numerical examples, and in particular show promising results for a challenging Wiener-Hammerstein benchmark problem.
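To make the difficulty concrete, the sketch below runs a bootstrap particle filter on a toy state-space model with noise-free observations and evaluates the log-likelihood under a sequence of decreasing artificial measurement-noise levels; the fixed schedule stands in for the paper's adaptive SMC construction, and the model is invented.

```python
import numpy as np

rng = np.random.default_rng(5)

def simulate(T=100, q=0.5):
    """Toy nonlinear state-space model; observations are y = x exactly."""
    x = np.zeros(T)
    for t in range(1, T):
        x[t] = 0.9 * x[t-1] + np.sin(x[t-1]) + rng.normal(0, q)
    return x

def pf_loglik(y, r, N=500, q=0.5):
    """Bootstrap particle filter log-likelihood with artificial noise r."""
    x = rng.normal(0, 1, N)
    ll = 0.0
    for t in range(1, len(y)):
        x = 0.9 * x + np.sin(x) + rng.normal(0, q, N)        # propagate
        w = np.exp(-0.5 * ((y[t] - x) / r) ** 2) + 1e-300    # Gaussian obs
        ll += np.log(w.mean() / (r * np.sqrt(2 * np.pi)))
        x = x[rng.choice(N, N, p=w / w.sum())]               # resample
    return ll

y = simulate()
for r in (1.0, 0.3, 0.1, 0.03):     # anneal the artificial noise downward
    print(f"r = {r:5.2f}  log-likelihood estimate = {pf_loglik(y, r):8.1f}")
```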
Pal, Suvra; Balakrishnan, Narayanaswamy
2018-05-01
In this paper, we develop likelihood inference based on the expectation maximization algorithm for the Box-Cox transformation cure rate model, assuming the lifetimes to follow a Weibull distribution. A simulation study is carried out to demonstrate the performance of the proposed estimation method. Through Monte Carlo simulations, we also study the effect of model misspecification on the estimate of cure rate. Finally, we analyze a well-known data set on melanoma with the model and the inferential method developed here.
Beauty photoproduction using decays into electrons at HERA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chekanov, S.; Derrick, M.; Magill, S.
Photoproduction of beauty quarks in events with two jets and an electron associated with one of the jets has been studied with the ZEUS detector at HERA using an integrated luminosity of 120 pb⁻¹. The fractions of events containing b quarks, and also of events containing c quarks, were extracted from a likelihood fit using variables sensitive to electron identification as well as to semileptonic decays. Total and differential cross sections for beauty and charm production were measured and compared with next-to-leading-order QCD calculations and Monte Carlo models.
NASA Astrophysics Data System (ADS)
Lee, H.; Sheen, D.; Kim, S.
2013-12-01
The b-value of the Gutenberg-Richter relation is an important parameter, widely used not only in the interpretation of regional tectonic structure but also in seismic hazard analysis. In this study, we tested four methods for estimating a stable b-value from a small number of events using the Monte Carlo method. One is the least-squares method (LSM), which minimizes the observation error. The others are based on the maximum likelihood method (MLM), which maximizes the likelihood function: Utsu's (1965) method for continuous magnitudes and an infinite maximum magnitude, Page's (1968) for continuous magnitudes and a finite maximum magnitude, and Weichert's (1980) for interval magnitudes and a finite maximum magnitude. A synthetic parent population, an earthquake catalog of one million events from magnitude 2.0 to 7.0 binned at an interval of 0.1, was generated for the Monte Carlo simulation. Samples, whose number was increased from 25 to 1000, were extracted from the parent population randomly, and the resampling procedure was applied 1000 times with different random seed numbers. The mean and the standard deviation of the b-value were estimated for each group with the same number of samples. As expected, the more samples were used, the more stable the estimated b-value. For small numbers of events, however, the LSM generally gave a low b-value with a large standard deviation, while the MLMs gave more accurate and stable values. The method of Utsu (1965) was found to give the most accurate and stable b-value even for a small number of events. It was also found that the selection of the minimum magnitude can be critical for estimating the correct b-value with the methods of Utsu (1965) and Page (1968) if magnitudes are binned into intervals. We therefore applied Utsu's (1965) method to estimate the b-value from two instrumental earthquake catalogs of events that occurred around the southern part of the Korean Peninsula from 1978 to 2011. With a careful choice of the minimum magnitude, the b-values of the earthquake catalogs of the Korea Meteorological Administration and Kim (2012) are estimated to be 0.72 and 0.74, respectively.
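Utsu's (1965) estimator has the closed form b̂ = log₁₀(e)/(M̄ − M_min). The sketch below repeats this style of experiment at toy scale, comparing it with a least-squares fit to cumulative counts on small Gutenberg-Richter samples; binning corrections for interval magnitudes are deliberately omitted.

```python
import numpy as np

# Compare the Utsu (1965) maximum-likelihood b-value estimator with a
# least-squares fit to the cumulative frequency-magnitude curve, for
# small samples drawn from a Gutenberg-Richter population with b = 1.0.

rng = np.random.default_rng(6)
B_TRUE, M_MIN = 1.0, 2.0

def sample_mags(n):
    # G-R magnitudes above M_MIN are exponential with beta = b * ln(10)
    return M_MIN + rng.exponential(1 / (B_TRUE * np.log(10)), n)

def b_utsu(m):
    return np.log10(np.e) / (m.mean() - M_MIN)

def b_lsq(m):
    edges = np.arange(M_MIN, m.max(), 0.1)
    logN = np.log10([np.sum(m >= e) for e in edges])
    return -np.polyfit(edges, logN, 1)[0]

for n in (25, 100, 1000):
    est = np.array([[b_utsu(m), b_lsq(m)]
                    for m in (sample_mags(n) for _ in range(1000))])
    print(f"n={n:4d}  Utsu: {est[:,0].mean():.2f}+/-{est[:,0].std():.2f}   "
          f"LSQ: {est[:,1].mean():.2f}+/-{est[:,1].std():.2f}")
```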
Mukhtar, Hussnain; Lin, Yu-Pin; Shipin, Oleg V; Petway, Joy R
2017-07-12
This study presents an approach for obtaining realization sets of parameters for nitrogen removal in a pilot-scale waste stabilization pond (WSP) system. The proposed approach was designed for optimal parameterization, local sensitivity analysis, and global uncertainty analysis of a dynamic simulation model for the WSP by using the R software package Flexible Modeling Environment (R-FME) with the Markov chain Monte Carlo (MCMC) method. Additionally, generalized likelihood uncertainty estimation (GLUE) was integrated into the FME to evaluate the major parameters that affect the simulation outputs in the study WSP. Comprehensive modeling analysis was used to simulate and assess nine parameters and concentrations of ON-N, NH₃-N and NO₃-N. Results indicate that the integrated FME-GLUE-based model, with good Nash-Sutcliffe coefficients (0.53-0.69) and correlation coefficients (0.76-0.83), successfully simulates the concentrations of ON-N, NH₃-N and NO₃-N. Moreover, the Arrhenius constant was the only parameter sensitive to model performance in the ON-N and NH₃-N simulations. However, the Nitrosomonas growth rate, the denitrification constant, and the maximum growth rate at 20 °C were sensitive in the ON-N and NO₃-N simulations, as measured by global sensitivity analysis.
Huang, Shi; MacKinnon, David P.; Perrino, Tatiana; Gallo, Carlos; Cruden, Gracelyn; Brown, C Hendricks
2016-01-01
Mediation analysis often requires larger sample sizes than main effect analysis to achieve the same statistical power. Combining results across similar trials may be the only practical option for increasing statistical power for mediation analysis in some situations. In this paper, we propose a method to estimate: 1) marginal means for mediation path a, the relation of the independent variable to the mediator; 2) marginal means for path b, the relation of the mediator to the outcome, across multiple trials; and 3) the between-trial level variance-covariance matrix based on a bivariate normal distribution. We present the statistical theory and an R computer program to combine regression coefficients from multiple trials to estimate a combined mediated effect and confidence interval under a random effects model. Values of coefficients a and b, along with their standard errors from each trial, are the input for the method. This marginal-likelihood-based approach with Monte Carlo confidence intervals provides more accurate inference than the standard meta-analytic approach. We discuss computational issues, apply the method to two real-data examples, and make recommendations for the use of the method in different settings. PMID:28239330
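A compact sketch of a Monte Carlo confidence interval for a mediated effect a·b: draw the two paths from normal distributions centered at combined estimates and read off percentiles of the product. The estimates and standard errors below are assumed, not taken from the paper's trials.

```python
import numpy as np

# Monte Carlo confidence interval for a mediated (indirect) effect a*b.
# Point estimates and standard errors are hypothetical placeholders.

rng = np.random.default_rng(7)

a_hat, se_a = 0.42, 0.09    # combined path-a estimate and SE (assumed)
b_hat, se_b = 0.31, 0.07    # combined path-b estimate and SE (assumed)

draws = rng.normal(a_hat, se_a, 1_000_000) * rng.normal(b_hat, se_b, 1_000_000)
lo, hi = np.percentile(draws, [2.5, 97.5])
print(f"mediated effect = {a_hat*b_hat:.3f}, 95% MC CI = ({lo:.3f}, {hi:.3f})")
```

The product of two normal variables is skewed, so this interval is asymmetric around a·b, which is exactly why Monte Carlo intervals tend to outperform symmetric normal-theory intervals here.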
Vectorized Monte Carlo methods for reactor lattice analysis
NASA Technical Reports Server (NTRS)
Brown, F. B.
1984-01-01
Some of the new computational methods and equivalent mathematical representations of physics models used in the MCV code, a vectorized continuous-energy Monte Carlo code for use on the CYBER-205 computer, are discussed. While the principal application of MCV is the neutronics analysis of repeating reactor lattices, the new methods used in MCV should be generally useful for vectorizing Monte Carlo for other applications. For background, a brief overview of the vector processing features of the CYBER-205 is included, followed by a discussion of the fundamentals of Monte Carlo vectorization. The physics models used in the MCV vectorized Monte Carlo code are then summarized. The new methods used in scattering analysis are presented along with details of several key, highly specialized computational routines. Finally, speedups relative to CDC-7600 scalar Monte Carlo are discussed.
Markov Chain Monte Carlo Methods for Bayesian Data Analysis in Astronomy
NASA Astrophysics Data System (ADS)
Sharma, Sanjib
2017-08-01
Markov Chain Monte Carlo based Bayesian data analysis has now become the method of choice for analyzing and interpreting data in almost all disciplines of science. In astronomy, over the last decade, we have also seen a steady increase in the number of papers that employ Monte Carlo based Bayesian analysis. New, efficient Monte Carlo based methods are continuously being developed and explored. In this review, we first explain the basics of Bayesian theory and discuss how to set up data analysis problems within this framework. Next, we provide an overview of various Monte Carlo based methods for performing Bayesian data analysis. Finally, we discuss advanced ideas that enable us to tackle complex problems and thus hold great promise for the future. We also distribute downloadable computer software (available at https://github.com/sanjibs/bmcmc/) that implements some of the algorithms and examples discussed here.
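In the spirit of such tutorials, here is a generic random-walk Metropolis sampler for an arbitrary log-posterior; it is a plain sketch against a toy target, not code from the distributed bmcmc package.

```python
import numpy as np

rng = np.random.default_rng(8)

def log_post(theta):
    """Toy target: bivariate normal with correlation rho = 0.8."""
    x, y = theta
    return -(x**2 - 1.6 * x * y + y**2) / (2 * (1 - 0.8**2))

theta, lp, chain = np.zeros(2), None, []
lp = log_post(theta)
for _ in range(50_000):
    prop = theta + rng.normal(0, 0.5, 2)       # symmetric proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:    # Metropolis accept rule
        theta, lp = prop, lp_prop
    chain.append(theta)
chain = np.array(chain[10_000:])               # discard burn-in
print("posterior mean:", chain.mean(axis=0),
      "correlation:", np.corrcoef(chain.T)[0, 1])
```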
Ikawa, Kazuro; Nomura, Kenichi; Morikawa, Norifumi; Ikeda, Kayo; Taniwaki, Masafumi
2009-10-01
A pharmacokinetic (PK)-pharmacodynamic (PD) analysis was conducted to assess various micafungin regimens for Candida and Aspergillus infections, as appropriate regimens have not been established, especially for Aspergillus infections. Plasma drug concentrations (48 samples from 10 adult patients with haematological malignancies) were determined chromatographically, and used for population PK modelling and Monte Carlo simulation to evaluate the ability of regimens (1 h infusions) to attain genus-dependent PK-PD targets, namely fungistatic and fungicidal targets against Candida spp. [area under the plasma unbound (1%) drug concentration-time curve over 24 h/MIC (fAUC/MIC) = 10 and 20] and an effective concentration target against Aspergillus spp. (plasma unbound drug concentration = 0.05 mg/L). Mean (variance) values for two-compartment PK model parameters were: clearance, 0.762 L/h (15.4%); volume of central compartment, 9.25 L (24.6%); intercompartmental clearance, 7.02 L/h (fixed); and volume of peripheral compartment, 8.86 L (71.8%). The Monte Carlo simulation demonstrated that 50 mg once daily and 100 mg once daily achieved a >95% probability of attaining the fungistatic and fungicidal targets, respectively, against Candida spp. To achieve such a probability against Aspergillus spp., 250 mg once daily or 100 mg twice daily was required. These results rationalize the approved micafungin dosages for Candida infections (50 mg once daily for prophylaxis and 100-150 mg once daily for treatment), and on the basis of these results we propose a PK-PD-based dosing strategy for Aspergillus infections. A regimen of 200-250 mg/day should be initiated to increase the likelihood of a favourable outcome. The regimen can be optimized by decreasing the dosing interval.
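A hedged sketch of the probability-of-target-attainment calculation: sample clearance from a log-normal population distribution (treating the reported 15.4% as a coefficient of variation, an assumption), take the steady-state unbound 24-h AUC as fu·dose/CL for once-daily dosing, and count how often fAUC/MIC reaches the target. The MIC value is illustrative.

```python
import numpy as np

rng = np.random.default_rng(9)

CL_MEAN, CL_CV = 0.762, 0.154    # clearance: mean (L/h) and assumed CV
FU = 0.01                        # unbound fraction (1%), from the abstract

def pta(dose_mg, mic, target, n=100_000):
    """Probability of attaining fAUC/MIC >= target at steady state (q24h)."""
    sig = np.sqrt(np.log(1 + CL_CV**2))
    cl = rng.lognormal(np.log(CL_MEAN) - sig**2 / 2, sig, n)   # L/h
    fauc = FU * dose_mg / cl          # unbound AUC over 24 h, mg*h/L
    return (fauc / mic >= target).mean()

for dose in (50, 100, 150):
    print(f"{dose} mg q24h, MIC 0.016 mg/L:",
          f"PTA(fAUC/MIC >= 10) = {pta(dose, 0.016, 10):.1%}")
```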
Performance bounds for matched field processing in subsurface object detection applications
NASA Astrophysics Data System (ADS)
Sahin, Adnan; Miller, Eric L.
1998-09-01
In recent years there has been considerable interest in the use of ground penetrating radar (GPR) for the non-invasive detection and localization of buried objects. In a previous work, we have considered the use of high resolution array processing methods for solving these problems for measurement geometries in which an array of electromagnetic receivers observes the fields scattered by the subsurface targets in response to a plane wave illumination. Our approach uses the MUSIC algorithm in a matched field processing (MFP) scheme to determine both the range and the bearing of the objects. In this paper we derive the Cramer-Rao bounds (CRB) for this MUSIC-based approach analytically. Analysis of the theoretical CRB has shown that there exists an optimum inter-element spacing of array elements for which the CRB is minimum. Furthermore, the optimum inter-element spacing minimizing CRB is smaller than the conventional half wavelength criterion. The theoretical bounds are then verified for two estimators using Monte-Carlo simulations. The first estimator is the MUSIC-based MFP and the second one is the maximum likelihood based MFP. The two approaches differ in the cost functions they optimize. We observe that Monte-Carlo simulated error variances always lie above the values established by CRB. Finally, we evaluate the performance of our MUSIC-based algorithm in the presence of model mismatches. Since the detection algorithm strongly depends on the model used, we have tested the performance of the algorithm when the object radius used in the model is different from the true radius. This analysis reveals that the algorithm is still capable of localizing the objects with a bias depending on the degree of mismatch.
GRID-BASED EXPLORATION OF COSMOLOGICAL PARAMETER SPACE WITH SNAKE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mikkelsen, K.; Næss, S. K.; Eriksen, H. K., E-mail: kristin.mikkelsen@astro.uio.no
2013-11-10
We present a fully parallelized grid-based parameter estimation algorithm for investigating multidimensional likelihoods called Snake, and apply it to cosmological parameter estimation. The basic idea is to map out the likelihood grid-cell by grid-cell according to decreasing likelihood, and stop when a certain threshold has been reached. This approach improves vastly on the 'curse of dimensionality' problem plaguing standard grid-based parameter estimation simply by disregarding grid cells with negligible likelihood. The main advantages of this method compared to standard Metropolis-Hastings Markov Chain Monte Carlo methods include (1) trivial extraction of arbitrary conditional distributions; (2) direct access to Bayesian evidences; (3) better sampling of the tails of the distribution; and (4) nearly perfect parallelization scaling. The main disadvantage is, as in the case of brute-force grid-based evaluation, a dependency on the number of parameters, N_par. One of the main goals of the present paper is to determine how large N_par can be, while still maintaining reasonable computational efficiency; we find that N_par = 12 is well within the capabilities of the method. The performance of the code is tested by comparing cosmological parameters estimated using Snake and the WMAP-7 data with those obtained using CosmoMC, the current standard code in the field. We find fully consistent results, with similar computational expenses, but shorter wall time due to the perfect parallelization scheme.
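The cell-by-cell idea can be sketched with a priority queue: always expand the most likely unvisited neighbour and stop once everything remaining lies below a threshold relative to the peak. The 2-D Gaussian likelihood below is a toy stand-in; Snake itself is fully parallelized and works in many dimensions.

```python
import heapq

def loglike(ix, iy, h=0.1):
    """Toy 2-D Gaussian log-likelihood evaluated on a grid with spacing h."""
    x, y = ix * h, iy * h
    return -0.5 * (x**2 + 2 * y**2)

def explore(seed=(0, 0), drop=12.0):
    """Visit grid cells in decreasing-likelihood order; stop at peak - drop."""
    heap = [(-loglike(*seed), seed)]
    seen, cells, best = {seed}, {}, loglike(*seed)
    while heap:
        neg_ll, cell = heapq.heappop(heap)
        best = max(best, -neg_ll)
        if -neg_ll < best - drop:       # all remaining cells are lower still
            break
        cells[cell] = -neg_ll
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = (cell[0] + dx, cell[1] + dy)
            if nb not in seen:
                seen.add(nb)
                heapq.heappush(heap, (-loglike(*nb), nb))
    return cells

cells = explore()
print(f"mapped {len(cells)} grid cells above the likelihood threshold")
```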
Probabilistic analysis of tsunami hazards
Geist, E.L.; Parsons, T.
2006-01-01
Determining the likelihood of a disaster is a key component of any comprehensive hazard assessment. This is particularly true for tsunamis, even though most tsunami hazard assessments have in the past relied on scenario or deterministic type models. We discuss probabilistic tsunami hazard analysis (PTHA) from the standpoint of integrating computational methods with empirical analysis of past tsunami runup. PTHA is derived from probabilistic seismic hazard analysis (PSHA), with the main difference being that PTHA must account for far-field sources. The computational methods rely on numerical tsunami propagation models rather than empirical attenuation relationships as in PSHA in determining ground motions. Because a number of source parameters affect local tsunami runup height, PTHA can become complex and computationally intensive. Empirical analysis can function in one of two ways, depending on the length and completeness of the tsunami catalog. For site-specific studies where there is sufficient tsunami runup data available, hazard curves can primarily be derived from empirical analysis, with computational methods used to highlight deficiencies in the tsunami catalog. For region-wide analyses and sites where there are little to no tsunami data, a computationally based method such as Monte Carlo simulation is the primary method to establish tsunami hazards. Two case studies that describe how computational and empirical methods can be integrated are presented for Acapulco, Mexico (site-specific) and the U.S. Pacific Northwest coastline (region-wide analysis).
CosmoSIS: A system for MC parameter estimation
Bridle, S.; Dodelson, S.; Jennings, E.; ...
2015-12-23
CosmoSIS is a modular system for cosmological parameter estimation, based on Markov Chain Monte Carlo and related techniques. It provides a series of samplers, which drive the exploration of the parameter space, and a series of modules, which calculate the likelihood of the observed data for a given physical model, determined by the location of a sample in the parameter space. While CosmoSIS ships with a set of modules that calculate quantities of interest to cosmologists, there is nothing about the framework itself, nor in the Markov Chain Monte Carlo technique, that is specific to cosmology. Thus CosmoSIS could be used for parameter estimation problems in other fields, including HEP. This paper describes the features of CosmoSIS and shows an example of its use outside of cosmology. It also discusses how collaborative development strategies differ between two different communities: that of HEP physicists, accustomed to working in large collaborations, and that of cosmologists, who have traditionally not worked in large groups.
NASA Astrophysics Data System (ADS)
Bragato, P. L.
2017-10-01
The strong earthquakes that occurred in Italy between 2009 and 2016 represent an abrupt acceleration of seismicity with respect to the previous 30 years. This behavior agrees with the periodic rate change I observed in a previous paper. The present work improves that study by extending the data set to the end of 2016, adopting the latest version of the historical seismic catalog of Italy, and introducing Schuster spectrum analysis for the detection of the oscillatory period and the assessment of its statistical significance. Applied to the declustered catalog of Mw ≥ 6 earthquakes that occurred between 1600 and 2016, the analysis identifies a marked periodicity of 46 years, recognized above the 95% confidence level. Monte Carlo simulation shows that the oscillatory behavior is robust to random errors in magnitude estimation. A parametric oscillatory model for the annual rate of seismicity is estimated by likelihood maximization under the hypothesis of an inhomogeneous Poisson point process. According to the Akaike Information Criterion, this model outperforms the simpler homogeneous one with constant annual rate. A further element emerges from the analysis: so far, despite recent earthquakes, Italian seismicity remains within a long-term decreasing trend established since the first half of the twentieth century.
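A sketch of the model comparison on synthetic data: fit an oscillatory Poisson rate λ(t) = μ(1 + a·cos(2πt/T + φ)) by maximum likelihood and compare its AIC with the homogeneous fit. The catalog below is simulated by thinning with invented parameters, and the likelihood integral is approximated by μ·span.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(10)

# Synthetic event times from lambda(t) = mu*(1 + a*cos(2*pi*t/T)),
# generated by thinning; mu, a, T are illustrative choices.
span, mu0, a0, T0 = 417.0, 0.07, 0.6, 46.0
lam_max = mu0 * (1 + a0)
t, tt = [], 0.0
while True:
    tt += rng.exponential(1 / lam_max)
    if tt > span:
        break
    if rng.random() < mu0 * (1 + a0 * np.cos(2 * np.pi * tt / T0)) / lam_max:
        t.append(tt)
t = np.array(t)

def neg_loglik(params):
    mu, a, T, phi = params
    if mu <= 0 or not (0 <= a < 1) or T <= 0:
        return np.inf
    lam = mu * (1 + a * np.cos(2 * np.pi * t / T + phi))
    # integral of lambda over [0, span] approximated by mu*span
    return mu * span - np.log(lam).sum()

res = minimize(neg_loglik, x0=[0.05, 0.5, 45.0, 0.0], method="Nelder-Mead")
aic_osc = 2 * 4 + 2 * res.fun
mu_hat = len(t) / span
aic_hom = 2 * 1 + 2 * (mu_hat * span - len(t) * np.log(mu_hat))
print(f"AIC oscillatory = {aic_osc:.1f}  vs  homogeneous = {aic_hom:.1f}")
```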
Monte Carlo simulations on marker grouping and ordering.
Wu, J; Jenkins, J; Zhu, J; McCarty, J; Watson, C
2003-08-01
Four global algorithms, maximum likelihood (ML), sum of adjacent LOD score (SALOD), sum of adjacent recombinant fractions (SARF) and product of adjacent recombinant fractions (PARF), and one approximation algorithm, seriation (SER), were used to compare marker ordering efficiencies for correctly given linkage groups based on doubled haploid (DH) populations. The Monte Carlo simulation results indicated that the marker ordering powers of the five methods were almost identical. Correlation coefficients between grouping power and ordering power were greater than 0.99, indicating that all these methods for marker ordering were reliable. Therefore, the main problem for linkage analysis was how to improve the grouping power. Since the SER approach provided the advantage of speed without losing ordering power, this approach was used for detailed simulations. For more generality, multiple linkage groups were employed, and population size, linkage cutoff criterion, marker spacing pattern (even or uneven), and marker spacing distance (close or loose) were considered for obtaining acceptable grouping powers. Simulation results indicated that grouping power was related to population size, marker spacing distance, and cutoff criterion. Generally, a large population size provided higher grouping power than a small population size, and closely linked markers provided higher grouping power than loosely linked markers. The cutoff criterion range for achieving acceptable grouping and ordering power differed across cases; however, combining all situations in this study, a cutoff criterion ranging from 50 cM to 60 cM is recommended for achieving acceptable grouping and ordering power in different cases.
Lu, Dan; Ricciuto, Daniel M.; Walker, Anthony P.; ...
2017-09-27
Calibration of terrestrial ecosystem models is important but challenging. Bayesian inference implemented by Markov chain Monte Carlo (MCMC) sampling provides a comprehensive framework to estimate model parameters and associated uncertainties using their posterior distributions. The effectiveness and efficiency of the method strongly depend on the MCMC algorithm used. In this work, a differential evolution adaptive Metropolis (DREAM) algorithm is used to estimate posterior distributions of 21 parameters for the data assimilation linked ecosystem carbon (DALEC) model using 14 years of daily net ecosystem exchange data collected at the Harvard Forest Environmental Measurement Site eddy-flux tower. The calibration with DREAM results in a better model fit and predictive performance compared to the popular adaptive Metropolis (AM) scheme. Moreover, DREAM indicates that two parameters controlling autumn phenology have multiple modes in their posterior distributions, while AM only identifies one mode. The application suggests that DREAM is well suited to calibrate complex terrestrial ecosystem models, where the number of uncertain parameters is usually large and the existence of local optima is always a concern. In addition, this effort justifies the assumptions of the error model used in Bayesian calibration through residual analysis. The result indicates that a heteroscedastic, correlated, Gaussian error model is appropriate for the problem, and the consequent constructed likelihood function can alleviate the underestimation of parameter uncertainty that is usually caused by using uncorrelated error models.
Chakraborty, Arindom
2016-12-01
A common objective in longitudinal studies is to characterize the relationship between a longitudinal response process and time-to-event data. The ordinal nature of the response and possible missing information on covariates add complications to the joint model. In such circumstances, influential observations often present in the data may upset the analysis. In this paper, a joint model based on an ordinal partial mixed model and an accelerated failure time model is used to account for the repeated ordered response and time-to-event data, respectively. Here, we propose an influence function-based robust estimation method. A Monte Carlo expectation maximization algorithm is used for parameter estimation. A detailed simulation study has been done to evaluate the performance of the proposed method. As an application, data on muscular dystrophy among children are used. Robust estimates are then compared with classical maximum likelihood estimates. © The Author(s) 2014.
Spectral decompositions of multiple time series: a Bayesian non-parametric approach.
Macaro, Christian; Prado, Raquel
2014-01-01
We consider spectral decompositions of multiple time series that arise in studies where the interest lies in assessing the influence of two or more factors. We write the spectral density of each time series as a sum of the spectral densities associated to the different levels of the factors. We then use Whittle's approximation to the likelihood function and follow a Bayesian non-parametric approach to obtain posterior inference on the spectral densities based on Bernstein-Dirichlet prior distributions. The prior is strategically important as it carries identifiability conditions for the models and allows us to quantify our degree of confidence in such conditions. A Markov chain Monte Carlo (MCMC) algorithm for posterior inference within this class of frequency-domain models is presented. We illustrate the approach by analyzing simulated and real data via spectral one-way and two-way models. In particular, we present an analysis of functional magnetic resonance imaging (fMRI) brain responses measured in individuals who participated in a designed experiment to study pain perception in humans.
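The Whittle approximation at the heart of such frequency-domain inference fits in a few lines: compare the periodogram I(ω_k) with a candidate spectral density f(ω_k) via logL ≈ −Σ_k [log f(ω_k) + I(ω_k)/f(ω_k)]. The AR(1) spectrum below is a toy target; the Bernstein-Dirichlet prior and MCMC machinery are beyond this snippet.

```python
import numpy as np

rng = np.random.default_rng(11)

def periodogram(x):
    """Periodogram at the Fourier frequencies 2*pi*k/n, k = 1..n/2-1."""
    n = len(x)
    k = np.arange(1, n // 2)
    w = 2 * np.pi * k / n
    I = np.abs(np.fft.fft(x)[k]) ** 2 / (2 * np.pi * n)
    return w, I

def ar1_spectrum(w, phi, s2):
    return s2 / (2 * np.pi * (1 - 2 * phi * np.cos(w) + phi**2))

def whittle_loglik(x, phi, s2):
    w, I = periodogram(x)
    f = ar1_spectrum(w, phi, s2)
    return -np.sum(np.log(f) + I / f)

# AR(1) data with phi = 0.7; the Whittle log-likelihood prefers the truth.
x = np.zeros(1024)
for t in range(1, len(x)):
    x[t] = 0.7 * x[t-1] + rng.normal()
print("logL at phi=0.7:", whittle_loglik(x, 0.7, 1.0))
print("logL at phi=0.2:", whittle_loglik(x, 0.2, 1.0))
```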
Interferometric superlocalization of two incoherent optical point sources.
Nair, Ranjith; Tsang, Mankei
2016-02-22
A novel interferometric method - SLIVER (Super Localization by Image inVERsion interferometry) - is proposed for estimating the separation of two incoherent point sources with a mean squared error that does not deteriorate as the sources are brought closer. The essential component of the interferometer is an image inversion device that inverts the field in the transverse plane about the optical axis, assumed to pass through the centroid of the sources. The performance of the device is analyzed using the Cramér-Rao bound applied to the statistics of spatially-unresolved photon counting using photon number-resolving and on-off detectors. The analysis is supported by Monte-Carlo simulations of the maximum likelihood estimator for the source separation, demonstrating the superlocalization effect for separations well below that set by the Rayleigh criterion. Simulations indicating the robustness of SLIVER to mismatch between the optical axis and the centroid are also presented. The results are valid for any imaging system with a circularly symmetric point-spread function.
Fang, Yun; Wu, Hulin; Zhu, Li-Xing
2011-07-01
We propose a two-stage estimation method for random coefficient ordinary differential equation (ODE) models. A maximum pseudo-likelihood estimator (MPLE) is derived based on a mixed-effects modeling approach and its asymptotic properties for population parameters are established. The proposed method does not require repeatedly solving ODEs, and is computationally efficient although it does pay a price with the loss of some estimation efficiency. However, the method does offer an alternative approach when the exact likelihood approach fails due to model complexity and high-dimensional parameter space, and it can also serve as a method to obtain the starting estimates for more accurate estimation methods. In addition, the proposed method does not need to specify the initial values of state variables and preserves all the advantages of the mixed-effects modeling approach. The finite sample properties of the proposed estimator are studied via Monte Carlo simulations and the methodology is also illustrated with application to an AIDS clinical data set.
Balakrishnan, Narayanaswamy; Pal, Suvra
2016-08-01
Recently, a flexible cure rate survival model has been developed by assuming the number of competing causes of the event of interest to follow the Conway-Maxwell-Poisson distribution. This model includes some of the well-known cure rate models discussed in the literature as special cases. Data obtained from cancer clinical trials are often right censored and expectation maximization algorithm can be used in this case to efficiently estimate the model parameters based on right censored data. In this paper, we consider the competing cause scenario and assuming the time-to-event to follow the Weibull distribution, we derive the necessary steps of the expectation maximization algorithm for estimating the parameters of different cure rate survival models. The standard errors of the maximum likelihood estimates are obtained by inverting the observed information matrix. The method of inference developed here is examined by means of an extensive Monte Carlo simulation study. Finally, we illustrate the proposed methodology with a real data on cancer recurrence. © The Author(s) 2013.
A baseline-free procedure for transformation models under interval censorship.
Gu, Ming Gao; Sun, Liuquan; Zuo, Guoxin
2005-12-01
An important property of the Cox regression model is that the estimation of regression parameters using the partial likelihood procedure does not depend on its baseline survival function. We call such a procedure baseline-free. Using marginal likelihood, we show that a baseline-free procedure can be derived for a class of general transformation models under the interval censoring framework. The baseline-free procedure results in a simplified and stable computation algorithm for some complicated and important semiparametric models, such as frailty models and heteroscedastic hazard/rank regression models, where the estimation procedures available so far involve estimation of the infinite-dimensional baseline function. A detailed computational algorithm using Markov Chain Monte Carlo stochastic approximation is presented. The proposed procedure is demonstrated through extensive simulation studies, showing the validity of asymptotic consistency and normality. We also illustrate the procedure with a real data set from a study of breast cancer. A heuristic argument showing that the score function is a mean-zero martingale is provided.
Change-in-ratio estimators for populations with more than two subclasses
Udevitz, Mark S.; Pollock, Kenneth H.
1991-01-01
Change-in-ratio methods have been developed to estimate the size of populations with two or three population subclasses. Most of these methods require the often unreasonable assumption of equal sampling probabilities for individuals in all subclasses. This paper presents new models based on the weaker assumption that ratios of sampling probabilities are constant over time for populations with three or more subclasses. Estimation under these models requires that a value be assumed for one of these ratios when there are two samples. Explicit expressions are given for the maximum likelihood estimators under models for two samples with three or more subclasses and for three samples with two subclasses. A numerical method using readily available statistical software is described for obtaining the estimators and their standard errors under all of the models. Likelihood ratio tests that can be used in model selection are discussed. Emphasis is on the two-sample, three-subclass models for which Monte-Carlo simulation results and an illustrative example are presented.
Cohn, Timothy A.
2005-01-01
This paper presents an adjusted maximum likelihood estimator (AMLE) that can be used to estimate fluvial transport of contaminants, like phosphorus, that are subject to censoring because of analytical detection limits. The AMLE is a generalization of the widely accepted minimum variance unbiased estimator (MVUE), and Monte Carlo experiments confirm that it shares essentially all of the MVUE's desirable properties, including high efficiency and negligible bias. In particular, the AMLE exhibits substantially less bias than alternative censored‐data estimators such as the MLE (Tobit) or the MLE followed by a jackknife. As with the MLE and the MVUE the AMLE comes close to achieving the theoretical Frechet‐Cramér‐Rao bounds on its variance. This paper also presents a statistical framework, applicable to both censored and complete data, for understanding and estimating the components of uncertainty associated with load estimates. This can serve to lower the cost and improve the efficiency of both traditional and real‐time water quality monitoring.
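For contrast with the AMLE, here is the plain censored-data MLE (Tobit-type) that it adjusts: detected values contribute densities, non-detects contribute the probability mass below the detection limit. The data and detection limit are synthetic.

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(12)

# Synthetic log-concentrations with a detection limit DL; values below DL
# are known only to be censored. All numbers are illustrative.
mu_true, sd_true, DL = 0.0, 1.0, -0.5
x = rng.normal(mu_true, sd_true, 200)
detected = x > DL

def neg_loglik(params):
    mu, sd = params
    if sd <= 0:
        return np.inf
    ll = stats.norm.logpdf(x[detected], mu, sd).sum()        # detects
    ll += (~detected).sum() * stats.norm.logcdf(DL, mu, sd)  # non-detects
    return -ll

res = optimize.minimize(neg_loglik, x0=[x[detected].mean(), 1.0],
                        method="Nelder-Mead")
print("censored-data MLE (mu, sd):", res.x)
```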
Bayesian inference for OPC modeling
NASA Astrophysics Data System (ADS)
Burbine, Andrew; Sturtevant, John; Fryer, David; Smith, Bruce W.
2016-03-01
The use of optical proximity correction (OPC) demands increasingly accurate models of the photolithographic process. Model building and inference techniques in the data science community have seen great strides in the past two decades which make better use of available information. This paper aims to demonstrate the predictive power of Bayesian inference as a method for parameter selection in lithographic models by quantifying the uncertainty associated with model inputs and wafer data. Specifically, the method combines the model builder's prior information about each modelling assumption with the maximization of each observation's likelihood as a Student's t-distributed random variable. Through the use of a Markov chain Monte Carlo (MCMC) algorithm, a model's parameter space is explored to find the most credible parameter values. During parameter exploration, the parameters' posterior distributions are generated by applying Bayes' rule, using a likelihood function and the a priori knowledge supplied. The MCMC algorithm used, an affine invariant ensemble sampler (AIES), is implemented by initializing many walkers which semi-independently explore the space. The convergence of these walkers to global maxima of the likelihood volume determines the parameter values' highest density intervals (HDI) to reveal champion models. We show that this method of parameter selection provides insights into the data that traditional methods do not and outline continued experiments to vet the method.
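A hedged sketch of the ingredients: a Student's-t likelihood for residuals explored by an affine invariant ensemble sampler, here via the emcee package (an assumption; the paper does not name its implementation). The linear "model", priors, and data are stand-ins for the lithographic model.

```python
import numpy as np
import emcee                 # a common AIES implementation (assumed here)
from scipy import stats

rng = np.random.default_rng(13)
x = np.linspace(0, 1, 40)
y = 2.0 * x + 0.5 + rng.normal(0, 0.1, x.size)   # toy "wafer data"

def log_prob(theta):
    """Flat box priors plus a Student's-t likelihood on the residuals."""
    slope, intercept = theta
    if not (-10 < slope < 10 and -10 < intercept < 10):
        return -np.inf
    resid = y - (slope * x + intercept)
    return stats.t.logpdf(resid, df=4, scale=0.1).sum()

nwalkers, ndim = 32, 2
p0 = rng.normal([2.0, 0.5], 0.1, (nwalkers, ndim))   # initialize walkers
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 2000)
flat = sampler.get_chain(discard=500, flat=True)
print("posterior medians:", np.median(flat, axis=0))
```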
Estimating Divergence Parameters With Small Samples From a Large Number of Loci
Wang, Yong; Hey, Jody
2010-01-01
Most methods for studying divergence with gene flow rely upon data from many individuals at few loci. Such data can be useful for inferring recent population history but they are unlikely to contain sufficient information about older events. However, the growing availability of genome sequences suggests a different kind of sampling scheme, one that may be more suited to studying relatively ancient divergence. Data sets extracted from whole-genome alignments may represent very few individuals but contain a very large number of loci. To take advantage of such data we developed a new maximum-likelihood method for genomic data under the isolation-with-migration model. Unlike many coalescent-based likelihood methods, our method does not rely on Monte Carlo sampling of genealogies, but rather provides a precise calculation of the likelihood by numerical integration over all genealogies. We demonstrate that the method works well on simulated data sets. We also consider two models for accommodating mutation rate variation among loci and find that the model that treats mutation rates as random variables leads to better estimates. We applied the method to the divergence of Drosophila melanogaster and D. simulans and detected a low, but statistically significant, signal of gene flow from D. simulans to D. melanogaster. PMID:19917765
Likelihood of achieving air quality targets under model uncertainties.
Digar, Antara; Cohan, Daniel S; Cox, Dennis D; Kim, Byeong-Uk; Boylan, James W
2011-01-01
Regulatory attainment demonstrations in the United States typically apply a bright-line test to predict whether a control strategy is sufficient to attain an air quality standard. Photochemical models are the best tools available to project future pollutant levels and are a critical part of regulatory attainment demonstrations. However, because photochemical models are uncertain and future meteorology is unknowable, future pollutant levels cannot be predicted perfectly and attainment cannot be guaranteed. This paper introduces a computationally efficient methodology for estimating the likelihood that an emission control strategy will achieve an air quality objective in light of uncertainties in photochemical model input parameters (e.g., uncertain emission and reaction rates, deposition velocities, and boundary conditions). The method incorporates Monte Carlo simulations of a reduced form model representing pollutant-precursor response under parametric uncertainty to probabilistically predict the improvement in air quality due to emission control. The method is applied to recent 8-h ozone attainment modeling for Atlanta, Georgia, to assess the likelihood that additional controls would achieve fixed (well-defined) or flexible (due to meteorological variability and uncertain emission trends) targets of air pollution reduction. The results show that in certain instances ranking of the predicted effectiveness of control strategies may differ between probabilistic and deterministic analyses.
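A minimal sketch of the probabilistic attainment test: a reduced-form model maps an emission cut to an ozone change, with multiplicative parametric uncertainty sampled by Monte Carlo. The sensitivity, uncertainty spread, and standard below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(14)
N = 100_000

base_ozone = 82.0           # ppb, baseline design value (illustrative)
sens = 0.04                 # ppb reduced per kt of NOx cut (nominal)
cut = 150.0                 # kt NOx reduction under the control strategy

# Multiplicative uncertainty in the modeled pollutant-precursor response.
phi = rng.lognormal(0, 0.3, N)
future = base_ozone - phi * sens * cut
print(f"P(attain a 75 ppb standard) = {(future <= 75.0).mean():.1%}")
```

Ranking two strategies by this probability, rather than by their deterministic point predictions, is what can change the preferred choice in such analyses.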
Inferring epidemiological parameters from phylogenetic information for the HIV-1 epidemic among MSM
NASA Astrophysics Data System (ADS)
Quax, Rick; van de Vijver, David A. M. C.; Frentz, Dineke; Sloot, Peter M. A.
2013-09-01
The HIV-1 epidemic in Europe is primarily sustained by a dynamic topology of sexual interactions among MSM who have individual immune systems and behavior. This epidemiological process shapes the phylogeny of the virus population. Both fields of epidemic modeling and phylogenetics have a long history; however, it remains difficult to use phylogenetic data to infer epidemiological parameters such as the structure of the sexual network and the per-act infectiousness, because phylogenetic data are necessarily incomplete and ambiguous. Here we show, using detailed numerical experiments, that the cluster-size distribution indeed contains information about epidemiological parameters. We simulate the HIV epidemic among MSM many times using the Monte Carlo method with all parameter values and their ranges taken from the literature. For each simulation and the corresponding set of parameter values we calculate the likelihood of reproducing an observed cluster-size distribution. The result is an estimated likelihood distribution of all parameters from the phylogenetic data, in particular the structure of the sexual network, the per-act infectiousness, and the risk behavior reduction upon diagnosis. These likelihood distributions encode the knowledge provided by the observed cluster-size distribution, which we quantify using information theory. Our work suggests that the growing body of genetic data of patients can be exploited to understand the underlying epidemiological process.
Approximate Bayesian computation for forward modeling in cosmology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Akeret, Joël; Refregier, Alexandre; Amara, Adam
Bayesian inference is often used in cosmology and astrophysics to derive constraints on model parameters from observations. This approach relies on the ability to compute the likelihood of the data given a choice of model parameters. In many practical situations, the likelihood function may however be unavailable or intractable due to non-Gaussian errors, non-linear measurement processes, or complex data formats such as catalogs and maps. In these cases, the simulation of mock data sets can often be made through forward modeling. We discuss how Approximate Bayesian Computation (ABC) can be used in these cases to derive an approximation to the posterior constraints using simulated data sets. This technique relies on the sampling of the parameter set, a distance metric to quantify the difference between the observation and the simulations, and summary statistics to compress the information in the data. We first review the principles of ABC and discuss its implementation using a Population Monte Carlo (PMC) algorithm and the Mahalanobis distance metric. We test the performance of the implementation using a Gaussian toy model. We then apply the ABC technique to the practical case of the calibration of image simulations for wide field cosmological surveys. We find that the ABC analysis is able to provide reliable parameter constraints for this problem and is therefore a promising technique for other applications in cosmology and astrophysics. Our implementation of the ABC PMC method is made available via a public code release.
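Plain ABC rejection, the simplest relative of the paper's PMC implementation, fits in a few lines: draw from the prior, simulate, and keep draws whose summary statistics land within a tolerance of the observed ones. Model, summaries, distance, and tolerance are all toy choices here.

```python
import numpy as np

rng = np.random.default_rng(15)

obs = rng.normal(3.0, 1.0, 500)                 # "observed" data
s_obs = np.array([obs.mean(), obs.std()])       # summary statistics

def simulate(mu):
    """Forward model: mock data given the parameter mu."""
    return rng.normal(mu, 1.0, 500)

accepted = []
for _ in range(50_000):
    mu = rng.uniform(-10, 10)                   # draw from the prior
    sim = simulate(mu)
    s = np.array([sim.mean(), sim.std()])
    if np.linalg.norm(s - s_obs) < 0.1:         # distance within tolerance
        accepted.append(mu)
print(f"{len(accepted)} accepted; posterior mean ~ {np.mean(accepted):.2f}")
```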
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kane, V.E.
1982-01-01
A class of goodness-of-fit estimators is found to provide a useful alternative in certain situations to the standard maximum likelihood method, which has some undesirable estimation characteristics for estimation from the three-parameter lognormal distribution. The class of goodness-of-fit tests considered includes the Shapiro-Wilk and Filliben tests, which reduce to a weighted linear combination of the order statistics that can be maximized in estimation problems. The weighted order statistic estimators are compared to the standard procedures in Monte Carlo simulations. Robustness of the procedures is examined and example data sets are analyzed.
Probabilistic Modeling of the Renal Stone Formation Module
NASA Technical Reports Server (NTRS)
Best, Lauren M.; Myers, Jerry G.; Goodenow, Debra A.; McRae, Michael P.; Jackson, Travis C.
2013-01-01
The Integrated Medical Model (IMM) is a probabilistic tool used in mission planning decision making and medical systems risk assessments. The IMM project maintains a database of over 80 medical conditions that could occur during a spaceflight, documenting an incidence rate and end case scenarios for each. In some cases, where observational data are insufficient to adequately define the in-flight medical risk, the IMM utilizes external probabilistic modules to model and estimate the event likelihoods. One such medical event of interest is an unpassed renal stone. Due to a high salt diet and high concentrations of calcium in the blood (caused by bone depletion due to unloading in the microgravity environment), astronauts are at a considerably elevated risk of developing renal calculi (nephrolithiasis) while in space. The lack of observed incidences of nephrolithiasis has led HRP to initiate the development of the Renal Stone Formation Module (RSFM), a probabilistic simulator capable of estimating the likelihood of symptomatic renal stone presentation in astronauts on exploration missions.

The model consists of two major parts. The first is the probabilistic component, which utilizes probability distributions to assess the range of urine electrolyte parameters and a multivariate regression to transform estimated crystal density and size distributions into the likelihood of the presentation of nephrolithiasis symptoms. The second is a deterministic physical and chemical model of renal stone growth in the kidney developed by Kassemi et al. The probabilistic component couples the input probability distributions describing the urine chemistry, astronaut physiology, and system parameters with the inputs and outputs of the deterministic stone growth model. Both parts are necessary to capture the uncertainty in the likelihood estimate.

The model will be driven by Monte Carlo simulations, continuously randomly sampling the probability distributions of the electrolyte concentrations and system parameters that are inputs to the deterministic model. The total urine chemistry concentrations are used to determine the urine chemistry activity using the Joint Expert Speciation System (JESS), a biochemistry model. Information from JESS is then fed into the deterministic growth model. Outputs from JESS and the deterministic model are passed back to the probabilistic model, where a multivariate regression is used to assess the likelihood of a stone forming and the likelihood of a stone requiring clinical intervention. The parameters used to quantify these risks include: relative supersaturation (RS) of calcium oxalate, citrate/calcium ratio, crystal number density, total urine volume, pH, magnesium excretion, maximum stone width, and ureteral location.

Methods and Validation: The RSFM is designed to perform a Monte Carlo simulation to generate probability distributions of clinically significant renal stones, as well as provide an associated uncertainty in the estimate. Early versions will initially be used to test integration of the components and assess component validation and verification (V&V), with later versions used to address questions regarding design reference mission scenarios. Once integrated with the deterministic component, the credibility assessment of the integrated model will follow NASA STD 7009 requirements.
Monte Carlo Simulation for Perusal and Practice.
ERIC Educational Resources Information Center
Brooks, Gordon P.; Barcikowski, Robert S.; Robey, Randall R.
The meaningful investigation of many problems in statistics can be solved through Monte Carlo methods. Monte Carlo studies can help solve problems that are mathematically intractable through the analysis of random samples from populations whose characteristics are known to the researcher. Using Monte Carlo simulation, the values of a statistic are…
ANTARES constrains a blazar origin of two IceCube PeV neutrino events
NASA Astrophysics Data System (ADS)
ANTARES Collaboration; Adrián-Martínez, S.; Albert, A.; André, M.; Anton, G.; Ardid, M.; Aubert, J.-J.; Baret, B.; Barrios, J.; Basa, S.; Bertin, V.; Biagi, S.; Bogazzi, C.; Bormuth, R.; Bou-Cabo, M.; Bouwhuis, M. C.; Bruijn, R.; Brunner, J.; Busto, J.; Capone, A.; Caramete, L.; Carr, J.; Chiarusi, T.; Circella, M.; Coniglione, R.; Costantini, H.; Coyle, P.; Creusot, A.; De Rosa, G.; Dekeyser, I.; Deschamps, A.; De Bonis, G.; Distefano, C.; Donzaud, C.; Dornic, D.; Dorosti, Q.; Drouhin, D.; Dumas, A.; Eberl, T.; Enzenhöfer, A.; Escoffier, S.; Fehn, K.; Felis, I.; Fermani, P.; Folger, F.; Fusco, L. A.; Galatà, S.; Gay, P.; Geißelsöder, S.; Geyer, K.; Giordano, V.; Gleixner, A.; Gómez-González, J. P.; Gracia-Ruiz, R.; Graf, K.; van Haren, H.; Heijboer, A. J.; Hello, Y.; Hernández-Rey, J. J.; Herrero, A.; Hößl, J.; Hofestädt, J.; Hugon, C.; James, C. W.; de Jong, M.; Kalekin, O.; Katz, U.; Kießling, D.; Kooijman, P.; Kouchner, A.; Kulikovskiy, V.; Lahmann, R.; Lattuada, D.; Lefèvre, D.; Leonora, E.; Loehner, H.; Loucatos, S.; Mangano, S.; Marcelin, M.; Margiotta, A.; Martínez-Mora, J. A.; Martini, S.; Mathieu, A.; Michael, T.; Migliozzi, P.; Neff, M.; Nezri, E.; Palioselitis, D.; Păvălaş, G. E.; Perrina, C.; Piattelli, P.; Popa, V.; Pradier, T.; Racca, C.; Riccobene, G.; Richter, R.; Roensch, K.; Rostovtsev, A.; Saldaña, M.; Samtleben, D. F. E.; Sánchez-Losa, A.; Sanguineti, M.; Sapienza, P.; Schmid, J.; Schnabel, J.; Schulte, S.; Schüssler, F.; Seitz, T.; Sieger, C.; Spies, A.; Spurio, M.; Steijger, J. J. M.; Stolarczyk, Th.; Taiuti, M.; Tamburini, C.; Tayalati, Y.; Trovato, A.; Tselengidou, M.; Tönnis, C.; Vallage, B.; Vallée, C.; Van Elewyck, V.; Visser, E.; Vivolo, D.; Wagner, S.; de Wolf, E.; Yepes, H.; Zornoza, J. D.; Zúñiga, J.; TANAMI Collaboration; Krauß, F.; Kadler, M.; Mannheim, K.; Schulz, R.; Trüstedt, J.; Wilms, J.; Ojha, R.; Ros, E.; Baumgartner, W.; Beuchert, T.; Blanchard, J.; Bürkel, C.; Carpenter, B.; Edwards, P. G.; Eisenacher Glawion, D.; Elsässer, D.; Fritsch, U.; Gehrels, N.; Gräfe, C.; Großberger, C.; Hase, H.; Horiuchi, S.; Kappes, A.; Kreikenbohm, A.; Kreykenbohm, I.; Langejahn, M.; Leiter, K.; Litzinger, E.; Lovell, J. E. J.; Müller, C.; Phillips, C.; Plötz, C.; Quick, J.; Steinbring, T.; Stevens, J.; Thompson, D. J.; Tzioumis, A. K.
2015-04-01
Context. The source(s) of the neutrino excess reported by the IceCube Collaboration is unknown. The TANAMI Collaboration recently reported on the multiwavelength emission of six bright, variable blazars which are positionally coincident with two of the most energetic IceCube events. Objects like these are prime candidates to be the source of the highest-energy cosmic rays, and thus of associated neutrino emission. Aims: We present an analysis of neutrino emission from the six blazars using observations with the ANTARES neutrino telescope. Methods: The standard methods of the ANTARES candidate list search are applied to six years of data to search for an excess of muons - and hence their neutrino progenitors - from the directions of the six blazars described by the TANAMI Collaboration, and which are possibly associated with two IceCube events. Monte Carlo simulations of the detector response to both signal and background particle fluxes are used to estimate the sensitivity of this analysis for different possible source neutrino spectra. A maximum-likelihood approach, using the reconstructed energies and arrival directions of through-going muons, is used to identify events with properties consistent with a blazar origin. Results: Both blazars predicted to be the most neutrino-bright in the TANAMI sample (1653-329 and 1714-336) have a signal flux fitted by the likelihood analysis corresponding to approximately one event. This observation is consistent with the blazar-origin hypothesis of the IceCube event IC 14 for a broad range of blazar spectra, although an atmospheric origin cannot be excluded. No ANTARES events are observed from any of the other four blazars, including the three associated with IceCube event IC20. This excludes at a 90% confidence level the possibility that this event was produced by these blazars unless the neutrino spectrum is flatter than -2.4. Figures 2, 3 and Appendix A are available in electronic form at http://www.aanda.org
Tool for Rapid Analysis of Monte Carlo Simulations
NASA Technical Reports Server (NTRS)
Restrepo, Carolina; McCall, Kurt E.; Hurtado, John E.
2011-01-01
Designing a spacecraft, or any other complex engineering system, requires extensive simulation and analysis work. Oftentimes, the large amounts of simulation data generated are very difficult and time-consuming to analyze, with the added risk of overlooking potentially critical problems in the design. The authors have developed a generic data analysis tool that can quickly sort through large data sets and point an analyst to the areas in the data set that cause specific types of failures. The Tool for Rapid Analysis of Monte Carlo simulations (TRAM) has been used in recent design and analysis work for the Orion vehicle, greatly decreasing the time it takes to evaluate performance requirements. A previous version of this tool was developed to automatically identify driving design variables in Monte Carlo data sets. This paper describes a new, parallel version of TRAM implemented on a graphical processing unit, and presents analysis results for NASA's Orion Monte Carlo data to demonstrate its capabilities.
NASA Technical Reports Server (NTRS)
Bonamente, Massimiliano; Joy, Marshall K.; Carlstrom, John E.; LaRoque, Samuel J.
2004-01-01
X-ray and Sunyaev-Zeldovich Effect data can be combined to determine the distance to galaxy clusters. High-resolution X-ray data are now available from the Chandra Observatory, which provides both spatial and spectral information, and interferometric radio measurements of the Sunyaev-Zeldovich Effect are available from the BIMA and OVRO arrays. We introduce a Monte Carlo Markov chain procedure for the joint analysis of X-ray and Sunyaev-Zeldovich Effect data. The advantages of this method are the high computational efficiency and the ability to measure the full probability distribution of all parameters of interest, such as the spatial and spectral properties of the cluster gas and the cluster distance. We apply this technique to the Chandra X-ray data and the OVRO radio data for the galaxy cluster Abell 611. Comparisons with traditional likelihood-ratio methods reveal the robustness of the method. This method will be used in a follow-up paper to determine the distance of a large sample of galaxy clusters for which high-resolution Chandra X-ray and BIMA/OVRO radio data are available.
Sandoval-Castellanos, Edson; Palkopoulou, Eleftheria; Dalén, Love
2014-01-01
Inference of population demographic history has vastly improved in recent years due to a number of technological and theoretical advances including the use of ancient DNA. Approximate Bayesian computation (ABC) stands among the most promising methods due to its simple theoretical foundation and exceptional flexibility. However, the limited availability of user-friendly programs that perform ABC analysis makes it difficult to implement, and hence programming skills are frequently required. In addition, there is limited availability of programs able to deal with heterochronous data. Here we present the software BaySICS: Bayesian Statistical Inference of Coalescent Simulations. BaySICS provides an integrated and user-friendly platform that performs ABC analyses by means of coalescent simulations from DNA sequence data. It estimates historical demographic population parameters and performs hypothesis testing by means of Bayes factors obtained from model comparisons. Although providing specific features that improve inference from datasets with heterochronous data, BaySICS also has several capabilities making it a suitable tool for analysing contemporary genetic datasets. Those capabilities include joint analysis of independent tables, a graphical interface and the implementation of Markov-chain Monte Carlo without likelihoods.
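BaySICS wraps coalescent simulations, which are too involved to reproduce here; the ABC rejection scheme at its core can be illustrated with a trivial stand-in simulator (prior, summaries, and tolerance are all placeholders):

```python
import numpy as np

rng = np.random.default_rng(2)

# Generic ABC rejection sketch: accept prior draws whose simulated
# summary statistics land close to the observed ones.
observed = rng.normal(loc=3.0, scale=1.0, size=100)
s_obs = np.array([observed.mean(), observed.std()])

def simulate(theta):
    data = rng.normal(loc=theta, scale=1.0, size=100)
    return np.array([data.mean(), data.std()])

accepted = []
for _ in range(100_000):
    theta = rng.uniform(-10, 10)              # draw from the prior
    dist = np.linalg.norm(simulate(theta) - s_obs)
    if dist < 0.2:                            # tolerance epsilon
        accepted.append(theta)

posterior = np.array(accepted)
print(f"{len(posterior)} draws, posterior mean ~ {posterior.mean():.2f}")
```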
NASA Astrophysics Data System (ADS)
Mondal, Puskar; Korenaga, Jun
2018-03-01
The dispersion relation of the Rayleigh-Taylor instability, a gravitational instability associated with unstable density stratification, is of profound importance in various geophysical contexts. When more than two layers are involved, a semi-analytical technique based on the biharmonic formulation of Stokes flow has been extensively used to obtain such dispersion relations. However, this technique may become cumbersome when applied to lithospheric dynamics, where a number of layers are necessary to represent the continuous variation of viscosity over many orders of magnitude. Here, we present an alternative and more efficient method based on the propagator matrix formulation of Stokes flow. With this approach, the original instability problem is reduced to a compact eigenvalue equation whose size is solely determined by the number of primary density contrasts. We apply this new technique to the stability of the early crust, and, combined with a Monte Carlo sensitivity analysis, we derive an empirical formula to compute the growth rate of the Rayleigh-Taylor instability for this particular geophysical setting. Our analysis indicates that the likelihood of crustal delamination hinges critically on the effective viscosity of eclogite.
Mapping of epistatic quantitative trait loci in four-way crosses.
He, Xiao-Hong; Qin, Hongde; Hu, Zhongli; Zhang, Tianzhen; Zhang, Yuan-Ming
2011-01-01
Four-way crosses (4WC) involving four different inbred lines often appear in plant and animal commercial breeding programs. Direct mapping of quantitative trait loci (QTL) in these commercial populations is both economical and practical. However, the existing statistical methods for mapping QTL in a 4WC population are built on the single-QTL genetic model. This simple genetic model fails to take into account QTL interactions, which play an important role in the genetic architecture of complex traits. In this paper, therefore, we attempted to develop a statistical method to detect epistatic QTL in a 4WC population. Conditional probabilities of QTL genotypes, computed by the multi-point single locus method, were used to sample the genotypes of all putative QTL in the entire genome. The sampled genotypes were used to construct the design matrix for QTL effects. All QTL effects, including main and epistatic effects, were simultaneously estimated by the penalized maximum likelihood method. The proposed method was confirmed by a series of Monte Carlo simulation studies and real data analysis of cotton. The new method will provide novel tools for the genetic dissection of complex traits, construction of QTL networks, and analysis of heterosis.
Causal mediation analysis with a latent mediator.
Albert, Jeffrey M; Geng, Cuiyu; Nelson, Suchitra
2016-05-01
Health researchers are often interested in assessing the direct effect of a treatment or exposure on an outcome variable, as well as its indirect (or mediation) effect through an intermediate variable (or mediator). For an outcome following a nonlinear model, the mediation formula may be used to estimate causally interpretable mediation effects. This method, like others, assumes that the mediator is observed. However, as is common in structural equations modeling, we may wish to consider a latent (unobserved) mediator. We follow a potential outcomes framework and assume a generalized structural equations model (GSEM). We provide maximum-likelihood estimation of GSEM parameters using an approximate Monte Carlo EM algorithm, coupled with a mediation formula approach to estimate natural direct and indirect effects. The method relies on an untestable sequential ignorability assumption; we assess robustness to this assumption by adapting a recently proposed method for sensitivity analysis. Simulation studies show good properties of the proposed estimators in plausible scenarios. Our method is applied to a study of the effect of mother's education on occurrence of adolescent dental caries, in which we examine possible mediation through latent oral health behavior. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Mukhtar, Hussnain; Lin, Yu-Pin; Shipin, Oleg V.; Petway, Joy R.
2017-01-01
This study presents an approach for obtaining realization sets of parameters for nitrogen removal in a pilot-scale waste stabilization pond (WSP) system. The proposed approach was designed for optimal parameterization, local sensitivity analysis, and global uncertainty analysis of a dynamic simulation model for the WSP by using the R software package Flexible Modeling Environment (R-FME) with the Markov chain Monte Carlo (MCMC) method. Additionally, generalized likelihood uncertainty estimation (GLUE) was integrated into the FME to evaluate the major parameters that affect the simulation outputs in the study WSP. Comprehensive modeling analysis was used to simulate and assess nine parameters and the concentrations of ON-N, NH3-N and NO3-N. Results indicate that the integrated FME-GLUE-based model, with good Nash–Sutcliffe coefficients (0.53–0.69) and correlation coefficients (0.76–0.83), successfully simulates the concentrations of ON-N, NH3-N and NO3-N. Moreover, the Arrhenius constant was the only parameter to which model performance was sensitive in the ON-N and NH3-N simulations. However, the Nitrosomonas growth rate, the denitrification constant, and the maximum growth rate at 20 °C were sensitive parameters for the ON-N and NO3-N simulations, as measured by global sensitivity analysis. PMID:28704958
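The GLUE step described here (sample parameters, score each run with a Nash-Sutcliffe likelihood measure, keep and weight the behavioral runs) can be sketched generically; a first-order decay model stands in for the WSP nitrogen model, and all names and ranges below are invented:

```python
import numpy as np

rng = np.random.default_rng(3)

def nse(sim, obs):
    """Nash-Sutcliffe efficiency, used as the GLUE likelihood measure."""
    return 1.0 - np.sum((sim - obs)**2) / np.sum((obs - obs.mean())**2)

# Toy first-order decay model standing in for the process model.
t = np.linspace(0, 30, 61)
obs = 10.0 * np.exp(-0.15 * t) + rng.normal(0, 0.3, t.size)

n_samples = 20000
k = rng.uniform(0.01, 0.5, n_samples)          # decay constant
c0 = rng.uniform(5.0, 15.0, n_samples)         # initial concentration
likelihoods, sims = [], []
for ki, ci in zip(k, c0):
    sim = ci * np.exp(-ki * t)
    likelihoods.append(nse(sim, obs))
    sims.append(sim)
likelihoods = np.array(likelihoods)
sims = np.array(sims)

behavioral = likelihoods > 0.5                  # GLUE behavioral threshold
w = likelihoods[behavioral]
w = w / w.sum()                                 # normalized likelihood weights
mean_pred = (w[:, None] * sims[behavioral]).sum(axis=0)
print("behavioral runs:", behavioral.sum())
```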
Monte Carlo simulation: Its status and future
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murtha, J.A.
1997-04-01
Monte Carlo simulation is a statistics-based analysis tool that yields probability-vs.-value relationships for key parameters, including oil and gas reserves, capital exposure, and various economic yardsticks, such as net present value (NPV) and return on investment (ROI). Monte Carlo simulation is a part of risk analysis and is sometimes performed in conjunction with or as an alternative to decision tree analysis. The objectives are (1) to define Monte Carlo simulation in a more general context of risk and decision analysis; (2) to provide some specific applications, which can be interrelated; (3) to respond to some of the criticisms; (4) to offer some cautions about abuses of the method and recommend how to avoid the pitfalls; and (5) to predict what the future has in store.
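A minimal sketch of the probability-vs.-value output Murtha describes, for a one-period toy NPV with invented input distributions:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000

# Illustrative input distributions (all values hypothetical):
reserves = rng.lognormal(mean=np.log(2.0), sigma=0.4, size=n)   # MMbbl
price = rng.normal(60, 10, n)                                   # $/bbl
capex = rng.triangular(80, 100, 140, n)                         # $MM
opex_frac = rng.uniform(0.25, 0.40, n)

revenue = reserves * price                      # $MM (MMbbl x $/bbl)
npv = revenue * (1 - opex_frac) - capex         # one-period toy NPV

# Probability-vs.-value relationship: exceedance percentiles
for p in (10, 50, 90):
    print(f"P{p} NPV: {np.percentile(npv, 100 - p):7.1f} $MM")
print("P(NPV < 0):", (npv < 0).mean())
```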
Bayesian Parameter Inference and Model Selection by Population Annealing in Systems Biology
Murakami, Yohei
2014-01-01
Parameter inference and model selection are very important for mathematical modeling in systems biology. Bayesian statistics can be used to conduct both parameter inference and model selection. In particular, the framework known as approximate Bayesian computation is often used for parameter inference and model selection in systems biology. However, Monte Carlo methods need to be used to compute Bayesian posterior distributions. In addition, the posterior distributions of parameters are sometimes almost uniform or very similar to their prior distributions. In such cases, it is difficult to choose one specific value of a parameter with high credibility as the representative value of the distribution. To overcome these problems, we introduced one of the population Monte Carlo algorithms, population annealing. Although population annealing is usually used in statistical mechanics, we showed that population annealing can be used to compute Bayesian posterior distributions in the approximate Bayesian computation framework. To deal with non-identifiability of the representative values of parameters, we proposed running the simulations with the parameter ensemble sampled from the posterior distribution, named the “posterior parameter ensemble”. We showed that population annealing is an efficient and convenient algorithm for generating a posterior parameter ensemble. We also showed that simulations with the posterior parameter ensemble can not only reproduce the data used for parameter inference, but also capture and predict data that were not used for parameter inference. Lastly, we introduced the marginal likelihood in the approximate Bayesian computation framework for Bayesian model selection. We showed that population annealing enables us to compute the marginal likelihood in the approximate Bayesian computation framework and conduct model selection based on the Bayes factor. PMID:25089832
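A compact sketch of population annealing in the ABC setting (a particle population resampled through a schedule of shrinking tolerances, with an ABC-MCMC rejuvenation move at each level), using a toy normal model in place of a systems-biology simulator:

```python
import numpy as np

rng = np.random.default_rng(5)

# ABC population annealing sketch on a toy normal model.
obs = rng.normal(2.0, 1.0, 50)
s_obs = obs.mean()

def dist(theta):
    return abs(rng.normal(theta, 1.0, 50).mean() - s_obs)

n = 2000
particles = rng.uniform(-10, 10, n)               # prior draws
for eps in [2.0, 1.0, 0.5, 0.25, 0.1]:            # annealed tolerances
    # Reweight/resample: keep only particles still compatible with eps
    alive = np.array([dist(th) < eps for th in particles])
    if alive.sum() == 0:
        break
    particles = rng.choice(particles[alive], size=n, replace=True)
    # Rejuvenate with one ABC-MCMC move per particle at this eps
    for i in range(n):
        prop = particles[i] + rng.normal(0, 0.5)
        if -10 < prop < 10 and dist(prop) < eps:
            particles[i] = prop

print("posterior parameter ensemble mean:", particles.mean())
```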
Stochastic generation of hourly rainstorm events in Johor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nojumuddin, Nur Syereena; Yusof, Fadhilah; Yusop, Zulkifli
2015-02-03
Engineers and researchers in water-related studies are often faced with the problem of insufficiently long rainfall records. Practical and effective methods must be developed to generate unavailable data from the limited available data. Therefore, this paper presents a Monte Carlo based stochastic hourly rainfall generation model to complement the unavailable data. The Monte Carlo simulation used in this study is based on the best fit of storm characteristics. Using maximum likelihood estimation (MLE) and the Anderson-Darling goodness-of-fit test, the lognormal appeared to be the best-fitting rainfall distribution; the Monte Carlo simulation was therefore based on the lognormal distribution. The proposed model was verified by comparing the statistical moments of rainstorm characteristics from the combination of the observed rainstorm events under 10 years and simulated rainstorm events under 30 years of rainfall records with those under the entire 40 years of observed rainfall data, based on the hourly rainfall data at station J1 in Johor over the period 1972-2011. The absolute percentage errors of the duration-depth, duration-inter-event time and depth-inter-event time relationships were used as the accuracy test. The results showed that the first four product-moments of the observed rainstorm characteristics were close to those of the simulated rainstorm characteristics. The proposed model can be used as a basis to derive rainfall intensity-duration-frequency relationships in Johor.
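The distribution-fitting step can be sketched with scipy: fit the lognormal by MLE (equivalently, a normal fit to log-depths), check the fit with the Anderson-Darling test, and then simulate synthetic storm depths; the input data below are synthetic placeholders for the station J1 record:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

# Hypothetical storm depths (mm) standing in for extracted rainstorm events.
depths = rng.lognormal(mean=1.2, sigma=0.7, size=300)

# MLE fit of the lognormal: fit a normal to log-depths
log_d = np.log(depths)
mu, sigma = log_d.mean(), log_d.std(ddof=0)      # normal MLEs

# Anderson-Darling goodness of fit for the lognormal hypothesis
# (equivalent to testing normality of the log-transformed data)
ad = stats.anderson(log_d, dist='norm')
print("A^2 =", ad.statistic, "critical values:", ad.critical_values)

# Monte Carlo generation of synthetic rainstorm depths
synthetic = rng.lognormal(mean=mu, sigma=sigma, size=10_000)
print("observed mean %.2f vs simulated mean %.2f"
      % (depths.mean(), synthetic.mean()))
```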
Estimating the variance for heterogeneity in arm-based network meta-analysis.
Piepho, Hans-Peter; Madden, Laurence V; Roger, James; Payne, Roger; Williams, Emlyn R
2018-04-19
Network meta-analysis can be implemented by using arm-based or contrast-based models. Here we focus on arm-based models and fit them using generalized linear mixed model procedures. Full maximum likelihood (ML) estimation leads to biased trial-by-treatment interaction variance estimates for heterogeneity. Thus, our objective is to investigate alternative approaches to variance estimation that reduce bias compared with full ML. Specifically, we use penalized quasi-likelihood/pseudo-likelihood and hierarchical (h) likelihood approaches. In addition, we consider a novel model modification that yields estimators akin to the residual maximum likelihood estimator for linear mixed models. The proposed methods are compared by simulation, and 2 real datasets are used for illustration. Simulations show that penalized quasi-likelihood/pseudo-likelihood and h-likelihood reduce bias and yield satisfactory coverage rates. Sum-to-zero restriction and baseline contrasts for random trial-by-treatment interaction effects, as well as a residual ML-like adjustment, also reduce bias compared with an unconstrained model when ML is used, but coverage rates are not quite as good. Penalized quasi-likelihood/pseudo-likelihood and h-likelihood are therefore recommended. Copyright © 2018 John Wiley & Sons, Ltd.
A robust algorithm for automated target recognition using precomputed radar cross sections
NASA Astrophysics Data System (ADS)
Ehrman, Lisa M.; Lanterman, Aaron D.
2004-09-01
Passive radar is an emerging technology that offers a number of unique benefits, including covert operation. Many such systems are already capable of detecting and tracking aircraft. The goal of this work is to develop a robust algorithm for adding automated target recognition (ATR) capabilities to existing passive radar systems. In previous papers, we proposed conducting ATR by comparing the precomputed radar cross section (RCS) of known targets to that of detected targets. To make the precomputed RCS as accurate as possible, a coordinated flight model is used to estimate aircraft orientation. Once the aircraft's position and orientation are known, it is possible to determine the incident and observed angles on the aircraft, relative to the transmitter and receiver. This makes it possible to extract the appropriate RCS from our simulated database. This RCS is then scaled to account for propagation losses and the receiver's antenna gain. A Rician likelihood model compares these expected signals from different targets to the received target profile. We have previously employed Monte Carlo runs to gauge the probability of error in the ATR algorithm; however, generation of a statistically significant set of Monte Carlo runs is computationally intensive. As an alternative to Monte Carlo runs, we derive the relative entropy (also known as the Kullback-Leibler distance) between two Rician distributions. Since the probability of Type II error in our hypothesis testing problem can be expressed as a function of the relative entropy via Stein's Lemma, this provides us with a computationally efficient method for determining an upper bound on our algorithm's performance. It also provides great insight into the types of classification errors we can expect from our algorithm. This paper compares the numerically approximated probability of Type II error with the results obtained from a set of Monte Carlo runs.
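The paper's closed-form relative entropy is not reproduced here, but a quick Monte Carlo check of D(p||q) between two Rician amplitude densities is straightforward (the amplitude parameters are illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(13)

# Monte Carlo estimate of the relative entropy D(p || q) between two
# Rician distributions; scipy's rice(b, scale=s) has noncentrality b.
p = stats.rice(b=2.0, scale=1.0)      # amplitude statistics, target class 1
q = stats.rice(b=3.0, scale=1.0)      # amplitude statistics, target class 2

x = p.rvs(size=200_000, random_state=rng)
kl = np.mean(p.logpdf(x) - q.logpdf(x))
# Stein's lemma: the Type II error decays roughly like exp(-n * D)
# in the number n of independent measurements.
print(f"D(p||q) ~ {kl:.3f} nats")
```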
Statistical power calculations for mixed pharmacokinetic study designs using a population approach.
Kloprogge, Frank; Simpson, Julie A; Day, Nicholas P J; White, Nicholas J; Tarning, Joel
2014-09-01
Simultaneous modelling of dense and sparse pharmacokinetic data is possible with a population approach. To determine the number of individuals required to detect the effect of a covariate, simulation-based power calculation methodologies can be employed. The Monte Carlo Mapped Power method (a simulation-based power calculation methodology using the likelihood ratio test) was extended in the current study to perform sample size calculations for mixed pharmacokinetic studies (i.e. both sparse and dense data collection). A workflow guiding an easy and straightforward pharmacokinetic study design, considering also the cost-effectiveness of alternative study designs, was used in this analysis. Initially, data were simulated for a hypothetical drug and then for the anti-malarial drug, dihydroartemisinin. Two datasets (sampling design A: dense; sampling design B: sparse) were simulated using a pharmacokinetic model that included a binary covariate effect and subsequently re-estimated using (1) the same model and (2) a model not including the covariate effect in NONMEM 7.2. Power calculations were performed for varying numbers of patients with sampling designs A and B. Study designs with statistical power >80% were selected and further evaluated for cost-effectiveness. The simulation studies of the hypothetical drug and the anti-malarial drug dihydroartemisinin demonstrated that the simulation-based power calculation methodology, based on the Monte Carlo Mapped Power method, can be utilised to evaluate and determine the sample size of mixed (part sparsely and part densely sampled) study designs. The developed method can contribute to the design of robust and efficient pharmacokinetic studies.
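The Monte Carlo Mapped Power method itself is tied to NONMEM population fits; the underlying simulate-and-test logic (power estimated as the fraction of simulated data sets in which a likelihood ratio test detects the covariate) can be sketched with a toy normal model:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def power(n_subj, effect=0.3, n_sim=500, alpha=0.05):
    """Monte Carlo power of a likelihood ratio test for a binary covariate
    in a toy normal model (a stand-in for the population PK fits)."""
    crit = stats.chi2.ppf(1 - alpha, df=1)
    hits = 0
    for _ in range(n_sim):
        cov = rng.permutation(np.repeat([0, 1], n_subj // 2))
        y = 1.0 + effect * cov + rng.normal(0, 1, n_subj)
        # ML log-likelihoods follow from residual sums of squares:
        rss0 = np.sum((y - y.mean())**2)                   # reduced model
        rss1 = sum(np.sum((y[cov == g] - y[cov == g].mean())**2)
                   for g in (0, 1))                        # full model
        lrt = n_subj * np.log(rss0 / rss1)                 # 2*(logL1 - logL0)
        hits += lrt > crit
    return hits / n_sim

for n in (50, 100, 200, 400):
    print(n, "subjects -> power", power(n))
```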
Full 3D visualization tool-kit for Monte Carlo and deterministic transport codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frambati, S.; Frignani, M.
2012-07-01
We propose a package of tools capable of translating the geometric inputs and outputs of many Monte Carlo and deterministic radiation transport codes into open source file formats. These tools are aimed at bridging the gap between trusted, widely-used radiation analysis codes and very powerful, more recent and commonly used visualization software, thus supporting the design process and helping with shielding optimization. Three main lines of development were followed: mesh-based analysis of Monte Carlo codes, mesh-based analysis of deterministic codes and Monte Carlo surface meshing. The developed kit is considered a powerful and cost-effective tool in computer-aided design for radiation transport code users of the nuclear world, and in particular in the fields of core design and radiation analysis. (authors)
Estimating river discharge uncertainty by applying the Rating Curve Model
NASA Astrophysics Data System (ADS)
Barbetta, S.; Melone, F.; Franchini, M.; Moramarco, T.
2012-04-01
The knowledge of the flow discharge at a river site is necessary for planning and management of water resources as well as for monitoring and real-time forecasting purposes when significant flood events occur. In hydrological practice, the operational discharge measurement in medium and large rivers is mostly based on indirect approaches that convert the observed stage into discharge values using steady-flow rating curves. However, the stage-discharge relationship can be unknown for hydrometric sections where flow velocity measurements, particularly during high floods, are not available. To overcome this issue, a simplified approach named the Rating Curve Model (RCM), proposed by Moramarco et al. (Moramarco, T., Barbetta, S., Melone, F. & Singh, V.P., Relating local stage and remote discharge with significant lateral inflow, J. Hydrol. Engng ASCE, 10(1), 58-69, 2005), can be conveniently used. RCM has proved able to assess, with a high level of accuracy, the discharge hydrograph at a river site where only the stage is monitored while the flow is recorded at a different section along the river, even when significant lateral flows occur. The simple structure of the model depends on three parameters, two of which can be considered characteristic of the river reach and one of the flood wave travel time. Considering that RCM lends itself well to predicting the stage-discharge relationship at a river site where only stages are recorded, an uncertainty analysis of the river discharge estimate is certainly of interest for hydrological practice. To this aim, the uncertainty characterizing the RCM outcomes is addressed in this work by considering two different procedures based on the Monte Carlo approach and the Generalized Likelihood Uncertainty Estimation (GLUE) method, respectively. The statistical distribution of the parameters is found and a random re-sampling of parameters is performed to assess the 90% confidence interval (CI) of the discharge estimates. In particular, for the latter approach the Nash-Sutcliffe coefficient is used as the likelihood measure. Two equipped river reaches of the Upper-Middle Tiber River basin, central Italy, are investigated as case studies. The results provided by the selected methodologies are discussed and compared, showing that all the computed CIs are satisfactory in terms of the percentage of observed discharges they include, with similar percentages characterizing the bands assessed by both the Monte Carlo approach and the GLUE procedure.
On the validity of cosmological Fisher matrix forecasts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wolz, Laura; Kilbinger, Martin; Weller, Jochen
2012-09-01
We present a comparison of Fisher matrix forecasts for cosmological probes with Monte Carlo Markov Chain (MCMC) posterior likelihood estimation methods. We analyse the performance of future Dark Energy Task Force (DETF) stage-III and stage-IV dark-energy surveys using supernovae, baryon acoustic oscillations and weak lensing as probes. We concentrate in particular on the dark-energy equation of state parameters w_0 and w_a. For purely geometrical probes, and especially when marginalising over w_a, we find considerable disagreement between the two methods, since in this case the Fisher matrix cannot reproduce the highly non-elliptical shape of the likelihood function. More quantitatively, the Fisher method underestimates the marginalized errors for purely geometrical probes by 30%-70%. For cases including structure formation, such as weak lensing, we find that the posterior probability contours from the Fisher matrix estimation are in good agreement with the MCMC contours, with the forecasted errors changing only at the 5% level. We then explore non-linear transformations resulting in physically-motivated parameters and investigate whether these parameterisations exhibit Gaussian behaviour. We conclude that for purely geometrical probes and, more generally, in cases where it is not known whether the likelihood is close to Gaussian, the Fisher matrix is not the appropriate tool to produce reliable forecasts.
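The mechanics of the comparison can be shown on a toy non-linear model (not the DETF survey likelihoods): a Fisher matrix built from numerical derivatives versus marginalized errors from a short MCMC run:

```python
import numpy as np

rng = np.random.default_rng(8)

# Toy forecast: data d_i = model(x_i; w0, wa) + Gaussian noise.
x = np.linspace(0.1, 1.0, 40)
sig = 0.05
def model(w0, wa):
    return np.exp(w0 * x) + wa * x**2          # deliberately non-linear in w0

w_fid = np.array([-1.0, 0.0])                  # fiducial parameters

# Fisher matrix F_ab = sum_i (dm/dp_a)(dm/dp_b)/sigma^2, numerical derivatives
def deriv(a, h=1e-5):
    p1, p2 = w_fid.copy(), w_fid.copy()
    p1[a] += h; p2[a] -= h
    return (model(*p1) - model(*p2)) / (2 * h)

D = np.array([deriv(0), deriv(1)])
F = D @ D.T / sig**2
err_fisher = np.sqrt(np.diag(np.linalg.inv(F)))   # marginalized 1-sigma errors

# MCMC posterior for one data realization (random-walk Metropolis)
data = model(*w_fid) + rng.normal(0, sig, x.size)
def logp(p):
    return -0.5 * np.sum((data - model(*p))**2) / sig**2

th, lp, chain = w_fid.copy(), logp(w_fid), []
for _ in range(40000):
    prop = th + rng.normal(0, 0.01, 2)
    lpp = logp(prop)
    if np.log(rng.uniform()) < lpp - lp:
        th, lp = prop, lpp
    chain.append(th)
err_mcmc = np.array(chain[10000:]).std(axis=0)
print("Fisher errors:", err_fisher, " MCMC errors:", err_mcmc)
```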
Austin, Peter C
2010-04-22
Multilevel logistic regression models are increasingly being used to analyze clustered data in medical, public health, epidemiological, and educational research. Procedures for estimating the parameters of such models are available in many statistical software packages. There is currently little evidence on the minimum number of clusters necessary to reliably fit multilevel regression models. We conducted a Monte Carlo study to compare the performance of different statistical software procedures for estimating multilevel logistic regression models when the number of clusters was low. We examined procedures available in BUGS, HLM, R, SAS, and Stata. We found that there were qualitative differences in the performance of different software procedures for estimating multilevel logistic models when the number of clusters was low. Among the likelihood-based procedures, estimation methods based on adaptive Gauss-Hermite approximations to the likelihood (glmer in R and xtlogit in Stata) or adaptive Gaussian quadrature (Proc NLMIXED in SAS) tended to have superior performance for estimating variance components when the number of clusters was small, compared to software procedures based on penalized quasi-likelihood. However, only Bayesian estimation with BUGS allowed for accurate estimation of variance components when there were fewer than 10 clusters. For all statistical software procedures, estimation of variance components tended to be poor when there were only five subjects per cluster, regardless of the number of clusters.
Pal, Suvra; Balakrishnan, N
2017-10-01
In this paper, we consider a competing cause scenario and assume the number of competing causes to follow a Conway-Maxwell Poisson distribution, which can capture both the over- and underdispersion usually encountered in discrete data. Assuming that the population of interest has a cured component and that the data are interval censored, as opposed to the usually considered right-censored data, the main contribution is in developing the steps of the expectation maximization algorithm for the determination of the maximum likelihood estimates of the model parameters of the flexible Conway-Maxwell Poisson cure rate model with Weibull lifetimes. An extensive Monte Carlo simulation study is carried out to demonstrate the performance of the proposed estimation method. Model discrimination within the Conway-Maxwell Poisson distribution is addressed using the likelihood ratio test and information-based criteria to select a suitable competing cause distribution that provides the best fit to the data. A simulation study is also carried out to demonstrate the loss in efficiency when selecting an improper competing cause distribution, which justifies the use of a flexible family of distributions for the number of competing causes. Finally, the proposed methodology and the flexibility of the Conway-Maxwell Poisson distribution are illustrated with two known data sets from the literature: smoking cessation data and breast cosmesis data.
NASA Astrophysics Data System (ADS)
Jaranowski, Piotr; Królak, Andrzej
2000-03-01
We develop the analytic and numerical tools for data analysis of the continuous gravitational-wave signals from spinning neutron stars for ground-based laser interferometric detectors. The statistical data analysis method that we investigate is maximum likelihood detection, which for the case of Gaussian noise reduces to matched filtering. We study in detail the statistical properties of the optimum functional that needs to be calculated in order to detect the gravitational-wave signal and estimate its parameters. We find it particularly useful to divide the parameter space into elementary cells such that the values of the optimal functional are statistically independent in different cells. We derive formulas for false alarm and detection probabilities both for the optimal and the suboptimal filters. We assess the computational requirements needed to do the signal search. We compare a number of criteria to build sufficiently accurate templates for our data analysis scheme. We verify the validity of our concepts and formulas by means of Monte Carlo simulations. We present algorithms by which one can estimate the parameters of the continuous signals accurately. We find, confirming earlier work of other authors, that given 100 Gflops of computational power, an all-sky search for an observation time of 7 days and a directed search for an observation time of 120 days are possible, whereas an all-sky search for 120 days of observation time is computationally prohibitive.
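The matched-filtering core of this scheme (correlate the data against a template bank and maximize a normalized statistic over template parameters) can be sketched in a few lines; white noise and a toy damped-sinusoid template are assumed here, not a realistic continuous-wave signal model:

```python
import numpy as np

rng = np.random.default_rng(9)

# Matched filtering sketch: scan a small template bank and keep the
# template with the largest normalized correlation statistic.
fs, T = 1024, 4.0
t = np.arange(0, T, 1 / fs)

def template(f0):
    return np.sin(2 * np.pi * f0 * t) * np.exp(-t / 2.0)

signal_f0 = 60.0
data = 0.5 * template(signal_f0) + rng.normal(0, 1, t.size)

best = max(
    ((f0, np.dot(data, template(f0)) / np.linalg.norm(template(f0)))
     for f0 in np.arange(50.0, 70.0, 0.25)),
    key=lambda pair: abs(pair[1]),
)
print(f"best template f0 = {best[0]:.2f} Hz, SNR ~ {best[1]:.1f}")
```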
An empirical Bayes approach for the Poisson life distribution.
NASA Technical Reports Server (NTRS)
Canavos, G. C.
1973-01-01
A smooth empirical Bayes estimator is derived for the intensity parameter (hazard rate) in the Poisson distribution as used in life testing. The reliability function is also estimated either by using the empirical Bayes estimate of the parameter, or by obtaining the expectation of the reliability function. The behavior of the empirical Bayes procedure is studied through Monte Carlo simulation in which estimates of mean-squared errors of the empirical Bayes estimators are compared with those of conventional estimators such as minimum variance unbiased or maximum likelihood. Results indicate a significant reduction in mean-squared error of the empirical Bayes estimators over the conventional variety.
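The paper's smooth empirical Bayes estimator is derived nonparametrically; the shrinkage idea and the mean-squared-error comparison against the maximum likelihood estimator can be illustrated with a conjugate Gamma-Poisson variant (hyperparameters fitted by method of moments):

```python
import numpy as np

rng = np.random.default_rng(10)

# Parametric Gamma-Poisson empirical Bayes sketch, illustrating the
# shrinkage idea only (not the paper's smooth nonparametric estimator).
true_lam = rng.gamma(4.0, 0.5, 200)          # unknown hazard rates
x = rng.poisson(true_lam)                    # one failure count per item

# Method-of-moments hyperparameters for the Gamma(a, scale=b) prior:
m, v = x.mean(), x.var()
b = max((v - m) / m, 1e-9)                   # excess variance -> prior scale
a = m / b

eb = (x + a) / (1 + 1 / b)                   # posterior mean E[lambda | x]
mle = x.astype(float)                        # conventional estimator
print("MSE (MLE):", np.mean((mle - true_lam)**2))
print("MSE (EB): ", np.mean((eb - true_lam)**2))
```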
Technical Note: Approximate Bayesian parameterization of a process-based tropical forest model
NASA Astrophysics Data System (ADS)
Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.
2014-02-01
Inverse parameter estimation of process-based models is a long-standing problem in many scientific disciplines. A key question for inverse parameter estimation is how to define the metric that quantifies how well model predictions fit to the data. This metric can be expressed by general cost or objective functions, but statistical inversion methods require a particular metric, the probability of observing the data given the model parameters, known as the likelihood. For technical and computational reasons, likelihoods for process-based stochastic models are usually based on general assumptions about variability in the observed data, and not on the stochasticity generated by the model. Only in recent years have new methods become available that allow the generation of likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional Markov chain Monte Carlo (MCMC) sampler, performs well in retrieving known parameter values from virtual inventory data generated by the forest model. We analyze the results of the parameter estimation, examine its sensitivity to the choice and aggregation of model outputs and observed data (summary statistics), and demonstrate the application of this method by fitting the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss how this approach differs from approximate Bayesian computation (ABC), another method commonly used to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can be successfully applied to process-based models of high complexity. The methodology is particularly suitable for heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models.
Mapping quantitative trait loci for binary trait in the F2:3 design.
Zhu, Chengsong; Zhang, Yuan-Ming; Guo, Zhigang
2008-12-01
In the analysis of inheritance of quantitative traits with low heritability, an F2:3 design that genotypes plants in F2 and phenotypes plants in F2:3 progeny is often used in plant genetics. Although statistical approaches for mapping quantitative trait loci (QTL) in the F2:3 design have been well developed, those for binary traits of biological interest and economic importance are seldom addressed. In this study, an attempt was made to map binary trait loci (BTL) in the F2:3 design. The fundamental idea was as follows: the F2 plants were genotyped, all phenotypic values of each F2:3 progeny were measured for the binary trait, and these binary trait values and the marker genotype information were used to detect BTL under the penetrance and liability models. The proposed method was verified by a series of Monte Carlo simulation experiments. These results showed that maximum likelihood approaches under the penetrance and liability models provide accurate estimates of the effects and locations of BTL with high statistical power, even under low heritability. Moreover, the penetrance model is as efficient as the liability model, and the F2:3 design is more efficient than the classical F2 design, even though only a single progeny is collected from each F2:3 family. With the maximum likelihood approaches under the penetrance and liability models developed in this study, we can map binary traits as we do quantitative traits in the F2:3 design.
BAO from Angular Clustering: Optimization and Mitigation of Theoretical Systematics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crocce, M.; et al.
We study the theoretical systematics and optimize the methodology in Baryon Acoustic Oscillations (BAO) detections using the angular correlation function with tomographic bins. We calibrate and optimize the pipeline for the Dark Energy Survey Year 1 dataset using 1800 mocks. We compare the BAO fitting results obtained with three estimators: the Maximum Likelihood Estimator (MLE), Profile Likelihood, and Markov Chain Monte Carlo. The MLE method yields the least bias in the fit results (bias/spread ~ 0.02) and the error bar derived is the closest to the Gaussian results (1% from the 68% Gaussian expectation). When there is a mismatch between the template and the data, due either to an incorrect fiducial cosmology or to photo-z error, the MLE again gives the least-biased results. The BAO angular shift estimated from the sound horizon and the angular diameter distance agrees with the numerical fit. Various analysis choices are further tested: the number of redshift bins, cross-correlations, and angular binning. We propose two methods to correct the mock covariance when the final sample properties are slightly different from those used to create the mock. We show that the sample changes can be accommodated with the help of the Gaussian covariance matrix or, more effectively, using the eigenmode expansion of the mock covariance. The eigenmode expansion is significantly less susceptible to statistical fluctuations relative to direct measurement of the covariance matrix, because the number of free parameters is substantially reduced (p parameters versus p(p+1)/2 from direct measurement).
Ou, Lu; Chow, Sy-Miin; Ji, Linying; Molenaar, Peter C M
2017-01-01
The autoregressive latent trajectory (ALT) model synthesizes the autoregressive model and the latent growth curve model. The ALT model is flexible enough to produce a variety of discrepant model-implied change trajectories. While some researchers consider this a virtue, others have cautioned that this may confound interpretations of the model's parameters. In this article, we show that some, but not all, of these interpretational difficulties may be clarified mathematically and tested explicitly via likelihood ratio tests (LRTs) imposed on the initial conditions of the model. We show analytically the nested relations among three variants of the ALT model and the constraints needed to establish equivalences. A Monte Carlo simulation study indicated that LRTs, particularly when used in combination with information criterion measures, can allow researchers to test targeted hypotheses about the functional forms of the change process under study. We further demonstrate when and how such tests may justifiably be used to facilitate our understanding of the underlying process of change using a subsample (N = 3,995) of longitudinal family income data from the National Longitudinal Survey of Youth.
Efficient Bayesian experimental design for contaminant source identification
NASA Astrophysics Data System (ADS)
Zhang, Jiangjiang; Zeng, Lingzao; Chen, Cheng; Chen, Dingjiang; Wu, Laosheng
2015-01-01
In this study, an efficient full Bayesian approach is developed for the optimal sampling well location design and source parameters identification of groundwater contaminants. An information measure, i.e., the relative entropy, is employed to quantify the information gain from concentration measurements in identifying unknown parameters. In this approach, the sampling locations that give the maximum expected relative entropy are selected as the optimal design. After the sampling locations are determined, a Bayesian approach based on Markov Chain Monte Carlo (MCMC) is used to estimate unknown parameters. In both the design and estimation, the contaminant transport equation is required to be solved many times to evaluate the likelihood. To reduce the computational burden, an interpolation method based on the adaptive sparse grid is utilized to construct a surrogate for the contaminant transport equation. The approximated likelihood can be evaluated directly from the surrogate, which greatly accelerates the design and estimation process. The accuracy and efficiency of our approach are demonstrated through numerical case studies. It is shown that the methods can be used to assist in both single sampling location and monitoring network design for contaminant source identifications in groundwater.
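The design criterion (choose the sampling location with maximum expected relative entropy between posterior and prior) can be sketched by nested Monte Carlo on a toy one-parameter forward model that stands in for the contaminant transport solver:

```python
import numpy as np

rng = np.random.default_rng(14)

# Expected information gain for candidate sampling locations, by nested
# Monte Carlo.  The forward model y = exp(-k * x) + noise is a toy
# stand-in; all ranges and noise levels are invented.
def forward(k, x):
    return np.exp(-k * x)

sig = 0.05
prior = rng.uniform(0.1, 2.0, 400)              # draws of the unknown rate k

def expected_ig(x, n_outer=200):
    ig = 0.0
    for k in prior[:n_outer]:
        y = forward(k, x) + rng.normal(0, sig)
        log_lik = -0.5 * ((y - forward(k, x)) / sig)**2
        # Monte Carlo evidence; normalizing constants cancel in the difference
        evid = np.mean(np.exp(-0.5 * ((y - forward(prior, x)) / sig)**2))
        ig += log_lik - np.log(evid)
    return ig / n_outer

candidates = np.linspace(0.1, 5.0, 15)
best = max(candidates, key=expected_ig)
print("optimal sampling location:", best)
```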
Eberhard, Wynn L
2017-04-01
The maximum likelihood estimator (MLE) is derived for retrieving the extinction coefficient and zero-range intercept in the lidar slope method in the presence of random and independent Gaussian noise. Least-squares fitting, weighted by the inverse of the noise variance, is equivalent to the MLE. Monte Carlo simulations demonstrate that two traditional least-squares fitting schemes, which use different weights, are less accurate. Alternative fitting schemes that have some positive attributes are introduced and evaluated. The principal factors governing accuracy of all these schemes are elucidated. Applying these schemes to data with Poisson rather than Gaussian noise alters accuracy little, even when the signal-to-noise ratio is low. Methods to estimate optimum weighting factors in actual data are presented. Even when the weighting estimates are coarse, retrieval accuracy declines only modestly. Mathematical tools are described for predicting retrieval accuracy. Least-squares fitting with inverse variance weighting has optimum accuracy for retrieval of parameters from single-wavelength lidar measurements when noise, errors, and uncertainties are Gaussian distributed, or close to optimum when only approximately Gaussian.
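In the slope method, ln(r^2 P(r)) is linear in range for a homogeneous path, so under independent Gaussian noise the MLE reduces to an inverse-variance weighted straight-line fit; a sketch on synthetic returns (all instrument numbers invented):

```python
import numpy as np

rng = np.random.default_rng(11)

# Slope method: ln(r^2 P(r)) = ln(C) - 2*sigma*r for a homogeneous path.
r = np.linspace(200.0, 2000.0, 100)                 # range gates [m]
sigma_true, C = 2e-4, 1e9                           # extinction [1/m], system constant
P = C * np.exp(-2 * sigma_true * r) / r**2          # noiseless return
noise_sd = 5.0                                      # additive detector noise
P_noisy = P + rng.normal(0, noise_sd, r.size)

S = np.log(r**2 * P_noisy)                          # transformed signal
var_S = (noise_sd / P_noisy)**2                     # delta-method variance of S
w = 1.0 / var_S                                     # inverse-variance weights

# Weighted straight-line fit S = b0 + b1 * r, then sigma = -b1 / 2
A = np.vstack([np.ones_like(r), r]).T
AtW = A.T * w                                       # equivalent to A.T @ diag(w)
b0, b1 = np.linalg.solve(AtW @ A, AtW @ S)
print(f"retrieved extinction {-b1 / 2:.2e} 1/m (true {sigma_true:.0e})")
```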
Statistical Signal Processing and the Motor Cortex
Brockwell, A.E.; Kass, R.E.; Schwartz, A.B.
2011-01-01
Over the past few decades, developments in technology have significantly improved the ability to measure activity in the brain. This has spurred a great deal of research into brain function and its relation to external stimuli, and has important implications in medicine and other fields. As a result of improved understanding of brain function, it is now possible to build devices that provide direct interfaces between the brain and the external world. We describe some of the current understanding of function of the motor cortex region. We then discuss a typical likelihood-based state-space model and filtering based approach to address the problems associated with building a motor cortical-controlled cursor or robotic prosthetic device. As a variation on previous work using this approach, we introduce the idea of using Markov chain Monte Carlo methods for parameter estimation in this context. By doing this instead of performing maximum likelihood estimation, it is possible to expand the range of possible models that can be explored, at a cost in terms of computational load. We demonstrate results obtained applying this methodology to experimental data gathered from a monkey. PMID:21765538
Determining the accuracy of maximum likelihood parameter estimates with colored residuals
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; Klein, Vladislav
1994-01-01
An important part of building high fidelity mathematical models based on measured data is calculating the accuracy associated with statistical estimates of the model parameters. Indeed, without some idea of the accuracy of parameter estimates, the estimates themselves have limited value. In this work, an expression based on theoretical analysis was developed to properly compute parameter accuracy measures for maximum likelihood estimates with colored residuals. This result is important because experience from the analysis of measured data reveals that the residuals from maximum likelihood estimation are almost always colored. The calculations involved can be appended to conventional maximum likelihood estimation algorithms. Simulated data runs were used to show that the parameter accuracy measures computed with this technique accurately reflect the quality of the parameter estimates from maximum likelihood estimation without the need for analysis of the output residuals in the frequency domain or heuristically determined multiplication factors. The result is general, although the application studied here is maximum likelihood estimation of aerodynamic model parameters from flight test data.
DarkBit: a GAMBIT module for computing dark matter observables and likelihoods
NASA Astrophysics Data System (ADS)
Bringmann, Torsten; Conrad, Jan; Cornell, Jonathan M.; Dal, Lars A.; Edsjö, Joakim; Farmer, Ben; Kahlhoefer, Felix; Kvellestad, Anders; Putze, Antje; Savage, Christopher; Scott, Pat; Weniger, Christoph; White, Martin; Wild, Sebastian
2017-12-01
We introduce DarkBit, an advanced software code for computing dark matter constraints on various extensions to the Standard Model of particle physics, comprising both new native code and interfaces to external packages. This release includes a dedicated signal yield calculator for gamma-ray observations, which significantly extends current tools by implementing a cascade-decay Monte Carlo, as well as a dedicated likelihood calculator for current and future experiments ( gamLike). This provides a general solution for studying complex particle physics models that predict dark matter annihilation to a multitude of final states. We also supply a direct detection package that models a large range of direct detection experiments ( DDCalc), and that provides the corresponding likelihoods for arbitrary combinations of spin-independent and spin-dependent scattering processes. Finally, we provide custom relic density routines along with interfaces to DarkSUSY, micrOMEGAs, and the neutrino telescope likelihood package nulike. DarkBit is written in the framework of the Global And Modular Beyond the Standard Model Inference Tool ( GAMBIT), providing seamless integration into a comprehensive statistical fitting framework that allows users to explore new models with both particle and astrophysics constraints, and a consistent treatment of systematic uncertainties. In this paper we describe its main functionality, provide a guide to getting started quickly, and show illustrative examples for results obtained with DarkBit (both as a stand-alone tool and as a GAMBIT module). This includes a quantitative comparison between two of the main dark matter codes ( DarkSUSY and micrOMEGAs), and application of DarkBit 's advanced direct and indirect detection routines to a simple effective dark matter model.
Rabosky, Daniel L; Mitchell, Jonathan S; Chang, Jonathan
2017-07-01
Bayesian analysis of macroevolutionary mixtures (BAMM) is a statistical framework that uses reversible jump Markov chain Monte Carlo to infer complex macroevolutionary dynamics of diversification and phenotypic evolution on phylogenetic trees. A recent article by Moore et al. (MEA) reported a number of theoretical and practical concerns with BAMM. Major claims from MEA are that (i) BAMM's likelihood function is incorrect, because it does not account for unobserved rate shifts; (ii) the posterior distribution on the number of rate shifts is overly sensitive to the prior; and (iii) diversification rate estimates from BAMM are unreliable. Here, we show that these and other conclusions from MEA are generally incorrect or unjustified. We first demonstrate that MEA's numerical assessment of the BAMM likelihood is compromised by their use of an invalid likelihood function. We then show that "unobserved rate shifts" appear to be irrelevant for biologically plausible parameterizations of the diversification process. We find that the purportedly extreme prior sensitivity reported by MEA cannot be replicated with standard usage of BAMM v2.5, or with any other version when conventional Bayesian model selection is performed. Finally, we demonstrate that BAMM performs very well at estimating diversification rate variation across the ~20% of simulated trees in MEA's data set for which it is theoretically possible to infer rate shifts with confidence. Due to ascertainment bias, the remaining 80% of their purportedly variable-rate phylogenies are statistically indistinguishable from those produced by a constant-rate birth-death process and were thus poorly suited for the summary statistics used in their performance assessment. We demonstrate that inferences about diversification rates have been accurate and consistent across all major previous releases of the BAMM software. We recognize an acute need to address the theoretical foundations of rate-shift models for phylogenetic trees, and we expect BAMM and other modeling frameworks to improve in response to mathematical and computational innovations. However, we remain optimistic that the imperfect tools currently available to comparative biologists have provided and will continue to provide important insights into the diversification of life on Earth. © The Author(s) 2017. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.
Pei, Yanbo; Tian, Guo-Liang; Tang, Man-Lai
2014-11-10
Stratified data analysis is an important research topic in many biomedical studies and clinical trials. In this article, we develop five test statistics for testing the homogeneity of proportion ratios for stratified correlated bilateral binary data based on an equal correlation model assumption. Bootstrap procedures based on these test statistics are also considered. To evaluate the performance of these statistics and procedures, we conduct Monte Carlo simulations to study their empirical sizes and powers under various scenarios. Our results suggest that the procedure based on the score statistic generally performs well and is highly recommended. When the sample size is large, procedures based on the commonly used weighted least squares estimate and on the logarithmic transformation with the Mantel-Haenszel estimate are recommended, as they do not involve computation of maximum likelihood estimates requiring iterative algorithms. We also derive approximate sample size formulas based on the recommended test procedures. Finally, we apply the proposed methods to analyze a multi-center randomized clinical trial for scleroderma patients. Copyright © 2014 John Wiley & Sons, Ltd.
Numerical modelling of glacial lake outburst floods using physically based dam-breach models
NASA Astrophysics Data System (ADS)
Westoby, M. J.; Brasington, J.; Glasser, N. F.; Hambrey, M. J.; Reynolds, J. M.; Hassan, M. A. A. M.; Lowe, A.
2015-03-01
The instability of moraine-dammed proglacial lakes creates the potential for catastrophic glacial lake outburst floods (GLOFs) in high-mountain regions. In this research, we use a unique combination of numerical dam-breach and two-dimensional hydrodynamic modelling, employed within a generalised likelihood uncertainty estimation (GLUE) framework, to quantify predictive uncertainty in model outputs associated with a reconstruction of the Dig Tsho failure in Nepal. Monte Carlo analysis was used to sample the model parameter space, and morphological descriptors of the moraine breach were used to evaluate model performance. Multiple breach scenarios were produced by differing parameter ensembles associated with a range of breach initiation mechanisms, including overtopping waves and mechanical failure of the dam face. The material roughness coefficient was found to exert a dominant influence over model performance. The downstream routing of scenario-specific breach hydrographs revealed significant differences in the timing and extent of inundation. A GLUE-based methodology for constructing probabilistic maps of inundation extent, flow depth, and hazard is presented and provides a useful tool for communicating uncertainty in GLOF hazard assessment.
VizieR Online Data Catalog: Kepler pipeline transit signal recovery. III. (Christiansen+, 2016)
NASA Astrophysics Data System (ADS)
Christiansen, J. L.; Clarke, B. D.; Burke, C. J.; Jenkins, J. M.; Bryson, S. T.; Coughlin, J. L.; Mullally, F.; Thompson, S. E.; Twicken, J. D.; Batalha, N. M.; Haas, M. R.; Catanzarite, J.; Campbell, J. R.; Uddin, A. K.; Zamudio, K.; Smith, J. C.; Henze, C. E.
2018-03-01
Here we describe the third transit injection experiment, which tests the entire Kepler observing baseline (Q1-Q17) for the first time across all 84 CCD channels. It was performed to measure the sensitivity of the Kepler pipeline used to generate the Q1-Q17 Data Release 24 (DR24) catalog of Kepler Objects of Interest (Coughlin et al. 2016, J/ApJS/224/12) available at the NASA Exoplanet Archive (Akeson et al. 2013PASP..125..989A). The average detection efficiency describes the likelihood that the Kepler pipeline would successfully recover a given transit signal. To measure this property we perform a Monte Carlo experiment where we inject the signatures of simulated transiting planets around 198154 target stars, one per star, across the focal plane starting with the Q1-Q17 DR24 calibrated pixels. The simulated transits are generated using the Mandel & Agol (2002ApJ...580L.171M) model. Of the injections, 159013 resulted in three or more injected transits (the minimum required for detection by the pipeline) and were used for the subsequent analysis. (1 data file).
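Given injected and recovered flags, the average detection efficiency is a binned binomial fraction; a sketch on synthetic injection results, with a logistic curve standing in for the real pipeline's recovery behavior:

```python
import numpy as np

rng = np.random.default_rng(12)

# Injection-recovery sketch: detection efficiency per bin of expected
# multiple-event statistic (MES).  All arrays here are synthetic.
n_inj = 159_013
mes = rng.uniform(4.0, 16.0, n_inj)
# toy recovery model: a smooth ramp standing in for the real pipeline
recovered = rng.uniform(size=n_inj) < 1 / (1 + np.exp(-(mes - 8.0)))

bins = np.arange(4.0, 16.5, 0.5)
idx = np.digitize(mes, bins) - 1
for b in range(len(bins) - 1):
    sel = idx == b
    if sel.any():
        eff = recovered[sel].mean()
        err = np.sqrt(eff * (1 - eff) / sel.sum())      # binomial error
        print(f"MES {bins[b]:4.1f}-{bins[b+1]:4.1f}: "
              f"eff = {eff:.2f} +/- {err:.2f}")
```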
Accelerated Monte Carlo Simulation for Safety Analysis of the Advanced Airspace Concept
NASA Technical Reports Server (NTRS)
Thipphavong, David
2010-01-01
Safe separation of aircraft is a primary objective of any air traffic control system. An accelerated Monte Carlo approach was developed to assess the level of safety provided by a proposed next-generation air traffic control system. It combines features of fault tree and standard Monte Carlo methods. It runs more than one order of magnitude faster than the standard Monte Carlo method while providing risk estimates that only differ by about 10%. It also preserves component-level model fidelity that is difficult to maintain using the standard fault tree method. This balance of speed and fidelity allows sensitivity analysis to be completed in days instead of weeks or months with the standard Monte Carlo method. Results indicate that risk estimates are sensitive to transponder, pilot visual avoidance, and conflict detection failure probabilities.
NASA Technical Reports Server (NTRS)
Bennett, C. L.; Kogut, A.; Hinshaw, G.; Banday, A. J.; Wright, E. L.; Gorski, K. M.; Wilkinson, D. T.; Weiss, R.; Smoot, G. F.; Meyer, S. S.
1994-01-01
The first two years of Cosmic Background Explorer (COBE) Differential Microwave Radiometers (DMR) observations of the cosmic microwave background (CMB) anisotropy are analyzed and compared with our previously published first year results. The results are consistent, but the addition of the second year of data increases the precision and accuracy of the detected CMB temperature fluctuations. The 2 yr 53 GHz data are characterized by rms temperature fluctuations of ΔT_rms(7°) = 44 ± 7 μK and ΔT_rms(10°) = 30.5 ± 2.7 μK at 7° and 10° angular resolution, respectively. The 53 x 90 GHz cross-correlation amplitude at zero lag is C(0)^(1/2) = 36 ± 5 μK (68% CL) for the unsmoothed (7° resolution) DMR data. We perform a likelihood analysis of the cross-correlation function, with Monte Carlo simulations to infer biases of the method, for a power-law model of initial density fluctuations, P(k) ∝ k^n. The Monte Carlo simulations indicate that derived estimates of n are biased by +0.11 ± 0.01, while the subset of simulations with a low quadrupole (as observed) indicates a bias of +0.31 ± 0.04. Derived values for 68% confidence intervals are given corrected (and not corrected) for our estimated biases. Including the quadrupole anisotropy, the most likely quadrupole-normalized amplitude is Q_rms-PS = 14.3 (+5.2/-3.3) μK (12.8 (+5.2/-3.3) μK) with a spectral index n = 1.42 (+0.49/-0.55) (n = 1.53 (+0.49/-0.55)). With n fixed to 1.0, the most likely amplitude is 18.2 ± 1.5 μK (17.4 ± 1.5 μK). The marginal likelihood of n is 1.42 ± 0.37 (1.53 ± 0.37). Excluding the quadrupole anisotropy, the most likely quadrupole-normalized amplitude is Q_rms-PS = 17.4 (+7.5/-5.2) μK (15.8 (+7.5/-5.2) μK) with a spectral index n = 1.11 (+0.60/-0.55) (n = 1.22 (+0.60/-0.55)). With n fixed to 1.0, the most likely amplitude is 18.6 ± 1.6 μK (18.2 ± 1.6 μK). The marginal likelihood of n is 1.11 ± 0.40 (1.22 ± 0.40). Our best estimate of the dipole from the 2 yr DMR data is 3.363 ± 0.024 mK toward Galactic coordinates (l, b) = (264.4° ± 0.2°, 48.1° ± 0.4°), and our best estimate of the rms quadrupole amplitude in our sky is 6 ± 3 μK (68% CL).
US Transuranium and Uranium Registries case study on accidental exposure to uranium hexafluoride.
Avtandilashvili, Maia; Puncher, Matthew; McComish, Stacey L; Tolmachev, Sergei Y
2015-03-01
The United States Transuranium and Uranium Registries' (USTUR) whole-body donor (Case 1031) was exposed to an acute inhalation of uranium hexafluoride (UF6) produced from an explosion at a uranium processing plant 65 years prior to his death. The USTUR measurements of tissue samples collected at the autopsy indicated long-term retention of inhaled slightly enriched uranium material (0.85% (235)U) in the deep lungs and thoracic lymph nodes. In the present study, the authors combined the tissue measurement results with historical bioassay data, and analysed them with International Commission on Radiological Protection (ICRP) respiratory tract models and the ICRP Publication 69 systemic model for uranium using maximum likelihood and Bayesian statistical methods. The purpose of the analysis was to estimate intakes and model parameter values that best describe the data, and evaluate their effect on dose assessment. The maximum likelihood analysis, which used the ICRP Publication 66 human respiratory tract model, resulted in a point estimate of 79 mg of uranium for the occupational intake composed of 86% soluble, type F material and 14% insoluble, type S material. For the Bayesian approach, the authors applied the Markov Chain Monte Carlo method, but this time used the revised human respiratory tract model, which is currently being used by ICRP to calculate new dose coefficients for workers. The Bayesian analysis estimated that the mean uranium intake was 160 mg, and calculated the case-specific lung dissolution parameters with their associated uncertainties. The parameters were consistent with the inhaled uranium material being predominantly soluble with a small but significant insoluble component. The 95% posterior range of the rapid dissolution fraction (the fraction of deposited material that is absorbed to blood rapidly) was 0.12 to 0.91 with a median of 0.37. The remaining fraction was absorbed slowly, with a 95% range of 0.000 22 d(-1) to 0.000 36 d(-1) and a median of 0.000 31 d(-1). The effective dose per unit intake calculated using the dissolution parameters derived from the maximum likelihood and the Bayesian analyses was higher than the current ICRP dose coefficient for type F uranium by a factor of 2 or 7, respectively; the higher value of the latter was due to use of the revised respiratory tract model. The dissolution parameter values obtained here may be more appropriate to use for radiation protection purposes when individuals are exposed to a UF6 mixture that contains an insoluble uranium component.
Assessing the performance of a covert automatic target recognition algorithm
NASA Astrophysics Data System (ADS)
Ehrman, Lisa M.; Lanterman, Aaron D.
2005-05-01
Passive radar systems exploit illuminators of opportunity, such as TV and FM radio, to illuminate potential targets. Doing so allows them to operate covertly and inexpensively. Our research seeks to enhance passive radar systems by adding automatic target recognition (ATR) capabilities. In previous papers we proposed conducting ATR by comparing the radar cross section (RCS) of aircraft detected by a passive radar system to the precomputed RCS of aircraft in the target class. To effectively model the low-frequency setting, the comparison is made via a Rician likelihood model. Monte Carlo simulations indicate that the approach is viable. This paper builds on that work by developing a method for quickly assessing the potential performance of the ATR algorithm without using exhaustive Monte Carlo trials. This method exploits the relation between the probability of error in a binary hypothesis test under the Bayesian framework and the Chernoff information. Since the data are well-modeled as Rician, we begin by deriving a closed-form approximation for the Chernoff information between two Rician densities. This leads to an approximation for the probability of error in the classification algorithm that is a function of the number of available measurements. We conclude with an application that would be particularly cumbersome to accomplish via Monte Carlo trials, but that can be quickly addressed using the Chernoff information approach. This application evaluates the length of time that an aircraft must be tracked before the probability of error in the ATR algorithm drops below a desired threshold.
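The Chernoff-information bound described above can be reproduced numerically. The sketch below computes C = -min over 0 < s < 1 of log of the integral of p(x)^s q(x)^(1-s) between two Rician densities by quadrature, and applies the bound P(error) roughly exp(-N*C) for N independent measurements. The Rician parameters are illustrative, and direct numerical integration stands in for the paper's closed-form approximation.

```python
import numpy as np
from scipy import stats, integrate, optimize

def chernoff_information(p, q, upper=50.0):
    """Chernoff information C = -min_s log integral p^s q^(1-s) via quadrature."""
    def log_affinity(s):
        integrand = lambda x: p.pdf(x)**s * q.pdf(x)**(1 - s)
        val, _ = integrate.quad(integrand, 0.0, upper, limit=200)
        return np.log(val)
    res = optimize.minimize_scalar(log_affinity, bounds=(1e-3, 1 - 1e-3),
                                   method="bounded")
    return -res.fun

# Two hypothetical Rician amplitude models (parameters illustrative only).
p = stats.rice(b=2.0, scale=1.0)
q = stats.rice(b=3.0, scale=1.0)
C = chernoff_information(p, q)
for n_meas in (5, 10, 20):
    print(f"N={n_meas:2d}: Chernoff bound on P(error) ~ {np.exp(-n_meas*C):.2e}")
```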
NASA Astrophysics Data System (ADS)
Le Foll, S.; André, F.; Delmas, A.; Bouilly, J. M.; Aspa, Y.
2012-06-01
A backward Monte Carlo method for modelling the spectral directional emittance of fibrous media has been developed. It uses Mie theory to calculate the radiative properties of single fibres, modelled as infinite cylinders, and the complex refractive index is computed by a Drude-Lorenz model for the dielectric function. The absorption and scattering coefficients are homogenised over several fibres, but the scattering phase function of a single fibre is used to determine the scattering direction of energy inside the medium. A sensitivity analysis based on several Monte Carlo results has been performed to estimate coefficients for a Multi-Linear Model (MLM) specifically developed for inverse analysis of experimental data. This model agrees well with the Monte Carlo method and is highly computationally efficient. In contrast, the surface emissivity model, which assumes an opaque medium, shows poor agreement with the reference Monte Carlo calculations.
Monte Carlo capabilities of the SCALE code system
Rearden, Bradley T.; Petrie, Jr., Lester M.; Peplow, Douglas E.; ...
2014-09-12
SCALE is a broadly used suite of tools for nuclear systems modeling and simulation that provides comprehensive, verified and validated, user-friendly capabilities for criticality safety, reactor physics, radiation shielding, and sensitivity and uncertainty analysis. For more than 30 years, regulators, licensees, and research institutions around the world have used SCALE for nuclear safety analysis and design. SCALE provides a “plug-and-play” framework that includes three deterministic and three Monte Carlo radiation transport solvers that can be selected based on the desired solution, including hybrid deterministic/Monte Carlo simulations. SCALE includes the latest nuclear data libraries for continuous-energy and multigroup radiation transport as well as activation, depletion, and decay calculations. SCALE’s graphical user interfaces assist with accurate system modeling, visualization, and convenient access to desired results. SCALE 6.2 will provide several new capabilities and significant improvements in many existing features, especially with expanded continuous-energy Monte Carlo capabilities for criticality safety, shielding, depletion, and sensitivity and uncertainty analysis. Finally, an overview of the Monte Carlo capabilities of SCALE is provided here, with emphasis on new features for SCALE 6.2.
Zero-inflated spatio-temporal models for disease mapping.
Torabi, Mahmoud
2017-05-01
In this paper, our aim is to analyze geographical and temporal variability of disease incidence when spatio-temporal count data have excess zeros. To that end, we consider random effects in zero-inflated Poisson models to investigate geographical and temporal patterns of disease incidence. Spatio-temporal models that employ conditionally autoregressive smoothing across the spatial dimension and B-spline smoothing over the temporal dimension are proposed. The analysis of these complex models is computationally difficult from the frequentist perspective. On the other hand, the advent of the Markov chain Monte Carlo algorithm has made the Bayesian analysis of complex models computationally convenient. The recently developed data cloning method provides a frequentist approach to mixed models that is also computationally convenient. We propose to use data cloning, which yields maximum likelihood estimates, to conduct frequentist analysis of zero-inflated spatio-temporal modeling of disease incidence. One of the advantages of the data cloning approach is that the predictions and corresponding standard errors (or prediction intervals) of smoothed disease incidence over space and time are easily obtained. We illustrate our approach using a real dataset of monthly children asthma visits to hospital in the province of Manitoba, Canada, during the period April 2006 to March 2010. Performance of our approach is also evaluated through a simulation study. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
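For reference, the zero-inflated Poisson observation density at the core of these models mixes a structural-zero component with an ordinary Poisson count. A minimal sketch of its log-likelihood follows; the random effects, CAR and B-spline smoothing, and data cloning machinery are omitted, and the parameter values are illustrative.

```python
import numpy as np
from scipy.special import gammaln

def zip_loglik(y, lam, pi):
    """Log-likelihood of a zero-inflated Poisson: with probability pi the
    count is a structural zero, otherwise it is Poisson(lam)."""
    y = np.asarray(y)
    log_pois = -lam + y * np.log(lam) - gammaln(y + 1)
    ll_zero = np.log(pi + (1 - pi) * np.exp(-lam))   # zero can arise two ways
    ll_pos = np.log(1 - pi) + log_pois
    return np.sum(np.where(y == 0, ll_zero, ll_pos))

counts = np.array([0, 0, 0, 1, 0, 2, 0, 0, 3, 0])
print(zip_loglik(counts, lam=1.2, pi=0.4))
```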
A general framework for the regression analysis of pooled biomarker assessments.
Liu, Yan; McMahan, Christopher; Gallagher, Colin
2017-07-10
As a cost-efficient data collection mechanism, the process of assaying pooled biospecimens is becoming increasingly common in epidemiological research; for example, pooling has been proposed for the purpose of evaluating the diagnostic efficacy of biological markers (biomarkers). To this end, several authors have proposed techniques that allow for the analysis of continuous pooled biomarker assessments. Regretfully, most of these techniques proceed under restrictive assumptions, are unable to account for the effects of measurement error, and fail to control for confounding variables. These limitations are understandably attributable to the complex structure that is inherent to measurements taken on pooled specimens. Consequently, in order to provide practitioners with the tools necessary to accurately and efficiently analyze pooled biomarker assessments, herein, a general Monte Carlo maximum likelihood-based procedure is presented. The proposed approach allows for the regression analysis of pooled data under practically all parametric models and can be used to directly account for the effects of measurement error. Through simulation, it is shown that the proposed approach can accurately and efficiently estimate all unknown parameters and is more computationally efficient than existing techniques. This new methodology is further illustrated using monocyte chemotactic protein-1 data collected by the Collaborative Perinatal Project in an effort to assess the relationship between this chemokine and the risk of miscarriage. Copyright © 2017 John Wiley & Sons, Ltd.
Bayesian analysis of biogeography when the number of areas is large.
Landis, Michael J; Matzke, Nicholas J; Moore, Brian R; Huelsenbeck, John P
2013-11-01
Historical biogeography is increasingly studied from an explicitly statistical perspective, using stochastic models to describe the evolution of species range as a continuous-time Markov process of dispersal between and extinction within a set of discrete geographic areas. The main constraint of these methods is the computational limit on the number of areas that can be specified. We propose a Bayesian approach for inferring biogeographic history that extends the application of biogeographic models to the analysis of more realistic problems that involve a large number of areas. Our solution is based on a "data-augmentation" approach, in which we first populate the tree with a history of biogeographic events that is consistent with the observed species ranges at the tips of the tree. We then calculate the likelihood of a given history by adopting a mechanistic interpretation of the instantaneous-rate matrix, which specifies both the exponential waiting times between biogeographic events and the relative probabilities of each biogeographic change. We develop this approach in a Bayesian framework, marginalizing over all possible biogeographic histories using Markov chain Monte Carlo (MCMC). Besides dramatically increasing the number of areas that can be accommodated in a biogeographic analysis, our method allows the parameters of a given biogeographic model to be estimated and different biogeographic models to be objectively compared. Our approach is implemented in the program BayArea.
No rationale for 1 variable per 10 events criterion for binary logistic regression analysis.
van Smeden, Maarten; de Groot, Joris A H; Moons, Karel G M; Collins, Gary S; Altman, Douglas G; Eijkemans, Marinus J C; Reitsma, Johannes B
2016-11-24
Ten events per variable (EPV) is a widely advocated minimal criterion for sample size considerations in logistic regression analysis. Of three previous simulation studies that examined this minimal EPV criterion, only one supports the use of a minimum of 10 EPV. In this paper, we examine the reasons for substantial differences between these extensive simulation studies. The current study uses Monte Carlo simulations to evaluate small sample bias, coverage of confidence intervals and mean square error of logit coefficients. Logistic regression models fitted by maximum likelihood and a modified estimation procedure, known as Firth's correction, are compared. The results show that besides EPV, the problems associated with low EPV depend on other factors such as the total sample size. It is also demonstrated that simulation results can be dominated by even a few simulated data sets for which the prediction of the outcome by the covariates is perfect ('separation'). We reveal that different approaches for identifying and handling separation lead to substantially different simulation results. We further show that Firth's correction can be used to improve the accuracy of regression coefficients and alleviate the problems associated with separation. The current evidence supporting EPV rules for binary logistic regression is weak. Given our findings, there is an urgent need for new research to provide guidance for supporting sample size considerations for binary logistic regression analysis.
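The flavor of such an EPV simulation can be captured in a few lines: simulate logistic data at several sample sizes, fit by maximum likelihood, and track the bias of the slope estimate. This is a minimal sketch using plain maximum likelihood only (Firth's correction is not included here); the coefficients, sample sizes, and the crude handling of separation are all illustrative.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
beta_true = np.array([-2.0, 0.7])       # intercept, slope (illustrative)

def simulate_bias(n, n_rep=500):
    est = []
    for _ in range(n_rep):
        x = rng.normal(size=n)
        X = sm.add_constant(x)
        p = 1 / (1 + np.exp(-X @ beta_true))
        y = rng.binomial(1, p)
        if y.sum() in (0, n):           # degenerate set: no events at all
            continue
        try:
            fit = sm.Logit(y, X).fit(disp=0)
            est.append(fit.params[1])
        except Exception:               # crude flag for (near-)separation
            continue
    return np.mean(est) - beta_true[1]

for n in (50, 100, 400):                # EPV grows with n here
    print(f"n={n:4d}: slope bias {simulate_bias(n):+.3f}")
```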
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, Jonathan R.; Burnett, Damon J.
Sandia National Laboratories operates the Scaled Wind Farm Technology Facility (SWiFT) on behalf of the Department of Energy Wind and Water Power Technologies Office. An analysis was performed to evaluate the hazards associated with debris thrown from one of SWiFT’s operating wind turbines, assuming a catastrophic failure. A Monte Carlo analysis was conducted to assess the complex variable space associated with debris throw hazards that included wind speed, wind direction, azimuth and pitch angles of the blade, and percentage of the blade that was separated. In addition, a set of high fidelity explicit dynamic finite element simulations were performed to determine the threshold impact energy envelope for the turbine control building located on-site. Assuming that all of the layered, independent, passive and active engineered safety systems and administrative procedures failed (a 100% failure rate of the safety systems), the likelihood of the control building being struck was calculated to be less than 5/10,000 and ballistic simulations showed that the control building would not provide passive protection for the majority of impact scenarios. Although options exist to improve the ballistic resistance of the control building, the recommendation is not to pursue them because there is a low probability of strike and there is an equal likelihood personnel could be located at similar distances in other areas of the SWiFT facility which are not passively protected, while the turbines are operating. A fenced exclusion area has been created around the turbines which restricts access to the boundary of the 1/100 strike probability. The overall recommendation is to neither relocate nor improve passive protection of the control building as the turbine safety systems have been improved to have no less than two independent, redundant, high quality engineered safety systems. Considering this, in combination with a control building strike probability of less than 5/10,000, the overall probability of turbine debris striking the control building is less than 1/1,000,000.
NASA Astrophysics Data System (ADS)
Sadegh, M.; Vrugt, J. A.
2011-12-01
In the past few years, several contributions have begun to appear in the hydrologic literature that introduced and analyzed the benefits of using a signature-based approach to watershed analysis. This signature-based approach abandons the standard single-criterion model-data fitting paradigm in favor of a diagnostic approach that better extracts the available information from the available data. Despite the prospects of this new viewpoint, rather ad hoc criteria have hitherto been proposed to improve watershed modeling. Here, we aim to provide a proper mathematical foundation to signature-based analysis. We analyze the information content of different data transformations by analyzing their convergence speed with Markov Chain Monte Carlo (MCMC) simulation using the generalized likelihood function of Schoups and Vrugt (2010). We compare the information content of the original discharge data against a simple square root and Box-Cox transformation of the streamflow data. We benchmark these results against wavelet and flow duration curve transformations that temporally disaggregate the discharge data. Our results conclusively demonstrate that wavelet transformations and flow duration curves significantly reduce the information content of the streamflow data and consequently unnecessarily increase the uncertainty of the HYMOD model parameters. Hydrologic signatures thus need to be found in the original data, without temporal disaggregation.
Chou, C P; Bentler, P M; Satorra, A
1991-11-01
Research studying robustness of maximum likelihood (ML) statistics in covariance structure analysis has concluded that test statistics and standard errors are biased under severe non-normality. An estimation procedure known as asymptotic distribution free (ADF), making no distributional assumption, has been suggested to avoid these biases. Corrections to the normal theory statistics to yield more adequate performance have also been proposed. This study compares the performance of a scaled test statistic and robust standard errors for two models under several non-normal conditions and also compares these with the results from ML and ADF methods. Both ML and ADF test statistics performed rather well in one model and considerably worse in the other. In general, the scaled test statistic seemed to behave better than the ML test statistic and the ADF statistic performed the worst. The robust and ADF standard errors yielded more appropriate estimates of sampling variability than the ML standard errors, which were usually downward biased, in both models under most of the non-normal conditions. ML test statistics and standard errors were found to be quite robust to the violation of the normality assumption when data had either symmetric and platykurtic distributions, or non-symmetric and zero kurtotic distributions.
A Bayesian Framework for Reliability Analysis of Spacecraft Deployments
NASA Technical Reports Server (NTRS)
Evans, John W.; Gallo, Luis; Kaminsky, Mark
2012-01-01
Deployable subsystems are essential to mission success of most spacecraft. These subsystems enable critical functions including power, communications and thermal control. The loss of any of these functions will generally result in loss of the mission. These subsystems and their components often consist of unique designs and applications for which various standardized data sources are not applicable for estimating reliability and for assessing risks. In this study, a two-stage sequential Bayesian framework for reliability estimation of spacecraft deployment was developed for this purpose. This process was then applied to the James Webb Space Telescope (JWST) Sunshield subsystem, a unique design intended for thermal control of the Optical Telescope Element. Initially, detailed studies of NASA deployment history, "heritage information", were conducted, extending over 45 years of spacecraft launches. This information was then coupled to a non-informative prior and a binomial likelihood function to create a posterior distribution for deployments of various subsystems using Markov chain Monte Carlo sampling. Select distributions were then coupled to a subsequent analysis, using test data and anomaly occurrences on successive ground test deployments of scale model test articles of JWST hardware, to update the NASA heritage data. This allowed for a realistic prediction for the reliability of the complex Sunshield deployment, with credibility limits, within this two-stage Bayesian framework.
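A conjugate simplification conveys the two-stage logic: a non-informative Beta prior is updated with binomial heritage data, and the resulting posterior becomes the prior for the ground-test data. The counts below are illustrative, and the analytic conjugate update stands in for the MCMC sampling used in the study.

```python
from scipy import stats

# Stage 1: heritage data with a non-informative Jeffreys Beta(0.5, 0.5) prior
# and a binomial likelihood. Counts are illustrative, not NASA data.
heritage_n, heritage_fail = 120, 3
a1 = 0.5 + (heritage_n - heritage_fail)   # posterior "successes"
b1 = 0.5 + heritage_fail                  # posterior "failures"

# Stage 2: the stage-1 posterior serves as the prior for ground-test data.
test_n, test_fail = 20, 1
a2 = a1 + (test_n - test_fail)
b2 = b1 + test_fail

post = stats.beta(a2, b2)
print(f"deployment reliability: median {post.median():.4f}, "
      f"90% credible interval ({post.ppf(0.05):.4f}, {post.ppf(0.95):.4f})")
```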
GPUs for statistical data analysis in HEP: a performance study of GooFit on GPUs vs. RooFit on CPUs
NASA Astrophysics Data System (ADS)
Pompili, Alexis; Di Florio, Adriano; CMS Collaboration
2016-10-01
In order to test the computing capabilities of GPUs with respect to traditional CPU cores, a high-statistics toy Monte Carlo technique has been implemented both in ROOT/RooFit and GooFit frameworks with the purpose to estimate the statistical significance of the structure observed by CMS close to the kinematical boundary of the J/ψϕ invariant mass in the three-body decay B+ → J/ψϕK+. GooFit is an open data analysis tool under development that interfaces ROOT/RooFit to the CUDA platform on nVidia GPUs. The optimized GooFit application running on GPUs hosted by servers in the Bari Tier2 provides striking speed-up performance with respect to the RooFit application parallelised on multiple CPUs by means of the PROOF-Lite tool. The resulting speed-up, obtained by comparing concurrent GooFit processes allowed by the CUDA Multi Process Service with a RooFit/PROOF-Lite process with multiple CPU workers, is presented and discussed in detail. By means of GooFit it has also been possible to explore the behaviour of a likelihood ratio test statistic in different situations in which the Wilks Theorem may apply or does not apply because its regularity conditions are not satisfied.
A Markov Chain Monte Carlo Approach to Confirmatory Item Factor Analysis
ERIC Educational Resources Information Center
Edwards, Michael C.
2010-01-01
Item factor analysis has a rich tradition in both the structural equation modeling and item response theory frameworks. The goal of this paper is to demonstrate a novel combination of various Markov chain Monte Carlo (MCMC) estimation routines to estimate parameters of a wide variety of confirmatory item factor analysis models. Further, I show…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hill, J.R.; Heger, A.S.; Koen, B.V.
1984-04-01
This report is the result of a preliminary feasibility study of the applicability of Stein and related parametric empirical Bayes (PEB) estimators to the Nuclear Plant Reliability Data System (NPRDS). A new estimator is derived for the means of several independent Poisson distributions with different sampling times. This estimator is applied to data from NPRDS in an attempt to improve failure rate estimation. Theoretical and Monte Carlo results indicate that the new PEB estimator can perform significantly better than the standard maximum likelihood estimator if the estimation of the individual means can be combined through the loss function or through a parametric class of prior distributions.
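A crude version of such a PEB estimator can be sketched as follows: assume a Gamma prior over the Poisson rates, fit its two parameters by the method of moments, and shrink each raw rate x_i/t_i toward the prior mean via the posterior mean (alpha + x_i)/(beta + t_i). This is not the report's derivation; the counts, exposure times, and the simple moment estimator are all illustrative.

```python
import numpy as np

# Failure counts x_i observed over different exposure times t_i (illustrative).
x = np.array([0, 2, 1, 5, 3, 0, 1])
t = np.array([1.0, 3.0, 2.0, 6.0, 4.0, 0.5, 2.5])

raw = x / t                                  # per-component MLEs

# Method-of-moments fit of a Gamma(alpha, beta) prior on the rates:
# sample variance of raw rates ~ between-component variance + mean(lambda/t).
m, v = raw.mean(), raw.var(ddof=1)
extra = max(v - np.mean(raw / t), 1e-9)      # between-component variance
beta = m / extra
alpha = m * beta

peb = (alpha + x) / (beta + t)               # posterior-mean (shrinkage) rates
for i, (r, s) in enumerate(zip(raw, peb)):
    print(f"component {i}: MLE {r:.3f} -> PEB {s:.3f}")
```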
NASA Astrophysics Data System (ADS)
Olson, R.; Evans, J. P.; Fan, Y.
2015-12-01
NARCliM (NSW/ACT Regional Climate Modelling Project) is a regional climate project for Australia and the surrounding region. It dynamically downscales 4 General Circulation Models (GCMs) using three Regional Climate Models (RCMs) to provide climate projections for the CORDEX-AustralAsia region at 50 km resolution, and for south-east Australia at 10 km resolution. The project differs from previous work in the level of sophistication of model selection. Specifically, the selection process for GCMs included (i) conducting a literature review to evaluate model performance, (ii) analysing model independence, and (iii) selecting models that span the future temperature and precipitation change space. RCMs for downscaling the GCMs were chosen based on their performance for several precipitation events over South-East Australia, and on model independence. Bayesian Model Averaging (BMA) provides a statistically consistent framework for weighting the models based on their likelihood given the available observations. These weights are used to provide probability distribution functions (pdfs) for model projections. We develop a BMA framework for constructing probabilistic climate projections for spatially-averaged variables from the NARCliM project. The first step in the procedure is smoothing model output in order to exclude the influence of internal climate variability. Our statistical model for model-observation residuals is a homoskedastic iid process. Comparing RCMs with Australian Water Availability Project (AWAP) observations is used to determine model weights through Monte Carlo integration. Posterior pdfs of statistical parameters of model-data residuals are obtained using Markov Chain Monte Carlo. The uncertainty in the properties of the model-data residuals is fully accounted for when constructing the projections. We present the preliminary results of the BMA analysis for yearly maximum temperature for New South Wales state planning regions for the period 2060-2079.
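The BMA weighting step can be sketched generically: each model's marginal likelihood is approximated by Monte Carlo integration of the likelihood over a prior on the residual parameters, and the weights are the normalized marginal likelihoods. Everything below (the Gaussian residual model, the uniform prior, the synthetic data) is an illustrative stand-in for the NARCliM setup.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
obs = rng.normal(1.0, 0.5, size=30)          # stand-in for AWAP comparisons

# Each "model" k predicts a climatological mean mu_k; residuals are iid
# homoskedastic Gaussian, mirroring the statistical model described above.
model_means = [0.8, 1.1, 1.6]

def marginal_likelihood(mu_k, n_prior=5000):
    # Monte Carlo integration over a prior on the residual s.d.:
    # p(y | M_k) ~= mean over prior draws of p(y | sigma, M_k).
    sigmas = rng.uniform(0.1, 2.0, size=n_prior)
    loglik = np.array([stats.norm.logpdf(obs, mu_k, s).sum() for s in sigmas])
    return np.exp(loglik).mean()

ml = np.array([marginal_likelihood(mu) for mu in model_means])
weights = ml / ml.sum()
print("BMA weights:", np.round(weights, 3))
```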
Using a multinomial tree model for detecting mixtures in perceptual detection
Chechile, Richard A.
2014-01-01
In the area of memory research there have been two rival approaches for memory measurement—signal detection theory (SDT) and multinomial processing trees (MPT). Both approaches provide measures for the quality of the memory representation, and both approaches provide for corrections for response bias. In recent years there has been a strong case advanced for the MPT approach because of the finding of stochastic mixtures on both target-present and target-absent tests. In this paper a case is made that perceptual detection, like memory recognition, involves a mixture of processes that are readily represented as an MPT model. The Chechile (2004) 6P memory measurement model is modified in order to apply to the case of perceptual detection. This new MPT model is called the Perceptual Detection (PD) model. The properties of the PD model are developed, and the model is applied to some existing data of a radiologist examining CT scans. The PD model brings out novel features that were absent from a standard SDT analysis. Also the topic of optimal parameter estimation on an individual-observer basis is explored with Monte Carlo simulations. These simulations reveal that the mean of the Bayesian posterior distribution is a more accurate estimator than the corresponding maximum likelihood estimator (MLE). Monte Carlo simulations also indicate that model estimates based on only the data from an individual observer can be improved upon (in the sense of being more accurate) by an adjustment that takes into account the parameter estimate based on the data pooled across all the observers. The adjustment of the estimate for an individual is discussed as an analogous statistical effect to the improvement over the individual MLE demonstrated by the James–Stein shrinkage estimator in the case of the multiple-group normal model. PMID:25018741
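The James–Stein adjustment mentioned at the end has a compact positive-part form: shrink each observer's estimate toward the grand mean by a factor determined by the between-observer spread. A minimal sketch, assuming a common known sampling variance; the per-observer values are illustrative, not from the paper.

```python
import numpy as np

def james_stein(theta_hat, sigma2):
    """Positive-part James-Stein shrinkage toward the grand mean
    (multiple-group normal model); sigma2 is the common sampling
    variance of each per-observer estimate."""
    k = len(theta_hat)
    grand = theta_hat.mean()
    ss = np.sum((theta_hat - grand) ** 2)
    shrink = max(0.0, 1.0 - (k - 3) * sigma2 / ss)
    return grand + shrink * (theta_hat - grand)

per_observer = np.array([0.62, 0.71, 0.55, 0.80, 0.66])  # illustrative MLEs
print(james_stein(per_observer, sigma2=0.01))
```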
NASA Astrophysics Data System (ADS)
Sykes, J. F.; Kang, M.; Thomson, N. R.
2007-12-01
The TCE release from The Lockformer Company in Lisle, Illinois, resulted in a plume in a confined aquifer that is more than 4 km long and impacted more than 300 residential wells. Many of the wells are on the fringe of the plume and have concentrations that did not exceed 5 ppb. The settlement for the Chapter 11 bankruptcy protection of Lockformer involved the establishment of a trust fund that compensates individuals with cancers, with payments being based on cancer type, estimated TCE concentration in the well and the duration of exposure to TCE. The estimation of early arrival times and hence low-likelihood events is critical in the determination of the eligibility of an individual for compensation. Thus, an emphasis must be placed on the accuracy of the leading tail region in the likelihood distribution of possible arrival times at a well. The estimation of TCE arrival time, using a three-dimensional analytical solution, involved parameter estimation and uncertainty analysis. Parameters in the model included TCE source parameters, groundwater velocities, dispersivities and the TCE decay coefficient for both the confining layer and the bedrock aquifer. Numerous objective functions, which include the well-known L2-estimator, robust estimators (L1-estimators and M-estimators), penalty functions, and dead zones, were incorporated in the parameter estimation process to treat insufficiencies in both the model and observational data due to errors, biases, and limitations. The concept of equifinality was adopted and multiple maximum likelihood parameter sets were accepted if pre-defined physical criteria were met. The criteria ensured that a valid solution predicted TCE concentrations for all TCE-impacted areas. Monte Carlo sampling is found to be inadequate for uncertainty analysis of this case study due to its inability to find parameter sets that meet the predefined physical criteria. Successful results are achieved using a Dynamically-Dimensioned Search sampling methodology that inherently accounts for parameter correlations and does not require assumptions regarding parameter distributions. For uncertainty analysis, multiple parameter sets were obtained using a modified Cauchy's M-estimator. Penalty functions had to be incorporated into the objective function definitions to generate a sufficient number of acceptable parameter sets. The combined effect of optimization and the application of the physical criteria perform the function of behavioral thresholds by reducing anomalies and by removing parameter sets with high objective function values. The factors that are important to the creation of an uncertainty envelope for TCE arrival at wells are outlined in the work. In general, greater uncertainty appears to be present at the tails of the distribution. For a refinement of the uncertainty envelopes, the application of additional physical criteria or behavioral thresholds is recommended.
Discrete ordinates-Monte Carlo coupling: A comparison of techniques in NERVA radiation analysis
NASA Technical Reports Server (NTRS)
Lindstrom, D. G.; Normand, E.; Wilcox, A. D.
1972-01-01
In the radiation analysis of the NERVA nuclear rocket system, two-dimensional discrete ordinates calculations are sufficient to provide detail in the pressure vessel and reactor assembly. Other parts of the system, however, require three-dimensional Monte Carlo analyses. To use these two methods in a single analysis, a means of coupling was developed whereby the results of a discrete ordinates calculation can be used to produce source data for a Monte Carlo calculation. Several techniques for producing source detail were investigated. Results of calculations on the NERVA system are compared and limitations and advantages of the coupling techniques discussed.
Yoshihara, Keisuke; Le, Minh Nhat; Nagasawa, Koo; Tsukagoshi, Hiroyuki; Nguyen, Hien Anh; Toizumi, Michiko; Moriuchi, Hiroyuki; Hashizume, Masahiro; Ariyoshi, Koya; Dang, Duc Anh; Kimura, Hirokazu; Yoshida, Lay-Myint
2016-11-01
We performed molecular evolutionary analyses of the G gene C-terminal 3rd hypervariable region of RSV-A genotypes NA1 and ON1 strains from the paediatric acute respiratory infection patients in central Vietnam during the 2010-2012 study period. Time-scaled phylogenetic analyses were performed using the Bayesian Markov Chain Monte Carlo (MCMC) method, and pairwise distances (p-distances) were calculated. A Bayesian Skyline Plot (BSP) was constructed to analyze the time-trend relative genetic diversity of central Vietnam RSV-A strains. We also estimated the N-glycosylation sites within the G gene hypervariable region. Amino acid substitutions under positive and negative selection pressure were examined using Conservative Single Likelihood Ancestor Counting (SLAC), Fixed Effects Likelihood (FEL), Internal Fixed Effects Likelihood (IFEL) and Mixed Effects Model for Episodic Diversifying Selection (MEME) models. The majority of central Vietnam ON1 strains detected in 2012 were classified into lineage 1 with few positively selected substitutions. As for the Vietnamese NA1 strains, four lineages were circulating during the study period with a few positive selection sites. Shifting patterns of the predominantly circulating NA1 lineage were observed in each year during the investigation period. The median p-distance of central Vietnam NA1 strains was wider (p-distance = 0.028) than that of ON1 (p-distance = 0.012). The molecular evolutionary rate of central Vietnam ON1 strains was estimated to be 2.55×10^-2 substitutions/site/year and was faster than that of NA1 (7.12×10^-3 substitutions/site/year). Interestingly, the evolutionary rates of both genotype ON1 and NA1 strains from central Vietnam were faster than those of the respective global strains. Furthermore, shifts of the N-glycosylation pattern within the G gene 3rd hypervariable region of Vietnamese NA1 strains were observed in each year. BSP analysis indicated rapid growth of the RSV-A effective population size in early 2012. These results suggested that the molecular evolution of the RSV-A G gene detected in central Vietnam was fast, with unique evolutionary dynamics. Copyright © 2016 Elsevier B.V. All rights reserved.
Hurdle models for multilevel zero-inflated data via h-likelihood.
Molas, Marek; Lesaffre, Emmanuel
2010-12-30
Count data often exhibit overdispersion. One type of overdispersion arises when there is an excess of zeros in comparison with the standard Poisson distribution. Zero-inflated Poisson and hurdle models have been proposed to perform a valid likelihood-based analysis to account for the surplus of zeros. Further, data often arise in clustered, longitudinal or multiple-membership settings. The proper analysis needs to reflect the design of a study. Typically random effects are used to account for dependencies in the data. We examine the h-likelihood estimation and inference framework for hurdle models with random effects for complex designs. We extend the h-likelihood procedures to fit hurdle models, thereby extending h-likelihood to truncated distributions. Two applications of the methodology are presented. Copyright © 2010 John Wiley & Sons, Ltd.
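The hurdle likelihood itself factorizes into a Bernoulli part for whether a count is zero and a zero-truncated Poisson part for the positive counts. A minimal sketch of that observation-level log-likelihood follows; the random effects and h-likelihood machinery are omitted, and the parameter values are illustrative.

```python
import numpy as np
from scipy.special import gammaln

def hurdle_loglik(y, p_zero, lam):
    """Poisson hurdle log-likelihood: a Bernoulli hurdle governs y == 0,
    and positive counts follow a zero-truncated Poisson(lam)."""
    y = np.asarray(y)
    ll_zero = np.log(p_zero)
    log_pois = -lam + y * np.log(lam) - gammaln(y + 1)
    log_trunc = log_pois - np.log1p(-np.exp(-lam))   # renormalise over y > 0
    ll_pos = np.log(1 - p_zero) + log_trunc
    return np.sum(np.where(y == 0, ll_zero, ll_pos))

counts = np.array([0, 0, 4, 0, 1, 2, 0, 6])
print(hurdle_loglik(counts, p_zero=0.5, lam=2.0))
```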
Bhadra, Dhiman; Daniels, Michael J.; Kim, Sungduk; Ghosh, Malay; Mukherjee, Bhramar
2014-01-01
In a typical case-control study, exposure information is collected at a single time-point for the cases and controls. However, case-control studies are often embedded in existing cohort studies containing a wealth of longitudinal exposure history on the participants. Recent medical studies have indicated that incorporating past exposure history, or a constructed summary measure of cumulative exposure derived from the past exposure history, when available, may lead to more precise and clinically meaningful estimates of the disease risk. In this paper, we propose a flexible Bayesian semiparametric approach to model the longitudinal exposure profiles of the cases and controls and then use measures of cumulative exposure based on a weighted integral of this trajectory in the final disease risk model. The estimation is done via a joint likelihood. In the construction of the cumulative exposure summary, we introduce an influence function, a smooth function of time to characterize the association pattern of the exposure profile on the disease status with different time windows potentially having differential influence/weights. This enables us to analyze how the present disease status of a subject is influenced by his/her past exposure history conditional on the current ones. The joint likelihood formulation allows us to properly account for uncertainties associated with both stages of the estimation process in an integrated manner. Analysis is carried out in a hierarchical Bayesian framework using Reversible jump Markov chain Monte Carlo (RJMCMC) algorithms. The proposed methodology is motivated by, and applied to a case-control study of prostate cancer where longitudinal biomarker information is available for the cases and controls. PMID:22313248
NASA Astrophysics Data System (ADS)
Pedretti, Daniele; Mayer, K. Ulrich; Beckie, Roger D.
2017-06-01
In mining environmental applications, it is important to assess water quality from waste rock piles (WRPs) and estimate the likelihood of acid rock drainage (ARD) over time. The mineralogical heterogeneity of WRPs is a source of uncertainty in this assessment, undermining the reliability of traditional bulk indicators used in the industry. We focused in this work on the bulk neutralizing potential ratio (NPR), which is defined as the ratio of the content of non-acid-generating minerals (typically reactive carbonates such as calcite) to the content of potentially acid-generating minerals (typically sulfides such as pyrite). We used a streamtube-based Monte-Carlo method to show why and to what extent bulk NPR can be a poor indicator of ARD occurrence. We simulated ensembles of WRPs identical in their geometry and bulk NPR, which only differed in their initial distribution of the acid-generating and acid-neutralizing minerals that control NPR. All models simulated the same principal acid-producing, acid-neutralizing and secondary mineral-forming processes. We show that small differences in the distribution of local NPR values or the number of flow paths that generate acidity strongly influence drainage pH. The results indicate that the likelihood of ARD (epitomized by the probability of occurrence of pH < 4 in a mixing boundary) within the first 100 years can be as high as 75% for NPR = 2 and 40% for NPR = 4. The latter is traditionally considered a "universally safe" threshold to ensure non-acidic waters in practical applications. Our results suggest that new methods that explicitly account for mineralogical heterogeneity must be sought when computing effective (upscaled) NPR values at the scale of the piles.
Tian, Yuan; Hassmiller Lich, Kristen; Osgood, Nathaniel D; Eom, Kirsten; Matchar, David B
2016-11-01
As health services researchers and decision makers tackle more difficult problems using simulation models, the number of parameters and the corresponding degree of uncertainty have increased. This often results in reduced confidence in such complex models to guide decision making. The objective of this study was to demonstrate a systematic approach of linked sensitivity analysis, calibration, and uncertainty analysis to improve confidence in complex models. Four techniques were integrated and applied to a System Dynamics stroke model of US veterans, which was developed to inform systemwide intervention and research planning: the Morris method (sensitivity analysis), a multistart Powell hill-climbing algorithm and generalized likelihood uncertainty estimation (calibration), and Monte Carlo simulation (uncertainty analysis). Of 60 uncertain parameters, sensitivity analysis identified 29 needing calibration, 7 that did not need calibration but significantly influenced key stroke outcomes, and 24 not influential to calibration or stroke outcomes that were fixed at their best-guess values. One thousand alternative well-calibrated baselines were obtained to reflect calibration uncertainty and brought into uncertainty analysis. The initial stroke incidence rate among veterans was identified as the most influential uncertain parameter, for which further data should be collected. That said, accounting for current uncertainty, the analysis of 15 distinct prevention and treatment interventions provided a robust conclusion that hypertension control for all veterans would yield the largest gain in quality-adjusted life years. For complex health care models, a mixed approach was applied to examine the uncertainty surrounding key stroke outcomes and the robustness of conclusions. We demonstrate that this rigorous approach can be practical and advocate for such analysis to promote understanding of the limits of certainty in applying models to current decisions and to guide future data collection. © The Author(s) 2016.
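The generalized likelihood uncertainty estimation step in such a pipeline can be sketched compactly: sample parameter sets, score each with an informal likelihood measure, retain the "behavioral" sets above a threshold, and form likelihood-weighted predictive bounds. The simulator, threshold, and Nash-Sutcliffe score below are illustrative stand-ins, not the stroke model.

```python
import numpy as np

rng = np.random.default_rng(3)

def model(params, t):
    # Stand-in simulator: exponential decline toward a baseline.
    a, k = params
    return a * np.exp(-k * t) + 1.0

t = np.linspace(0, 10, 25)
obs = model((4.0, 0.4), t) + rng.normal(0, 0.2, t.size)

# GLUE: sample parameters, score each set with an informal likelihood
# (Nash-Sutcliffe efficiency here), keep "behavioral" sets above a threshold.
samples = np.column_stack([rng.uniform(1, 8, 5000), rng.uniform(0.05, 1, 5000)])
sims = np.array([model(p, t) for p in samples])
nse = 1 - ((sims - obs) ** 2).sum(1) / ((obs - obs.mean()) ** 2).sum()
behavioral = nse > 0.7
w = nse[behavioral] / nse[behavioral].sum()

# Likelihood-weighted predictive quantiles at each time step.
lo, hi = [], []
for j in range(t.size):
    order = np.argsort(sims[behavioral, j])
    cdf = np.cumsum(w[order])
    lo.append(sims[behavioral, j][order][np.searchsorted(cdf, 0.05)])
    hi.append(sims[behavioral, j][order][np.searchsorted(cdf, 0.95)])
print(f"behavioral sets: {behavioral.sum()}, "
      f"band width at t=5: {hi[12] - lo[12]:.2f}")
```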
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lloyd, S. A. M.; Ansbacher, W.; Department of Physics and Astronomy, University of Victoria, Victoria, British Columbia V8W 3P6
2013-01-15
Purpose: Acuros external beam (Acuros XB) is a novel dose calculation algorithm implemented through the ECLIPSE treatment planning system. The algorithm finds a deterministic solution to the linear Boltzmann transport equation, the same equation commonly solved stochastically by Monte Carlo methods. This work is an evaluation of Acuros XB, by comparison with Monte Carlo, for dose calculation applications involving high-density materials. Existing non-Monte Carlo clinical dose calculation algorithms, such as the analytic anisotropic algorithm (AAA), do not accurately model dose perturbations due to increased electron scatter within high-density volumes. Methods: Acuros XB, AAA, and EGSnrc-based Monte Carlo are used to calculate dose distributions from 18 MV and 6 MV photon beams delivered to a cubic water phantom containing a rectangular high-density (4.0-8.0 g/cm{sup 3}) volume at its center. The algorithms are also used to recalculate a clinical prostate treatment plan involving a unilateral hip prosthesis, originally evaluated using AAA. These results are compared graphically and numerically using gamma-index analysis. Radio-chromic film measurements are presented to augment Monte Carlo and Acuros XB dose perturbation data. Results: Using a 2% and 1 mm gamma-analysis, between 91.3% and 96.8% of Acuros XB dose voxels containing greater than 50% of the normalized dose were in agreement with Monte Carlo data for virtual phantoms involving 18 MV and 6 MV photons, stainless steel and titanium alloy implants, and on-axis and oblique field delivery. A similar gamma-analysis of AAA against Monte Carlo data showed between 80.8% and 87.3% agreement. Comparing Acuros XB and AAA evaluations of a clinical prostate patient plan involving a unilateral hip prosthesis, Acuros XB showed good overall agreement with Monte Carlo while AAA underestimated dose on the upstream medial surface of the prosthesis due to electron scatter from the high-density material. Film measurements support the dose perturbations demonstrated by Monte Carlo and Acuros XB data. Conclusions: Acuros XB is shown to perform as well as Monte Carlo methods and better than existing clinical algorithms for dose calculations involving high-density volumes.
NASA Astrophysics Data System (ADS)
Abhinav, S.; Manohar, C. S.
2018-03-01
The problem of combined state and parameter estimation in nonlinear state space models, based on Bayesian filtering methods, is considered. A novel approach, which combines Rao-Blackwellized particle filters for state estimation with Markov chain Monte Carlo (MCMC) simulations for parameter identification, is proposed. In order to ensure successful performance of the MCMC samplers, in situations involving large amount of dynamic measurement data and (or) low measurement noise, the study employs a modified measurement model combined with an importance sampling based correction. The parameters of the process noise covariance matrix are also included as quantities to be identified. The study employs the Rao-Blackwellization step at two stages: one, associated with the state estimation problem in the particle filtering step, and, secondly, in the evaluation of the ratio of likelihoods in the MCMC run. The satisfactory performance of the proposed method is illustrated on three dynamical systems: (a) a computational model of a nonlinear beam-moving oscillator system, (b) a laboratory scale beam traversed by a loaded trolley, and (c) an earthquake shake table study on a bending-torsion coupled nonlinear frame subjected to uniaxial support motion.
Cross-Border Sexual Transmission of the Newly Emerging HIV-1 Clade CRF51_01B
Cheong, Hui Ting; Ng, Kim Tien; Ong, Lai Yee; Chook, Jack Bee; Chan, Kok Gan; Takebe, Yutaka; Kamarulzaman, Adeeba; Tee, Kok Keng
2014-01-01
A novel HIV-1 recombinant clade (CRF51_01B) was recently identified among men who have sex with men (MSM) in Singapore. As cases of sexually transmitted HIV-1 infection increase concurrently in two socioeconomically intimate countries such as Malaysia and Singapore, cross transmission of HIV-1 between said countries is highly probable. In order to investigate the timeline for the emergence of HIV-1 CRF51_01B in Singapore and its possible introduction into Malaysia, 595 HIV-positive subjects recruited in Kuala Lumpur from 2008 to 2012 were screened. Phylogenetic relationship of 485 amplified polymerase gene sequences was determined through neighbour-joining method. Next, near-full length sequences were amplified for genomic sequences inferred to be CRF51_01B and subjected to further analysis implemented through Bayesian Markov chain Monte Carlo (MCMC) sampling and maximum likelihood methods. Based on the near full length genomes, two isolates formed a phylogenetic cluster with CRF51_01B sequences of Singapore origin, sharing identical recombination structure. Spatial and temporal information from Bayesian MCMC coalescent and maximum likelihood analysis of the protease, gp120 and gp41 genes suggest that Singapore is probably the country of origin of CRF51_01B (as early as in the mid-1990s) and featured a Malaysian who acquired the infection through heterosexual contact as host for its ancestral lineages. CRF51_01B then spread rapidly among the MSM in Singapore and Malaysia. Although the importation of CRF51_01B from Singapore to Malaysia is supported by coalescence analysis, the narrow timeframe of the transmission event indicates a closely linked epidemic. Discrepancies in the estimated divergence times suggest that CRF51_01B may have arisen through multiple recombination events from more than one parental lineage. We report the cross transmission of a novel CRF51_01B lineage between countries that involved different sexual risk groups. Understanding the cross-border transmission of HIV-1 involving sexual networks is crucial for effective intervention strategies in the region. PMID:25340817
Study of multi-dimensional radiative energy transfer in molecular gases
NASA Technical Reports Server (NTRS)
Liu, Jiwen; Tiwari, S. N.
1993-01-01
The Monte Carlo method (MCM) is applied to analyze radiative heat transfer in nongray gases. The nongray model employed is based on the statistical narrow band model with an exponential-tailed inverse intensity distribution. Consideration of spectral correlation results in some distinguishing features of the Monte Carlo formulations. Validation of the Monte Carlo formulations has been conducted by comparing results of this method with other solutions. Extension of a one-dimensional problem to a multi-dimensional problem requires some special treatments in the Monte Carlo analysis. Use of different assumptions results in different sets of Monte Carlo formulations. The nongray narrow band formulations provide the most accurate results.
Alberti, Luca; Colombo, Loris; Formentin, Giovanni
2018-04-15
The Lombardy Region in Italy is one of the most urbanized and industrialized areas in Europe. The presence of countless sources of groundwater pollution is therefore a matter of environmental concern. The sources of groundwater contamination can be classified into two different categories: 1) Point Sources (PS), which correspond to areas releasing plumes of high concentrations (i.e. hot-spots) and 2) Multiple-Point Sources (MPS) consisting in a series of unidentifiable small sources clustered within large areas, generating an anthropogenic diffuse contamination. The latter category frequently predominates in European Functional Urban Areas (FUA) and cannot be managed through standard remediation techniques, mainly because detecting the many different source areas releasing small contaminant mass in groundwater is unfeasible. A specific legislative action has been recently enacted at Regional level (DGR IX/3510-2012), in order to identify areas prone to anthropogenic diffuse pollution and their level of contamination. With a view to defining a management plan, it is necessary to find where MPS are most likely positioned. This paper describes a methodology devised to identify the areas with the highest likelihood to host potential MPS. A groundwater flow model was implemented for a pilot area located in the Milan FUA and through the PEST code, a Null-Space Monte Carlo method was applied in order to generate a suite of several hundred hydraulic conductivity field realizations, each maintaining the model in a calibrated state and each consistent with the modelers' expert-knowledge. Thereafter, the MODPATH code was applied to generate back-traced advective flowpaths for each of the models built using the conductivity field realizations. Maps were then created displaying the number of backtracked particles that crossed each model cell in each stochastic calibrated model. The result is considered to be representative of the FUAs areas with the highest likelihood to host MPS responsible for diffuse contamination. Copyright © 2017 Elsevier B.V. All rights reserved.
Bias correction for estimated QTL effects using the penalized maximum likelihood method.
Zhang, J; Yue, C; Zhang, Y-M
2012-04-01
A penalized maximum likelihood method has been proposed as an important approach to the detection of epistatic quantitative trait loci (QTL). However, this approach is not optimal in two special situations: (1) closely linked QTL with effects in opposite directions and (2) small-effect QTL, because the method produces downwardly biased estimates of QTL effects. The present study aims to correct the bias by using correction coefficients and shifting from the use of a uniform prior on the variance parameter of a QTL effect to that of a scaled inverse chi-square prior. The results of Monte Carlo simulation experiments show that the improved method increases the power from 25 to 88% in the detection of two closely linked QTL of equal size in opposite directions and from 60 to 80% in the identification of QTL with small effects (0.5% of the total phenotypic variance). We used the improved method to detect QTL responsible for the barley kernel weight trait using 145 doubled haploid lines developed in the North American Barley Genome Mapping Project. Application of the proposed method to other shrinkage estimation of QTL effects is discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vrugt, Jasper A; Robinson, Bruce A; Ter Braak, Cajo J F
In recent years, a strong debate has emerged in the hydrologic literature regarding what constitutes an appropriate framework for uncertainty estimation. Particularly, there is strong disagreement whether an uncertainty framework should have its roots within a proper statistical (Bayesian) context, or whether such a framework should be based on a different philosophy and implement informal measures and weaker inference to summarize parameter and predictive distributions. In this paper, we compare a formal Bayesian approach using Markov Chain Monte Carlo (MCMC) with generalized likelihood uncertainty estimation (GLUE) for assessing uncertainty in conceptual watershed modeling. Our formal Bayesian approach is implemented using the recently developed Differential Evolution Adaptive Metropolis (DREAM) MCMC scheme with a likelihood function that explicitly considers model structural, input and parameter uncertainty. Our results demonstrate that DREAM and GLUE can generate very similar estimates of total streamflow uncertainty. This suggests that formal and informal Bayesian approaches have more common ground than the hydrologic literature and ongoing debate might suggest. The main advantage of formal approaches is, however, that they attempt to disentangle the effect of forcing, parameter and model structural error on total predictive uncertainty. This is key to improving hydrologic theory and to better understand and predict the flow of water through catchments.
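For contrast with GLUE's informal weighting, the formal route samples the posterior with MCMC under an explicit error model. The sketch below uses a single-chain random-walk Metropolis sampler with an iid Gaussian likelihood as a deliberately simplified stand-in for DREAM; the toy model and flat priors are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

def log_post(theta, obs, t):
    a, k, log_sigma = theta
    if not (0 < a < 10 and 0 < k < 2):
        return -np.inf                     # flat priors on a plausible box
    sigma = np.exp(log_sigma)
    resid = obs - (a * np.exp(-k * t) + 1.0)
    return -0.5 * np.sum(resid**2) / sigma**2 - obs.size * np.log(sigma)

t = np.linspace(0, 10, 25)
obs = 4.0 * np.exp(-0.4 * t) + 1.0 + rng.normal(0, 0.2, t.size)

theta = np.array([2.0, 0.2, np.log(0.5)])
chain, lp = [], log_post(theta, obs, t)
for _ in range(20000):
    prop = theta + rng.normal(0, [0.1, 0.02, 0.05])
    lp_prop = log_post(prop, obs, t)
    if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept/reject
        theta, lp = prop, lp_prop
    chain.append(theta.copy())
post = np.array(chain[5000:])                  # discard burn-in
print("posterior means (a, k, sigma):",
      post[:, 0].mean(), post[:, 1].mean(), np.exp(post[:, 2]).mean())
```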
Hock, Sabrina; Hasenauer, Jan; Theis, Fabian J
2013-01-01
Diffusion is a key component of many biological processes such as chemotaxis, developmental differentiation and tissue morphogenesis. The spatial gradients caused by diffusion can now be assessed in vitro and in vivo using microscopy-based imaging techniques. The resulting time series of two-dimensional, high-resolution images, in combination with mechanistic models, enable the quantitative analysis of the underlying mechanisms. However, such a model-based analysis is still challenging due to measurement noise and sparse observations, which result in uncertainties of the model parameters. We introduce a likelihood function for image-based measurements with log-normal distributed noise. Based upon this likelihood function we formulate the maximum likelihood estimation problem, which is solved using PDE-constrained optimization methods. To assess the uncertainty and practical identifiability of the parameters we introduce profile likelihoods for diffusion processes. As proof of concept, we model certain aspects of the guidance of dendritic cells towards lymphatic vessels, an example of haptotaxis. Using a realistic set of artificial measurement data, we estimate the five kinetic parameters of this model and compute profile likelihoods. Our novel approach for the estimation of model parameters from image data, as well as the proposed identifiability analysis approach, is widely applicable to diffusion processes. The profile likelihood based method provides more rigorous uncertainty bounds, in contrast to local approximation methods.
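The log-normal measurement likelihood referred to above has a standard form; in the notation below (ours, not necessarily the paper's), y_i is a measured image intensity and m_i(θ) the corresponding model prediction:

```latex
p(y_i \mid \theta, \sigma)
  = \frac{1}{y_i\,\sigma\sqrt{2\pi}}
    \exp\!\left(-\frac{\bigl(\ln y_i - \ln m_i(\theta)\bigr)^2}{2\sigma^2}\right),
\qquad
\ell(\theta,\sigma) = \sum_i \ln p(y_i \mid \theta, \sigma),
```

with the maximum likelihood estimate obtained by maximizing ℓ subject to the PDE constraint on m(θ).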
Population Synthesis of Radio and γ-ray Normal, Isolated Pulsars Using Markov Chain Monte Carlo
NASA Astrophysics Data System (ADS)
Billman, Caleb; Gonthier, P. L.; Harding, A. K.
2013-04-01
We present preliminary results of a population statistics study of normal pulsars (NP) from the Galactic disk using Markov Chain Monte Carlo techniques optimized according to two different methods. The first method compares the detected and simulated cumulative distributions of a series of pulsar characteristics, varying the model parameters to maximize the overall agreement. The advantage of this method is that the distributions do not have to be binned. The other method varies the model parameters to maximize the log of the maximum likelihood obtained from the comparisons of four two-dimensional distributions of radio and γ-ray pulsar characteristics. The advantage of this method is that it provides a confidence region of the model parameter space. The computer code simulates neutron stars at birth using Monte Carlo procedures and evolves them to the present assuming initial spatial, kick velocity, magnetic field, and period distributions. Pulsars are spun down to the present and given radio and γ-ray emission characteristics, implementing an empirical γ-ray luminosity model. A comparison group of radio NPs detected in ten radio surveys is used to normalize the simulation, adjusting the model radio luminosity to match a birth rate. We include the Fermi pulsars in the forthcoming second pulsar catalog. We present preliminary results comparing the simulated and detected distributions of radio and γ-ray NPs along with a confidence region in the parameter space of the assumed models. We express our gratitude for the generous support of the National Science Foundation (REU and RUI), the Fermi Guest Investigator Program and the NASA Astrophysics Theory and Fundamental Program.
A Comparison of Monte Carlo and Deterministic Solvers for keff and Sensitivity Calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haeck, Wim; Parsons, Donald Kent; White, Morgan Curtis
Verification and validation of our solutions for calculating the neutron reactivity for nuclear materials is a key issue to address for many applications, including criticality safety, research reactors, power reactors, and nuclear security. Neutronics codes solve variations of the Boltzmann transport equation. The two main variants are Monte Carlo versus deterministic solutions, e.g. the MCNP [1] versus PARTISN [2] codes, respectively. There have been many studies over the decades that examined the accuracy of such solvers and the general conclusion is that when the problems are well-posed, either solver can produce accurate results. However, the devil is always in the details. The current study examines the issue of self-shielding and the stress it puts on deterministic solvers. Most Monte Carlo neutronics codes use continuous-energy descriptions of the neutron interaction data that are not subject to this effect. The issue of self-shielding occurs because of the discretisation of data used by the deterministic solutions. Multigroup data used in these solvers are the average cross section and scattering parameters over an energy range. Resonances in cross sections can occur that change the likelihood of interaction by one to three orders of magnitude over a small energy range. Self-shielding is the numerical effect that the average cross section in groups with strong resonances can be strongly affected as neutrons within that material are preferentially absorbed or scattered out of the resonance energies. This affects both the average cross section and the scattering matrix.
Estimating Uncertainty in N2O Emissions from US Cropland Soils
USDA-ARS?s Scientific Manuscript database
A Monte Carlo analysis was combined with an empirically based approach to quantify uncertainties in soil N2O emissions from US croplands estimated with the DAYCENT simulation model. Only a subset of croplands was simulated in the Monte Carlo analysis, which was used to infer uncertainties across the ...
Analysis of Naval Ammunition Stock Positioning
2015-12-01
A Monte-Carlo simulation model was developed to simulate the expected cost and delivery performance of relocating naval ammunition stockpiles and positioning them at coastal Navy facilities, with the simulation determining assigned probabilities for site-to-site locations. Keywords: supply chain management, Monte-Carlo simulation, risk, delivery performance, stock positioning.
ERIC Educational Resources Information Center
Fish, Laurel J.; Halcoussis, Dennis; Phillips, G. Michael
2017-01-01
The Monte Carlo method and related multiple imputation methods are traditionally used in math, physics and science to estimate and analyze data and are now becoming standard tools in analyzing business and financial problems. However, few sources explain the application of the Monte Carlo method for individuals and business professionals who are…
Pattern Recognition for a Flight Dynamics Monte Carlo Simulation
NASA Technical Reports Server (NTRS)
Restrepo, Carolina; Hurtado, John E.
2011-01-01
The design, analysis, and verification and validation of a spacecraft rely heavily on Monte Carlo simulations. Modern computational techniques are able to generate large amounts of Monte Carlo data, but flight dynamics engineers lack the time and resources to analyze it all. The growing amounts of data combined with the diminished available time of engineers motivate the need to automate the analysis process. Pattern recognition algorithms are an innovative way of analyzing flight dynamics data efficiently. They can search large data sets for specific patterns and highlight critical variables so analysts can focus their analysis efforts. This work combines a few tractable pattern recognition algorithms with basic flight dynamics concepts to build a practical analysis tool for Monte Carlo simulations. Current results show that this tool can quickly and automatically identify individual design parameters, and most importantly, specific combinations of parameters that should be avoided in order to prevent specific system failures. The current version uses a kernel density estimation algorithm and a sequential feature selection algorithm combined with a k-nearest neighbor classifier to find and rank important design parameters. This provides an increased level of confidence in the analysis and saves a significant amount of time.
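A minimal sketch of this kind of pipeline (synthetic data and a made-up failure rule; not the authors' tool) ranks parameters by how strongly the failure-conditional kernel density differs from the success-conditional one, then classifies with k-nearest neighbors:

```python
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Hypothetical Monte Carlo dispersion data: rows = runs, columns = design
# parameters; "fail" flags a simulated system failure.
X = rng.normal(size=(2000, 5))
fail = (X[:, 0] + 0.8 * X[:, 3] > 1.5)       # failures driven by params 0 and 3

# Score each parameter by the integrated absolute difference between the
# failure-conditional and success-conditional kernel density estimates.
grid = np.linspace(-4, 4, 200)
scores = []
for j in range(X.shape[1]):
    kde_fail = gaussian_kde(X[fail, j])
    kde_ok = gaussian_kde(X[~fail, j])
    scores.append(np.trapz(np.abs(kde_fail(grid) - kde_ok(grid)), grid))
ranking = np.argsort(scores)[::-1]
print("parameters ranked by KDE separation:", ranking)

# A k-nearest-neighbor classifier on the top-ranked parameters then predicts
# which regions of the design space lead to failure.
knn = KNeighborsClassifier(n_neighbors=15).fit(X[:, ranking[:2]], fail)
print("failure probability near (2, 2):",
      knn.predict_proba([[2.0, 2.0]])[0, 1])
```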
NASA Astrophysics Data System (ADS)
Han, Feng; Zheng, Yi
2018-06-01
Significant input uncertainty is a major source of error in watershed water quality (WWQ) modeling. It remains challenging to address the input uncertainty in a rigorous Bayesian framework. This study develops the Bayesian Analysis of Input and Parametric Uncertainties (BAIPU), an approach for the joint analysis of input and parametric uncertainties through a tight coupling of Markov Chain Monte Carlo (MCMC) analysis and Bayesian Model Averaging (BMA). The formal likelihood function for this approach is derived considering a lag-1 autocorrelated, heteroscedastic, and Skew Exponential Power (SEP) distributed error model. A series of numerical experiments was performed based on a synthetic nitrate pollution case and on a real study case in the Newport Bay Watershed, California. The Soil and Water Assessment Tool (SWAT) and Differential Evolution Adaptive Metropolis (DREAM(ZS)) were used as the representative WWQ model and MCMC algorithm, respectively. The major findings include the following: (1) the BAIPU can be implemented and used to appropriately identify the uncertain parameters and characterize the predictive uncertainty; (2) the compensation effect between the input and parametric uncertainties can seriously mislead modeling-based management decisions if the input uncertainty is not explicitly accounted for; (3) the BAIPU accounts for the interaction between the input and parametric uncertainties and therefore provides more accurate calibration and uncertainty results than a sequential analysis of the uncertainties; and (4) the BAIPU quantifies the credibility of different input assumptions on a statistical basis and can be implemented as an effective inverse modeling approach to the joint inference of parameters and inputs.
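To make the error model concrete, here is a simplified formal log-likelihood with lag-1 autocorrelated residuals and heteroscedastic Gaussian innovations (a Gaussian stand-in for the SEP distribution; phi, sigma_a and sigma_b are illustrative parameter names, not those of the paper):

```python
import numpy as np

def log_likelihood(obs, sim, phi, sigma_a, sigma_b):
    """Simplified formal log-likelihood: lag-1 autocorrelated residuals with
    heteroscedastic Gaussian innovations (a Gaussian stand-in for the SEP
    error model; the first observation's term is dropped for brevity)."""
    resid = obs - sim
    sigma = sigma_a + sigma_b * sim            # error std grows with simulation
    innov = resid[1:] - phi * resid[:-1]       # AR(1)-filtered innovations
    s = sigma[1:]
    return -0.5 * np.sum(np.log(2 * np.pi * s ** 2) + (innov / s) ** 2)

# Example evaluation on synthetic data:
t = np.arange(200)
sim = 10 + 5 * np.sin(t / 10)
rng = np.random.default_rng(0)
obs = sim + (0.1 + 0.02 * sim) * rng.normal(size=t.size)
print(log_likelihood(obs, sim, phi=0.5, sigma_a=0.1, sigma_b=0.02))
```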
pyNSMC: A Python Module for Null-Space Monte Carlo Uncertainty Analysis
NASA Astrophysics Data System (ADS)
White, J.; Brakefield, L. K.
2015-12-01
The null-space Monte Carlo technique is a nonlinear uncertainty analysis technique that is well suited to high-dimensional inverse problems. While the technique is powerful, the existing workflow for completing null-space Monte Carlo is cumbersome, requiring the use of multiple command-line utilities, several sets of intermediate files, and even a text editor. pyNSMC is an open-source Python module that automates the workflow of null-space Monte Carlo uncertainty analyses. The module is fully compatible with the PEST and PEST++ software suites and leverages existing functionality of pyEMU, a Python framework for linear-based uncertainty analyses. pyNSMC greatly simplifies the existing workflow for null-space Monte Carlo by taking advantage of object-oriented design facilities in Python. The core of pyNSMC is the ensemble class, which draws and stores realized random vectors and also provides functionality for exporting and visualizing results. By relieving users of the tedium associated with file handling and command-line utility execution, pyNSMC instead focuses the user on the important steps and assumptions of null-space Monte Carlo analysis. Furthermore, pyNSMC facilitates learning through flow charts and results visualization, which are available at many points in the algorithm. The ease of use of the pyNSMC workflow is compared to the existing workflow for null-space Monte Carlo for a synthetic groundwater model with hundreds of estimable parameters.
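The real pyNSMC API is not reproduced here, but a minimal stand-in for the ensemble idea it describes — drawing, storing, and summarizing random parameter realizations — might look like this (the null-space projection step is omitted):

```python
import numpy as np

class Ensemble:
    """Minimal stand-in for an ensemble container in the spirit of the
    pyNSMC description above (the real pyNSMC API may differ): draws and
    stores realized random parameter vectors and summarizes results."""

    def __init__(self, mean, cov, n_reals, seed=0):
        rng = np.random.default_rng(seed)
        # Draw multivariate-normal realizations about the calibrated
        # parameter vector (null-space projection omitted).
        self.reals = rng.multivariate_normal(mean, cov, size=n_reals)

    def summary(self):
        return self.reals.mean(axis=0), self.reals.std(axis=0)

# Usage: 500 realizations of a 3-parameter problem.
ens = Ensemble(mean=np.zeros(3), cov=np.eye(3) * 0.1, n_reals=500)
mu, sd = ens.summary()
print(mu, sd)
```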
THESEUS: maximum likelihood superpositioning and analysis of macromolecular structures
Theobald, Douglas L.; Wuttke, Deborah S.
2008-01-01
THESEUS is a command line program for performing maximum likelihood (ML) superpositions and analysis of macromolecular structures. While conventional superpositioning methods use ordinary least-squares (LS) as the optimization criterion, ML superpositions provide substantially improved accuracy by down-weighting variable structural regions and by correcting for correlations among atoms. ML superpositioning is robust and insensitive to the specific atoms included in the analysis, and thus it does not require subjective pruning of selected variable atomic coordinates. Output includes both likelihood-based and frequentist statistics for accurate evaluation of the adequacy of a superposition and for reliable analysis of structural similarities and differences. THESEUS performs principal components analysis for analyzing the complex correlations found among atoms within a structural ensemble. PMID:16777907
Uncertainty Analysis of Power Grid Investment Capacity Based on Monte Carlo
NASA Astrophysics Data System (ADS)
Qin, Junsong; Liu, Bingyi; Niu, Dongxiao
By analyzing the factors that influence power grid investment capacity, an investment capacity analysis model is built with depreciation cost, sales price and sales quantity, net profit, financing, and the GDP of the secondary industry as variables. After carrying out Kolmogorov-Smirnov tests, the probability distribution of each influence factor is obtained. Finally, the uncertainty of grid investment capacity is analyzed by Monte Carlo simulation.
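A generic sketch of that two-step recipe — fit and test a distribution for each influence factor, then propagate samples through a capacity relation — is shown below; all distributions, numbers, and the capacity formula are hypothetical, not the paper's:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical historical data for one influence factor (e.g. annual sales
# quantity); the paper's actual data are not available.
sales = rng.normal(100.0, 12.0, size=30)

# Kolmogorov-Smirnov test of a candidate distribution fitted to the data.
mu, sd = stats.norm.fit(sales)
ks_stat, p_value = stats.kstest(sales, "norm", args=(mu, sd))
print(f"KS statistic={ks_stat:.3f}, p={p_value:.3f}")

# Monte Carlo propagation: sample each influence factor from its fitted
# distribution and evaluate a purely illustrative capacity relation.
n = 100_000
sales_mc = rng.normal(mu, sd, n)
price_mc = rng.normal(5.0, 0.4, n)
deprec_mc = rng.normal(80.0, 8.0, n)
capacity = sales_mc * price_mc - deprec_mc
print("capacity 5th-95th percentile:",
      np.percentile(capacity, [5, 95]).round(1))
```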
KSC Tech Transfer News, Volume 2, No. 2
NASA Technical Reports Server (NTRS)
Makufka, David (Editor); Dunn, Carol (Editor)
2009-01-01
This issue contains articles about: (1) the Innovative Partnerships Program (IPP) and the manager of the program, Alexis Hongamen, (2) a New Technology Report (NTR) on a Monte Carlo Simulation to Estimate the Likelihood of Direct Lightning Strikes, (3) Kennedy Space Center's Applied Physics Lab, (4) a virtual ruler that is used for many applications, (5) a portable device that finds low-level leaks, (6) a sun-shield that supports in-space cryogenic propellant storage, (7) lunar dust modeling software, (8) space-based monitoring of radiation damage to DNA, (9) the use of light-emitting diode (LED) arrays in a vegetable production system, (10) Dust Tolerant Intelligent Electrical Connection Systems, (11) Ice Detection Camera System Upgrade, (12) Repair Techniques for Composite Structures, (13) Cryogenic Orbital Testbed, and (14) copyright protection.
Liu, Tao; Djordjevic, Ivan B
2014-12-29
In this paper, we first describe an optimal signal constellation design algorithm suitable for coherent optical channels dominated by linear phase noise. Then, we modify this algorithm to be suitable for channels dominated by nonlinear phase noise. In the optimization procedure, the proposed algorithm uses the cumulative log-likelihood function instead of the Euclidean distance. Further, an LDPC coded modulation scheme is proposed to be used in combination with signal constellations obtained by the proposed algorithm. Monte Carlo simulations indicate that the LDPC-coded modulation schemes employing the new constellation sets, obtained by our new signal constellation design algorithm, significantly outperform corresponding QAM constellations in terms of transmission distance and have better nonlinearity tolerance.
On Fitting a Multivariate Two-Part Latent Growth Model
Xu, Shu; Blozis, Shelley A.; Vandewater, Elizabeth A.
2017-01-01
A 2-part latent growth model can be used to analyze semicontinuous data to simultaneously study change in the probability that an individual engages in a behavior, and if engaged, change in the behavior. This article uses a Monte Carlo (MC) integration algorithm to study the interrelationships between the growth factors of 2 variables measured longitudinally where each variable can follow a 2-part latent growth model. A SAS macro implementing Mplus is developed to estimate the model to take into account the sampling uncertainty of this simulation-based computational approach. A sample of time-use data is used to show how maximum likelihood estimates can be obtained using a rectangular numerical integration method and an MC integration method. PMID:29333054
NASA Technical Reports Server (NTRS)
Abshire, James B.; Mcgarry, Jan F.
1987-01-01
Maximum-likelihood (ML) receivers are frequently used to optimize the timing performance of laser-ranging and laser-altimetry systems in the presence of shot and speckle noise. A Monte Carlo method was used to examine ML-receiver performance with return signals in the 10-5000-photoelectron (pe) range. The simulations were performed for shot noise only and for shot and speckle noise. The results agree with previous theory for signal strengths greater than about 100 pe's but show that the theory can significantly underestimate timing errors for weaker received signals. Sharp high-bandwidth features in the detected signals are shown to improve timing performance only if their signal levels are greater than 4-5 pe's.
Statistical inferences with jointly type-II censored samples from two Pareto distributions
NASA Astrophysics Data System (ADS)
Abu-Zinadah, Hanaa H.
2017-08-01
In several fields of industry, products come from more than one production line, which calls for comparative life tests. This requires sampling from the different production lines, giving rise to a joint censoring scheme. In this article we consider the lifetime Pareto distribution with a jointly type-II censoring scheme. The maximum likelihood estimators (MLE) and the corresponding approximate confidence intervals, as well as the bootstrap confidence intervals, of the model parameters are obtained. Bayesian point estimates and credible intervals of the model parameters are also presented. A lifetime data set is analyzed for illustrative purposes. Monte Carlo results from simulation studies are presented to assess the performance of our proposed method.
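For the uncensored special case, the Pareto maximum likelihood estimators have closed forms, and a parametric bootstrap gives confidence intervals; the sketch below illustrates this simplified setting (the jointly type-II censored likelihood of the paper is more involved):

```python
import numpy as np

rng = np.random.default_rng(2)

# Complete-sample Pareto MLE as a simplified illustration.
alpha_true, x_m = 2.5, 1.0                 # shape and scale
x = x_m * (1 - rng.random(200)) ** (-1 / alpha_true)   # inverse-CDF sampling

x_m_hat = x.min()                           # MLE of the scale parameter
alpha_hat = len(x) / np.sum(np.log(x / x_m_hat))       # MLE of the shape

# Parametric bootstrap confidence interval for the shape parameter.
boot = []
for _ in range(2000):
    xb = x_m_hat * (1 - rng.random(len(x))) ** (-1 / alpha_hat)
    boot.append(len(xb) / np.sum(np.log(xb / xb.min())))
print(f"alpha_hat={alpha_hat:.2f}, "
      f"95% CI={np.percentile(boot, [2.5, 97.5]).round(2)}")
```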
Efficient estimation of Pareto model: Some modified percentile estimators.
Bhatti, Sajjad Haider; Hussain, Shahzad; Ahmad, Tanvir; Aslam, Muhammad; Aftab, Muhammad; Raza, Muhammad Ali
2018-01-01
The article proposes three modified percentile estimators for parameter estimation of the Pareto distribution. These modifications are based on the median, the geometric mean, and the expectation of the empirical cumulative distribution function of the first-order statistic. The proposed modified estimators are compared with traditional percentile estimators through a Monte Carlo simulation for different parameter combinations with varying sample sizes. Performance of the different estimators is assessed in terms of total mean square error and total relative deviation. The modified percentile estimator based on the expectation of the empirical cumulative distribution function of the first-order statistic is found to provide efficient and precise parameter estimates compared with the other estimators considered. The simulation results were further confirmed using two real-life examples, where maximum likelihood and moment estimators were also considered.
Parameter Uncertainty Analysis Using Monte Carlo Simulations for a Regional-Scale Groundwater Model
NASA Astrophysics Data System (ADS)
Zhang, Y.; Pohlmann, K.
2016-12-01
Regional-scale grid-based groundwater models for flow and transport often contain multiple types of parameters that can intensify the challenge of parameter uncertainty analysis. We propose a Monte Carlo approach to systematically quantify the influence of various types of model parameters on groundwater flux and contaminant travel times. The Monte Carlo simulations were conducted based on the steady-state conversion of the original transient model, which was then combined with the PEST sensitivity analysis tool SENSAN and particle tracking software MODPATH. Results identified hydrogeologic units whose hydraulic conductivity can significantly affect groundwater flux, and thirteen out of 173 model parameters that can cause large variation in travel times for contaminant particles originating from given source zones.
Modelling the spread of American foulbrood in honeybees
Datta, Samik; Bull, James C.; Budge, Giles E.; Keeling, Matt J.
2013-01-01
We investigate the spread of American foulbrood (AFB), a disease caused by the bacterium Paenibacillus larvae, that affects bees and can be extremely damaging to beehives. Our dataset comes from an inspection period carried out during an AFB epidemic of honeybee colonies on the island of Jersey during the summer of 2010. The data include the number of hives of honeybees, location and owner of honeybee apiaries across the island. We use a spatial SIR model with an underlying owner network to simulate the epidemic and characterize the epidemic using a Markov chain Monte Carlo (MCMC) scheme to determine model parameters and infection times (including undetected ‘occult’ infections). Likely methods of infection spread can be inferred from the analysis, with both distance- and owner-based transmissions being found to contribute to the spread of AFB. The results of the MCMC are corroborated by simulating the epidemic using a stochastic SIR model, resulting in aggregate levels of infection that are comparable to the data. We use this stochastic SIR model to simulate the impact of different control strategies on controlling the epidemic. It is found that earlier inspections result in smaller epidemics and a higher likelihood of AFB extinction. PMID:24026473
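A generic discrete-time stochastic SIR sketch of the kind used for such epidemic simulations is given below; the spatial transmission kernel and owner network of the actual AFB analysis are omitted, and all rates are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)

def stochastic_sir(n_hives=300, beta=0.25, gamma=0.1, i0=3, days=200):
    """Discrete-time stochastic SIR with binomial transitions -- a generic
    sketch of the model class, without the spatial kernel and owner
    network of the actual AFB analysis."""
    S, I, R = n_hives - i0, i0, 0
    history = [(S, I, R)]
    for _ in range(days):
        p_inf = 1.0 - np.exp(-beta * I / n_hives)  # per-susceptible infection prob.
        new_inf = rng.binomial(S, p_inf)
        new_rec = rng.binomial(I, 1.0 - np.exp(-gamma))
        S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
        history.append((S, I, R))
    return np.array(history)

# Earlier inspections can be mimicked by raising the removal rate gamma;
# repeated runs then show smaller epidemics and more frequent extinction.
final_sizes = [stochastic_sir(gamma=g)[-1, 2] for g in (0.1, 0.3) for _ in range(5)]
print(final_sizes)
```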
NASA Astrophysics Data System (ADS)
Villano, Michelangelo; Papathanassiou, Konstantinos P.
2011-03-01
The estimation of the local differential shift between synthetic aperture radar (SAR) images has proven to be an effective technique for monitoring glacier surface motion. As images acquired over glaciers by short-wavelength SAR systems, such as TerraSAR-X, often suffer from a lack of coherence, image features have to be exploited for the shift estimation (feature-tracking). The present paper addresses feature-tracking with special attention to the feasibility requirements and the achievable accuracy of the shift estimation. In particular, the dependence of the performance on image characteristics, such as texture parameters, signal-to-noise ratio (SNR) and resolution, as well as on processing techniques (despeckling, normalised cross-correlation versus maximum likelihood estimation), is analysed by means of Monte-Carlo simulations. TerraSAR-X data acquired over the Helheim glacier, Greenland, and the Aletsch glacier, Switzerland, have been processed to validate the simulation results. Feature-tracking can benefit from the availability of fully-polarimetric data. As some image characteristics, in fact, are polarisation-dependent, the selection of an optimum polarisation leads to improved performance. Furthermore, fully-polarimetric SAR images can be despeckled without degrading the resolution, so that additional (smaller-scale) features can be exploited.
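The core operation behind such feature-tracking is maximizing a normalized cross-correlation over trial shifts; a one-dimensional toy version on synthetic profiles (2-D tracking applies the same idea to image patches) could look like:

```python
import numpy as np

def ncc_shift(ref, mov, max_shift):
    """Estimate the integer shift between two 1-D profiles by maximizing
    the normalized cross-correlation."""
    best, best_ncc = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        a = ref[max_shift: len(ref) - max_shift]
        b = mov[max_shift + s: len(mov) - max_shift + s]
        ncc = np.mean((a - a.mean()) * (b - b.mean())) / (a.std() * b.std())
        if ncc > best_ncc:
            best, best_ncc = s, ncc
    return best, best_ncc

rng = np.random.default_rng(4)
profile = rng.normal(size=512).cumsum()                   # synthetic textured profile
shifted = np.roll(profile, 7) + rng.normal(0, 0.5, 512)   # true shift = 7
print(ncc_shift(profile, shifted, max_shift=20))          # recovers ~7
```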
Feature Screening in Ultrahigh Dimensional Cox's Model.
Yang, Guangren; Yu, Ye; Li, Runze; Buu, Anne
Survival data with ultrahigh dimensional covariates such as genetic markers have been collected in medical studies and other fields. In this work, we propose a feature screening procedure for the Cox model with ultrahigh dimensional covariates. The proposed procedure is distinguished from the existing sure independence screening (SIS) procedures (Fan, Feng and Wu, 2010; Zhao and Li, 2012) in that it is based on the joint likelihood of potential active predictors, and therefore is not a marginal screening procedure. The proposed procedure can effectively identify active predictors that are jointly dependent but marginally independent of the response without performing an iterative procedure. We develop a computationally effective algorithm to carry out the proposed procedure and establish the ascent property of the proposed algorithm. We further prove that the proposed procedure possesses the sure screening property: that is, with probability tending to one, the selected variable set includes the actual active predictors. We conduct Monte Carlo simulations to evaluate the finite sample performance of the proposed procedure and further compare it with existing SIS procedures. The proposed methodology is also demonstrated through an empirical analysis of a real data example.
An information-theoretic approach to the gravitational-wave burst detection problem
NASA Astrophysics Data System (ADS)
Katsavounidis, E.; Lynch, R.; Vitale, S.; Essick, R.; Robinet, F.
2016-03-01
The advanced era of gravitational-wave astronomy, with data collected in part by the LIGO gravitational-wave interferometers, has begun as of fall 2015. One potential type of detectable gravitational waves is short-duration gravitational-wave bursts, whose waveforms can be difficult to predict. We present the framework for a new detection algorithm, called oLIB, that can be used at relatively low latency to turn calibrated strain data into a detection significance statement. This pipeline consists of (1) a sine-Gaussian matched-filter trigger generator based on the Q-transform, known as Omicron; (2) incoherent down-selection of these triggers to the most signal-like set; and (3) a fully coherent analysis of this signal-like set using the Markov chain Monte Carlo (MCMC) Bayesian evidence calculator LALInferenceBurst (LIB). We optimally extract this information by using a likelihood-ratio test (LRT) to map these search statistics into a significance statement. Using representative archival LIGO data, we show that the algorithm can detect gravitational-wave burst events of realistic strength in realistic instrumental noise with good detection efficiencies across different burst waveform morphologies. With support from the National Science Foundation under Grant PHY-0757058.
SEMIPARAMETRIC ZERO-INFLATED MODELING IN MULTI-ETHNIC STUDY OF ATHEROSCLEROSIS (MESA)
Liu, Hai; Ma, Shuangge; Kronmal, Richard; Chan, Kung-Sik
2013-01-01
We analyze the Agatston score of coronary artery calcium (CAC) from the Multi-Ethnic Study of Atherosclerosis (MESA) using a semiparametric zero-inflated modeling approach, where the observed CAC scores from this cohort consist of a high frequency of zeroes and continuously distributed positive values. Both partially constrained and unconstrained models are considered to investigate the underlying biological processes of CAC development from zero to positive, and from small amounts to large amounts. Unlike existing studies, a model selection procedure based on likelihood cross-validation is adopted to identify the optimal model, which is justified by comparative Monte Carlo studies. A shrinkage version of the cubic regression spline is used for model estimation and variable selection simultaneously. When applying the proposed methods to the MESA data analysis, we show that the two biological mechanisms influencing the initiation of CAC and the magnitude of CAC when it is positive are better characterized by an unconstrained zero-inflated normal model. Our results are significantly different from those in published studies and may provide further insights into the biological mechanisms underlying CAC development in humans. This highly flexible statistical framework can be applied to zero-inflated data analyses in other areas. PMID:23805172
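The zero-inflated likelihood at the heart of such models is easy to state; below is a minimal zero-inflated log-normal version fitted by maximum likelihood on synthetic data (no covariates or splines, unlike the paper's semiparametric model):

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

def zi_negloglik(params, y):
    """Negative log-likelihood of a zero-inflated log-normal model:
    a point mass at zero with probability p_zero, log-normal otherwise."""
    logit_p, mu, log_sd = params
    p_zero = 1.0 / (1.0 + np.exp(-logit_p))
    ll_zero = np.log(p_zero) * np.sum(y == 0)
    pos = y[y > 0]
    ll_pos = np.sum(np.log1p(-p_zero) +
                    stats.lognorm.logpdf(pos, s=np.exp(log_sd),
                                         scale=np.exp(mu)))
    return -(ll_zero + ll_pos)

rng = np.random.default_rng(5)
y = np.where(rng.random(1000) < 0.4, 0.0, rng.lognormal(3.0, 1.0, 1000))
fit = minimize(zi_negloglik, x0=[0.0, 1.0, 0.0], args=(y,),
               method="Nelder-Mead")
print(fit.x)  # roughly [logit(0.4), 3.0, log(1.0)]
```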
Detection strategies for extreme mass ratio inspirals
NASA Astrophysics Data System (ADS)
Cornish, Neil J.
2011-05-01
The capture of compact stellar remnants by galactic black holes provides a unique laboratory for exploring the near-horizon geometry of the Kerr spacetime, or possible departures from general relativity if the central cores prove not to be black holes. The gravitational radiation produced by these extreme mass ratio inspirals (EMRIs) encodes a detailed map of the black hole geometry, and the detection and characterization of these signals is a major scientific goal for the LISA mission. The waveforms produced are very complex, and the signals need to be coherently tracked for tens of thousands of cycles to produce a detection, making EMRI signals one of the most challenging data analysis problems in all of gravitational wave astronomy. Estimates for the number of templates required to perform an exhaustive grid-based matched-filter search for these signals are astronomically large, and far out of reach of current computational resources. Here I describe an alternative approach that employs a hybrid between genetic algorithms and Markov chain Monte Carlo techniques, along with several time-saving techniques for computing the likelihood function. This approach has proven effective at the blind extraction of relatively weak EMRI signals from simulated LISA data sets.
NASA Astrophysics Data System (ADS)
Bin, Che; Ruoying, Yu; Dongsheng, Dang; Xiangyan, Wang
2017-05-01
Distributed generation (DG) integrated into the network causes harmonic pollution, which can damage electrical devices and affect the normal operation of the power system. Moreover, due to the randomness of wind and solar irradiation, the output of DG is also random, which leads to uncertainty in the harmonics generated by the DG. Thus, probabilistic methods are needed to analyse the impacts of DG integration. In this work we studied the probabilistic distribution of harmonic voltage and the harmonic distortion in a distribution network after integration of a distributed photovoltaic (DPV) system under different weather conditions, namely sunny, cloudy, rainy, and snowy days. The probabilistic distribution function of the DPV output power in the different typical weather conditions was acquired via maximum likelihood parameter estimation. The Monte-Carlo simulation method was adopted to calculate the probabilistic distribution of harmonic voltage content at different frequency orders as well as the total harmonic distortion (THD) in typical weather conditions. The case study was based on the IEEE33 system, and the resulting probabilistic distributions of harmonic voltage content and THD in typical weather conditions were compared.
NASA Astrophysics Data System (ADS)
Ramirez, A. L.; Foxall, W.
2011-12-01
Surface displacements caused by reservoir pressure perturbations resulting from CO2 injection can often be measured by geodetic methods such as InSAR, tilt and GPS. We have developed a Markov Chain Monte Carlo (MCMC) approach to invert surface displacements measured by InSAR to map the pressure distribution associated with CO2 injection at the In Salah Krechba field, Algeria. The MCMC inversion entails sampling the solution space by proposing a series of trial 3D pressure-plume models. In the case of In Salah, the range of allowable models is constrained by prior information provided by well and geophysical data for the reservoir and possible fluid pathways in the overburden, and by injection pressures and volumes. Each trial pressure distribution is run through a mathematical forward model to calculate a set of synthetic surface deformation data. The likelihood that a particular proposal represents the true source is determined from the fit of the calculated data to the InSAR measurements, and proposals having higher likelihoods are passed to the posterior distribution. This procedure is repeated over typically ~10^4-10^5 trials until the posterior distribution converges to a stable solution. The solution to each stochastic inversion is in the form of a Bayesian posterior probability density function (pdf) over the range of the alternative models that are consistent with the measured data and prior information. Therefore, the solution provides not only the highest-likelihood model but also a realistic estimate of the solution uncertainty. Our In Salah work considered three flow model alternatives: (1) the first model assumed that the CO2 saturation and fluid pressure changes were confined to the reservoir; (2) the second model allowed the perturbations to occur also in a damage zone inferred in the lower caprock from 3D seismic surveys; and (3) the third model allowed fluid pressure changes anywhere within the reservoir and overburden. Alternative (2) yielded optimal fits to the data in inversions of InSAR data collected in 2007. The results indicate that pressure changes developed near the injection well and then penetrated into the lower caprock along the postulated damage zone. As in many geophysical inverse problems, inversion of surface displacement data for subsurface sources of deformation is inherently uncertain and non-unique. We will also discuss the approach used to characterize solution uncertainty. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
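The accept/reject logic behind such an inversion is standard Metropolis sampling; a toy one-parameter version with a made-up linear forward model d = G q (the actual In Salah models are 3-D reservoir simulations) looks like:

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical linear forward model: displacement data d = G * q, where q is
# a scalar pressure-source strength to be inferred from noisy observations.
G = rng.random(20)
q_true, noise_sd = 2.0, 0.05
d_obs = G * q_true + rng.normal(0, noise_sd, 20)

def log_like(q):
    return -0.5 * np.sum(((d_obs - G * q) / noise_sd) ** 2)

q, chain = 0.0, []
ll = log_like(q)
for _ in range(20_000):
    q_prop = q + rng.normal(0, 0.1)            # random-walk proposal
    ll_prop = log_like(q_prop)
    if np.log(rng.random()) < ll_prop - ll:    # Metropolis accept/reject
        q, ll = q_prop, ll_prop
    chain.append(q)

post = np.array(chain[5000:])                  # discard burn-in
print(f"posterior mean={post.mean():.3f}, sd={post.std():.3f}")
```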
PAMLX: a graphical user interface for PAML.
Xu, Bo; Yang, Ziheng
2013-12-01
This note announces pamlX, a graphical user interface/front end for the paml (for Phylogenetic Analysis by Maximum Likelihood) program package (Yang Z. 1997. PAML: a program package for phylogenetic analysis by maximum likelihood. Comput Appl Biosci. 13:555-556; Yang Z. 2007. PAML 4: Phylogenetic analysis by maximum likelihood. Mol Biol Evol. 24:1586-1591). pamlX is written in C++ using the Qt library and communicates with paml programs through files. It can be used to create, edit, and print control files for paml programs and to launch paml runs. The interface is available for free download at http://abacus.gene.ucl.ac.uk/software/paml.html.
NASA Astrophysics Data System (ADS)
Rose, Michael Benjamin
A novel trajectory and attitude control and navigation analysis tool for powered ascent is developed. The tool is capable of rapid trade-space analysis and is designed to ultimately reduce turnaround time for launch vehicle design, mission planning, and redesign work. It is streamlined to quickly determine trajectory and attitude control dispersions, propellant dispersions, orbit insertion dispersions, and navigation errors and their sensitivities to sensor errors, actuator execution uncertainties, and random disturbances. The tool is developed by applying both Monte Carlo and linear covariance analysis techniques to a closed-loop, launch vehicle guidance, navigation, and control (GN&C) system. The nonlinear dynamics and flight GN&C software models of a closed-loop, six-degree-of-freedom (6-DOF), Monte Carlo simulation are formulated and developed. The nominal reference trajectory (NRT) for the proposed lunar ascent trajectory is defined and generated. The Monte Carlo truth models and GN&C algorithms are linearized about the NRT, the linear covariance equations are formulated, and the linear covariance simulation is developed. The performance of the launch vehicle GN&C system is evaluated using both Monte Carlo and linear covariance techniques and their trajectory and attitude control dispersion, propellant dispersion, orbit insertion dispersion, and navigation error results are validated and compared. Statistical results from linear covariance analysis are generally within 10% of Monte Carlo results, and in most cases the differences are less than 5%. This is an excellent result given the many complex nonlinearities that are embedded in the ascent GN&C problem. Moreover, the real value of this tool lies in its speed, where the linear covariance simulation is 1036.62 times faster than the Monte Carlo simulation. Although the application and results presented are for a lunar, single-stage-to-orbit (SSTO), ascent vehicle, the tools, techniques, and mathematical formulations that are discussed are applicable to ascent on Earth or other planets as well as other rocket-powered systems such as sounding rockets and ballistic missiles.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Host, Ole; Lahav, Ofer; Abdalla, Filipe B.
We present a showcase for deriving bounds on the neutrino masses from laboratory experiments and cosmological observations. We compare the frequentist and Bayesian bounds on the effective electron neutrino mass m_β which the KATRIN neutrino mass experiment is expected to obtain, using both an analytical likelihood function and Monte Carlo simulations of KATRIN. Assuming a uniform prior in m_β, we find that a null result yields an upper bound of about 0.17 eV at 90% confidence in the Bayesian analysis, to be compared with the frequentist KATRIN reference value of 0.20 eV. This is a significant difference when judged relative to the systematic and statistical uncertainties of the experiment. On the other hand, an input m_β = 0.35 eV, which is the KATRIN 5σ detection threshold, would be detected at virtually the same level. Finally, we combine the simulated KATRIN results with cosmological data in the form of present (post-WMAP) and future (simulated Planck) observations. If an input of m_β = 0.2 eV is assumed in our simulations, KATRIN alone excludes a zero neutrino mass at 2.2σ. Adding Planck data increases the probability of detection to a median 2.7σ. The analysis highlights the importance of combining cosmological and laboratory data on an equal footing.
Kotirum, Surachai; Chongmelaxme, Bunchai; Chaiyakunapruk, Nathorn
2017-02-01
To analyze the cost-utility of oral dabigatran etexilate, enoxaparin sodium injection, and no intervention for venous thromboembolism (VTE) prophylaxis after total hip or knee replacement (THR/TKR) surgery among Thai patients, a cost-utility analysis using a decision tree model was conducted from societal and healthcare payers' perspectives to simulate relevant costs and health outcomes over a 3-month time horizon. Costs were adjusted to year 2014. A willingness-to-pay threshold of THB 160,000 (USD 4926) was used. One-way sensitivity and probabilistic sensitivity analyses using a Monte Carlo simulation were performed. Compared with no VTE prophylaxis, dabigatran and enoxaparin after THR and TKR surgery incurred higher costs and increased quality-adjusted life years (QALYs). However, their incremental cost-effectiveness ratios were well above the willingness-to-pay threshold. Compared with enoxaparin, dabigatran for THR/TKR lowered VTE complications but increased bleeding cases; dabigatran was cost-saving, reducing costs [by THB 3809.96 (USD 117.30) for THR] and producing more QALYs [by 0.00013 for THR]. Dabigatran (vs. enoxaparin) had a 98% likelihood of being cost-effective. Dabigatran is cost-saving compared to enoxaparin for VTE prophylaxis after THR or TKR in the Thai context. However, both medications are not cost-effective compared to no thromboprophylaxis.
NASA Astrophysics Data System (ADS)
Protassov, R.; van Dyk, D.; Connors, A.; Kashyap, V.; Siemiginowska, A.
2000-12-01
We examine the x-ray spectrum of the afterglow of GRB 970508, analyzed for Fe line emission by Piro et al. (1999, ApJL, 514, L73). This is a difficult and extremely important measurement: the detection of x-ray afterglows from γ-ray bursts is at best a tricky business, relying on near-real-time satellite response to unpredictable events and a great deal of luck in catching a burst bright enough for a useful spectral analysis. Detecting a clear atomic (or cyclotron) line in the generally smooth and featureless afterglow (or burst) emission not only gives one of the few very specific keys to the physics local to the emission region, but also provides clues to, or confirmation of, its distance (via redshift). Unfortunately, neither the likelihood ratio test nor the related F-statistic commonly used to detect spectral lines adheres to its nominal chi-square or F distribution. Thus we begin by calibrating the F-statistic used in Piro et al. (1999, ApJL, 514, L73) via a simulation study. The simulation study relies on a completely specified source model, i.e., we do Monte Carlo simulations with all model parameters fixed (so-called "parametric bootstrapping"). Second, we employ the method of posterior predictive p-values to calibrate an LRT statistic while accounting for the uncertainty in the parameters of the source model. Our analysis reveals evidence for the Fe K line.
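A stripped-down version of that calibration idea, with a flat Poisson background and a fixed-shape line whose amplitude is profiled (the background is held at its null MLE for brevity; all numbers are hypothetical), can be sketched as:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(7)

bins = np.arange(32)
line = np.exp(-0.5 * ((bins - 16) / 1.5) ** 2)   # fixed line profile

def poisson_ll(mu, counts):
    # Poisson log-likelihood up to the log(k!) constant, which cancels in LRTs.
    return np.sum(counts * np.log(mu) - mu)

def lrt(counts):
    # Null: flat background (MLE = mean). Alternative: background plus a
    # non-negative line amplitude, maximized numerically.
    b = counts.mean()
    ll0 = poisson_ll(np.full_like(line, b), counts)
    res = minimize_scalar(lambda a: -poisson_ll(b + a * line, counts),
                          bounds=(0.0, 10 * b), method="bounded")
    return 2.0 * (-res.fun - ll0)

obs = rng.poisson(5.0 + 4.0 * line)              # hypothetical observed spectrum
t_obs = lrt(obs)

# Parametric bootstrap calibration: simulate spectra under the fitted null
# and build the reference distribution of the LRT statistic.
b_hat = obs.mean()
t_null = np.array([lrt(rng.poisson(b_hat, size=bins.size))
                   for _ in range(2000)])
print(f"LRT={t_obs:.1f}, calibrated p-value={np.mean(t_null >= t_obs):.4f}")
```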
MUSiC—An Automated Scan for Deviations between Data and Monte Carlo Simulation
NASA Astrophysics Data System (ADS)
Meyer, Arnd
2010-02-01
A model independent analysis approach is presented, systematically scanning the data for deviations from the standard model Monte Carlo expectation. Such an analysis can contribute to the understanding of the CMS detector and the tuning of event generators. The approach is sensitive to a variety of models of new physics, including those not yet thought of.
Robust Multipoint Water-Fat Separation Using Fat Likelihood Analysis
Yu, Huanzhou; Reeder, Scott B.; Shimakawa, Ann; McKenzie, Charles A.; Brittain, Jean H.
2016-01-01
Fat suppression is an essential part of routine MRI scanning. Multiecho chemical-shift based water-fat separation methods estimate and correct for B0 field inhomogeneity. However, they must contend with the intrinsic challenge of water-fat ambiguity that can result in water-fat swapping. This problem arises because the signals from two chemical species, when both are modeled as a single discrete spectral peak, may appear indistinguishable in the presence of B0 off-resonance. In conventional methods, the water-fat ambiguity is typically removed by enforcing field map smoothness using region-growing based algorithms. In reality, the fat spectrum has multiple spectral peaks. Using this spectral complexity, we introduce a novel concept that identifies water and fat for multiecho acquisitions by exploiting the spectral differences between water and fat. A fat likelihood map is produced to indicate if a pixel is likely to be water-dominant or fat-dominant by comparing the fitting residuals of two different signal models. The fat likelihood analysis and field map smoothness provide complementary information, and we designed an algorithm (Fat Likelihood Analysis for Multiecho Signals) to exploit both mechanisms. It is demonstrated in a wide variety of data that the Fat Likelihood Analysis for Multiecho Signals algorithm offers highly robust water-fat separation for 6-echo acquisitions, particularly in some previously challenging applications. PMID:21842498
Sharma, Subhash; Ott, Joseph; Williams, Jamone; Dickow, Danny
2011-01-01
Monte Carlo dose calculation algorithms have the potential for greater accuracy than traditional model-based algorithms. This enhanced accuracy is particularly evident in regions of lateral scatter disequilibrium, which can develop during treatments incorporating small field sizes and low-density tissue. A heterogeneous slab phantom was used to evaluate the accuracy of several commercially available dose calculation algorithms, including Monte Carlo dose calculation for CyberKnife, Analytical Anisotropic Algorithm and Pencil Beam convolution for the Eclipse planning system, and convolution-superposition for the Xio planning system. The phantom accommodated slabs of varying density; comparisons between planned and measured dose distributions were accomplished with radiochromic film. The Monte Carlo algorithm provided the most accurate comparison between planned and measured dose distributions. In each phantom irradiation, the Monte Carlo predictions resulted in gamma analysis comparisons >97%, using acceptance criteria of 3% dose and 3-mm distance to agreement. In general, the gamma analysis comparisons for the other algorithms were <95%. The Monte Carlo dose calculation algorithm for CyberKnife provides more accurate dose distribution calculations in regions of lateral electron disequilibrium than commercially available model-based algorithms. This is primarily because of the ability of Monte Carlo algorithms to implicitly account for tissue heterogeneities; density scaling functions and/or effective depth correction factors are not required.
A Study of Item Bias for Attitudinal Measurement Using Maximum Likelihood Factor Analysis.
ERIC Educational Resources Information Center
Mayberry, Paul W.
A technique for detecting item bias that is responsive to attitudinal measurement considerations is a maximum likelihood factor analysis procedure comparing multivariate factor structures across various subpopulations, often referred to as SIFASP. The SIFASP technique allows for factorial model comparisons in the testing of various hypotheses…
NASA Astrophysics Data System (ADS)
Shafii, M.; Tolson, B.; Matott, L. S.
2012-04-01
Hydrologic modeling has benefited from significant developments over the past two decades. This has resulted in higher levels of complexity being built into hydrologic models, which eventually makes the model evaluation process (parameter estimation via calibration and uncertainty analysis) more challenging. In order to avoid unreasonable parameter estimates, many researchers have suggested implementation of multi-criteria calibration schemes. Furthermore, for predictive hydrologic models to be useful, proper consideration of uncertainty is essential. Consequently, recent research has emphasized comprehensive model assessment procedures in which multi-criteria parameter estimation is combined with statistically based uncertainty analysis routines such as Bayesian inference using Markov Chain Monte Carlo (MCMC) sampling. Such a procedure relies on the use of formal likelihood functions based on statistical assumptions, and moreover, Bayesian inference structured on MCMC samplers requires a considerable number of simulations. Due to these issues, especially in complex non-linear hydrological models, a variety of alternative informal approaches have been proposed for uncertainty analysis in the multi-criteria context. This study aims at exploring a number of such informal uncertainty analysis techniques in multi-criteria calibration of hydrological models. The informal methods addressed in this study are (i) Pareto optimality, which quantifies the parameter uncertainty using the Pareto solutions, (ii) DDS-AU, which uses the weighted sum of objective functions to derive the prediction limits, and (iii) GLUE, which describes the total uncertainty through identification of behavioral solutions. The main objective is to compare such methods with MCMC-based Bayesian inference with respect to factors such as computational burden and predictive capacity, which are evaluated based on multiple comparative measures. The measures for comparison are calculated both for calibration and evaluation periods. The uncertainty analysis methodologies are applied to a simple 5-parameter rainfall-runoff model called HYMOD.
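Of the informal methods listed, GLUE is the simplest to sketch: sample parameter sets, keep those whose informal likelihood (here the Nash-Sutcliffe efficiency) exceeds a behavioral threshold, and build likelihood-weighted prediction limits. The toy model, threshold, and data below are all hypothetical, not the HYMOD setup:

```python
import numpy as np

rng = np.random.default_rng(8)

def model(theta, t):
    a, b = theta
    return a * np.exp(-b * t)          # toy two-parameter recession model

t = np.linspace(0, 10, 50)
obs = model((5.0, 0.3), t) + rng.normal(0, 0.2, t.size)

# Sample parameters from uniform priors; informal likelihood = NSE.
n = 20_000
thetas = np.column_stack([rng.uniform(1, 10, n), rng.uniform(0.05, 1.0, n)])
sims = np.array([model(th, t) for th in thetas])
nse = 1 - np.sum((sims - obs) ** 2, axis=1) / np.sum((obs - obs.mean()) ** 2)

behavioral = nse > 0.7                          # behavioral threshold (illustrative)
w = nse[behavioral] / nse[behavioral].sum()     # likelihood weights
sims_b = sims[behavioral]

def weighted_quantile(values, weights, q):
    order = np.argsort(values)
    cum = np.cumsum(weights[order])
    return values[order][np.searchsorted(cum, q)]

lo = [weighted_quantile(sims_b[:, j], w, 0.05) for j in range(t.size)]
hi = [weighted_quantile(sims_b[:, j], w, 0.95) for j in range(t.size)]
print(f"{behavioral.sum()} behavioral sets; 90% band at t=0: "
      f"[{lo[0]:.2f}, {hi[0]:.2f}]")
```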
NASA Astrophysics Data System (ADS)
Gaál, Ladislav; Szolgay, Ján; Bacigál, Tomáš; Kohnová, Silvia
2010-05-01
Copula-based estimation methods of hydro-climatological extremes have increasingly been gaining the attention of researchers and practitioners in the last couple of years. Unlike the traditional estimation methods, which are based on bivariate cumulative distribution functions (CDFs), copulas are a relatively flexible tool of statistics that allow for modelling dependencies between two or more variables such as flood peaks and flood volumes without making strict assumptions on the marginal distributions. The dependence structure and the reliability of the joint estimates of hydro-climatological extremes, mainly in the right tail of the joint CDF, depend not only on the particular copula adopted but also on the data available for the estimation of the marginal distributions of the individual variables. Generally, data samples for frequency modelling have limited temporal extent, which is a considerable drawback of frequency analyses in practice. Therefore, it is advisable to use statistical methods that improve any part of the process of copula construction and result in more reliable design values of hydrological variables. The scarcity of the data sample, mostly in the extreme tail of the joint CDF, can be bypassed, e.g., by using a considerably larger amount of simulated data from rainfall-runoff analysis or by including historical information on the variables under study. The latter approach of data extension is used here to make the quantile estimates of the individual marginals of the copula more reliable. In the presented paper it is proposed to use historical information in the frequency analysis of the marginal distributions in the framework of Bayesian Markov chain Monte Carlo (MCMC) simulations. Generally, a Bayesian approach allows for a straightforward combination of different sources of information on floods (e.g. flood data from systematic measurements and historical flood records, respectively) in terms of a product of the corresponding likelihood functions. On the other hand, the MCMC algorithm is a numerical approach for sampling from the likelihood distributions. The Bayesian MCMC methods therefore provide an attractive way to estimate the uncertainty in parameters and quantile metrics of frequency distributions. The applicability of the method is demonstrated in a case study of the hydroelectric power station Orlík on the Vltava River. This site has a key role in the flood prevention of Prague, the capital city of the Czech Republic. The record length of the available flood data is 126 years from the period 1877-2002, while the flood event observed in 2002 that caused extensive damage and numerous casualties is treated as a historic one. To estimate the joint probabilities of flood peaks and volumes, different copulas are fitted and their goodness-of-fit is evaluated by bootstrap simulations. Finally, selected quantiles of flood volumes conditioned on given flood peaks are derived and compared with those obtained by the traditional method used in the practice of water management specialists on the Vltava River.
NASA Astrophysics Data System (ADS)
Devendran, A. A.; Lakshmanan, G.
2014-11-01
Data quality for GIS processing and analysis is becoming an increasing concern due to the accelerated application of GIS technology in problem solving and decision making roles. Uncertainty in the geographic representation of the real world arises because these representations are incomplete. Identification of the sources of these uncertainties and the ways in which they operate in GIS-based representations becomes crucial in any spatial data representation and in geospatial analysis applied to any field. This paper reviews the literature on the various components of spatial data quality and the uncertainties inherent in them, with special focus on two fields of application: urban simulation and hydrological modelling. Urban growth is a complicated process involving the spatio-temporal changes of all socio-economic and physical components at different scales. The Cellular Automata (CA) model is one of the simulation models; it randomly selects potential cells for urbanisation, and transition rules evaluate the properties of the cell and its neighbours. Uncertainty arising from CA modelling is assessed mainly using sensitivity analysis, including the Monte Carlo simulation method. Likewise, the importance of hydrological uncertainty analysis has been emphasized in recent years, and there is an urgent need to incorporate uncertainty estimation into water resources assessment procedures. The Soil and Water Assessment Tool (SWAT) is a continuous-time watershed model to evaluate various impacts of land use management and climate on hydrology and water quality. Hydrological model uncertainties using the SWAT model are dealt with primarily by the Generalized Likelihood Uncertainty Estimation (GLUE) method.
Derenzo, Stephen E
2017-01-01
This paper demonstrates through Monte Carlo simulations that a practical positron emission tomograph with (1) deep scintillators for efficient detection, (2) double-ended readout for depth-of-interaction information, (3) fixed-level analog triggering, and (4) accurate calibration and timing data corrections can achieve a coincidence resolving time (CRT) that is not far above the statistical lower bound. One Monte Carlo algorithm simulates a calibration procedure that uses data from a positron point source. Annihilation events with an interaction near the entrance surface of one scintillator are selected, and data from the two photodetectors on the other scintillator provide depth-dependent timing corrections. Another Monte Carlo algorithm simulates normal operation using these corrections and determines the CRT. A third Monte Carlo algorithm determines the CRT statistical lower bound by generating a series of random interaction depths, and for each interaction a set of random photoelectron times for each of the two photodetectors. The most likely interaction times are determined by shifting the depth-dependent probability density function to maximize the joint likelihood for all the photoelectron times in each set. Example calculations are tabulated for different numbers of photoelectrons and photodetector time jitters for three 3 × 3 × 30 mm^3 scintillators: Lu2SiO5:Ce,Ca (LSO), LaBr3:Ce, and a hypothetical ultra-fast scintillator. To isolate the factors that depend on the scintillator length and the ability to estimate the DOI, CRT values are tabulated for perfect scintillator-photodetectors. For LSO with 4000 photoelectrons and single photoelectron time jitter of the photodetector J = 0.2 ns (FWHM), the CRT value using the statistically weighted average of corrected trigger times is 0.098 ns FWHM and the statistical lower bound is 0.091 ns FWHM. For LaBr3:Ce with 8000 photoelectrons and J = 0.2 ns FWHM, the CRT values are 0.070 and 0.063 ns FWHM, respectively. For the ultra-fast scintillator with 1 ns decay time, 4000 photoelectrons, and J = 0.2 ns FWHM, the CRT values are 0.021 and 0.017 ns FWHM, respectively. The examples also show that calibration and correction for depth-dependent variations in pulse height and in annihilation and optical photon transit times are necessary to achieve these CRT values. PMID:28327464
Drusano, G. L.; Preston, S. L.; Gotfried, M. H.; Danziger, L. H.; Rodvold, K. A.
2002-01-01
Levofloxacin was administered orally to steady state in volunteers randomized to doses of 500 and 750 mg. Plasma and epithelial lining fluid (ELF) samples were obtained at 4, 12, and 24 h after the final dose. All data were comodeled in a population pharmacokinetic analysis employing BigNPEM. Penetration was evaluated from the population mean parameter vector values and from the results of a 1,000-subject Monte Carlo simulation. Evaluation from the population mean values demonstrated a penetration ratio (ELF/plasma) of 1.16. The Monte Carlo simulation provided a measure of dispersion, demonstrating a mean ratio of 3.18, with a median of 1.43 and a 95% confidence interval of 0.14 to 19.1. Population analysis with Monte Carlo simulation provides the best and least-biased estimate of penetration. It also demonstrates clearly that we can expect differences in penetration between patients. This analysis did not deal with inflammation, as it was performed in volunteers. The influence of lung pathology on penetration needs to be examined. PMID:11796385
NASA Astrophysics Data System (ADS)
Alsing, Justin; Wandelt, Benjamin; Feeney, Stephen
2018-07-01
Many statistical models in cosmology can be simulated forwards but have intractable likelihood functions. Likelihood-free inference methods allow us to perform Bayesian inference from these models using only forward simulations, free from any likelihood assumptions or approximations. Likelihood-free inference generically involves simulating mock data and comparing to the observed data; this comparison in data space suffers from the curse of dimensionality and requires compression of the data to a small number of summary statistics to be tractable. In this paper, we first use massive asymptotically optimal data compression to reduce the dimensionality of the data space to just one number per parameter, providing a natural and optimal framework for summary statistic choice for likelihood-free inference. Second, we present the first cosmological application of Density Estimation Likelihood-Free Inference (DELFI), which learns a parametrized model for the joint distribution of data and parameters, yielding both the parameter posterior and the model evidence. This approach is conceptually simple, requires less tuning than traditional Approximate Bayesian Computation approaches to likelihood-free inference, and can give high-fidelity posteriors from orders of magnitude fewer forward simulations. As an additional bonus, it enables parameter inference and Bayesian model comparison simultaneously. We demonstrate DELFI with massive data compression on an analysis of the joint light-curve analysis (JLA) supernova data, as a simple validation case study. We show that high-fidelity posterior inference is possible for full-scale cosmological data analyses with as few as ~10^4 simulations, with substantial scope for further improvement, demonstrating the scalability of likelihood-free inference to large and complex cosmological data sets.
NASA Technical Reports Server (NTRS)
Hoffbeck, Joseph P.; Landgrebe, David A.
1994-01-01
Many analysis algorithms for high-dimensional remote sensing data require that the remotely sensed radiance spectra be transformed to approximate reflectance to allow comparison with a library of laboratory reflectance spectra. In maximum likelihood classification, however, the remotely sensed spectra are compared to training samples, thus a transformation to reflectance may or may not be helpful. The effect of several radiance-to-reflectance transformations on maximum likelihood classification accuracy is investigated in this paper. We show that the empirical line approach, LOWTRAN7, flat-field correction, single spectrum method, and internal average reflectance are all non-singular affine transformations, and that non-singular affine transformations have no effect on discriminant analysis feature extraction and maximum likelihood classification accuracy. (An affine transformation is a linear transformation with an optional offset.) Since the Atmosphere Removal Program (ATREM) and the log residue method are not affine transformations, experiments with Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data were conducted to determine the effect of these transformations on maximum likelihood classification accuracy. The average classification accuracy of the data transformed by ATREM and the log residue method was slightly less than the accuracy of the original radiance data. Since the radiance-to-reflectance transformations allow direct comparison of remotely sensed spectra with laboratory reflectance spectra, they can be quite useful in labeling the training samples required by maximum likelihood classification, but these transformations have only a slight effect or no effect at all on discriminant analysis and maximum likelihood classification accuracy.
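The invariance claim is easy to check numerically. The sketch below, a toy Gaussian two-class problem of our own rather than AVIRIS data, applies a random non-singular affine map to training and test data and confirms that Gaussian maximum likelihood class decisions are unchanged.

```python
# Numerical check: non-singular affine transforms leave ML decisions intact.
import numpy as np

rng = np.random.default_rng(2)
n, p = 200, 4
X1 = rng.normal(0, 1, (n, p))                 # class 1 "radiance" spectra
X2 = rng.normal(0.8, 1.2, (n, p))             # class 2
A = rng.normal(size=(p, p)) + 5 * np.eye(p)   # non-singular affine map
c = rng.normal(size=p)

def ml_labels(X1, X2, Xtest):
    stats = []
    for X in (X1, X2):
        m, S = X.mean(0), np.cov(X.T)
        stats.append((m, np.linalg.inv(S), np.linalg.slogdet(S)[1]))
    def ll(x, st):                            # Gaussian log-likelihood
        m, Sinv, logdet = st
        return -0.5 * (logdet + (x - m) @ Sinv @ (x - m))
    return np.array([0 if ll(x, stats[0]) > ll(x, stats[1]) else 1
                     for x in Xtest])

Xt = np.vstack([rng.normal(0, 1, (50, p)), rng.normal(0.8, 1.2, (50, p))])
lab_orig = ml_labels(X1, X2, Xt)
lab_aff = ml_labels(X1 @ A.T + c, X2 @ A.T + c, Xt @ A.T + c)
print((lab_orig == lab_aff).all())            # True: decisions are invariant
```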
Jiang, Shanghai
2017-01-01
X-ray fluorescence computed tomography (XFCT) based on a sheet beam can greatly reduce the time needed to obtain a whole set of projections at a synchrotron. However, synchrotron sources are clearly impractical for most biomedical research laboratories. In this paper, polychromatic X-ray fluorescence computed tomography with sheet-beam geometry is tested by Monte Carlo simulation. First, two phantoms (A and B) filled with PMMA are used to simulate the imaging process in GEANT4. Phantom A contains several GNP-loaded regions of the same size (10 mm in height and diameter) but different Au weight concentrations ranging from 0.3% to 1.8%. Phantom B contains twelve GNP-loaded regions with the same Au weight concentration (1.6%) but different diameters ranging from 1 mm to 9 mm. Second, a discretized representation of the imaging model is established to reconstruct more accurate XFCT images. Third, XFCT images of phantoms A and B are reconstructed by filtered back-projection (FBP) and maximum likelihood expectation maximization (MLEM), with and without correction, respectively. The contrast-to-noise ratio (CNR) is calculated to evaluate all the reconstructed images. Our results show that a sheet-beam XFCT system based on a polychromatic X-ray source is feasible and that the discretized imaging model can be used to reconstruct more accurate images. PMID:28567054
NASA Astrophysics Data System (ADS)
Houska, T.; Multsch, S.; Kraft, P.; Frede, H.-G.; Breuer, L.
2014-04-01
Computer simulations are widely used to support decision making and planning in the agriculture sector. On the one hand, many plant growth models use simplified hydrological processes and structures - for example, by the use of a small number of soil layers or by the application of simple water flow approaches. On the other hand, in many hydrological models plant growth processes are poorly represented. Hence, fully coupled models with a high degree of process representation would allow for a more detailed analysis of the dynamic behaviour of the soil-plant interface. We coupled two such highly process-oriented independent models and calibrated both models simultaneously. The catchment modelling framework (CMF) simulated soil hydrology based on the Richards equation and the van Genuchten-Mualem model of the soil hydraulic properties. CMF was coupled with the plant growth modelling framework (PMF), which predicts plant growth on the basis of radiation use efficiency, degree days, water shortage and dynamic root biomass allocation. The Monte Carlo-based generalized likelihood uncertainty estimation (GLUE) method was applied to parameterize the coupled model and to investigate the related uncertainty of model predictions. Overall, 19 model parameters (4 for CMF and 15 for PMF) were analysed through 2 × 10⁶ model runs randomly drawn from a uniform distribution. The model was applied to three sites with different management in Müncheberg (Germany) for the simulation of winter wheat (Triticum aestivum L.) in a cross-validation experiment. Field observations for model evaluation included soil water content and the dry matter of roots, storages, stems and leaves. The shape parameter of the retention curve n was highly constrained, whereas other parameters of the retention curve showed a large equifinality. We attribute this slightly poorer model performance to missing leaf senescence, which is currently not implemented in PMF. The most constrained parameters for the plant growth model were the radiation-use efficiency and the base temperature. Cross validation helped to identify deficits in the model structure, pointing out the need for including agricultural management options in the coupled model.
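A minimal GLUE sketch, with toy stand-ins of our own (a one-parameter exponential model in place of the coupled CMF-PMF system, a uniform prior, and Nash-Sutcliffe efficiency as the likelihood measure):

```python
# GLUE in miniature: uniform prior sampling, a likelihood-measure cutoff,
# and likelihood-weighted predictions and parameter bands.
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0, 10, 60)
model = lambda k: np.exp(-k * t)                  # toy "soil moisture" model
obs = model(0.35) + rng.normal(0, 0.02, t.size)   # synthetic observations

k_samples = rng.uniform(0.01, 1.0, 20000)         # Monte Carlo prior sampling
sims = np.exp(-np.outer(k_samples, t))
nse = 1 - ((sims - obs)**2).sum(1) / ((obs - obs.mean())**2).sum()

behavioural = nse > 0.8                           # likelihood-measure cutoff
w = nse[behavioural] / nse[behavioural].sum()     # GLUE weights
pred = w @ sims[behavioural]                      # likelihood-weighted prediction
lo, hi = np.percentile(k_samples[behavioural], [2.5, 97.5])
print(f"behavioural runs: {behavioural.sum()}, k 95% band: [{lo:.3f}, {hi:.3f}]")
```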
NASA Astrophysics Data System (ADS)
Bottaini, C.; Mirão, J.; Figuereido, M.; Candeias, A.; Brunetti, A.; Schiavon, N.
2015-01-01
Energy dispersive X-ray fluorescence (EDXRF) is a well-known technique for non-destructive and in situ analysis of archaeological artifacts, in terms of both qualitative and quantitative elemental composition, owing to its rapidity and non-destructiveness. In this study, EDXRF and realistic Monte Carlo simulation using the X-ray Monte Carlo (XRMC) code package have been combined to characterize a Cu-based bowl from the Iron Age burial of Fareleira 3 (Southern Portugal). The artifact displays a multilayered structure made up of three distinct layers: (a) an alloy substrate; (b) a green oxidized corrosion patina; and (c) a brownish carbonate soil-derived crust. To assess the reliability of Monte Carlo simulation in reproducing the composition of the bulk metal of the object without resorting to potentially damaging removal of the patina and crust, portable EDXRF analysis was performed on cleaned and patina/crust-coated areas of the artifact. The patina was characterized by micro X-ray diffractometry (μXRD) and back-scattered scanning electron microscopy with energy dispersive spectroscopy (BSEM + EDS). Results indicate that the EDXRF/Monte Carlo protocol is well suited when a two-layered model is considered, whereas in areas where the patina + crust surface coating is too thick, X-rays from the alloy substrate are not able to exit the sample.
Benzi, Michele; Evans, Thomas M.; Hamilton, Steven P.; ...
2017-03-05
Here, we consider hybrid deterministic-stochastic iterative algorithms for the solution of large, sparse linear systems. Starting from a convergent splitting of the coefficient matrix, we analyze various types of Monte Carlo acceleration schemes applied to the original preconditioned Richardson (stationary) iteration. We expect that these methods will have considerable potential for resiliency to faults when implemented on massively parallel machines. We also establish sufficient conditions for the convergence of the hybrid schemes, and we investigate different types of preconditioners including sparse approximate inverses. Numerical experiments on linear systems arising from the discretization of partial differential equations are presented.
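The stochastic building block behind such hybrid schemes can be illustrated with a forward random-walk Monte Carlo estimator for the fixed point x = Hx + f of a convergent splitting. The small Jacobi-preconditioned system below is our own toy example, not one of the paper's PDE test problems.

```python
# Unbiased random-walk estimator for x = Hx + f (Neumann-series sampling).
import numpy as np

rng = np.random.default_rng(12)
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
P = np.diag(np.diag(A))                     # Jacobi preconditioner
H = np.eye(3) - np.linalg.solve(P, A)       # iteration matrix, rho(H) < 1
f = np.linalg.solve(P, b)
r = np.abs(H).sum(1)                        # walk survival probabilities (< 1)

def mc_component(i, n_walks=20000):
    """Random-walk estimate of x_i: survival/choice probabilities cancel |H|."""
    total = 0.0
    for _ in range(n_walks):
        s, w, acc = i, 1.0, f[i]
        while rng.random() < r[s]:          # continue the walk w.p. r[s]
            nxt = rng.choice(3, p=np.abs(H[s]) / r[s])
            w *= np.sign(H[s, nxt])
            s = nxt
            acc += w * f[s]
        total += acc
    return total / n_walks

x_mc = np.array([mc_component(i) for i in range(3)])
print(x_mc, "vs exact", np.linalg.solve(A, b))
```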
Likelihood-Based Confidence Intervals in Exploratory Factor Analysis
ERIC Educational Resources Information Center
Oort, Frans J.
2011-01-01
In exploratory or unrestricted factor analysis, all factor loadings are free to be estimated. In oblique solutions, the correlations between common factors are free to be estimated as well. The purpose of this article is to show how likelihood-based confidence intervals can be obtained for rotated factor loadings and factor correlations, by…
Vocational Qualifications, Employment Status and Income: 2006 Census Analysis. Technical Paper
ERIC Educational Resources Information Center
Daly, Anne
2011-01-01
Two features of the labour market for vocationally qualified workers are explored in this technical paper: the likelihood of self-employment versus wage employment and the determinants of income. The analysis showed that demographic, occupational and local labour market characteristics all influence the likelihood of self-employment. Self-employed…
High-Dimensional Exploratory Item Factor Analysis by a Metropolis-Hastings Robbins-Monro Algorithm
ERIC Educational Resources Information Center
Cai, Li
2010-01-01
A Metropolis-Hastings Robbins-Monro (MH-RM) algorithm for high-dimensional maximum marginal likelihood exploratory item factor analysis is proposed. The sequence of estimates from the MH-RM algorithm converges with probability one to the maximum likelihood solution. Details on the computer implementation of this algorithm are provided. The…
NASA Astrophysics Data System (ADS)
Ren, Lixia; He, Li; Lu, Hongwei; Chen, Yizhong
2016-08-01
A new Monte Carlo-based interval transformation analysis (MCITA) is used in this study for multi-criteria decision analysis (MCDA) of naphthalene-contaminated groundwater management strategies. The analysis can be conducted when input data such as total cost, contaminant concentration, and health risk are represented as intervals. Compared to traditional MCDA methods, MCITA-MCDA has the advantages of (1) dealing with the inexactness of input data represented as intervals, (2) reducing computational time through the use of a Monte Carlo sampling method, and (3) identifying the most desirable management strategies under data uncertainty. A real-world case study is employed to demonstrate the performance of this method. A set of inexact management alternatives is considered in each duration on the basis of four criteria. Results indicated that the most desirable management strategy was action 15 for the 5-year, action 8 for the 10-year, action 12 for the 15-year, and action 2 for the 20-year management horizon.
Onuk, A. Emre; Akcakaya, Murat; Bardhan, Jaydeep P.; Erdogmus, Deniz; Brooks, Dana H.; Makowski, Lee
2015-01-01
In this paper, we describe a model for maximum likelihood estimation (MLE) of the relative abundances of different conformations of a protein in a heterogeneous mixture from small angle X-ray scattering (SAXS) intensities. To consider cases where the solution includes intermediate or unknown conformations, we develop a subset selection method based on k-means clustering and the Cramér-Rao bound on the mixture coefficient estimation error to find a sparse basis set that represents the space spanned by the measured SAXS intensities of the known conformations of a protein. Then, using the selected basis set and the assumptions on the model for the intensity measurements, we show that the MLE model can be expressed as a constrained convex optimization problem. Employing the adenylate kinase (ADK) protein and its known conformations as an example, and using Monte Carlo simulations, we demonstrate the performance of the proposed estimation scheme. Here, although we use 45 crystallographically determined experimental structures and we could generate many more using, for instance, molecular dynamics calculations, the clustering technique indicates that the data cannot support the determination of relative abundances for more than 5 conformations. The estimation of this maximum number of conformations is intrinsic to the methodology we have used here. PMID:26924916
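A minimal sketch of the constrained estimation step, with synthetic Gaussian profiles standing in for the basis SAXS intensities of known conformations: the abundances are found by maximizing a Gaussian likelihood subject to nonnegativity and sum-to-one constraints.

```python
# Constrained convex MLE of mixture weights (an SLSQP solve here).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
q = np.linspace(0.01, 0.5, 80)
basis = np.stack([np.exp(-(q * s)**2) for s in (5, 9, 14)])  # 3 "conformations"
w_true = np.array([0.6, 0.3, 0.1])
y = w_true @ basis + rng.normal(0, 0.005, q.size)            # mixture intensity

nll = lambda w: 0.5 * ((y - w @ basis)**2).sum()             # Gaussian NLL
res = minimize(nll, x0=np.full(3, 1 / 3), bounds=[(0, 1)] * 3,
               constraints={"type": "eq", "fun": lambda w: w.sum() - 1})
print(res.x.round(3))   # recovered relative abundances
```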
Minimization for conditional simulation: Relationship to optimal transport
NASA Astrophysics Data System (ADS)
Oliver, Dean S.
2014-05-01
In this paper, we consider the problem of generating independent samples from a conditional distribution when independent samples from the prior distribution are available. Although there are exact methods for sampling from the posterior (e.g. Markov chain Monte Carlo or acceptance/rejection), these methods tend to be computationally demanding when evaluation of the likelihood function is expensive, as it is for most geoscience applications. As an alternative, in this paper we discuss deterministic mappings of variables distributed according to the prior to variables distributed according to the posterior. Although any deterministic mappings might be equally useful, we will focus our discussion on a class of algorithms that obtain implicit mappings by minimization of a cost function that includes measures of data mismatch and model variable mismatch. Algorithms of this type include quasi-linear estimation, randomized maximum likelihood, perturbed observation ensemble Kalman filter, and ensemble of perturbed analyses (4D-Var). When the prior pdf is Gaussian and the observation operators are linear, we show that these minimization-based simulation methods solve an optimal transport problem with a nonstandard cost function. When the observation operators are nonlinear, however, the mapping of variables from the prior to the posterior obtained from those methods is only approximate. Errors arise from neglect of the Jacobian determinant of the transformation and from the possibility of discontinuous mappings.
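In the linear-Gaussian case the mapping is available in closed form, which makes randomized maximum likelihood (one of the minimization-based methods discussed) easy to sketch. The operator, covariances, and data below are arbitrary toy choices of ours.

```python
# RML sketch: each posterior sample minimizes a cost with perturbed data
# and a perturbed prior mean; in the linear-Gaussian case this samples the
# exact posterior.
import numpy as np

rng = np.random.default_rng(5)
G = np.array([[1.0, 0.5], [0.2, 1.5]])      # linear observation operator
R = 0.1 * np.eye(2)                         # data-error covariance
C = np.eye(2)                               # prior covariance
m_pr, d_obs = np.zeros(2), np.array([1.0, 0.5])

Ri, Ci = np.linalg.inv(R), np.linalg.inv(C)
Hess = G.T @ Ri @ G + Ci                    # Hessian of the quadratic cost

samples = []
for _ in range(5000):
    d_pert = d_obs + rng.multivariate_normal(np.zeros(2), R)
    m_pert = m_pr + rng.multivariate_normal(np.zeros(2), C)
    # argmin_m  |d_pert - G m|^2_{R^-1} + |m - m_pert|^2_{C^-1}
    samples.append(np.linalg.solve(Hess, G.T @ Ri @ d_pert + Ci @ m_pert))
samples = np.array(samples)
print(samples.mean(0))   # matches the exact Gaussian posterior mean
```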
Bayesian inference of nonlinear unsteady aerodynamics from aeroelastic limit cycle oscillations
NASA Astrophysics Data System (ADS)
Sandhu, Rimple; Poirel, Dominique; Pettit, Chris; Khalil, Mohammad; Sarkar, Abhijit
2016-07-01
A Bayesian model selection and parameter estimation algorithm is applied to investigate the influence of nonlinear and unsteady aerodynamic loads on the limit cycle oscillation (LCO) of a pitching airfoil in the transitional Reynolds number regime. At small angles of attack, laminar boundary layer trailing edge separation causes negative aerodynamic damping leading to the LCO. The fluid-structure interaction of the rigid, but elastically mounted, airfoil and nonlinear unsteady aerodynamics is represented by two coupled nonlinear stochastic ordinary differential equations containing uncertain parameters and model approximation errors. Several plausible aerodynamic models with increasing complexity are proposed to describe the aeroelastic system leading to LCO. The likelihood in the posterior parameter probability density function (pdf) is available semi-analytically using the extended Kalman filter for the state estimation of the coupled nonlinear structural and unsteady aerodynamic model. The posterior parameter pdf is sampled using a parallel and adaptive Markov Chain Monte Carlo (MCMC) algorithm. The posterior probability of each model is estimated using the Chib-Jeliazkov method that directly uses the posterior MCMC samples for evidence (marginal likelihood) computation. The Bayesian algorithm is validated through a numerical study and then applied to model the nonlinear unsteady aerodynamic loads using wind-tunnel test data at various Reynolds numbers.
A flexible spatial scan statistic with a restricted likelihood ratio for detecting disease clusters.
Tango, Toshiro; Takahashi, Kunihiko
2012-12-30
Spatial scan statistics are widely used tools for the detection of disease clusters. In particular, the circular spatial scan statistic proposed by Kulldorff (1997) has been utilized in a wide variety of epidemiological studies and in disease surveillance. However, as it cannot detect noncircular, irregularly shaped clusters, many authors have proposed different spatial scan statistics, including the elliptic version of Kulldorff's scan statistic. The flexible spatial scan statistic proposed by Tango and Takahashi (2005) has also been used for detecting irregularly shaped clusters. However, this method imposes a practical limit of at most 30 nearest neighbors when searching candidate clusters, because of its heavy computational load. In this paper, we show that a flexible spatial scan statistic implemented with the restricted likelihood ratio proposed by Tango (2008) can (1) eliminate the 30-nearest-neighbor limitation and (2) require surprisingly less computational time than the original flexible spatial scan statistic. As a side effect, Monte Carlo simulation shows that it detects clusters of any shape reasonably well as the relative risk of the cluster becomes large. We illustrate the proposed spatial scan statistic with data on mortality from cerebrovascular disease in the Tokyo Metropolitan area, Japan. Copyright © 2012 John Wiley & Sons, Ltd.
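A minimal circular (Kulldorff-type) Poisson scan sketch with a Monte Carlo p-value, using a toy map of our own; the paper's flexible, restricted-likelihood search over irregular shapes is not reproduced here.

```python
# Circular Poisson scan statistic with a Monte Carlo null distribution.
import numpy as np

rng = np.random.default_rng(6)
xy = rng.uniform(0, 1, (60, 2))                  # region centroids (toy map)
expect = np.full(60, 10.0)                       # expected counts per region
orders = [np.argsort(((xy - c)**2).sum(1)) for c in xy]

cases = rng.poisson(expect)
cases[orders[0][:5]] += 12                       # plant a cluster around region 0

def llr(o_in, e_in, O, E):
    """Poisson log-likelihood ratio of a candidate cluster (excess risk only)."""
    o_out, e_out = O - o_in, E - e_in
    if o_in * e_out <= e_in * o_out or o_out <= 0:
        return 0.0
    return o_in * np.log(o_in / e_in) + o_out * np.log(o_out / e_out)

def scan(cases):
    O, E = cases.sum(), expect.sum()
    return max(llr(cases[o[:k]].sum(), expect[o[:k]].sum(), O, E)
               for o in orders for k in range(1, 31))

t_obs = scan(cases)
null = [scan(rng.poisson(expect)) for _ in range(99)]   # Monte Carlo null
p = (1 + sum(t >= t_obs for t in null)) / 100
print(f"scan LLR = {t_obs:.2f}, Monte Carlo p = {p:.3f}")
```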
Slater, Graham J; Harmon, Luke J; Wegmann, Daniel; Joyce, Paul; Revell, Liam J; Alfaro, Michael E
2012-03-01
In recent years, a suite of methods has been developed to fit multiple rate models to phylogenetic comparative data. However, most methods have limited utility at broad phylogenetic scales because they typically require complete sampling of both the tree and the associated phenotypic data. Here, we develop and implement a new, tree-based method called MECCA (Modeling Evolution of Continuous Characters using ABC) that uses a hybrid likelihood/approximate Bayesian computation (ABC)-Markov-Chain Monte Carlo approach to simultaneously infer rates of diversification and trait evolution from incompletely sampled phylogenies and trait data. We demonstrate via simulation that MECCA has considerable power to choose among single versus multiple evolutionary rate models, and thus can be used to test hypotheses about changes in the rate of trait evolution across an incomplete tree of life. We finally apply MECCA to an empirical example of body size evolution in carnivores, and show that there is no evidence for an elevated rate of body size evolution in the pinnipeds relative to terrestrial carnivores. ABC approaches can provide a useful alternative set of tools for future macroevolutionary studies where likelihood-dependent approaches are lacking. © 2011 The Author(s). Evolution © 2011 The Society for the Study of Evolution.
Dimension-independent likelihood-informed MCMC
Cui, Tiangang; Law, Kody J. H.; Marzouk, Youssef M.
2015-10-08
Many Bayesian inference problems require exploring the posterior distribution of high-dimensional parameters that represent the discretization of an underlying function. Our work introduces a family of Markov chain Monte Carlo (MCMC) samplers that can adapt to the particular structure of a posterior distribution over functions. There are two distinct lines of research that intersect in the methods we develop here. First, we introduce a general class of operator-weighted proposal distributions that are well defined on function space, such that the performance of the resulting MCMC samplers is independent of the discretization of the function. Second, by exploiting local Hessian information and any associated low-dimensional structure in the change from prior to posterior distributions, we develop an inhomogeneous discretization scheme for the Langevin stochastic differential equation that yields operator-weighted proposals adapted to the non-Gaussian structure of the posterior. The resulting dimension-independent and likelihood-informed (DILI) MCMC samplers may be useful for a large class of high-dimensional problems where the target probability measure has a density with respect to a Gaussian reference measure. Finally, we use two nonlinear inverse problems in order to demonstrate the efficiency of these DILI samplers: an elliptic PDE coefficient inverse problem and path reconstruction in a conditioned diffusion.
Multiple model cardinalized probability hypothesis density filter
NASA Astrophysics Data System (ADS)
Georgescu, Ramona; Willett, Peter
2011-09-01
The Probability Hypothesis Density (PHD) filter propagates the first-moment approximation to the multi-target Bayesian posterior distribution while the Cardinalized PHD (CPHD) filter propagates both the posterior likelihood of (an unlabeled) target state and the posterior probability mass function of the number of targets. Extensions of the PHD filter to the multiple model (MM) framework have been published and were implemented either with a Sequential Monte Carlo or a Gaussian Mixture approach. In this work, we introduce the multiple model version of the more elaborate CPHD filter. We present the derivation of the prediction and update steps of the MMCPHD particularized for the case of two target motion models and proceed to show that in the case of a single model, the new MMCPHD equations reduce to the original CPHD equations.
GPS Spoofing Attack Characterization and Detection in Smart Grids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blum, Rick S.; Pradhan, Parth; Nagananda, Kyatsandra
The problem of global positioning system (GPS) spoofing attacks on smart grids endowed with phasor measurement units (PMUs) is addressed, taking into account the dynamical behavior of the states of the system. First, it is shown how GPS spoofing introduces a timing synchronization error in the phasor readings recorded by the PMUs and alters the measurement matrix of the dynamical model. Then, a generalized likelihood ratio-based hypotheses testing procedure is devised to detect changes in the measurement matrix when the system is subjected to a spoofing attack. Monte Carlo simulations are performed on the 9-bus, 3-machine test grid to demonstrate the implication of the spoofing attack on dynamic state estimation and to analyze the performance of the proposed hypotheses test.
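As a toy illustration of the detection step, the sketch below applies a generalized likelihood ratio test to a linear measurement model whose matrix changes under attack; the matrices, dimensions, and noise level are illustrative stand-ins of ours, not the paper's PMU grid model.

```python
# GLRT for a change in the measurement matrix of y = Hx + n, unknown x.
import numpy as np

rng = np.random.default_rng(7)
n_meas, n_state, sigma = 12, 4, 0.05
H0 = rng.normal(size=(n_meas, n_state))     # nominal measurement matrix
H1 = H0.copy()
H1[3] *= 0.7                                # spoofed timing alters one row

def glrt(y):
    """2 log GLR, with the unknown state maximized out by least squares."""
    rss = lambda H: np.sum((y - H @ np.linalg.lstsq(H, y, rcond=None)[0])**2)
    return (rss(H0) - rss(H1)) / sigma**2

x = rng.normal(size=n_state)
y_clean = H0 @ x + sigma * rng.normal(size=n_meas)
y_spoof = H1 @ x + sigma * rng.normal(size=n_meas)
print(glrt(y_clean), glrt(y_spoof))  # the spoofed case gives the larger value
```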
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kane, V.E.
1979-10-01
The standard maximum likelihood and moment estimation procedures are shown to have some undesirable characteristics for estimating the parameters in a three-parameter lognormal distribution. A class of goodness-of-fit estimators is found which provides a useful alternative to the standard methods. The class of goodness-of-fit tests considered include the Shapiro-Wilk and Shapiro-Francia tests which reduce to a weighted linear combination of the order statistics that can be maximized in estimation problems. The weighted-order statistic estimators are compared to the standard procedures in Monte Carlo simulations. Bias and robustness of the procedures are examined and example data sets analyzed including geochemical data from the National Uranium Resource Evaluation Program.
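One way to realize such a weighted-order-statistic estimator is to choose the lognormal threshold gamma that maximizes a Shapiro-Francia-type statistic, the squared correlation of ordered log(x - gamma) with Blom normal scores. The data below are synthetic, not the NURE geochemical sets.

```python
# Threshold estimation by maximizing a Shapiro-Francia-type W' statistic.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(8)
gamma_true = 5.0
x = gamma_true + rng.lognormal(mean=1.0, sigma=0.6, size=200)

n = x.size
m = norm.ppf((np.arange(1, n + 1) - 0.375) / (n + 0.25))  # Blom scores

def neg_w(gamma):
    y = np.sort(np.log(x - gamma))          # valid because gamma < min(x)
    r = np.corrcoef(m, y)[0, 1]
    return -r * r                           # negative Shapiro-Francia W'

res = minimize_scalar(neg_w, bounds=(0.0, x.min() - 1e-6), method="bounded")
print(f"gamma-hat = {res.x:.2f} (true {gamma_true})")
```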
On the short-term uncertainty in performance of a point absorber wave energy converter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coe, Ryan Geoffrey; Michelen, Carlos; Manuel, Lance
2016-03-01
Of interest, in this study, is the quantification of uncertainty in the performance of a two-body wave point absorber (Reference Model 3 or RM3), which serves as a wave energy converter (WEC). We demonstrate how simulation tools may be used to establish short-term relationships between any performance parameter of the WEC device and wave height in individual sea states. We demonstrate this methodology for two sea states. Efficient structural reliability methods, validated using more expensive Monte Carlo sampling, allow the estimation of uncertainty in performance of the device. Such methods, when combined with metocean data quantifying the likelihood of different sea states, can be useful in long-term studies and in reliability-based design.
Extending the Li&Ma method to include PSF information
NASA Astrophysics Data System (ADS)
Nievas-Rosillo, M.; Contreras, J. L.
2016-02-01
The so-called Li & Ma formula is still the most frequently used method for estimating the significance of observations carried out by Imaging Atmospheric Cherenkov Telescopes. In this work, a straightforward extension of the method for point sources that profits from the good imaging capabilities of current instruments is proposed. It is based on a likelihood ratio under the assumption of a well-known PSF and a smooth background. Its performance is tested with Monte Carlo simulations based on real observations, and its sensitivity is compared to standard methods which do not incorporate PSF information. The gain in significance that can be attributed to the inclusion of the PSF is around 10% and can be boosted if a background model is assumed or a finer binning is used.
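For reference, the baseline the extension builds on is the classical Li & Ma (1983, Eq. 17) significance for on/off counting observations; a direct sketch follows. The PSF likelihood term of the proposed method is not reproduced here.

```python
# Classical Li & Ma significance (Eq. 17 of Li & Ma 1983).
import numpy as np

def li_ma(n_on, n_off, alpha):
    """Significance of an on-source excess given off-source counts and the
    on/off exposure ratio alpha."""
    n_tot = n_on + n_off
    s2 = 2 * (n_on * np.log((1 + alpha) / alpha * n_on / n_tot)
              + n_off * np.log((1 + alpha) * n_off / n_tot))
    return np.sqrt(s2)

print(f"{li_ma(n_on=130, n_off=400, alpha=0.25):.2f} sigma")
```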
Salinero-Fort, Miguel Ángel; de Burgos-Lunar, Carmen; Mostaza Prieto, José; Lahoz Rallo, Carlos; Abánades-Herranz, Juan Carlos; Gómez-Campelo, Paloma; Laguna Cuesta, Fernando; Estirado De Cabo, Eva; García Iglesias, Francisca; González Alegre, Teresa; Fernández Puntero, Belén; Montesano Sánchez, Luis; Vicent López, David; Cornejo Del Río, Víctor; Fernández García, Pedro J; Sabín Rodríguez, Concesa; López López, Silvia; Patrón Barandío, Pedro
2015-01-01
Introduction The incidence of type 2 diabetes mellitus (T2DM) is increasing worldwide. When diagnosed, many patients already have organ damage or advanced subclinical atherosclerosis. An early diagnosis could allow the implementation of lifestyle changes and treatment options aimed at delaying the progression of the disease and avoiding cardiovascular complications. Different scores for identifying undiagnosed diabetes have been reported; however, their performance in populations of southern Europe has not been sufficiently evaluated. The main objectives of our study are: to evaluate the screening performance and cut-off points of the main scores that identify the risk of undiagnosed T2DM and prediabetes in a Spanish population, and to develop and validate our own predictive models of undiagnosed T2DM (screening model) and future T2DM (prediction risk model) after 5-year follow-up. As a secondary objective, we will evaluate the atherosclerotic burden of the population with undiagnosed T2DM. Methods and analysis Population-based prospective cohort study with baseline screening, to evaluate the performance of the FINDRISC, DANISH, DESIR, ARIC and QDScore against the gold-standard tests: fasting plasma glucose, oral glucose tolerance and/or HbA1c. The sample size will include 1352 participants between the ages of 45 and 74 years. Analysis: sensitivity, specificity, positive predictive value, negative predictive value, positive likelihood ratio, negative likelihood ratio, and receiver operating characteristic curves with area under the curve. Binary logistic regression for the first 700 individuals (derivation) and last 652 (validation) will be performed. All analyses will be calculated with their 95% CIs; statistical significance will be p<0.05. Ethics and dissemination The study protocol has been approved by the Research Ethics Committee of the Carlos III Hospital (Madrid). The score performance and predictive model will be presented in medical conferences, workshops, seminars and round table discussions. Furthermore, the predictive model will be published in a peer-reviewed medical journal to further increase the exposure of the scores. PMID:26220868
MODELING LEACHING OF VIRUSES BY THE MONTE CARLO METHOD
A predictive screening model was developed for fate and transport of viruses in the unsaturated zone. A database of input parameters allowed Monte Carlo analysis with the model. The resulting kernel densities of predicted attenuation during percolation indicated very ...
Application of the CO2-PENS risk analysis tool to the Rock Springs Uplift, Wyoming
Stauffer, P.H.; Pawar, R.J.; Surdam, R.C.; Jiao, Z.; Deng, H.; Lettelier, B.C.; Viswanathan, H.S.; Sanzo, D.L.; Keating, G.N.
2011-01-01
We describe preliminary application of the CO2-PENS performance and risk analysis tool to a planned geologic CO2 sequestration demonstration project in the Rock Springs Uplift (RSU), located in southwestern Wyoming. We use data from the RSU to populate CO2-PENS, an evolving system-level modeling tool developed at Los Alamos National Laboratory. This tool has been designed to generate performance and risk assessment calculations for the geologic sequestration of carbon dioxide. Our approach follows Systems Analysis logic and includes estimates of uncertainty in model parameters and Monte Carlo simulations that lead to probabilistic results. Probabilistic results provide decision makers with a range in the likelihood of different outcomes. Herein we present results from a newly implemented approach in CO2-PENS that captures site-specific spatially coherent details such as topography on the reservoir/cap-rock interface, changes in saturation and pressure during injection, and dip on overlying aquifers that may be impacted by leakage upward through wellbores and faults. We present simulations of CO2 injection under different uncertainty distributions for hypothetical leaking wells and faults. Although results are preliminary and to be used only for demonstration of the approach, future results of the risk analysis will form the basis for a discussion on methods to reduce uncertainty in the risk calculations. Additionally, we present ideas on using the model to help locate monitoring equipment to detect potential leaks. By maintaining site-specific details in the CO2-PENS analysis we provide a tool that allows more logical presentations to stakeholders in the region. © 2011 Published by Elsevier Ltd.
Xu, Wei-Wei; Hu, Shen-Jiang; Wu, Tao
2017-07-01
Antithrombotic therapy using new oral anticoagulants (NOACs) in patients with atrial fibrillation (AF) has been generally shown to have a favorable risk-benefit profile. Since there has been dispute about the risks of gastrointestinal bleeding (GIB) and intracranial hemorrhage (ICH), we sought to conduct a systematic review and network meta-analysis using Bayesian inference to analyze the risks of GIB and ICH in AF patients taking NOACs. We analyzed data from 20 randomized controlled trials of 91 671 AF patients receiving anticoagulants, antiplatelet drugs, or placebo. Bayesian network meta-analysis of two different evidence networks was performed using a binomial likelihood model, based on a network in which different agents (and doses) were treated as separate nodes. Odds ratios (ORs) and 95% confidence intervals (CIs) were modeled using Markov chain Monte Carlo methods. Indirect comparisons with the Bayesian model confirmed that aspirin+clopidogrel significantly increased the risk of GIB in AF patients compared to the placebo (OR 0.33, 95% CI 0.01-0.92). Warfarin was identified as greatly increasing the risk of ICH compared to edoxaban 30 mg (OR 3.42, 95% CI 1.22-7.24) and dabigatran 110 mg (OR 3.56, 95% CI 1.10-8.45). We further ranked the NOACs for the lowest risk of GIB (apixaban 5 mg) and ICH (apixaban 5 mg, dabigatran 110 mg, and edoxaban 30 mg). Bayesian network meta-analysis of treatment of non-valvular AF patients with anticoagulants suggested that NOACs do not increase risks of GIB and/or ICH, compared to each other.
NASA Astrophysics Data System (ADS)
Gardner, Robin P.; Xu, Libai
2009-10-01
The Center for Engineering Applications of Radioisotopes (CEAR) has been working for over a decade on the Monte Carlo library least-squares (MCLLS) approach for treating non-linear radiation analyzer problems including: (1) prompt gamma-ray neutron activation analysis (PGNAA) for bulk analysis, (2) energy-dispersive X-ray fluorescence (EDXRF) analyzers, and (3) carbon/oxygen tool analysis in oil well logging. This approach essentially consists of using Monte Carlo simulation to generate the libraries of all the elements to be analyzed plus any other required background libraries. These libraries are then used in the linear library least-squares (LLS) approach with unknown sample spectra to analyze for all elements in the sample. Iterations of this are used until the LLS values agree with the composition used to generate the libraries. The current status of the methods (and topics) necessary to implement the MCLLS approach is reported. This includes: (1) the Monte Carlo codes such as CEARXRF, CEARCPG, and CEARCO for forward generation of the necessary elemental library spectra for the LLS calculation for X-ray fluorescence, neutron capture prompt gamma-ray analyzers, and carbon/oxygen tools; (2) the correction of spectral pulse pile-up (PPU) distortion by Monte Carlo simulation with the code CEARIPPU; (3) generation of detector response functions (DRF) for detectors with linear and non-linear responses for Monte Carlo simulation of pulse-height spectra; and (4) the use of the differential operator (DO) technique to make the necessary iterations for non-linear responses practical. In addition to commonly analyzed single spectra, coincidence spectra or even two-dimensional (2-D) coincidence spectra can also be used in the MCLLS approach and may provide more accurate results.
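The linear library least-squares (LLS) step at the core of MCLLS can be sketched as a nonnegative least-squares fit of an unknown spectrum to library spectra; the Gaussian peaks below are synthetic stand-ins for Monte Carlo-generated CEARXRF/CEARCPG output.

```python
# Library least-squares fit of a pulse-height spectrum via NNLS.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(9)
ch = np.arange(512)                               # pulse-height channels
peak = lambda c, w: np.exp(-0.5 * ((ch - c) / w)**2)
libs = np.stack([peak(100, 8), peak(220, 10), peak(350, 12)])  # 3 "elements"

a_true = np.array([50.0, 20.0, 35.0])
spectrum = rng.poisson(a_true @ libs + 2.0)       # counts + flat background

A = np.vstack([libs, np.ones_like(ch)]).T         # add a background column
coef, _ = nnls(A, spectrum.astype(float))
print(coef[:3].round(1))                          # recovered library amounts
```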
ANALYSIS OF METHODS FOR DETECTING THE PROXIMITY EFFECT IN QUASAR SPECTRA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dall'Aglio, Aldo; Gnedin, Nickolay Y., E-mail: adaglio@aip.d
Using numerical simulations of structure formation, we investigate several methods for determining the strength of the proximity effect in the H I Lyα forest. We analyze three high-resolution (≈10 kpc) redshift snapshots (z̄ = 4, 3, and 2.25) of a Hydro-Particle-Mesh simulation to obtain realistic absorption spectra of the H I Lyα forest. We model the proximity effect along the simulated sight lines with a simple analytical prescription based on the assumed quasar luminosity and the intensity of the cosmic UV background (UVB). We begin our analysis investigating the intrinsic biases thought to arise in the widely adopted standard technique of combining multiple lines of sight when searching for the proximity effect. We confirm the existence of these biases, albeit smaller than previously predicted with simple Monte Carlo simulations. We then concentrate on the analysis of the proximity effect along individual lines of sight. After determining its strength with a fiducial value of the UVB intensity, we construct the proximity effect strength distribution (PESD). We confirm that the PESD inferred from the simple averaging technique accurately recovers the input strength of the proximity effect at all redshifts. Moreover, the PESD closely follows the behaviors found in observed samples of quasar spectra. However, the PESD obtained from our new simulated sight lines presents some differences to that of simple Monte Carlo simulations. At all redshifts, we find a smaller dispersion of the strength parameters, the source of the corresponding smaller biases found when combining multiple lines of sight. After developing three new theoretical methods for recovering the strength of the proximity effect on individual lines of sight, we compare their accuracy to the PESD from the simple averaging technique. All our new approaches are based on the maximization of the likelihood function, albeit invoking some modifications. The new techniques presented here, in spite of their complexity, fail to recover the input proximity effect in an unbiased way, presumably due to some (unknown) higher order correlations in the spectrum. Thus, employing complex three-dimensional simulations, we provide strong evidence in favor of the PESD obtained from the simple averaging technique, as a method of estimating the UVB intensity, free of any intrinsic biases.
Mi, Jian; Takahashi, Yasutake
2016-01-01
Radio frequency identification (RFID) technology has already been explored for efficient self-localization of indoor mobile robots. A mobile robot equipped with RFID readers detects passive RFID tags installed on the floor in order to locate itself. The Monte-Carlo localization (MCL) method enables the localization of a mobile robot equipped with an RFID system with reasonable accuracy, sufficient robustness and low computational cost. The arrangements of RFID readers and tags and the size of antennas are important design parameters for realizing accurate and robust self-localization using a low-cost RFID system. The design of a likelihood model of RFID tag detection is also crucial for the accurate self-localization. This paper presents a novel design and arrangement of RFID readers and tags for indoor mobile robot self-localization. First, by considering small-sized and large-sized antennas of an RFID reader, we show how the design of the likelihood model affects the accuracy of self-localization. We also design a novel likelihood model by taking into consideration the characteristics of the communication range of an RFID system with a large antenna. Second, we propose a novel arrangement of RFID tags with eight RFID readers, which results in the RFID system configuration requiring much fewer readers and tags while retaining reasonable accuracy of self-localization. We verify the performances of MCL-based self-localization realized using the high-frequency (HF)-band RFID system with eight RFID readers and a lower density of RFID tags installed on the floor based on MCL in simulated and real environments. The results of simulations and real environment experiments demonstrate that our proposed low-cost HF-band RFID system realizes accurate and robust self-localization of an indoor mobile robot. PMID:27483279
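A minimal MCL sketch under toy assumptions of our own: a square floor grid of tags, a circular read range, and a simple agreement-based detection likelihood in place of the paper's HF-band antenna model.

```python
# Monte-Carlo localization with an RFID tag-detection likelihood.
import numpy as np

rng = np.random.default_rng(10)
tags = np.array([(i, j) for i in range(10) for j in range(10)], float) * 0.3
r_ant = 0.25                                  # antenna read range, m (assumed)

def detected(pos):
    return np.linalg.norm(tags - pos, axis=1) < r_ant

def likelihood(p, obs):
    """Score a particle pose by agreement of predicted and observed tag sets."""
    return np.exp((detected(p) == obs).sum() - obs.size)

true_pos = np.array([1.37, 0.94])
particles = rng.uniform(0, 3, (3000, 2))
for _ in range(5):                            # repeated measurement updates
    obs = detected(true_pos)
    w = np.array([likelihood(p, obs) for p in particles])
    w /= w.sum()
    idx = rng.choice(len(particles), len(particles), p=w)   # resample
    particles = particles[idx] + rng.normal(0, 0.02, particles.shape)
print(particles.mean(0), "vs true", true_pos)
```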
Lee, Jinseok; Chon, Ki H
2010-09-01
We present particle filtering (PF) algorithms for an accurate respiratory rate extraction from pulse oximeter recordings over a broad range: 12-90 breaths/min. These methods are based on an autoregressive (AR) model, where the aim is to find the pole angle with the highest magnitude as it corresponds to the respiratory rate. However, when SNR is low, the pole angle with the highest magnitude may not always lead to accurate estimation of the respiratory rate. To circumvent this limitation, we propose a probabilistic approach, using a sequential Monte Carlo method, named PF, which is combined with the optimal parameter search (OPS) criterion for an accurate AR model-based respiratory rate extraction. The PF technique has been widely adopted in many tracking applications, especially for nonlinear and/or non-Gaussian problems. We examine the performances of five different likelihood functions of the PF algorithm: the strongest neighbor, nearest neighbor (NN), weighted nearest neighbor (WNN), probability data association (PDA), and weighted probability data association (WPDA). The performance of these five combined OPS-PF algorithms was measured against a solely OPS-based AR algorithm for respiratory rate extraction from pulse oximeter recordings. The pulse oximeter data were collected from 33 healthy subjects with breathing rates ranging from 12 to 90 breaths/min. It was found that significant improvement in accuracy can be achieved by employing particle filters, and that the combined OPS-PF employing either the NN or WNN likelihood function achieved the best results for all respiratory rates considered in this paper. The main advantage of the combined OPS-PF with either the NN or WNN likelihood function is that for the first time, respiratory rates as high as 90 breaths/min can be accurately extracted from pulse oximeter recordings.
Inferring epidemiological parameters from phylogenies using regression-ABC: A comparative study
Gascuel, Olivier
2017-01-01
Inferring epidemiological parameters such as the R0 from time-scaled phylogenies is a timely challenge. Most current approaches rely on likelihood functions, which raise specific issues that range from computing these functions to finding their maxima numerically. Here, we present a new regression-based Approximate Bayesian Computation (ABC) approach, which we base on a large variety of summary statistics intended to capture the information contained in the phylogeny and its corresponding lineage-through-time plot. The regression step involves the Least Absolute Shrinkage and Selection Operator (LASSO) method, which is a robust machine learning technique. It allows us to readily deal with the large number of summary statistics, while avoiding resorting to Markov Chain Monte Carlo (MCMC) techniques. To compare our approach to existing ones, we simulated target trees under a variety of epidemiological models and settings, and inferred parameters of interest using the same priors. We found that, for large phylogenies, the accuracy of our regression-ABC is comparable to that of likelihood-based approaches involving birth-death processes implemented in BEAST2. Our approach even outperformed these when inferring the host population size with a Susceptible-Infected-Removed epidemiological model. It also clearly outperformed a recent kernel-ABC approach when assuming a Susceptible-Infected epidemiological model with two host types. Lastly, by re-analyzing data from the early stages of the recent Ebola epidemic in Sierra Leone, we showed that regression-ABC provides more realistic estimates for the duration parameters (latency and infectiousness) than the likelihood-based method. Overall, ABC based on a large variety of summary statistics and a regression method able to perform variable selection and avoid overfitting is a promising approach to analyze large phylogenies. PMID:28263987
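A minimal regression-ABC sketch in the same spirit: simulate from the prior, compute summary statistics, and fit a variable-selecting (LASSO) regression from summaries to parameters. The toy epidemic simulator and summaries below are our own, far simpler than phylogeny and lineage-through-time statistics.

```python
# Regression-ABC: prior simulation + LASSO regression from summaries to R0.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(11)

def simulate(r0, n=300):
    """Toy epidemic: cumulative Poisson offspring counts, then 4 summaries."""
    sizes = rng.poisson(r0, n).cumsum()
    return np.array([sizes.mean(), sizes.std(), sizes[-1], sizes[n // 2]])

r0_prior = rng.uniform(0.5, 4.0, 5000)             # prior draws
S = np.array([simulate(r) for r in r0_prior])      # summary-statistic table

reg = Lasso(alpha=0.01).fit(S, r0_prior)           # regression step of ABC
s_obs = simulate(2.2)                              # "observed" summaries
print(f"R0 estimate: {reg.predict(s_obs[None])[0]:.2f}")
```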
DOE Office of Scientific and Technical Information (OSTI.GOV)
Piao, J; PLA 302 Hospital, Beijing; Xu, S
2016-06-15
Purpose: This study uses Monte Carlo simulation of the CyberKnife system and intends to develop a third-party tool to evaluate the dose verification of specific patient plans in the TPS. Methods: By simulating the treatment head using the BEAMnrc and DOSXYZnrc software, the calculated and measured data were compared to determine the beam parameters. The dose distributions calculated by the Ray-tracing and Monte Carlo algorithms of the TPS (Multiplan Ver4.0.2) and by the in-house Monte Carlo simulation method were analyzed for 30 patient plans (10 head, 10 lung, and 10 liver cases). The γ analysis with the combined 3 mm/3% criteria was introduced to quantitatively evaluate the difference in accuracy between the three algorithms. Results: More than 90% of the global error points were less than 2% in the comparison of the PDD and OAR curves after determining the mean energy and FWHM. A relatively ideal Monte Carlo beam model was thus established. Based on the quantitative evaluation of dose accuracy for the three algorithms, the γ analysis shows that the passing rates of the PTV in the 30 plans between the Monte Carlo simulation and the TPS Monte Carlo algorithm were good (84.88±9.67% for head, 98.83±1.05% for liver, 98.26±1.87% for lung). The passing rates of the PTV in head and liver plans between the Monte Carlo simulation and the TPS Ray-tracing algorithm were also good (95.93±3.12% and 99.84±0.33%, respectively). However, the difference in DVHs for lung plans between the Monte Carlo simulation and the Ray-tracing algorithm was obvious, and the γ passing rate (51.263±38.964%) was not good. It is feasible to use Monte Carlo simulation for verifying the dose distribution of patient plans. Conclusion: The Monte Carlo simulation algorithm developed for the CyberKnife system in this study can be used as a third-party reference tool, playing an important role in dose verification of patient plans. This work was supported in part by a grant from the Chinese Natural Science Foundation (Grant No. 11275105). Thanks for the support from Accuray Corp.
THESEUS: maximum likelihood superpositioning and analysis of macromolecular structures.
Theobald, Douglas L; Wuttke, Deborah S
2006-09-01
THESEUS is a command line program for performing maximum likelihood (ML) superpositions and analysis of macromolecular structures. While conventional superpositioning methods use ordinary least-squares (LS) as the optimization criterion, ML superpositions provide substantially improved accuracy by down-weighting variable structural regions and by correcting for correlations among atoms. ML superpositioning is robust and insensitive to the specific atoms included in the analysis, and thus it does not require subjective pruning of selected variable atomic coordinates. Output includes both likelihood-based and frequentist statistics for accurate evaluation of the adequacy of a superposition and for reliable analysis of structural similarities and differences. THESEUS performs principal components analysis for analyzing the complex correlations found among atoms within a structural ensemble. ANSI C source code and selected binaries for various computing platforms are available under the GNU open source license from http://monkshood.colorado.edu/theseus/ or http://www.theseus3d.org.
On the Likelihood Ratio Test for the Number of Factors in Exploratory Factor Analysis
ERIC Educational Resources Information Center
Hayashi, Kentaro; Bentler, Peter M.; Yuan, Ke-Hai
2007-01-01
In the exploratory factor analysis, when the number of factors exceeds the true number of factors, the likelihood ratio test statistic no longer follows the chi-square distribution due to a problem of rank deficiency and nonidentifiability of model parameters. As a result, decisions regarding the number of factors may be incorrect. Several…
Uncertainty Optimization Applied to the Monte Carlo Analysis of Planetary Entry Trajectories
NASA Technical Reports Server (NTRS)
Olds, John; Way, David
2001-01-01
Recently, strong evidence of liquid water under the surface of Mars and a meteorite that might contain ancient microbes have renewed interest in Mars exploration. With this renewed interest, NASA plans to send spacecraft to Mars approx. every 26 months. These future spacecraft will return higher-resolution images, make precision landings, engage in longer-ranging surface maneuvers, and even return Martian soil and rock samples to Earth. Future robotic missions and any human missions to Mars will require precise entries to ensure safe landings near science objectives and pre-deployed assets. Potential sources of water and other interesting geographic features are often located near hazards, such as within craters or along canyon walls. In order for more accurate landings to be made, spacecraft entering the Martian atmosphere need to use lift to actively control the entry. This active guidance results in much smaller landing footprints. Planning for these missions will depend heavily on Monte Carlo analysis. Monte Carlo trajectory simulations have been used with a high degree of success in recent planetary exploration missions. These analyses ascertain the impact of off-nominal conditions during a flight and account for uncertainty. Uncertainties generally stem from limitations in manufacturing tolerances, measurement capabilities, analysis accuracies, and environmental unknowns. Thousands of off-nominal trajectories are simulated by randomly dispersing uncertainty variables and collecting statistics on forecast variables. The dependability of Monte Carlo forecasts, however, is limited by the accuracy and completeness of the assumed uncertainties. This is because Monte Carlo analysis is a forward-driven problem, beginning with the input uncertainties and proceeding to the forecast outputs. It lacks a mechanism to affect or alter the uncertainties based on the forecast results. If the results are unacceptable, the current practice is to use an iterative, trial-and-error approach to reconcile discrepancies. Therefore, an improvement to the Monte Carlo analysis is needed that will allow the problem to be worked in reverse. In this way, the largest allowable dispersions that achieve the required mission objectives can be determined quantitatively.
NASA Astrophysics Data System (ADS)
Allaf, M. Athari; Shahriari, M.; Sohrabpour, M.
2004-04-01
A new method using Monte Carlo source simulation of interference reactions in neutron activation analysis experiments has been developed. The neutron spectrum at the sample location has been simulated using the Monte Carlo code MCNP, and the contributions of different elements to a specified gamma line have been determined. The resulting response matrix has been used, together with measured peak areas, to determine the sample masses of the elements of interest. A number of benchmark experiments have been performed and the calculated results verified against known values. The good agreement obtained between the calculated and known values suggests that this technique may be useful for the elimination of interference reactions in neutron activation analysis.
Monte Carlo Analysis as a Trajectory Design Driver for the TESS Mission
NASA Technical Reports Server (NTRS)
Nickel, Craig; Lebois, Ryan; Lutz, Stephen; Dichmann, Donald; Parker, Joel
2016-01-01
The Transiting Exoplanet Survey Satellite (TESS) will be injected into a highly eccentric Earth orbit and fly 3.5 phasing loops followed by a lunar flyby to enter a mission orbit with lunar 2:1 resonance. Through the phasing loops and mission orbit, the trajectory is significantly affected by lunar and solar gravity. We have developed a trajectory design to achieve the mission orbit and meet mission constraints, including eclipse avoidance and a 30-year geostationary orbit avoidance requirement. A parallelized Monte Carlo simulation was performed to validate the trajectory after injecting common perturbations, including launch dispersions, orbit determination errors, and maneuver execution errors. The Monte Carlo analysis helped identify mission risks and is used in the trajectory selection process.
Systematic effects of foreground removal in 21-cm surveys of reionization
NASA Astrophysics Data System (ADS)
Petrovic, Nada; Oh, S. Peng
2011-05-01
21-cm observations have the potential to revolutionize our understanding of the high-redshift Universe. Whilst extremely bright radio continuum foregrounds exist at these frequencies, their spectral smoothness can be exploited to allow efficient foreground subtraction. It is well known that - regardless of other instrumental effects - this removes power on scales comparable to the survey bandwidth. We investigate associated systematic biases. We show that removing line-of-sight fluctuations on large scales aliases into suppression of the 3D power spectrum across a broad range of scales. This bias can be dealt with by correctly marginalizing over small wavenumbers in the 1D power spectrum; however, the unbiased estimator will have unavoidably larger variance. We also show that Gaussian realizations of the power spectrum permit accurate and extremely rapid Monte Carlo simulations for error analysis; repeated realizations of the fully non-Gaussian field are unnecessary. We perform Monte Carlo maximum likelihood simulations of foreground removal which yield unbiased, minimum variance estimates of the power spectrum in agreement with Fisher matrix estimates. Foreground removal also distorts the 21-cm probability distribution function (PDF), reducing the contrast between neutral and ionized regions, with potentially serious consequences for efforts to extract information from the PDF. We show that it is the subtraction of large-scale modes which is responsible for this distortion, and that it is less severe in the earlier stages of reionization. It can be reduced by using larger bandwidths. In the late stages of reionization, identification of the largest ionized regions (which consist of foreground emission only) provides calibration points which potentially allow recovery of large-scale modes. Finally, we also show that (i) the broad frequency response of synchrotron and free-free emission will smear out any features in the electron momentum distribution and ensure spectrally smooth foregrounds and (ii) extragalactic radio recombination lines should be negligible foregrounds.
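The aliasing of line-of-sight foreground removal into broad-band power suppression can be reproduced in a toy numpy experiment of our own construction: subtracting a smooth per-sight-line polynomial fit removes the largest-scale signal modes while leaving small scales nearly unbiased.

```python
# Toy demonstration of foreground-subtraction bias in the 1D power spectrum.
import numpy as np

rng = np.random.default_rng(13)
n, n_los = 256, 400
k = np.fft.rfftfreq(n)
mk = lambda: np.fft.irfft((rng.normal(size=k.size)
                           + 1j * rng.normal(size=k.size)) / (1 + 50 * k), n)
signal = np.array([mk() for _ in range(n_los)])   # red-spectrum toy 21-cm field
x = np.linspace(-1, 1, n)
fg = 100 * (1 + 0.5 * x)                          # bright, spectrally smooth

cleaned = np.array([d - np.polynomial.polynomial.polyval(
    x, np.polynomial.polynomial.polyfit(x, d, 2)) for d in signal + fg])

P_true = (np.abs(np.fft.rfft(signal, axis=1))**2).mean(0)
P_clean = (np.abs(np.fft.rfft(cleaned, axis=1))**2).mean(0)
print("large scales:", (P_clean[1:4] / P_true[1:4]).round(2))      # suppressed
print("small scales:", (P_clean[40:43] / P_true[40:43]).round(2))  # ~1
```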
NASA Astrophysics Data System (ADS)
Sboev, A. G.; Ilyashenko, A. S.; Vetrova, O. A.
1997-02-01
The method of buckling evaluation, implemented in the Monte Carlo code MCS, is described. This method was applied to the analysis of the well-known light water experiments TRX-1 and TRX-2. This comparison shows that the Monte Carlo results obtained in different ways do not coincide: the MCS calculations with given experimental bucklings; the MCS calculations with bucklings evaluated from full-core MCS direct simulations; the full-core MCNP and MCS direct simulations; and the MCNP and MCS calculations in which the cell-calculation results are corrected by coefficients taking into account the leakage from the core. The buckling values evaluated by full-core MCS calculations also differed from the experimental ones, especially in the case of TRX-1, where the difference corresponded to a 0.5 percent increase in the Keff value.
Computer program uses Monte Carlo techniques for statistical system performance analysis
NASA Technical Reports Server (NTRS)
Wohl, D. P.
1967-01-01
Computer program with Monte Carlo sampling techniques determines the effect of a component part of a unit upon the overall system performance. It utilizes the full statistics of the disturbances and misalignments of each component to provide unbiased results through simulated random sampling.
Empirical Likelihood in Nonignorable Covariate-Missing Data Problems.
Xie, Yanmei; Zhang, Biao
2017-04-20
Missing covariate data occur often in regression analysis, arising frequently in the health and social sciences as well as in survey sampling. We study methods for the analysis of a nonignorable covariate-missing data problem in an assumed conditional mean function when some covariates are completely observed but others are missing for some subjects. We adopt the semiparametric perspective of Bartlett et al. (Improving upon the efficiency of complete case analysis when covariates are MNAR. Biostatistics 2014;15:719-30) on regression analyses with nonignorable missing covariates, in which two working models are introduced: a working probability model of missingness and a working conditional score model. In this paper, we study an empirical likelihood approach to nonignorable covariate-missing data problems with the objective of effectively utilizing the two working models in the analysis of covariate-missing data. We propose a unified approach to constructing a system of unbiased estimating equations, in which there are more equations than unknown parameters of interest. One useful feature of these unbiased estimating equations is that they naturally incorporate the incomplete data into the data analysis, making it possible to seek efficient estimation of the parameter of interest even when the working regression function is not specified as the optimal regression function. We apply the general methodology of empirical likelihood to optimally combine these unbiased estimating equations. We propose three maximum empirical likelihood estimators of the underlying regression parameters and compare their efficiencies with those of existing competitors. We present a simulation study comparing the finite-sample performance of various methods with respect to bias, efficiency, and robustness to model misspecification. The proposed empirical likelihood method is also illustrated by an analysis of a data set from the US National Health and Nutrition Examination Survey (NHANES).
Quasi-Monte Carlo Methods Applied to Tau-Leaping in Stochastic Biological Systems.
Beentjes, Casper H L; Baker, Ruth E
2018-05-25
Quasi-Monte Carlo methods have proven to be effective extensions of traditional Monte Carlo methods in, amongst others, problems of quadrature and the sample path simulation of stochastic differential equations. By replacing the random number input stream in a simulation procedure by a low-discrepancy number input stream, variance reductions of several orders have been observed in financial applications. Analysis of stochastic effects in well-mixed chemical reaction networks often relies on sample path simulation using Monte Carlo methods, even though these methods suffer from typical slow O(1/√N) convergence rates as a function of the number of sample paths N. This paper investigates the combination of (randomised) quasi-Monte Carlo methods with an efficient sample path simulation procedure, namely τ-leaping. We show that this combination is often more effective than traditional Monte Carlo simulation in terms of the decay of statistical errors. The observed convergence rate behaviour is, however, non-trivial due to the discrete nature of the models of chemical reactions. We explain how this affects the performance of quasi-Monte Carlo methods by looking at a test problem in standard quadrature.
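A minimal sketch of the variance-reduction mechanism the paper exploits, shown on a smooth quadrature problem rather than on τ-leaped reaction paths (the integrand and dimension are illustrative; scipy.stats.qmc provides the scrambled Sobol generator):

```python
import numpy as np
from scipy.stats import qmc

# Smooth test integrand on [0,1]^4 with known mean: prod E[u_i^2] = (1/3)^4
f = lambda u: np.prod(u ** 2, axis=1)
exact = (1.0 / 3.0) ** 4

rng = np.random.default_rng(0)
for m in (6, 10, 14):                        # N = 2**m sample points
    n = 2 ** m
    mc = abs(f(rng.random((n, 4))).mean() - exact)
    sob = qmc.Sobol(d=4, scramble=True, seed=0)
    rq = abs(f(sob.random_base2(m)).mean() - exact)
    print(f"N={n:6d}  MC error={mc:.2e}  scrambled-Sobol error={rq:.2e}")
```

On smooth integrands the scrambled-Sobol error decays close to O(1/N) rather than O(1/√N); the paper's point is that the discreteness of reaction-network paths complicates this picture.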
Han, Jubong; Lee, K B; Lee, Jong-Man; Park, Tae Soon; Oh, J S; Oh, Pil-Jei
2016-03-01
We discuss a new method to incorporate Type B uncertainty into least-squares procedures. The new method is based on an extension of the likelihood function from which a conventional least-squares function is derived. The extended likelihood function is the product of the original likelihood function with additional PDFs (probability density functions) that characterize the Type B uncertainties. The PDFs describe one's incomplete knowledge of correction factors, called nuisance parameters. We use the extended likelihood function to make point and interval estimates of parameters in essentially the same way as in the conventional least-squares method. Since the nuisance parameters are not of interest and should be prevented from appearing in the final result, we eliminate them by using the profile likelihood. As an example, we present a case study for a linear regression analysis with a common component of Type B uncertainty. In this example we compare the analysis results obtained from our procedure with those from conventional methods.
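A minimal sketch of the profile-likelihood elimination described here, for the paper's example of a straight-line fit with one common additive Type B component (all data values and uncertainty magnitudes below are invented for illustration):

```python
import numpy as np
from scipy.optimize import minimize

# Toy data: straight line with per-point (Type A) errors and one common
# additive correction delta whose Type B st. dev. is sigma_B.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.0, 9.8])
sigma = np.full_like(x, 0.2)          # Type A, per point
sigma_B = 0.3                         # Type B, common to all points

def profile_nll(theta):
    a, b = theta
    r = y - (a + b * x)
    # For fixed (a, b) the nuisance delta enters the extended likelihood
    # quadratically, so its maximizing value is available in closed form.
    w = 1.0 / sigma ** 2
    delta_hat = np.sum(w * r) / (np.sum(w) + 1.0 / sigma_B ** 2)
    return 0.5 * np.sum(((r - delta_hat) / sigma) ** 2) \
           + 0.5 * (delta_hat / sigma_B) ** 2

fit = minimize(profile_nll, x0=(0.0, 1.0))
print("intercept, slope:", fit.x)
```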
First Monte Carlo analysis of fragmentation functions from single-inclusive e+e- annihilation
Sato, Nobuo; Ethier, J. J.; Melnitchouk, W.; ...
2016-12-02
Here, we perform the first iterative Monte Carlo (IMC) analysis of fragmentation functions constrained by all available data from single-inclusive $e^+ e^-$ annihilation into pions and kaons. The IMC method eliminates potential bias in traditional analyses based on single fits, introduced by fixing parameters not well constrained by the data, and provides a statistically rigorous determination of uncertainties. Our analysis reveals specific differences between the fragmentation functions obtained with the new IMC methodology and those from previous analyses, especially for light quarks and for strange quark fragmentation to kaons.
NASA Astrophysics Data System (ADS)
Cramer, S. N.; Roussin, R. W.
1981-11-01
A Monte Carlo analysis of a time-dependent neutron and secondary gamma-ray integral experiment on a thick concrete and steel shield is presented. The analysis covers neutron source energies from 15 down to 2 MeV. The multigroup MORSE code was used with the VITAMIN C 171-36 neutron-gamma-ray cross-section data set. Both neutron and gamma-ray count rates and unfolded energy spectra are presented and compared, with good general agreement, with experimental results.
Mathew, Boby; Holand, Anna Marie; Koistinen, Petri; Léon, Jens; Sillanpää, Mikko J
2016-02-01
A novel reparametrization-based INLA approach is presented as a fast alternative to MCMC for the Bayesian estimation of genetic parameters in the multivariate animal model. Multi-trait genetic parameter estimation is a relevant topic in animal and plant breeding programs because multi-trait analysis can take into account the genetic correlation between different traits, which significantly improves the accuracy of the genetic parameter estimates. Generally, multi-trait analysis is computationally demanding and requires initial estimates of genetic and residual correlations among the traits, which are difficult to obtain. In this study, we illustrate how to reparametrize the covariance matrices of a multivariate animal model using modified Cholesky decompositions. This reparametrization-based approach is used within the Integrated Nested Laplace Approximation (INLA) methodology to estimate the genetic parameters of the multivariate animal model. Immediate benefits are: (1) the difficulty of finding good starting values for the analysis, a problem for example in Restricted Maximum Likelihood (REML), is avoided; (2) Bayesian estimation of (co)variance components using INLA executes faster than Markov chain Monte Carlo (MCMC), especially when realized relationship matrices are dense. A slight drawback is that priors for covariance matrices are assigned to elements of the Cholesky factor rather than directly to the covariance matrix elements as in MCMC. Additionally, we illustrate the concordance of the INLA results with traditional methods such as MCMC and REML. We also present results obtained from simulated data sets with replicates and from field data in rice.
Lobach, Iryna; Mallick, Bani; Carroll, Raymond J
2011-01-01
Case-control studies are widely used to detect gene-environment interactions in the etiology of complex diseases. Many variables of interest to biomedical researchers are difficult to measure on an individual level, e.g. nutrient intake, cigarette smoking exposure, or long-term toxic exposure. Measurement error causes bias in parameter estimates, masking key features of the data and leading to loss of power and spurious or masked associations. We develop a Bayesian methodology for the analysis of case-control studies when measurement error is present in an environmental covariate and the genetic variable has missing data. This approach offers several advantages. It allows prior information to enter the model to make estimation and inference more precise. The environmental covariates measured exactly are modeled completely nonparametrically. Further, information about the probability of disease can be incorporated in the estimation procedure to improve the quality of parameter estimates, which cannot be done in conventional case-control studies. A unique feature of the procedure is that the analysis is based on a pseudo-likelihood function; therefore, conventional Bayesian techniques may not be technically correct. We propose an approach using Markov chain Monte Carlo sampling as well as a computationally simple method based on an asymptotic posterior distribution. Simulation experiments demonstrated that our method produces parameter estimates that are nearly unbiased even for small sample sizes. An application of our method is illustrated using a population-based case-control study of the association between calcium intake and the risk of colorectal adenoma development.
ERIC Educational Resources Information Center
Khattab, Ali-Maher; And Others
1982-01-01
A causal modeling system, using confirmatory maximum likelihood factor analysis with the LISREL IV computer program, evaluated the construct validity underlying the higher order factor structure of a given correlation matrix of 46 structure-of-intellect tests emphasizing the product of transformations. (Author/PN)
Pushing the limits of Monte Carlo simulations for the three-dimensional Ising model
NASA Astrophysics Data System (ADS)
Ferrenberg, Alan M.; Xu, Jiahao; Landau, David P.
2018-04-01
While the three-dimensional Ising model has defied analytic solution, various numerical methods like Monte Carlo, Monte Carlo renormalization group, and series expansion have provided precise information about the phase transition. Using Monte Carlo simulation that employs the Wolff cluster flipping algorithm with both 32-bit and 53-bit random number generators and data analysis with histogram reweighting and quadruple precision arithmetic, we have investigated the critical behavior of the simple cubic Ising model, with lattice sizes ranging from 16³ to 1024³. By analyzing data with cross correlations between various thermodynamic quantities obtained from the same data pool, e.g., logarithmic derivatives of magnetization and derivatives of magnetization cumulants, we have obtained the critical inverse temperature Kc = 0.221654626(5) and the critical exponent of the correlation length ν = 0.629912(86) with precision that exceeds all previous Monte Carlo estimates.
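For readers unfamiliar with the algorithm named in the abstract, a minimal Wolff cluster update for the simple cubic Ising model looks like the sketch below (lattice size, sweep counts, and measurement details are illustrative and far below production scale):

```python
import numpy as np

def wolff_flip(spins, beta, rng):
    """Grow and flip one Wolff cluster on a periodic simple cubic lattice."""
    L = spins.shape[0]
    p_add = 1.0 - np.exp(-2.0 * beta)     # bond activation probability
    seed = tuple(rng.integers(0, L, size=3))
    cluster_spin = spins[seed]
    spins[seed] = -cluster_spin           # flip sites as they join the cluster
    stack = [seed]
    while stack:
        x, y, z = stack.pop()
        for d in range(3):
            for s in (-1, 1):
                nb = [x, y, z]
                nb[d] = (nb[d] + s) % L
                nb = tuple(nb)
                if spins[nb] == cluster_spin and rng.random() < p_add:
                    spins[nb] = -cluster_spin
                    stack.append(nb)

rng = np.random.default_rng(0)
L, beta = 8, 0.221654626                  # beta near the quoted Kc
spins = rng.choice([-1, 1], size=(L, L, L))
mags = []
for sweep in range(2000):
    wolff_flip(spins, beta, rng)
    if sweep >= 500:                      # crude equilibration cut
        mags.append(abs(spins.mean()))
print("<|m|> near criticality:", np.mean(mags))
```

Cluster updates like this defeat the critical slowing down that cripples single-spin-flip Metropolis near Kc, which is why they underpin high-precision studies such as this one.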
NASA Astrophysics Data System (ADS)
Gugsa, Solomon A.; Davies, Angela
2005-08-01
Characterizing an aspheric micro lens is critical for understanding its performance and providing feedback to manufacturing. We describe a method to find the best-fit conic of an aspheric micro lens using a least-squares minimization and Monte Carlo analysis. Our analysis is based on scanning white light interferometry measurements, and we compare the standard rapid technique, where a single measurement is taken of the apex of the lens, to the more time-consuming stitching technique, where more surface area is measured. Both are corrected for tip/tilt based on a planar fit to the substrate. Four major parameters and their uncertainties are estimated from the measurement, and a chi-square minimization is carried out to determine the best-fit conic constant. The four parameters are the base radius of curvature, the aperture of the lens, the lens center, and the sag of the lens. A probability distribution is chosen for each of the four parameters based on the measurement uncertainties, and a Monte Carlo process is used to iterate the minimization. Eleven measurements were taken, and data are chosen randomly from the group during the Monte Carlo simulation to capture the measurement repeatability. A distribution of best-fit conic constants results, where the mean is a good estimate of the best-fit conic and the distribution width represents the combined measurement uncertainty. We also compare the Monte Carlo process for the stitched and unstitched data. Our analysis allows us to analyze the residual surface error in terms of Zernike polynomials and determine uncertainty estimates for each coefficient.
Vera-Sánchez, Juan Antonio; Ruiz-Morales, Carmen; González-López, Antonio
2018-03-01
We present a multi-stage model to calculate uncertainty in radiochromic film dosimetry with Monte Carlo techniques. This new approach is applied to single-channel and multichannel algorithms. Two lots of Gafchromic EBT3 film are exposed in two different Varian linacs and read with an EPSON V800 flatbed scanner. The Monte Carlo techniques in uncertainty analysis provide a numerical representation of the probability density functions of the output magnitudes. From this numerical representation, traditional parameters of uncertainty analysis, such as standard deviation and bias, are calculated. Moreover, these numerical representations are used to investigate the shape of the probability density functions of the output magnitudes. Another calibration film is also read in four EPSON scanners (two V800 and two 10000XL), and the uncertainty analysis is carried out with the four images. The dose estimates of single-channel and multichannel algorithms show Gaussian behavior and low bias. The multichannel algorithms lead to less uncertainty in the final dose estimates when the EPSON V800 is employed as the reading device. In the case of the EPSON 10000XL, the single-channel algorithms provide less uncertainty in the dose estimates for doses higher than 4 Gy. The application of the model together with Monte Carlo techniques leads to a complete characterization of the uncertainties in radiochromic film dosimetry.
USDA-ARS?s Scientific Manuscript database
Computer Monte Carlo (MC) simulations (Geant4) of neutron propagation and acquisition of the gamma response from soil samples were applied to evaluate INS system performance characteristics [sensitivity, minimal detectable level (MDL)] for soil carbon measurement. The INS system model with best performanc...
Monte Carlo Approach for Reliability Estimations in Generalizability Studies.
ERIC Educational Resources Information Center
Dimitrov, Dimiter M.
A Monte Carlo approach is proposed, using the Statistical Analysis System (SAS) programming language, for estimating reliability coefficients in generalizability theory studies. Test scores are generated by a probabilistic model that considers the probability for a person with a given ability score to answer an item with a given difficulty…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen, Scott; Haslauer, Claus P.; Cirpka, Olaf A.
2017-01-05
The key points of this presentation were to approach the problem of linking breakthrough curve shape (the RP-CTRW transition distribution) to structural parameters via a Monte Carlo approach, and to use the Monte Carlo analysis to determine any empirical error.
Detection methods for non-Gaussian gravitational wave stochastic backgrounds
NASA Astrophysics Data System (ADS)
Drasco, Steve; Flanagan, Éanna É.
2003-04-01
A gravitational wave stochastic background can be produced by a collection of independent gravitational wave events. There are two classes of such backgrounds, one for which the ratio of the average time between events to the average duration of an event is small (i.e., many events are on at once), and one for which the ratio is large. In the first case the signal is continuous, sounds something like a constant hiss, and has a Gaussian probability distribution. In the second case, the discontinuous or intermittent signal sounds something like popcorn popping, and is described by a non-Gaussian probability distribution. In this paper we address the issue of finding an optimal detection method for such a non-Gaussian background. As a first step, we examine the idealized situation in which the event durations are short compared to the detector sampling time, so that the time structure of the events cannot be resolved, and we assume white, Gaussian noise in two collocated, aligned detectors. For this situation we derive an appropriate version of the maximum likelihood detection statistic. We compare the performance of this statistic to that of the standard cross-correlation statistic both analytically and with Monte Carlo simulations. In general the maximum likelihood statistic performs better than the cross-correlation statistic when the stochastic background is sufficiently non-Gaussian, resulting in a gain factor in the minimum gravitational-wave energy density necessary for detection. This gain factor ranges roughly between 1 and 3, depending on the duty cycle of the background, for realistic observing times and signal strengths for both ground and space based detectors. The computational cost of the statistic, although significantly greater than that of the cross-correlation statistic, is not unreasonable. Before the statistic can be used in practice with real detector data, further work is required to generalize our analysis to accommodate separated, misaligned detectors with realistic, colored, non-Gaussian noise.
Pedretti, Daniele; Mayer, K Ulrich; Beckie, Roger D
2017-06-01
In mining environmental applications, it is important to assess water quality from waste rock piles (WRPs) and estimate the likelihood of acid rock drainage (ARD) over time. The mineralogical heterogeneity of WRPs is a source of uncertainty in this assessment, undermining the reliability of traditional bulk indicators used in the industry. In this work we focused on the bulk neutralizing potential ratio (NPR), defined as the ratio of the content of non-acid-generating minerals (typically reactive carbonates such as calcite) to the content of potentially acid-generating minerals (typically sulfides such as pyrite). We used a streamtube-based Monte Carlo method to show why and to what extent bulk NPR can be a poor indicator of ARD occurrence. We simulated ensembles of WRPs identical in their geometry and bulk NPR, which differed only in their initial distribution of the acid-generating and acid-neutralizing minerals that control NPR. All models simulated the same principal acid-producing, acid-neutralizing and secondary mineral forming processes. We show that small differences in the distribution of local NPR values or the number of flow paths that generate acidity strongly influence drainage pH. The results indicate that the likelihood of ARD (epitomized by the probability of occurrence of pH < 4 in a mixing boundary) within the first 100 years can be as high as 75% for NPR = 2 and 40% for NPR = 4. The latter is traditionally considered a "universally safe" threshold to ensure non-acidic waters in practical applications. Our results suggest that new methods that explicitly account for mineralogical heterogeneity must be sought when computing effective (upscaled) NPR values at the scale of the piles.
Bayesian Statistical Inference in Ion-Channel Models with Exact Missed Event Correction.
Epstein, Michael; Calderhead, Ben; Girolami, Mark A; Sivilotti, Lucia G
2016-07-26
The stochastic behavior of single ion channels is most often described as an aggregated continuous-time Markov process with discrete states. For ligand-gated channels each state can represent a different conformation of the channel protein or a different number of bound ligands. Single-channel recordings show only whether the channel is open or shut: states of equal conductance are aggregated, so transitions between them have to be inferred indirectly. The requirement to filter noise from the raw signal further complicates the modeling process, as it limits the time resolution of the data. The consequence of the reduced bandwidth is that openings or shuttings shorter than the resolution cannot be observed; these are known as missed events. Postulated models fitted using filtered data must therefore explicitly account for missed events to avoid bias in the estimation of rate parameters and to assess parameter identifiability accurately. In this article, we present the first, to our knowledge, Bayesian modeling of ion channels with exact missed events correction. Bayesian analysis represents uncertain knowledge of the true values of model parameters by treating these parameters as random variables. This allows us to gain a full appreciation of parameter identifiability and uncertainty when estimating values for model parameters. However, Bayesian inference is particularly challenging in this context because the correction for missed events increases the computational complexity of the model likelihood. Nonetheless, we successfully implemented a two-step Markov chain Monte Carlo method that we call "BICME", which performs Bayesian inference in models of realistic complexity. The method is demonstrated on synthetic and real single-channel data from muscle nicotinic acetylcholine channels. We show that parameter uncertainty can be characterized more accurately than with maximum-likelihood methods. Our code for performing inference in these ion channel models is publicly available.
Applying Monte Carlo Simulation to Launch Vehicle Design and Requirements Analysis
NASA Technical Reports Server (NTRS)
Hanson, J. M.; Beard, B. B.
2010-01-01
This Technical Publication (TP) is meant to address a number of topics related to the application of Monte Carlo simulation to launch vehicle design and requirements analysis. Although the focus is on a launch vehicle application, the methods may be applied to other complex systems as well. The TP is organized so that all the important topics are covered in the main text, and detailed derivations are in the appendices. The TP first introduces Monte Carlo simulation and the major topics to be discussed, including discussion of the input distributions for Monte Carlo runs, testing the simulation, how many runs are necessary for verification of requirements, what to do if results are desired for events that happen only rarely, and postprocessing, including analyzing any failed runs, examples of useful output products, and statistical information for generating desired results from the output data. Topics in the appendices include some tables for requirements verification, derivation of the number of runs required and generation of output probabilistic data with consumer risk included, derivation of launch vehicle models to include possible variations of assembled vehicles, minimization of a consumable to achieve a two-dimensional statistical result, recontact probability during staging, ensuring duplicated Monte Carlo random variations, and importance sampling.
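One of the topics listed, the number of runs needed to verify a requirement, has a standard zero-failure binomial form that is easy to sketch (the TP's own derivations include consumer risk and other refinements not shown here):

```python
import math

def runs_required(p_max, confidence):
    """Smallest n of consecutive successful Monte Carlo runs such that
    (1 - p_max)**n <= 1 - confidence, i.e. observing zero failures in n
    runs demonstrates P(failure) <= p_max at the stated confidence."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p_max))

# e.g. demonstrate a 99.7% success requirement at 90% confidence
print(runs_required(p_max=0.003, confidence=0.90))   # -> 767
```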
Program for Weibull Analysis of Fatigue Data
NASA Technical Reports Server (NTRS)
Krantz, Timothy L.
2005-01-01
A Fortran computer program has been written for performing statistical analyses of fatigue-test data that are assumed to be adequately represented by a two-parameter Weibull distribution. This program calculates the following: (1) maximum-likelihood estimates of the Weibull distribution parameters; (2) data for contour plots of relative likelihood for the two parameters; (3) data for contour plots of joint confidence regions; (4) data for the profile likelihood of the Weibull-distribution parameters; (5) data for the profile likelihood of any percentile of the distribution; and (6) likelihood-based confidence intervals for parameters and/or percentiles of the distribution. The program can account for tests that are suspended without failure (the statistical term for such suspension of tests is "censoring"). The analytical approach followed in this program is valid for type-I censoring, which is the removal of unfailed units at pre-specified times. Confidence regions and intervals are calculated by use of the likelihood-ratio method.
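The core of item (1), maximum-likelihood fitting of a two-parameter Weibull with type-I censoring, can be sketched as follows (an independent illustration, not the Fortran program itself; the toy data are invented):

```python
import numpy as np
from scipy.optimize import minimize

def weibull_nll(params, t, failed):
    """Negative log-likelihood for type-I censored two-parameter Weibull data.

    Failures contribute the log density; unfailed (suspended) units
    contribute the log survival function.
    """
    log_k, log_lam = params              # optimize on log scale to stay positive
    k, lam = np.exp(log_k), np.exp(log_lam)
    z = (t / lam) ** k
    log_pdf = np.log(k / lam) + (k - 1.0) * np.log(t / lam) - z
    log_sf = -z
    return -np.sum(np.where(failed, log_pdf, log_sf))

# Toy fatigue data: cycles to failure, with two tests suspended at 2.0e5 cycles
t = np.array([0.8e5, 1.1e5, 1.4e5, 1.7e5, 1.9e5, 2.0e5, 2.0e5])
failed = np.array([True, True, True, True, True, False, False])
fit = minimize(weibull_nll, x0=(0.0, np.log(t.mean())), args=(t, failed))
k_hat, lam_hat = np.exp(fit.x)
print("shape, scale:", k_hat, lam_hat)
```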
Monte Carlo Methods in Materials Science Based on FLUKA and ROOT
NASA Technical Reports Server (NTRS)
Pinsky, Lawrence; Wilson, Thomas; Empl, Anton; Andersen, Victor
2003-01-01
A comprehensive understanding of mitigation measures for space radiation protection necessarily involves the relevant fields of nuclear physics and particle transport modeling. One method of modeling the interaction of radiation traversing matter is Monte Carlo analysis, a subject that has been evolving since the very advent of nuclear reactors and particle accelerators in experimental physics. Countermeasures for radiation protection from neutrons near nuclear reactors, for example, were an early application, and Monte Carlo methods were quickly adapted to this general field of investigation. The project discussed here is concerned with taking the latest tools and technology in Monte Carlo analysis and adapting them to space applications such as radiation shielding design for spacecraft, as well as investigating how next-generation Monte Carlos can complement the existing analytical methods currently used by NASA. We have chosen to employ the Monte Carlo program known as FLUKA (a legacy acronym based on the German for FLUctuating KAscade) to simulate all of the particle transport, and the CERN-developed graphical-interface object-oriented analysis software called ROOT. One aspect of space radiation analysis for which the Monte Carlos are particularly suited is the study of secondary radiation produced as albedos in the vicinity of the structural geometry involved. This broad goal of simulating space radiation transport through the relevant materials employing the FLUKA code necessarily requires the addition of the capability to simulate all heavy-ion interactions from 10 MeV/A up to the highest conceivable energies. For all energies above 3 GeV/A the Dual Parton Model (DPM) is currently used, although a possible improvement of the DPMJET event generator for energies of 3-30 GeV/A is being considered. One of the major tasks still facing us is the provision for heavy-ion interactions below 3 GeV/A. The ROOT interface is being developed in conjunction with the CERN ALICE (A Large Ion Collider Experiment) software team through an adaptation of their existing AliROOT (ALICE Using ROOT) architecture. In order to check our progress against actual data, we have chosen to simulate the ATIC (Advanced Thin Ionization Calorimeter) cosmic-ray astrophysics balloon payload as well as neutron fluences in the Mir spacecraft. This paper contains a summary of the status of this project and a roadmap to its successful completion.
The Applicability of Confidence Intervals of Quantiles for the Generalized Logistic Distribution
NASA Astrophysics Data System (ADS)
Shin, H.; Heo, J.; Kim, T.; Jung, Y.
2007-12-01
The generalized logistic (GL) distribution has been widely used for frequency analysis. However, few studies have addressed the confidence intervals of quantiles, which indicate the prediction accuracy, for the GL distribution. In this paper, the estimation of confidence intervals of quantiles for the GL distribution is presented based on the method of moments (MOM), maximum likelihood (ML), and probability weighted moments (PWM), and the asymptotic variances of each quantile estimator are derived as functions of sample size, return period, and the parameters. Monte Carlo simulation experiments are also performed to verify the applicability of the derived confidence intervals. The results show that the relative bias (RBIAS) and relative root mean square error (RRMSE) of the confidence intervals generally increase as the return period increases and decrease as the sample size increases. PWM performs better than the other methods in terms of RRMSE when the data are nearly symmetric, while ML shows the smallest RBIAS and RRMSE when the data are more skewed and the sample size is moderately large. The GL model was applied to fit the distribution of annual maximum rainfall data. The results show little difference in the estimated quantiles between ML and PWM, while MOM shows distinct differences.
Detecting past changes of effective population size
Nikolic, Natacha; Chevalet, Claude
2014-01-01
Understanding and predicting population abundance is a major challenge confronting scientists. Several genetic models have been developed using microsatellite markers to estimate present and ancestral effective population sizes. However, obtaining an overview of the evolution of a population requires that past fluctuations of population size be traceable. To address this question, we developed a new model estimating past changes of effective population size from microsatellite data by resolving coalescence theory and using approximate likelihoods in a Markov chain Monte Carlo approach. The efficiency of the model and its sensitivity to gene flow and to assumptions on the mutational process were checked using simulated data and analysis. The model was found especially useful for providing evidence of transient changes of population size in the past. The times at which some past demographic events cannot be detected because they are too ancient, and the risk that gene flow may suggest the false detection of a bottleneck, are discussed in light of the distribution of coalescence times. The method was applied to real data sets from several Atlantic salmon populations. The method, called VarEff (Variation of Effective size), was implemented in the R package VarEff and is made available at https://qgsp.jouy.inra.fr and at http://cran.r-project.org/web/packages/VarEff. PMID:25067949
Comparing hierarchical models via the marginalized deviance information criterion.
Quintero, Adrian; Lesaffre, Emmanuel
2018-07-20
Hierarchical models are extensively used in pharmacokinetics and longitudinal studies. When estimation is performed from a Bayesian approach, model comparison is often based on the deviance information criterion (DIC). In hierarchical models with latent variables, there are several versions of this statistic: the conditional DIC (cDIC), which includes the latent variables in the focus of the analysis, and the marginalized DIC (mDIC), which integrates them out. Despite the asymptotic and coherency difficulties of cDIC, this alternative is usually used in Markov chain Monte Carlo (MCMC) methods for hierarchical models because of practical convenience. The mDIC criterion is more appropriate in most cases but requires integration of the likelihood, which is computationally demanding and not implemented in Bayesian software. Therefore, we consider a method to compute mDIC by generating replicate samples of the latent variables that need to be integrated out. This alternative can be easily conducted from the MCMC output of Bayesian packages and is widely applicable to hierarchical models in general. Additionally, we propose some approximations to reduce the computational complexity in large-sample situations. The method is illustrated with simulated data sets and two medical studies, evidencing that cDIC may be misleading whilst mDIC appears pertinent.
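A minimal sketch of the replicate-sampling idea for mDIC, using a toy random-intercept normal model (the "posterior draws" are faked here to keep the sketch self-contained; in practice they would come from the MCMC output of a Bayesian package):

```python
import numpy as np
from scipy.stats import norm

def log_marg_lik(y, mu, sig_b, sig_e, rng, r=2000):
    """Monte Carlo estimate of sum_i log p(y_i | theta) for a random-intercept
    normal model, integrating the latent intercepts out with replicate draws."""
    total = 0.0
    for yi in y:                               # y: list of per-group arrays
        b = rng.normal(0.0, sig_b, size=r)     # replicate latent intercepts
        ll = norm.logpdf(yi[:, None], mu + b, sig_e).sum(axis=0)
        total += np.logaddexp.reduce(ll) - np.log(r)
    return total

rng = np.random.default_rng(0)
y = [rng.normal(1.0 + rng.normal(0.0, 0.5), 0.3, size=8) for _ in range(10)]
# Stand-ins for posterior draws of theta = (mu, sig_b, sig_e):
draws = [(rng.normal(1.0, 0.05), abs(rng.normal(0.5, 0.05)), 0.3)
         for _ in range(50)]
lls = [log_marg_lik(y, *d, rng) for d in draws]
theta_bar = tuple(np.mean(draws, axis=0))
# DIC = -4 E_post[log L] + 2 log L(theta_bar), here with the marginal likelihood
mdic = -4.0 * np.mean(lls) + 2.0 * log_marg_lik(y, *theta_bar, rng)
print("mDIC:", mdic)
```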
Unsupervised Detection of Planetary Craters by a Marked Point Process
NASA Technical Reports Server (NTRS)
Troglio, G.; Benediktsson, J. A.; Le Moigne, J.; Moser, G.; Serpico, S. B.
2011-01-01
With the launch of several planetary missions in the last decade, a large number of planetary images is being acquired. Because of the huge amount of acquired data, automatic and robust processing techniques are preferable for data analysis. Here, the aim is to achieve a robust and general methodology for crater detection. A novel technique based on a marked point process is proposed. First, the contours in the image are extracted. The object boundaries are modeled as a configuration of an unknown number of random ellipses, i.e., the contour image is considered as a realization of a marked point process. Then, an energy function is defined, containing both an a priori energy and a likelihood term. The global minimum of this function is estimated by using reversible-jump Markov chain Monte Carlo dynamics and a simulated annealing scheme. The main idea behind marked point processes is to model objects within a stochastic framework: marked point processes represent a very promising current approach in stochastic image modeling and provide a powerful and methodologically rigorous framework to efficiently map and detect objects and structures in an image with excellent robustness to noise. The proposed method for crater detection has several feasible applications, one of which is image registration by matching the extracted features.
Fowler, Michael J.; Howard, Marylesa; Luttman, Aaron; ...
2015-06-03
One of the primary causes of blur in a high-energy X-ray imaging system is the shape and extent of the radiation source, or 'spot'. It is important to be able to quantify the size of the spot as it provides a lower bound on the recoverable resolution for a radiograph, and penumbral imaging methods – which involve the analysis of blur caused by a structured aperture – can be used to obtain the spot's spatial profile. We present a Bayesian approach for estimating the spot shape that, unlike variational methods, is robust to the initial choice of parameters. The posterior is obtained from a normal likelihood, which was constructed from a weighted least squares approximation to a Poisson noise model, and prior assumptions that enforce both smoothness and non-negativity constraints. A Markov chain Monte Carlo algorithm is used to obtain samples from the target posterior, and the reconstruction and uncertainty estimates are the computed mean and variance of the samples, respectively. Lastly, synthetic data-sets are used to demonstrate accurate reconstruction, while real data taken with high-energy X-ray imaging systems are used to demonstrate applicability and feasibility.
Sensitivity analysis of pulse pileup model parameter in photon counting detectors
NASA Astrophysics Data System (ADS)
Shunhavanich, Picha; Pelc, Norbert J.
2017-03-01
Photon counting detectors (PCDs) may provide several benefits over energy-integrating detectors (EIDs), including spectral information for tissue characterization and the elimination of electronic noise. PCDs, however, suffer from pulse pileup, which distorts the detected spectrum and degrades the accuracy of material decomposition. Several analytical models have been proposed to address this problem. The performance of these models depends on the assumptions used, including the estimated pulse shape, whose parameter values could differ from the actual physical ones. As the incident flux increases and the corrections become more significant, the needed parameter accuracy may become more crucial. In this work, the sensitivity to model parameter accuracy is analyzed for the pileup model of Taguchi et al. The spectra distorted by pileup at different count rates are simulated using either the model or Monte Carlo simulations, and the basis material thicknesses are estimated by minimizing the negative log-likelihood with Poisson or multivariate Gaussian distributions. From the simulation results, we find that the accuracies of the deadtime, the height of the pulse's negative tail, and the timing of the end of the pulse are more important than most other parameters, and they matter more with increasing count rate. This result can help facilitate further work on parameter calibration.
A comparison of two indices for the intraclass correlation coefficient.
Shieh, Gwowen
2012-12-01
In the present study, we examined the behavior of two indices for measuring the intraclass correlation in the one-way random effects model: the prevailing ICC(1) (Fisher, 1938) and the corrected eta-squared (Bliese & Halverson, 1998). These two procedures differ both in their methods of estimating the variance components that define the intraclass correlation coefficient and in their bias and mean squared error in estimating it. In contrast with the natural unbiased principle used to construct ICC(1), we show analytically that the corrected eta-squared estimator is identical to the maximum likelihood estimator and the pairwise estimator under equal group sizes. Moreover, the empirical results obtained from the present Monte Carlo simulation study across various group structures revealed a mutual dominance relationship between their truncated versions for negative values. The corrected eta-squared estimator performs better than the ICC(1) estimator when the underlying population intraclass correlation coefficient is small. Conversely, ICC(1) has a clear advantage over the corrected eta-squared for medium and large magnitudes of the population intraclass correlation coefficient. The conceptual description and numerical investigation provide guidelines to help researchers choose between the two indices for more accurate reliability analysis in multilevel research.
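For reference, the classical ICC(1) computed from one-way ANOVA mean squares can be sketched as follows (the corrected eta-squared differs in how it estimates the variance components; the simulated group structure below is illustrative):

```python
import numpy as np

def icc1(groups):
    """ICC(1) from one-way random-effects ANOVA with equal group size k."""
    k = len(groups[0])
    g_means = np.array([np.mean(g) for g in groups])
    grand = np.mean(np.concatenate(groups))
    msb = k * np.sum((g_means - grand) ** 2) / (len(groups) - 1)
    msw = sum(np.sum((np.asarray(g) - m) ** 2)
              for g, m in zip(groups, g_means)) / (len(groups) * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

rng = np.random.default_rng(0)
# 30 groups of size 5, between-group sd 1.0, within-group sd 2.0
groups = [rng.normal(rng.normal(0.0, 1.0), 2.0, size=5) for _ in range(30)]
print("ICC(1):", icc1(groups))
```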
Statistical significance estimation of a signal within the GooFit framework on GPUs
NASA Astrophysics Data System (ADS)
Cristella, Leonardo; Di Florio, Adriano; Pompili, Alexis
2017-03-01
In order to test the computing capabilities of GPUs with respect to traditional CPU cores a high-statistics toy Monte Carlo technique has been implemented both in ROOT/RooFit and GooFit frameworks with the purpose to estimate the statistical significance of the structure observed by CMS close to the kinematical boundary of the J/ψϕ invariant mass in the three-body decay B+ → J/ψϕK+. GooFit is a data analysis open tool under development that interfaces ROOT/RooFit to CUDA platform on nVidia GPU. The optimized GooFit application running on GPUs hosted by servers in the Bari Tier2 provides striking speed-up performances with respect to the RooFit application parallelised on multiple CPUs by means of PROOF-Lite tool. The considerable resulting speed-up, evident when comparing concurrent GooFit processes allowed by CUDA Multi Process Service and a RooFit/PROOF-Lite process with multiple CPU workers, is presented and discussed in detail. By means of GooFit it has also been possible to explore the behaviour of a likelihood ratio test statistic in different situations in which the Wilks Theorem may or may not apply because its regularity conditions are not satisfied.
Farid, Suzanne S; Washbrook, John; Titchener-Hooker, Nigel J
2005-01-01
This paper presents the application of a decision-support tool, SIMBIOPHARMA, for assessing different manufacturing strategies under uncertainty for the production of biopharmaceuticals. SIMBIOPHARMA captures both the technical and business aspects of biopharmaceutical manufacture within a single tool that permits manufacturing alternatives to be evaluated in terms of cost, time, yield, project throughput, resource utilization, and risk. Its use for risk analysis is demonstrated through a hypothetical case study that uses the Monte Carlo simulation technique to imitate the randomness inherent in manufacturing subject to technical and market uncertainties. The case study addresses whether start-up companies should invest in a stainless steel pilot plant or use disposable equipment for the production of early phase clinical trial material. The effects of fluctuating product demands and titers on the performance of a biopharmaceutical company manufacturing clinical trial material are analyzed. The analysis highlights the impact of different manufacturing options on the range in possible outcomes for the project throughput and cost of goods and the likelihood that these metrics exceed a critical threshold. The simulation studies highlight the benefits of incorporating uncertainties when evaluating manufacturing strategies. Methods of presenting and analyzing information generated by the simulations are suggested. These are used to help determine the ranking of alternatives under different scenarios. The example illustrates the benefits to companies of using such a tool to improve management of their R&D portfolios so as to control the cost of goods.
An at-site flood estimation method in the context of nonstationarity I. A simulation study
NASA Astrophysics Data System (ADS)
Gado, Tamer A.; Nguyen, Van-Thanh-Van
2016-04-01
The stationarity of annual flood peak records is the traditional assumption of flood frequency analysis. In some cases, however, as a result of land-use and/or climate change, this assumption is no longer valid. Therefore, new statistical models are needed to capture dynamically the change of probability density functions over time, in order to obtain reliable flood estimation. In this study, an innovative method for nonstationary flood frequency analysis was presented. The new method is based on detrending the flood series and applying the L-moments along with the GEV distribution to the transformed "stationary" series (hereafter called the LM-NS method). The LM-NS method was assessed through a comparative study with the maximum likelihood (ML) method for the nonstationary GEV model, as well as with the stationary (S) GEV model. The comparative study, based on Monte Carlo simulations, was carried out for three nonstationary GEV models: a linear dependence of the mean on time (GEV1), a quadratic dependence of the mean on time (GEV2), and linear dependence of both the mean and log standard deviation on time (GEV11). The simulation results indicated that the LM-NS method performs better than the ML method for most of the cases studied, whereas the stationary method provides the least accurate results. An additional advantage of the LM-NS method is that it avoids the numerical problems (e.g., convergence problems) that may occur with the ML method when estimating parameters from small data samples.
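A minimal sketch of the LM-NS two-step idea - detrend, then fit a stationary GEV by L-moments - using Hosking's rational approximation for the shape parameter (the trend model, toy data, and re-centering choice are illustrative, not the paper's exact procedure):

```python
import numpy as np
from scipy.special import gamma as G

def gev_lmom(x):
    """GEV parameters (location xi, scale alpha, shape k; Hosking sign
    convention) from sample L-moments via probability weighted moments."""
    x = np.sort(x)
    n = len(x)
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((i - 1) / (n - 1) * x) / n
    b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n
    l1, l2 = b0, 2 * b1 - b0
    t3 = (6 * b2 - 6 * b1 + b0) / l2                  # L-skewness
    c = 2.0 / (3.0 + t3) - np.log(2.0) / np.log(3.0)
    k = 7.8590 * c + 2.9554 * c ** 2                  # Hosking approximation
    alpha = l2 * k / ((1.0 - 2.0 ** (-k)) * G(1.0 + k))
    xi = l1 + alpha * (G(1.0 + k) - 1.0) / k
    return xi, alpha, k

rng = np.random.default_rng(0)
years = np.arange(60)
peaks = rng.gumbel(100.0 + 0.8 * years, 20.0)   # linear trend in the mean
trend = np.polyfit(years, peaks, 1)             # step 1: estimate and remove trend
detrended = peaks - np.polyval(trend, years) + peaks.mean()
print(gev_lmom(detrended))                      # step 2: stationary GEV fit
```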
Geng, Menghan; Qi, Hongjuan; Liu, Xuelin; Gao, Bo; Yang, Zhan; Lu, Wei; Sun, Rubao
2016-05-01
The potential contamination by 16 trace elements (Cr, Mn, Ni, Cu, Zn, As, Cd, Sb, Ba, Pb, Co, Be, V, Ti, Tl, Al) in drinking water collected in two remote areas in China was analyzed. The average levels of the trace elements were lower than the allowable concentrations set by national agencies, except for several elements (As, Sb, Mn, and Be) in individual samples. A health risk assessment model was applied, and carcinogenic and non-carcinogenic risks were evaluated separately. The results indicated that the total carcinogenic risks were higher than the maximum allowed risk level set by most organizations (1 × 10⁻⁶). Residents in both study areas were at risk of carcinogenic effects from exposure to Cr, which accounted for 80-90% of the total carcinogenic risks. The non-carcinogenic risks (Cu, Zn, Ni) were lower than the maximum allowance levels. Among the four population groups, infants incurred the highest health risks and require special attention. Correlation analysis revealed significant positive associations among most trace elements, indicating the likelihood of a common source. The probabilistic health risk assessment of Cr based on Monte Carlo simulation revealed that the uncertainty of system parameters does not affect decision making for pollution prevention and control. Sensitivity analysis revealed that the health risks were most sensitive to the water ingestion rate and the Cr concentration.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morton, April M; Piburn, Jesse O; McManamay, Ryan A
2017-01-01
Monte Carlo simulation is a popular numerical experimentation technique used in a range of scientific fields to obtain the statistics of unknown random output variables. Despite its widespread applicability, it can be difficult to infer required input probability distributions when they are related to population counts unknown at desired spatial resolutions. To overcome this challenge, we propose a framework that uses a dasymetric model to infer the probability distributions needed for a specific class of Monte Carlo simulations which depend on population counts.
Juste, B; Miro, R; Gallardo, S; Santos, A; Verdu, G
2006-01-01
The present work simulated photon and electron transport in a Theratron 780 (MDS Nordion) ⁶⁰Co radiotherapy unit using the Monte Carlo transport code MCNP (Monte Carlo N-Particle), version 5. To improve computational efficiency for practical radiotherapy treatment planning, this work focuses mainly on the analysis of the dose results and on the computing time required by the different tallies applied in the model to speed up the calculations.
Metis: A Pure Metropolis Markov Chain Monte Carlo Bayesian Inference Library
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bates, Cameron Russell; Mckigney, Edward Allen
The use of Bayesian inference in data analysis has become the standard for large scientific experiments [1, 2]. The Monte Carlo Codes Group (XCP-3) at Los Alamos has developed a simple set of algorithms, currently implemented in C++ and Python, to easily perform flat-prior Markov chain Monte Carlo Bayesian inference with pure Metropolis sampling. These implementations are designed to be user friendly and extensible for customization based on specific application requirements. This document describes the algorithmic choices made and presents two use cases.
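A pure Metropolis sampler of the kind described is only a few lines; the sketch below is an independent illustration, not Metis's actual API (target, step size, and data are invented):

```python
import numpy as np

def metropolis(log_post, x0, step, n, rng):
    """Pure Metropolis sampling with a symmetric Gaussian proposal."""
    x = np.asarray(x0, dtype=float)
    lp = log_post(x)
    chain = np.empty((n, x.size))
    for i in range(n):
        prop = x + step * rng.standard_normal(x.size)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # accept/reject step
            x, lp = prop, lp_prop
        chain[i] = x                              # repeat x if rejected
    return chain

# Flat prior plus a Gaussian likelihood around the data (illustrative target)
data = np.random.default_rng(1).normal(3.0, 1.0, size=50)
log_post = lambda th: -0.5 * np.sum((data - th[0]) ** 2)
chain = metropolis(log_post, x0=[0.0], step=0.3, n=20000,
                   rng=np.random.default_rng(2))
print("posterior mean ~", chain[5000:].mean())    # discard burn-in
```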
NASA Technical Reports Server (NTRS)
Nickel, Craig; Parker, Joel; Dichmann, Don; Lebois, Ryan; Lutz, Stephen
2016-01-01
The Transiting Exoplanet Survey Satellite (TESS) will be injected into a highly eccentric Earth orbit and fly 3.5 phasing loops followed by a lunar flyby to enter a mission orbit with lunar 2:1 resonance. Through the phasing loops and mission orbit, the trajectory is significantly affected by lunar and solar gravity. We have developed a trajectory design to achieve the mission orbit and meet mission constraints, including eclipse avoidance and a 30-year geostationary orbit avoidance requirement. A parallelized Monte Carlo simulation was performed to validate the trajectory after injecting common perturbations, including launch dispersions, orbit determination errors, and maneuver execution errors. The Monte Carlo analysis helped identify mission risks and is used in the trajectory selection process.
NASA Technical Reports Server (NTRS)
Cash, W.
1979-01-01
Many problems in the experimental estimation of parameters for models can be solved through use of the likelihood ratio test. Applications of the likelihood ratio, with particular attention to photon counting experiments, are discussed. The procedures presented solve a greater range of problems than those currently in use, yet are no more difficult to apply. The procedures are proved analytically, and examples from current problems in astronomy are discussed.
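For binned photon-counting data, the likelihood-ratio approach leads to a Poisson statistic often written in the form sketched below (one common modern variant of the statistic, not necessarily the paper's exact expression):

```python
import numpy as np

def cash_stat(counts, model):
    """Poisson likelihood-ratio statistic for binned counting data;
    zero-count bins contribute only the predicted counts."""
    counts = np.asarray(counts, dtype=float)
    model = np.asarray(model, dtype=float)
    term = np.zeros_like(model)
    mask = counts > 0
    term[mask] = counts[mask] * np.log(counts[mask] / model[mask])
    return 2.0 * np.sum(model - counts + term)

observed = np.array([3, 0, 5, 2, 7])              # photon counts per bin
predicted = np.array([2.5, 1.0, 4.0, 3.0, 6.0])   # model counts per bin
print(cash_stat(observed, predicted))
```

Unlike chi-squared, this statistic remains valid at low counts, which is the regime the paper targets.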
From least squares to multilevel modeling: A graphical introduction to Bayesian inference
NASA Astrophysics Data System (ADS)
Loredo, Thomas J.
2016-01-01
This tutorial presentation will introduce some of the key ideas and techniques involved in applying Bayesian methods to problems in astrostatistics. The focus will be on the big picture: understanding the foundations (interpreting probability, Bayes's theorem, the law of total probability and marginalization), making connections to traditional methods (propagation of errors, least squares, chi-squared, maximum likelihood, Monte Carlo simulation), and highlighting problems where a Bayesian approach can be particularly powerful (Poisson processes, density estimation and curve fitting with measurement error). The "graphical" component of the title reflects an emphasis on pictorial representations of some of the math, but also on the use of graphical models (multilevel or hierarchical models) for analyzing complex data. Code for some examples from the talk will be available to participants, in Python and in the Stan probabilistic programming language.
NASA Technical Reports Server (NTRS)
Sielken, R. L., Jr. (Principal Investigator)
1981-01-01
Several methods of estimating individual crop acreages using a mixture of completely identified and partially identified (generic) segments from a single growing year are derived and discussed. A small Monte Carlo study of eight estimators is presented. The relative empirical behavior of these estimators is discussed, as are the effects of segment sample size and the amount of partial identification. The principal recommendations are: (1) do not exclude, but rather incorporate, partially identified sample segments into the estimation procedure; (2) try to avoid having a large percentage (say 80%) of only partially identified segments in the sample; and (3) use the maximum likelihood estimator, although the weighted least squares estimator and least squares ratio estimator both perform almost as well. Sets of spring small grains (North Dakota) data were used.
CheckMATE 2: From the model to the limit
NASA Astrophysics Data System (ADS)
Dercks, Daniel; Desai, Nishita; Kim, Jong Soo; Rolbiecki, Krzysztof; Tattersall, Jamie; Weber, Torsten
2017-12-01
We present the latest developments to the CheckMATE program that allows models of new physics to be easily tested against the recent LHC data. To achieve this goal, the core of CheckMATE now contains over 60 LHC analyses of which 12 are from the 13 TeV run. The main new feature is that CheckMATE 2 now integrates the Monte Carlo event generation via MadGraph5_aMC@NLO and Pythia 8. This allows users to go directly from a SLHA file or UFO model to the result of whether a model is allowed or not. In addition, the integration of the event generation leads to a significant increase in the speed of the program. Many other improvements have also been made, including the possibility to now combine signal regions to give a total likelihood for a model.
Revisiting tests for neglected nonlinearity using artificial neural networks.
Cho, Jin Seo; Ishida, Isao; White, Halbert
2011-05-01
Tests for regression neglected nonlinearity based on artificial neural networks (ANNs) have so far been studied by separately analyzing the two ways in which the null of regression linearity can hold. This implies that the asymptotic behavior of general ANN-based tests for neglected nonlinearity is still an open question. Here we analyze a convenient ANN-based quasi-likelihood ratio statistic for testing neglected nonlinearity, paying careful attention to both components of the null. We derive the asymptotic null distribution under each component separately and analyze their interaction. Somewhat remarkably, it turns out that the previously known asymptotic null distribution for the type 1 case still applies, but under somewhat stronger conditions than previously recognized. We present Monte Carlo experiments corroborating our theoretical results and showing that standard methods can yield misleading inference when our new, stronger regularity conditions are violated.
Forecasting overhaul or replacement intervals based on estimated system failure intensity
NASA Astrophysics Data System (ADS)
Gannon, James M.
1994-12-01
System reliability can be expressed in terms of the pattern of failure events over time. Assuming a nonhomogeneous Poisson process and a Weibull intensity function for complex repairable system failures, the degree of system deterioration can be approximated. Maximum likelihood estimators (MLEs) for the system rate of occurrence of failure (ROCOF) function are presented. Evaluating the integral of the ROCOF over annual usage intervals yields the expected number of annual system failures. By associating a cost of failure with the expected number of failures, budget and program policy decisions can be made based on expected future maintenance costs. Monte Carlo simulation is used to estimate the range and distribution of the net present value and internal rate of return of alternative cash flows, based on the distributions of the cost inputs and the confidence intervals of the MLEs.
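For the time-truncated power-law (Weibull-intensity) NHPP, the MLEs and the integral of the ROCOF over a usage interval have closed forms; a sketch with invented failure ages:

```python
import numpy as np

def powerlaw_nhpp_mle(times, T):
    """Time-truncated MLEs for an NHPP with Weibull (power-law) intensity,
    with expected cumulative failures N(t) = (t / theta) ** beta."""
    n = len(times)
    beta = n / np.sum(np.log(T / np.asarray(times)))
    theta = T / n ** (1.0 / beta)
    return beta, theta

def expected_failures(t1, t2, beta, theta):
    """Integral of the ROCOF over [t1, t2]."""
    return (t2 / theta) ** beta - (t1 / theta) ** beta

# Illustrative failure ages (hours) over a 5000 h observation window
times = [400.0, 1100.0, 1900.0, 2500.0, 3000.0, 3400.0, 4200.0, 4600.0]
beta, theta = powerlaw_nhpp_mle(times, T=5000.0)
print("beta:", beta)                              # beta > 1 means deterioration
print("expected failures in the next 8760 h:",
      expected_failures(5000.0, 5000.0 + 8760.0, beta, theta))
```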
TDRS orbit determination by radio interferometry
NASA Technical Reports Server (NTRS)
Pavloff, Michael S.
1994-01-01
In support of a NASA study on the application of radio interferometry to satellite orbit determination, MITRE developed a simulation tool for assessing interferometry tracking accuracy. The Orbit Determination Accuracy Estimator (ODAE) models the general batch maximum likelihood orbit determination algorithms of the Goddard Trajectory Determination System (GTDS) with the group and phase delay measurements from radio interferometry. ODAE models the statistical properties of tracking error sources, including inherent observable imprecision, atmospheric delays, clock offsets, station location uncertainty, and measurement biases, and through Monte Carlo simulation, ODAE calculates the statistical properties of errors in the predicted satellites state vector. This paper presents results from ODAE application to orbit determination of the Tracking and Data Relay Satellite (TDRS) by radio interferometry. Conclusions about optimal ground station locations for interferometric tracking of TDRS are presented, along with a discussion of operational advantages of radio interferometry.
ATAC Autocuer Modeling Analysis.
1981-01-01
the analysis of the simple rectangular segmentation (1) is based on detection and estimation theory (2). This approach uses the concept of maximum ...continuous wave forms. In order to develop the principles of maximum likelihood, it is convenient to develop the principles for the "classical...the concept of maximum likelihood is significant in that it provides the optimum performance of the detection/estimation problem. With a knowledge of
NASA Astrophysics Data System (ADS)
Zhang, Siyuan; Li, Liang; Li, Ruizhe; Chen, Zhiqiang
2017-11-01
We present the design concept and initial simulations for a polychromatic full-field fan-beam x-ray fluorescence computed tomography (XFCT) device with pinhole collimators and linear-array photon counting detectors. The phantom is irradiated by a fan-beam polychromatic x-ray source filtered by copper. Fluorescent photons are stimulated and then collected by two linear-array photon counting detectors with pinhole collimators. The Compton scatter correction and the attenuation correction are applied in the data processing, and the maximum-likelihood expectation maximization algorithm is applied for the image reconstruction of XFCT. The physical modeling of the XFCT imaging system was described, and a set of rapid Monte Carlo simulations was carried out to examine the feasibility and sensitivity of the XFCT system. Different concentrations of gadolinium (Gd) and gold (Au) solutions were used as contrast agents in simulations. Results show that 0.04% of Gd and 0.065% of Au can be well reconstructed with the full scan time set at 6 min. Compared with using the XFCT system with a pencil-beam source or a single-pixel detector, using a full-field fan-beam XFCT device with linear-array detectors results in significant scanning time reduction and may satisfy requirements of rapid imaging, such as in vivo imaging experiments.
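The maximum-likelihood expectation maximization (MLEM) update used for the reconstruction has a compact multiplicative form; a toy sketch with a random system matrix standing in for the actual XFCT geometry:

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """Maximum-likelihood expectation maximization for y ~ Poisson(A @ x)."""
    x = np.ones(A.shape[1])                  # flat non-negative start
    sens = A.sum(axis=0)                     # sensitivity image, A^T 1
    for _ in range(n_iter):
        proj = A @ x                         # forward projection
        x *= (A.T @ (y / np.maximum(proj, 1e-12))) / np.maximum(sens, 1e-12)
    return x

rng = np.random.default_rng(0)
A = rng.random((40, 16))                     # toy system matrix
x_true = np.zeros(16)
x_true[[3, 7]] = 5.0                         # two "fluorescent" hot pixels
y = rng.poisson(A @ x_true)                  # noisy fluorescence sinogram
print(np.round(mlem(A, y), 2))
```

The multiplicative update preserves non-negativity automatically, one reason MLEM is the standard choice for emission-type reconstructions such as XFCT.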
Jeon, Jihyoun; Hsu, Li; Gorfine, Malka
2012-07-01
Frailty models are useful for measuring unobserved heterogeneity in risk of failures across clusters, providing cluster-specific risk prediction. In a frailty model, the latent frailties shared by members within a cluster are assumed to act multiplicatively on the hazard function. In order to obtain parameter and frailty variate estimates, we consider the hierarchical likelihood (H-likelihood) approach (Ha, Lee and Song, 2001. Hierarchical-likelihood approach for frailty models. Biometrika 88, 233-243) in which the latent frailties are treated as "parameters" and estimated jointly with other parameters of interest. We find that the H-likelihood estimators perform well when the censoring rate is low; however, they are substantially biased when the censoring rate is moderate to high. In this paper, we propose a simple and easy-to-implement bias correction method for the H-likelihood estimators under a shared frailty model. We also extend the method to a multivariate frailty model, which incorporates complex dependence structure within clusters. We conduct an extensive simulation study and show that the proposed approach performs very well for censoring rates as high as 80%. We also illustrate the method with a breast cancer data set. Since the H-likelihood is the same as the penalized likelihood function, the proposed bias correction method is also applicable to the penalized likelihood estimators.
Ting, Chih-Chung; Yu, Chia-Chen; Maloney, Laurence T.
2015-01-01
In Bayesian decision theory, knowledge about the probabilities of possible outcomes is captured by a prior distribution and a likelihood function. The prior reflects past knowledge and the likelihood summarizes current sensory information. The two combined (integrated) form a posterior distribution that allows estimation of the probability of different possible outcomes. In this study, we investigated the neural mechanisms underlying Bayesian integration using a novel lottery decision task in which both prior knowledge and likelihood information about reward probability were systematically manipulated on a trial-by-trial basis. Consistent with Bayesian integration, as sample size increased, subjects tended to weigh likelihood information more compared with prior information. Using fMRI in humans, we found that the medial prefrontal cortex (mPFC) correlated with the mean of the posterior distribution, a statistic that reflects the integration of prior knowledge and likelihood of reward probability. Subsequent analysis revealed that both prior and likelihood information were represented in mPFC and that the neural representations of prior and likelihood in mPFC reflected changes in the behaviorally estimated weights assigned to these different sources of information in response to changes in the environment. Together, these results establish the role of mPFC in prior-likelihood integration and highlight its involvement in representing and integrating these distinct sources of information. PMID:25632152
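The prior-likelihood weighting effect reported here has a textbook conjugate illustration: with a Beta prior on reward probability and binomial sample evidence, the posterior mean is a weighted average whose likelihood weight grows with sample size. A short sketch with assumed numbers (not task parameters from the study):

```python
# Prior belief about reward probability: Beta(a, b) with mean 0.6 (assumed)
a, b = 6.0, 4.0
p_sample = 0.3          # proportion of "wins" in the current sample (assumed)

for n in (5, 20, 80):   # increasing sample size of the likelihood information
    wins = p_sample * n
    post_mean = (a + wins) / (a + b + n)
    w_lik = n / (a + b + n)   # effective weight given to the likelihood
    print(f"n={n:3d}  posterior mean={post_mean:.3f}  likelihood weight={w_lik:.2f}")
```

As n grows, the posterior mean moves from the prior mean (0.6) toward the sample proportion (0.3), mirroring the increasing reliance on likelihood information that the study observed behaviorally.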
NASA Astrophysics Data System (ADS)
Derenzo, Stephen E.
2017-05-01
This paper demonstrates through Monte Carlo simulations that a practical positron emission tomograph with (1) deep scintillators for efficient detection, (2) double-ended readout for depth-of-interaction (DOI) information, (3) fixed-level analog triggering, and (4) accurate calibration and timing data corrections can achieve a coincidence resolving time (CRT) that is not far above the statistical lower bound. One Monte Carlo algorithm simulates a calibration procedure that uses data from a positron point source. Annihilation events with an interaction near the entrance surface of one scintillator are selected, and data from the two photodetectors on the other scintillator provide depth-dependent timing corrections. Another Monte Carlo algorithm simulates normal operation using these corrections and determines the CRT. A third Monte Carlo algorithm determines the CRT statistical lower bound by generating a series of random interaction depths, and for each interaction a set of random photoelectron times for each of the two photodetectors. The most likely interaction times are determined by shifting the depth-dependent probability density function to maximize the joint likelihood for all the photoelectron times in each set. Example calculations are tabulated for different numbers of photoelectrons and photodetector time jitters for three 3 × 3 × 30 mm³ scintillators: Lu2SiO5:Ce,Ca (LSO), LaBr3:Ce, and a hypothetical ultra-fast scintillator. To isolate the factors that depend on the scintillator length and the ability to estimate the DOI, CRT values are tabulated for perfect scintillator-photodetectors. For LSO with 4000 photoelectrons and single photoelectron time jitter of the photodetector J = 0.2 ns (FWHM), the CRT value using the statistically weighted average of corrected trigger times is 0.098 ns FWHM and the statistical lower bound is 0.091 ns FWHM. For LaBr3:Ce with 8000 photoelectrons and J = 0.2 ns FWHM, the CRT values are 0.070 and 0.063 ns FWHM, respectively. For the ultra-fast scintillator with 1 ns decay time, 4000 photoelectrons, and J = 0.2 ns FWHM, the CRT values are 0.021 and 0.017 ns FWHM, respectively. The examples also show that calibration and correction for depth-dependent variations in pulse height and in annihilation and optical photon transit times are necessary to achieve these CRT values.
Royle, J. Andrew; Sutherland, Christopher S.; Fuller, Angela K.; Sun, Catherine C.
2015-01-01
We develop a likelihood analysis framework for fitting spatial capture-recapture (SCR) models to data collected on class structured or stratified populations. Our interest is motivated by the necessity of accommodating the problem of missing observations of individual class membership. This is particularly problematic in SCR data arising from DNA analysis of scat, hair or other material, which frequently yields individual identity but fails to identify the sex. Moreover, this can represent a large fraction of the data and, given the typically small sample sizes of many capture-recapture studies based on DNA information, utilization of the data with missing sex information is necessary. We develop the class structured likelihood for the case of missing covariate values, and then we address the scaling of the likelihood so that models with and without class structured parameters can be formally compared regardless of missing values. We apply our class structured model to black bear data collected in New York in which sex could be determined for only 62 of 169 uniquely identified individuals. The models containing sex-specificity of both the intercept of the SCR encounter probability model and the distance coefficient, and including a behavioral response are strongly favored by log-likelihood. Estimated population sex ratio is strongly influenced by sex structure in model parameters illustrating the importance of rigorous modeling of sex differences in capture-recapture models.
Maximum likelihood decoding analysis of accumulate-repeat-accumulate codes
NASA Technical Reports Server (NTRS)
Abbasfar, A.; Divsalar, D.; Yao, K.
2004-01-01
In this paper, the performance of accumulate-repeat-accumulate codes with maximum-likelihood (ML) decoding is analyzed and compared to random codes by means of very tight bounds. Some simple codes are shown to perform very close to the Shannon limit with maximum likelihood decoding.
Monte Carlo Methodology Serves Up a Software Success
NASA Technical Reports Server (NTRS)
2003-01-01
Widely used for the modeling of gas flows through the computation of the motion and collisions of representative molecules, the Direct Simulation Monte Carlo method has become the gold standard for producing research and engineering predictions in the field of rarefied gas dynamics. Direct Simulation Monte Carlo was first introduced in the early 1960s by Dr. Graeme Bird, a professor at the University of Sydney, Australia. It has since proved to be a valuable tool to the aerospace and defense industries in providing design and operational support data, as well as flight data analysis. In 2002, NASA brought to the forefront a software product that maintains the same basic physics formulation of Dr. Bird's method, but provides effective modeling of complex, three-dimensional, real vehicle simulations and parallel processing capabilities to handle additional computational requirements, especially in areas where computational fluid dynamics (CFD) is not applicable. NASA's Direct Simulation Monte Carlo Analysis Code (DAC) software package is now considered the Agency's premier high-fidelity simulation tool for predicting vehicle aerodynamics and aerothermodynamic environments in rarefied, or low-density, gas flows.
Shi, Wei; Wei, Si; Hu, Xin-xin; Hu, Guan-jiu; Chen, Cu-lan; Wang, Xin-ru; Giesy, John P.; Yu, Hong-xia
2013-01-01
Some synthetic chemicals, which have been shown to disrupt thyroid hormone (TH) function, have been detected in surface waters, and people have the potential to be exposed through drinking water. Here, the presence of thyroid-active chemicals and their toxic potential in drinking water sources in the Yangtze River Delta were investigated by use of instrumental analysis combined with a cell-based reporter gene assay. A novel approach was developed to use Monte Carlo simulation for evaluation of the potential risks of measured concentrations of TH agonists and antagonists and to determine the major contributors to observed thyroid receptor (TR) antagonist potency. None of the extracts exhibited TR agonist potency, while 12 of 14 water samples exhibited TR antagonistic potency. The most probable observed antagonist equivalents ranged from 1.4 to 5.6 µg di-n-butyl phthalate (DNBP)/L, which posed potential risk in water sources. Based on Monte Carlo simulation related mass balance analysis, DNBP accounted for 64.4% of the entire observed antagonist toxic unit in water sources, while diisobutyl phthalate (DIBP), di-n-octyl phthalate (DNOP) and di-2-ethylhexyl phthalate (DEHP) also contributed. The most probable observed equivalent and most probable relative potency (REP) derived from Monte Carlo simulation are useful for potency comparison and screening of responsible chemicals. PMID:24204563
Dahabreh, Issa J; Trikalinos, Thomas A; Lau, Joseph; Schmid, Christopher H
2017-03-01
To compare statistical methods for meta-analysis of sensitivity and specificity of medical tests (e.g., diagnostic or screening tests). We constructed a database of PubMed-indexed meta-analyses of test performance from which 2 × 2 tables for each included study could be extracted. We reanalyzed the data using univariate and bivariate random effects models fit with inverse variance and maximum likelihood methods. Analyses were performed using both normal and binomial likelihoods to describe within-study variability. The bivariate model using the binomial likelihood was also fit using a fully Bayesian approach. We use two worked examples (thoracic computerized tomography to detect aortic injury and rapid prescreening of Papanicolaou smears to detect cytological abnormalities) to highlight that different meta-analysis approaches can produce different results. We also present results from reanalysis of 308 meta-analyses of sensitivity and specificity. Models using the normal approximation produced sensitivity and specificity estimates closer to 50% and smaller standard errors compared to models using the binomial likelihood; absolute differences of 5% or greater were observed in 12% and 5% of meta-analyses for sensitivity and specificity, respectively. Results from univariate and bivariate random effects models were similar, regardless of estimation method. Maximum likelihood and Bayesian methods produced almost identical summary estimates under the bivariate model; however, Bayesian analyses indicated greater uncertainty around those estimates. Bivariate models produced imprecise estimates of the between-study correlation of sensitivity and specificity. Differences between methods were larger with an increasing proportion of studies that were small or required a continuity correction. The binomial likelihood should be used to model within-study variability. Univariate and bivariate models give similar estimates of the marginal distributions for sensitivity and specificity. Bayesian methods fully quantify uncertainty and their ability to incorporate external evidence may be useful for imprecisely estimated parameters. Copyright © 2017 Elsevier Inc. All rights reserved.
Hospital mergers and market overlap.
Brooks, G R; Jones, V G
1997-01-01
OBJECTIVE: To address two questions: What are the characteristics of hospitals that affect the likelihood of their being involved in a merger? What characteristics of particular pairs of hospitals affect the likelihood of the pair engaging in a merger? DATA SOURCES/STUDY SETTING: Hospitals in the 12 county region surrounding the San Francisco Bay during the period 1983 to 1992 were the focus of the study. Data were drawn from secondary sources, including the Lexis/Nexis database, the American Hospital Association, and the Office of Statewide Health Planning and Development of the State of California. STUDY DESIGN: Seventeen hospital mergers during the study period were identified. A random sample of pairs of hospitals that did not merge was drawn to establish a statistically efficient control set. Models constructed from hypotheses regarding hospital and market characteristics believed to be related to merger likelihood were tested using logistic regression analysis. DATA COLLECTION: See Data Sources/Study Setting. PRINCIPAL FINDINGS: The analysis shows that the likelihood of a merger between a particular pair of hospitals is positively related to the degree of market overlap that exists between them. Furthermore, market overlap and performance difference interact in their effect on merger likelihood. In an analysis of individual hospitals, conditions of rivalry, hospital market share, and hospital size were not found to influence the likelihood that a hospital will engage in a merger. CONCLUSIONS: Mergers between hospitals are not driven directly by considerations of market power or efficiency as much as by the existence of specific merger opportunities in the hospitals' local markets. Market overlap is a condition that enables a merger to occur, but other factors, such as the relative performance levels of the hospitals in question and their ownership and teaching status, also play a role in influencing the likelihood that a merger will in fact take place. PMID:9018212
Dynamic Financial Constraints: Distinguishing Mechanism Design from Exogenously Incomplete Regimes*
Karaivanov, Alexander; Townsend, Robert M.
2014-01-01
We formulate and solve a range of dynamic models of constrained credit/insurance that allow for moral hazard and limited commitment. We compare them to full insurance and exogenously incomplete financial regimes (autarky, saving only, borrowing and lending in a single asset). We develop computational methods based on mechanism design, linear programming, and maximum likelihood to estimate, compare, and statistically test these alternative dynamic models with financial/information constraints. Our methods can use both cross-sectional and panel data and allow for measurement error and unobserved heterogeneity. We estimate the models using data on Thai households running small businesses from two separate samples. We find that in the rural sample, the exogenously incomplete saving only and borrowing regimes provide the best fit using data on consumption, business assets, investment, and income. Family and other networks help consumption smoothing there, as in a moral hazard constrained regime. In contrast, in urban areas, we find mechanism design financial/information regimes that are decidedly less constrained, with the moral hazard model fitting best combined business and consumption data. We perform numerous robustness checks in both the Thai data and in Monte Carlo simulations and compare our maximum likelihood criterion with results from other metrics and data not used in the estimation. A prototypical counterfactual policy evaluation exercise using the estimation results is also featured. PMID:25246710
Testing the non-unity of rate ratio under inverse sampling.
Tang, Man-Lai; Liao, Yi Jie; Ng, Hong Keung Tony; Chan, Ping Shing
2007-08-01
Inverse sampling is considered to be a more appropriate sampling scheme than the usual binomial sampling scheme when subjects arrive sequentially, when the underlying response of interest is acute, and when maximum likelihood estimators of some epidemiologic indices are undefined. In this article, we study various statistics for testing non-unity rate ratios in case-control studies under inverse sampling. These include the Wald, unconditional score, likelihood ratio and conditional score statistics. Three methods (the asymptotic, conditional exact, and Mid-P methods) are adopted for P-value calculation. We evaluate the performance of different combinations of test statistics and P-value calculation methods in terms of their empirical sizes and powers via Monte Carlo simulation. In general, asymptotic score and conditional score tests are preferable because their actual type I error rates are well controlled around the pre-chosen nominal level and their powers are comparatively the largest. The exact version of the Wald test is recommended if one wants to control the actual type I error rate at or below the pre-chosen nominal level. If larger power is expected and fluctuations of sizes around the pre-chosen nominal level are allowed, then the Mid-P version of the Wald test is a desirable alternative. We illustrate the methodologies with a real example from a heart disease study. (c) 2007 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim
Han, Zhifeng; Liu, Jianye; Li, Rongbing; Zeng, Qinghua; Wang, Yi
2017-07-04
BeiDou system navigation messages are modulated with a secondary NH (Neumann-Hoffman) code of 1 kbps, where frequent bit transitions limit the coherent integration time to 1 millisecond. Therefore, a bit synchronization algorithm is necessary to obtain bit edges and NH code phases. In order to realize bit synchronization for BeiDou weak signals with large frequency deviation, a bit synchronization algorithm based on differential coherent and maximum likelihood is proposed. Firstly, a differential coherent approach is used to remove the effect of frequency deviation, and the differential delay time is set to be a multiple of the bit cycle to remove the influence of the NH code. Secondly, maximum likelihood function detection is used to improve the detection probability of weak signals. Finally, Monte Carlo simulations are conducted to analyze the detection performance of the proposed algorithm compared with a traditional algorithm under C/N0 values of 20-40 dB-Hz and different frequency deviations. The results show that the proposed algorithm outperforms the traditional method with a frequency deviation of 50 Hz. This algorithm can remove the effect of the BeiDou NH code effectively and weaken the influence of frequency deviation. To confirm the feasibility of the proposed algorithm, real data tests are conducted. The proposed algorithm is suitable for BeiDou weak signal bit synchronization with large frequency deviation.
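A rough sketch of the two stages described: a one-bit-period differential product turns the frequency error into a constant phase rotation, and a likelihood-style search over the 20 candidate bit edges picks the alignment with the largest coherently accumulated energy. The simplified signal model, SNR, and parameter values below are assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy setting (assumed): 1 ms coherent sums, 20 ms data bits, frequency error df
n_ms, df, amp, true_edge = 4000, 47.0, 0.8, 13
bits = rng.choice([-1.0, 1.0], n_ms // 20 + 2)
k = np.arange(n_ms)
s = bits[(k + true_edge) // 20] * np.exp(2j * np.pi * df * k * 1e-3)
r = amp * s + (rng.normal(size=n_ms) + 1j * rng.normal(size=n_ms)) / np.sqrt(2)

# Differential coherent step: a delay of one bit period (a multiple of the NH
# cycle) makes the frequency error a constant phase common to every sample
d = r[20:] * np.conj(r[:-20])

# ML-style edge search: coherently sum d within putative bit intervals for each
# of the 20 candidate edges and keep the hypothesis with the most energy
n_bits = (len(d) - 20) // 20
metric = np.empty(20)
for h in range(20):
    blocks = d[h : h + n_bits * 20].reshape(n_bits, 20).sum(axis=1)
    metric[h] = np.abs(blocks).sum()

est = int(np.argmax(metric))
print("estimated edge offset:", est, " expected:", (20 - true_edge) % 20)
```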
Measurement of CIB power spectra with CAM-SPEC from Planck HFI maps
NASA Astrophysics Data System (ADS)
Mak, Suet Ying; Challinor, Anthony; Efstathiou, George; Lagache, Guilaine
2015-08-01
We present new measurements of the cosmic infrared background (CIB) anisotropies and its first likelihood using Planck HFI data at 353, 545, and 857 GHz. The measurements are based on cross-frequency power spectra and likelihood analysis using the CAM-SPEC package, rather than map-based template removal of foregrounds as done in previous Planck CIB analysis. We construct the likelihood of the CIB temperature fluctuations, an extension of the CAM-SPEC likelihood as used in CMB analysis to higher frequency, and use it to derive the best estimate of the CIB power spectrum over three decades in multipole moment, l, covering 50 ≤ l ≤ 2500. We adopt parametric models of the CIB and foreground contaminants (Galactic cirrus, infrared point sources, and cosmic microwave background anisotropies), and calibrate the dataset uniformly across frequencies with known Planck beam and noise properties in the likelihood construction. We validate our likelihood through simulations and an extensive suite of consistency tests, and assess the impact of instrumental and data selection effects on the final CIB power spectrum constraints. Two approaches are developed for interpreting the CIB power spectrum. The first approach is based on a simple parametric model which describes the cross-frequency power using amplitudes, correlation coefficients, and a known multipole dependence. The second approach is based on physical models for galaxy clustering and the evolution of infrared emission of galaxies. The new approaches fit all auto- and cross-power spectra very well, with a best-fit reduced chi-square of χ²/ν = 1.04 (parametric model). Using the best foreground solution, we find that the cleaned CIB power spectra are in good agreement with previous Planck and Herschel measurements.
NASA Technical Reports Server (NTRS)
Stolarski, R. S.; Butler, D. M.; Rundel, R. D.
1977-01-01
A concise stratospheric model was used in a Monte Carlo analysis of the propagation of reaction rate uncertainties through the calculation of an ozone perturbation due to the addition of chlorine. Two thousand Monte Carlo cases were run with 55 reaction rates being varied. Excellent convergence was obtained in the output distributions because the model is sensitive to the uncertainties in only about 10 reactions. For a 1 ppbv chlorine perturbation added to a 1.5 ppbv chlorine background, the resultant 1 sigma uncertainty on the ozone perturbation is a factor of 1.69 on the high side and 1.80 on the low side. The corresponding 2 sigma factors are 2.86 and 3.23. Results are also given for the uncertainties, due to reaction rates, in the ambient concentrations of stratospheric species.
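The factor-style 1-sigma uncertainties quoted here are percentiles of a roughly lognormal output distribution. The toy sketch below propagates assumed lognormal rate uncertainties through an assumed log-linear model response to show how such high-side/low-side factors are extracted; the sensitivities and uncertainty factors are invented, not the paper's chemistry.

```python
import numpy as np

rng = np.random.default_rng(4)

# Stand-in log-linear model: ln(output) responds linearly to ln(rate) changes
n_rates, n_cases = 10, 2000
sens = rng.normal(0.0, 0.3, n_rates)       # assumed sensitivities d ln(out)/d ln(k)
unc_factor = np.full(n_rates, 1.5)         # assumed 1-sigma rate uncertainty factors

log_rate_dev = rng.normal(0.0, np.log(unc_factor), (n_cases, n_rates))
out = np.exp(log_rate_dev @ sens)          # relative perturbation (toy output)

lo, med, hi = np.percentile(out, [15.87, 50.0, 84.13])
print(f"1-sigma factors: {hi / med:.2f} (high side), {med / lo:.2f} (low side)")
```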
NASA Astrophysics Data System (ADS)
Graham, Eleanor; Cuore Collaboration
2017-09-01
The CUORE experiment is a large-scale bolometric detector seeking to observe the never-before-seen process of neutrinoless double beta decay. Predictions for CUORE's sensitivity to neutrinoless double beta decay allow for an understanding of the half-life ranges that the detector can probe, and also allow evaluation of the relative importance of different detector parameters. Currently, CUORE uses a Bayesian analysis based on BAT, which uses Metropolis-Hastings Markov chain Monte Carlo, for its sensitivity studies. My work evaluates the viability and potential improvements of switching the Bayesian analysis to Hamiltonian Monte Carlo, realized through the program Stan and its Morpho interface. I demonstrate that the BAT study can be successfully recreated in Stan, and perform a detailed comparison between the results and computation times of the two methods.
92 Years of the Ising Model: A High Resolution Monte Carlo Study
NASA Astrophysics Data System (ADS)
Xu, Jiahao; Ferrenberg, Alan M.; Landau, David P.
2018-04-01
Using extensive Monte Carlo simulations that employ the Wolff cluster flipping and data analysis with histogram reweighting and quadruple precision arithmetic, we have investigated the critical behavior of the simple cubic Ising model with lattice sizes ranging from 16³ to 1024³. By analyzing data with cross correlations between various thermodynamic quantities obtained from the same data pool, we obtained the critical inverse temperature K_c = 0.221 654 626(5) and the critical exponent of the correlation length ν = 0.629 912(86) with precision that improves upon previous Monte Carlo estimates.
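For reference, a compact sketch of the Wolff cluster-flipping update used in such studies, here on a small 3D lattice at the quoted coupling (pure Python, and far from the scale and precision of the actual study):

```python
import numpy as np

rng = np.random.default_rng(0)
L, K = 16, 0.221654626            # lattice size and coupling K = J/kT near K_c
spins = rng.choice([-1, 1], size=(L, L, L))
p_add = 1.0 - np.exp(-2.0 * K)    # Wolff bond-activation probability

def wolff_step(s):
    """Grow one Wolff cluster from a random seed and flip it; return its size."""
    seed = tuple(rng.integers(0, L, 3))
    target, stack, cluster = s[seed], [seed], {seed}
    while stack:
        x, y, z = stack.pop()
        for dx, dy, dz in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
            nb = ((x + dx) % L, (y + dy) % L, (z + dz) % L)  # periodic neighbors
            if nb not in cluster and s[nb] == target and rng.random() < p_add:
                cluster.add(nb)
                stack.append(nb)
    for site in cluster:
        s[site] = -target
    return len(cluster)

for _ in range(200):
    wolff_step(spins)
print("magnetization per site:", spins.mean())
```

Cluster updates like this decorrelate configurations near criticality far faster than single-spin flips, which is why they underpin high-precision estimates of K_c and ν.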
HepSim: A repository with predictions for high-energy physics experiments
Chekanov, S. V.
2015-02-03
A file repository for calculations of cross sections and kinematic distributions using Monte Carlo generators for high-energy collisions is discussed. The repository is used to facilitate effective preservation and archiving of data from theoretical calculations and for comparisons with experimental data. The HepSim data library is publicly accessible and includes a number of Monte Carlo event samples with Standard Model predictions for current and future experiments. The HepSim project includes a software package to automate the process of downloading and viewing online Monte Carlo event samples. Data streaming over a network for end-user analysis is discussed.
A Walk on the Wild Side: The Impact of Music on Risk-Taking Likelihood.
Enström, Rickard; Schmaltz, Rodney
2017-01-01
From a marketing perspective, there has been substantial interest in the role of risk perception on consumer behavior. Specific 'problem music' like rap and heavy metal has long been associated with delinquent behavior, including violence, drug use, and promiscuous sex. Although individuals' risk preferences have been investigated across a range of decision-making situations, there has been little empirical work demonstrating the direct role music may have on the likelihood of engaging in risky activities. In the exploratory study reported here, we assessed the impact of listening to different styles of music while assessing risk-taking likelihood through a psychometric scale. Risk-taking likelihood was measured across ethical, financial, health and safety, recreational and social domains. Through the means of a canonical correlation analysis, the multivariate relationship between different music styles and individual risk-taking likelihood across the different domains is discussed. Our results indicate that listening to different types of music does influence risk-taking likelihood, though not in areas of health and safety. PMID:28539908
Tests for detecting overdispersion in models with measurement error in covariates.
Yang, Yingsi; Wong, Man Yu
2015-11-30
Measurement error in covariates can affect the accuracy in count data modeling and analysis. In overdispersion identification, the true mean-variance relationship can be obscured under the influence of measurement error in covariates. In this paper, we propose three tests for detecting overdispersion when covariates are measured with error: a modified score test and two score tests based on the proposed approximate likelihood and quasi-likelihood, respectively. The proposed approximate likelihood is derived under the classical measurement error model, and the resulting approximate maximum likelihood estimator is shown to have superior efficiency. Simulation results also show that the score test based on approximate likelihood outperforms the test based on quasi-likelihood and other alternatives in terms of empirical power. By analyzing a real dataset containing the health-related quality-of-life measurements of a particular group of patients, we demonstrate the importance of the proposed methods by showing that the analyses with and without measurement error correction yield significantly different results. Copyright © 2015 John Wiley & Sons, Ltd.
Effect of lag time distribution on the lag phase of bacterial growth - a Monte Carlo analysis
USDA-ARS?s Scientific Manuscript database
The objective of this study is to use Monte Carlo simulation to evaluate the effect of lag time distribution of individual bacterial cells incubated under isothermal conditions on the development of lag phase. The growth of bacterial cells of the same initial concentration and mean lag phase durati...
SABRINA: an interactive solid geometry modeling program for Monte Carlo
DOE Office of Scientific and Technical Information (OSTI.GOV)
West, J.T.
SABRINA is a fully interactive three-dimensional geometry modeling program for MCNP. In SABRINA, a user interactively constructs either body-geometry or surface-geometry models and interactively debugs spatial descriptions for the resulting objects. This enhanced capability significantly reduces the effort in constructing and debugging complicated three-dimensional geometry models for Monte Carlo analysis.
First-Order or Second-Order Kinetics? A Monte Carlo Answer
ERIC Educational Resources Information Center
Tellinghuisen, Joel
2005-01-01
Monte Carlo computational experiments reveal that the ability to discriminate between first- and second-order kinetics from least-squares analysis of time-dependent concentration data is better than implied in earlier discussions of the problem. The problem is rendered as simple as possible by assuming that the order must be either 1 or 2 and that…
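A small sketch of this kind of Monte Carlo computational experiment: repeatedly generate noisy second-order decay data, fit both rate laws by least squares, and count how often the correct order gives the smaller residual sum of squares. The noise level, rate constants, and sampling times are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(9)

def first_order(t, c0, k):        # C(t) for first-order decay
    return c0 * np.exp(-k * t)

def second_order(t, c0, k):       # C(t) for second-order decay
    return c0 / (1 + c0 * k * t)

t = np.linspace(0.0, 10.0, 25)    # assumed sampling times
wins = 0
for _ in range(1000):             # Monte Carlo replicate experiments
    c = second_order(t, 1.0, 0.5) + rng.normal(0.0, 0.01, t.size)
    sse = []
    for model in (first_order, second_order):
        p, _ = curve_fit(model, t, c, p0=(1.0, 0.5))
        sse.append(np.sum((c - model(t, *p)) ** 2))
    wins += sse[1] < sse[0]       # true (second) order should fit better

print("fraction of replicates identifying second order:", wins / 1000)
```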
A general methodology for maximum likelihood inference from band-recovery data
Conroy, M.J.; Williams, B.K.
1984-01-01
A numerical procedure is described for obtaining maximum likelihood estimates and associated maximum likelihood inference from band-recovery data. The method is used to illustrate previously developed one-age-class band-recovery models, and is extended to new models, including the analysis with a covariate for survival rates and variable-time-period recovery models. Extensions to R-age-class band-recovery, mark-recapture models, and twice-yearly marking are discussed. A FORTRAN program provides computations for these models.
Probabilistic structural analysis using a general purpose finite element program
NASA Astrophysics Data System (ADS)
Riha, D. S.; Millwater, H. R.; Thacker, B. H.
1992-07-01
This paper presents an accurate and efficient method to predict the probabilistic response for structural response quantities, such as stress, displacement, natural frequencies, and buckling loads, by combining the capabilities of MSC/NASTRAN, including design sensitivity analysis and fast probability integration. Two probabilistic structural analysis examples have been performed and verified by comparison with Monte Carlo simulation of the analytical solution. The first example consists of a cantilevered plate with several point loads. The second example is a probabilistic buckling analysis of a simply supported composite plate under in-plane loading. The coupling of MSC/NASTRAN and fast probability integration is shown to be orders of magnitude more efficient than Monte Carlo simulation with excellent accuracy.
Robbins, L G
2000-01-01
Graduate school programs in genetics have become so full that courses in statistics have often been eliminated. In addition, typical introductory statistics courses for the "statistics user" rather than the nascent statistician are laden with methods for analysis of measured variables, while genetic data are most often discrete numbers. These courses are often seen by students and genetics professors alike as largely irrelevant cookbook courses. The powerful methods of likelihood analysis, although commonly employed in human genetics, are much less often used in other areas of genetics, even though current computational tools make this approach readily accessible. This article introduces the MLIKELY.PAS computer program and the logic of do-it-yourself maximum-likelihood statistics. The program itself, course materials, and expanded discussions of some examples that are only summarized here are available at http://www.unisi.it/ricerca/dip/bio_evol/sitomlikely/mlikely.html. PMID:10628965
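In the do-it-yourself spirit of the article (though not using MLIKELY.PAS itself), here is a minimal likelihood analysis of discrete genetic data: a binomial likelihood for a recombination fraction, maximized numerically, with a likelihood-ratio (G) test against free recombination. The counts are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical backcross counts: 18 recombinant offspring out of 120
recomb, total = 18, 120

def neg_log_lik(theta):
    """Negative binomial log-likelihood for the recombination fraction theta."""
    return -(recomb * np.log(theta) + (total - recomb) * np.log(1 - theta))

fit = minimize_scalar(neg_log_lik, bounds=(1e-6, 0.5 - 1e-6), method="bounded")
theta_hat = fit.x   # for binomial data this equals recomb/total

# Likelihood-ratio (G) test of linkage against theta = 0.5
G = 2.0 * (neg_log_lik(0.5) - neg_log_lik(theta_hat))
print(f"theta_hat = {theta_hat:.3f}, G = {G:.2f} (compare to chi-square, 1 df)")
```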
Prioritizing conservation investments for mammal species globally
Wilson, Kerrie A.; Evans, Megan C.; Di Marco, Moreno; Green, David C.; Boitani, Luigi; Possingham, Hugh P.; Chiozza, Federica; Rondinini, Carlo
2011-01-01
We need to set priorities for conservation because we cannot do everything, everywhere, at the same time. We determined priority areas for investment in threat abatement actions, in both a cost-effective and spatially and temporally explicit way, for the threatened mammals of the world. Our analysis presents the first fine-resolution prioritization analysis for mammals at a global scale that accounts for the risk of habitat loss, the actions required to abate this risk, the costs of these actions and the likelihood of investment success. We evaluated the likelihood of success of investments using information on the past frequency and duration of legislative effectiveness at a country scale. The establishment of new protected areas was the action receiving the greatest investment, while restoration was never chosen. The resolution of the analysis and the incorporation of likelihood of success made little difference to this result, but affected the spatial location of these investments. PMID:21844046
An analysis of crash likelihood : age versus driving experience
DOT National Transportation Integrated Search
1995-05-01
The study was designed to determine the crash likelihood of drivers in Michigan as a function of two independent variables: driver age and driving experience. The age variable had eight levels (18, 19, 20, 21, 22, 23, 24, and 25 years old) and the ex...
Tool for Rapid Analysis of Monte Carlo Simulations
NASA Technical Reports Server (NTRS)
Restrepo, Carolina; McCall, Kurt E.; Hurtado, John E.
2013-01-01
Designing a spacecraft, or any other complex engineering system, requires extensive simulation and analysis work. Oftentimes, the large amounts of simulation data generated are very difficult and time-consuming to analyze, with the added risk of overlooking potentially critical problems in the design. The authors have developed a generic data analysis tool that can quickly sort through large data sets and point an analyst to the areas in the data set that cause specific types of failures. The first version of this tool was a serial code and the current version is a parallel code, which has greatly increased the analysis capabilities. This paper describes the new implementation of this analysis tool on a graphics processing unit, and presents analysis results for NASA's Orion Monte Carlo data to demonstrate its capabilities.
A linear stability analysis for nonlinear, grey, thermal radiative transfer problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wollaber, Allan B., E-mail: wollaber@lanl.go; Larsen, Edward W., E-mail: edlarsen@umich.ed
2011-02-20
We present a new linear stability analysis of three time discretizations and Monte Carlo interpretations of the nonlinear, grey thermal radiative transfer (TRT) equations: the widely used 'Implicit Monte Carlo' (IMC) equations, the Carter Forest (CF) equations, and the Ahrens-Larsen or 'Semi-Analog Monte Carlo' (SMC) equations. Using a spatial Fourier analysis of the 1-D Implicit Monte Carlo (IMC) equations that are linearized about an equilibrium solution, we show that the IMC equations are unconditionally stable (undamped perturbations do not exist) if α, the IMC time-discretization parameter, satisfies 0.5 < α ≤ 1. This is consistent with conventional wisdom. However, we also show that for sufficiently large time steps, unphysical damped oscillations can exist that correspond to the lowest-frequency Fourier modes. After numerically confirming this result, we develop a method to assess the stability of any time discretization of the 0-D, nonlinear, grey, thermal radiative transfer problem. Subsequent analyses of the CF and SMC methods then demonstrate that the CF method is unconditionally stable and monotonic, but the SMC method is conditionally stable and permits unphysical oscillatory solutions that can prevent it from reaching equilibrium. This stability theory provides new conditions on the time step to guarantee monotonicity of the IMC solution, although they are likely too conservative to be used in practice. Theoretical predictions are tested and confirmed with numerical experiments.
NASA Astrophysics Data System (ADS)
César Mansur Filho, Júlio; Dickman, Ronald
2011-05-01
We study symmetric sleepy random walkers, a model exhibiting an absorbing-state phase transition in the conserved directed percolation (CDP) universality class. Unlike most examples of this class studied previously, this model possesses a continuously variable control parameter, facilitating analysis of critical properties. We study the model using two complementary approaches: analysis of the numerically exact quasistationary (QS) probability distribution on rings of up to 22 sites, and Monte Carlo simulation of systems of up to 32 000 sites. The resulting estimates for critical exponents β, β/ν, ...
Silverman, Merav H.; Jedd, Kelly; Luciana, Monica
2015-01-01
Behavioral responses to, and the neural processing of, rewards change dramatically during adolescence and may contribute to observed increases in risk-taking during this developmental period. Functional MRI (fMRI) studies suggest differences between adolescents and adults in neural activation during reward processing, but findings are contradictory, and effects have been found in non-predicted directions. The current study uses an activation likelihood estimation (ALE) approach for quantitative meta-analysis of functional neuroimaging studies to: 1) confirm the network of brain regions involved in adolescents’ reward processing, 2) identify regions involved in specific stages (anticipation, outcome) and valence (positive, negative) of reward processing, and 3) identify differences in activation likelihood between adolescent and adult reward-related brain activation. Results reveal a subcortical network of brain regions involved in adolescent reward processing similar to that found in adults with major hubs including the ventral and dorsal striatum, insula, and posterior cingulate cortex (PCC). Contrast analyses find that adolescents exhibit greater likelihood of activation in the insula while processing anticipation relative to outcome and greater likelihood of activation in the putamen and amygdala during outcome relative to anticipation. While processing positive compared to negative valence, adolescents show increased likelihood for activation in the posterior cingulate cortex (PCC) and ventral striatum. Contrasting adolescent reward processing with the existing ALE of adult reward processing (Liu et al., 2011) reveals increased likelihood for activation in limbic, frontolimbic, and striatal regions in adolescents compared with adults. Unlike adolescents, adults also activate executive control regions of the frontal and parietal lobes. These findings support hypothesized elevations in motivated activity during adolescence. PMID:26254587
Dual-Particle Imaging System with Neutron Spectroscopy for Safeguard Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hamel, Michael C.; Weber, Thomas M.
2017-11-01
A dual-particle imager (DPI) has been designed that is capable of detecting gamma-ray and neutron signatures from shielded SNM. The system combines liquid organic and NaI(Tl) scintillators to form a combined Compton and neutron scatter camera. Effective image reconstruction of detected particles is a crucial component for maximizing the performance of the system; however, a key deficiency exists in the widely used iterative list-mode maximum-likelihood expectation-maximization (MLEM) image reconstruction technique. For MLEM a stopping condition is required to achieve a good quality solution, but these conditions fail to achieve maximum image quality. Stochastic origin ensembles (SOE) imaging is a good candidate to address this problem as it uses Markov chain Monte Carlo to reach a stochastic steady-state solution. The application of SOE to the DPI is presented in this work.
Akazawa, K; Nakamura, T; Moriguchi, S; Shimada, M; Nose, Y
1991-07-01
Small sample properties of the maximum partial likelihood estimates for Cox's proportional hazards model depend on the sample size, the true values of regression coefficients, covariate structure, censoring pattern and possibly baseline hazard functions. Therefore, it would be difficult to construct a formula or table to calculate the exact power of a statistical test for the treatment effect in any specific clinical trial. The simulation program described in this paper, written in SAS/IML, uses Monte Carlo methods to provide estimates of the exact power for Cox's proportional hazards model. For illustrative purposes, the program was applied to real data obtained from a clinical trial performed in Japan. Since the program does not assume any specific function for the baseline hazard, it is, in principle, applicable to any censored survival data as long as they follow Cox's proportional hazards model.
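A sketch of the same idea without SAS/IML: Monte Carlo power estimation for a two-arm survival comparison, using the logrank statistic (the partial-likelihood score test for a binary covariate in Cox's model). The sample sizes, hazard ratio, and censoring distribution are assumed values, and exponential event times are used purely for convenience, since the test itself makes no baseline-hazard assumption.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)

def logrank_z(time, event, group):
    """Logrank Z statistic: the Cox partial-likelihood score test for one
    binary covariate, accumulated over the ordered distinct event times."""
    order = np.argsort(time)
    t, e, g = time[order], event[order], group[order]
    n_at_risk, n1_at_risk = len(t), g.sum()
    o_minus_e, var = 0.0, 0.0
    i = 0
    while i < len(t):
        j = i
        d, d1 = 0, 0.0
        while j < len(t) and t[j] == t[i]:   # handle tied event times
            if e[j]:
                d += 1
                d1 += g[j]
            j += 1
        if d > 0 and n_at_risk > 1:
            p1 = n1_at_risk / n_at_risk
            o_minus_e += d1 - d * p1
            var += d * p1 * (1 - p1) * (n_at_risk - d) / (n_at_risk - 1)
        n1_at_risk -= g[i:j].sum()
        n_at_risk -= j - i
        i = j
    return o_minus_e / np.sqrt(var)

# Assumed design: 80 per arm, hazard ratio 0.6, uniform censoring window
n_arm, hr, n_rep, rejected = 80, 0.6, 2000, 0
for _ in range(n_rep):
    grp = np.r_[np.zeros(n_arm), np.ones(n_arm)]
    t_event = rng.exponential(1.0 / np.where(grp == 1, hr, 1.0))
    t_cens = rng.uniform(0.5, 2.0, size=2 * n_arm)
    time = np.minimum(t_event, t_cens)
    event = t_event <= t_cens
    rejected += abs(logrank_z(time, event, grp)) > norm.ppf(0.975)

print("estimated power:", rejected / n_rep)
```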
Lee, Eunjee; Zhu, Hongtu; Kong, Dehan; Wang, Yalin; Giovanello, Kelly Sullivan; Ibrahim, Joseph G
2015-01-01
The aim of this paper is to develop a Bayesian functional linear Cox regression model (BFLCRM) with both functional and scalar covariates. This new development is motivated by establishing the likelihood of conversion to Alzheimer’s disease (AD) in 346 patients with mild cognitive impairment (MCI) enrolled in the Alzheimer’s Disease Neuroimaging Initiative 1 (ADNI-1) and the early markers of conversion. These 346 MCI patients were followed over 48 months, with 161 MCI participants progressing to AD at 48 months. The functional linear Cox regression model was used to establish that functional covariates including hippocampus surface morphology and scalar covariates including brain MRI volumes, cognitive performance (ADAS-Cog), and APOE status can accurately predict time to onset of AD. Posterior computation proceeds via an efficient Markov chain Monte Carlo algorithm. A simulation study is performed to evaluate the finite sample performance of BFLCRM. PMID:26900412
Channel Training for Analog FDD Repeaters: Optimal Estimators and Cramér-Rao Bounds
NASA Astrophysics Data System (ADS)
Wesemann, Stefan; Marzetta, Thomas L.
2017-12-01
For frequency division duplex channels, a simple pilot loop-back procedure has been proposed that allows the estimation of the UL & DL channels at an antenna array without relying on any digital signal processing at the terminal side. For this scheme, we derive the maximum likelihood (ML) estimators for the UL & DL channel subspaces, formulate the corresponding Cramér-Rao bounds and show the asymptotic efficiency of both (SVD-based) estimators by means of Monte Carlo simulations. In addition, we illustrate how to compute the underlying (rank-1) SVD with quadratic time complexity by employing the power iteration method. To enable power control for the data transmission, knowledge of the channel gains is needed. Assuming that the UL & DL channels have on average the same gain, we formulate the ML estimator for the channel norm, and illustrate its robustness against strong noise by means of simulations.
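A minimal sketch of the power-iteration step mentioned above: for a noisy rank-1 loop-back data matrix, iterating u <- Y Yᵀ u (with normalization) recovers the dominant left singular vector at O(MN) cost per step rather than full-SVD cost. The matrix sizes and noise level are assumptions; this is not the paper's complete estimator.

```python
import numpy as np

rng = np.random.default_rng(3)

# Noisy rank-1 "loop-back" data matrix (sizes and noise level are assumptions)
M, N = 64, 32
h_ul = rng.normal(size=M)          # stand-in uplink channel direction
h_dl = rng.normal(size=N)          # stand-in downlink channel direction
Y = np.outer(h_ul, h_dl) + 0.1 * rng.normal(size=(M, N))

# Power iteration on Y Y^T: O(M*N) work per step
u = rng.normal(size=M)
for _ in range(50):
    u = Y @ (Y.T @ u)
    u /= np.linalg.norm(u)
v = Y.T @ u
v /= np.linalg.norm(v)             # dominant right singular vector

align = abs(u @ h_ul) / np.linalg.norm(h_ul)
print(f"alignment of u with the true UL direction: {align:.4f}")
```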
Kasaragod, Deepa; Makita, Shuichi; Hong, Young-Joo; Yasuno, Yoshiaki
2017-01-01
This paper presents a noise-stochastic corrected maximum a posteriori estimator for birefringence imaging using Jones matrix optical coherence tomography. The estimator described in this paper is based on the relationship between probability distribution functions of the measured birefringence and the effective signal to noise ratio (ESNR) as well as the true birefringence and the true ESNR. The Monte Carlo method is used to numerically describe this relationship and adaptive 2D kernel density estimation provides the likelihood for a posteriori estimation of the true birefringence. Improved estimation is shown for the new estimator with stochastic model of ESNR in comparison to the old estimator, both based on the Jones matrix noise model. A comparison with the mean estimator is also done. Numerical simulation validates the superiority of the new estimator. The superior performance of the new estimator was also shown by in vivo measurement of optic nerve head. PMID:28270974
A goodness-of-fit test for capture-recapture model M(t) under closure
Stanley, T.R.; Burnham, K.P.
1999-01-01
A new, fully efficient goodness-of-fit test for the time-specific closed-population capture-recapture model M(t) is presented. This test is based on the residual distribution of the capture history data given the maximum likelihood parameter estimates under model M(t), is partitioned into informative components, and is based on chi-square statistics. Comparison of this test with Leslie's test (Leslie, 1958, Journal of Animal Ecology 27, 84-86) for model M(t), using Monte Carlo simulations, shows the new test generally outperforms Leslie's test. The new test is frequently computable when Leslie's test is not, has Type I error rates that are closer to nominal error rates than Leslie's test, and is sensitive to behavioral variation and heterogeneity in capture probabilities. Leslie's test is not sensitive to behavioral variation in capture probabilities but, when computable, has greater power to detect heterogeneity than the new test.
NASA Astrophysics Data System (ADS)
Demianski, Marek; Piedipalumbo, Ester; Sawant, Disha; Amati, Lorenzo
2017-02-01
Context. Explaining the accelerated expansion of the Universe is one of the fundamental challenges in physics today. Cosmography provides information about the evolution of the universe derived from measured distances, assuming only that the space-time geometry is described by the Friedmann-Lemaître-Robertson-Walker metric, and adopting an approach that effectively uses only Taylor expansions of basic observables. Aims: We perform a high-redshift analysis to constrain the cosmographic expansion up to the fifth order. It is based on the Union2 type Ia supernovae data set, the gamma-ray burst Hubble diagram, a data set of 28 independent measurements of the Hubble parameter, baryon acoustic oscillations measurements from galaxy clustering and the Lyman-α forest in the SDSS-III Baryon Oscillation Spectroscopic Survey (BOSS), and some Gaussian priors on h and ΩM. Methods: We performed a statistical analysis and explored the probability distributions of the cosmographic parameters. By building up their regions of confidence, we maximized our likelihood function using the Markov chain Monte Carlo method. Results: Our high-redshift analysis confirms that the expansion of the Universe currently accelerates; the estimation of the jerk parameter indicates a possible deviation from the standard ΛCDM cosmological model. Moreover, we investigate implications of our results for the reconstruction of the dark energy equation of state (EOS) by comparing the standard technique of cosmography with an alternative approach based on generalized Padé approximations of the same observables. Because these expansions converge better, it is possible to improve the constraints on the cosmographic parameters and also on the dark energy EOS. Conclusions: The estimation of the jerk and the DE parameters indicates at 1σ a possible deviation from the ΛCDM cosmological model.
A Generalized Framework for Non-Stationary Extreme Value Analysis
NASA Astrophysics Data System (ADS)
Ragno, E.; Cheng, L.; Sadegh, M.; AghaKouchak, A.
2017-12-01
Empirical trends in climate variables, including precipitation, temperature, and snow-water equivalent, at regional to continental scales are evidence of changes in climate over time. The evolving climate conditions and human activity-related factors such as urbanization and population growth can exert further changes in weather and climate extremes. As a result, the scientific community faces an increasing demand for updated appraisal of the time-varying climate extremes. The purpose of this study is to offer a robust and flexible statistical tool for non-stationary extreme value analysis which can better characterize the severity and likelihood of extreme climatic variables. This is critical to ensure a more resilient environment in a changing climate. Following the positive feedback on the first version of the Non-Stationary Extreme Value Analysis (NEVA) Toolbox by Cheng et al. (2014), we present an improved version, i.e. NEVA2.0. The upgraded version herein builds upon a newly developed hybrid evolution Markov Chain Monte Carlo (MCMC) approach for numerical parameter estimation and uncertainty assessment. This addition leads to more robust uncertainty estimates of return levels, return periods, and risks of climatic extremes under both stationary and non-stationary assumptions. Moreover, NEVA2.0 is flexible in incorporating any user-specified covariate other than the default time-covariate (e.g., CO2 emissions, large-scale climatic oscillation patterns). The new feature will allow users to examine non-stationarity of extremes induced by physical conditions that underlie the extreme events (e.g. antecedent soil moisture deficit, large-scale climatic teleconnections, urbanization). In addition, the new version offers an option to generate stationary and/or non-stationary rainfall Intensity-Duration-Frequency (IDF) curves that are widely used for risk assessment and infrastructure design. Finally, a Graphical User Interface (GUI) of the package is provided, making NEVA accessible to a broader audience.
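A stripped-down sketch of the likelihood at the core of such a non-stationary analysis: a GEV whose location parameter varies linearly with the time covariate. Tools like NEVA sample this kind of likelihood with MCMC; here it is simply maximized for brevity, and the synthetic data and trend values are assumptions. (Note that scipy's genextreme parameterizes the shape as c = -ξ.)

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import genextreme

rng = np.random.default_rng(11)

# Synthetic annual maxima with a linear trend in the GEV location parameter
t_cov = np.arange(60)
x = genextreme.rvs(c=-0.1, loc=30 + 0.08 * t_cov, scale=5.0, random_state=rng)

def nll(params):
    """Negative log-likelihood of a GEV with location mu0 + mu1 * t."""
    mu0, mu1, log_sig, xi = params
    sig = np.exp(log_sig)
    z = 1 + xi * (x - (mu0 + mu1 * t_cov)) / sig
    if abs(xi) < 1e-8 or np.any(z <= 0):    # outside the support, or xi ~ 0
        return np.inf
    return np.sum(np.log(sig) + (1 + 1 / xi) * np.log(z) + z ** (-1 / xi))

fit = minimize(nll, x0=[x.mean(), 0.0, np.log(x.std()), 0.1], method="Nelder-Mead")
mu0, mu1, log_sig, xi = fit.x
print(f"mu0={mu0:.2f}  trend mu1={mu1:.3f}  sigma={np.exp(log_sig):.2f}  xi={xi:.3f}")
```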
Janssen, Eva; van Osch, Liesbeth; Lechner, Lilian; Candel, Math; de Vries, Hein
2012-01-01
Despite the increased recognition of affect in guiding probability estimates, perceived risk has been mainly operationalised in a cognitive way and the differentiation between rational and intuitive judgements is largely unexplored. This study investigated the validity of a measurement instrument differentiating cognitive and affective probability beliefs and examined whether behavioural decision making is mainly guided by cognition or affect. Data were obtained from four surveys focusing on smoking (N=268), fruit consumption (N=989), sunbed use (N=251) and sun protection (N=858). Correlational analyses showed that affective likelihood was more strongly correlated with worry compared to cognitive likelihood and confirmatory factor analysis provided support for a two-factor model of perceived likelihood instead of a one-factor model (i.e. cognition and affect combined). Furthermore, affective likelihood was significantly associated with the various outcome variables, whereas the association for cognitive likelihood was absent in three studies. The findings provide support for the construct validity of the measures used to assess cognitive and affective likelihood. Since affective likelihood might be a better predictor of health behaviour than the commonly used cognitive operationalisation, both dimensions should be considered in future research.
ERIC Educational Resources Information Center
Kwok, Oi-man; West, Stephen G.; Green, Samuel B.
2007-01-01
This Monte Carlo study examined the impact of misspecifying the Σ matrix in longitudinal data analysis under both the multilevel model and mixed model frameworks. Under the multilevel model approach, under-specification and general-misspecification of the Σ matrix usually resulted in overestimation of the variances of the random…
NASA Technical Reports Server (NTRS)
Gallis, Michael A.; LeBeau, Gerald J.; Boyles, Katie A.
2003-01-01
The Direct Simulation Monte Carlo method was used to provide 3-D simulations of the early entry phase of the Shuttle Orbiter. Undamaged and damaged scenarios were modeled to provide calibration points for engineering "bridging function" types of analysis. Currently the simulation technology (software and hardware) is mature enough to allow realistic simulations of three-dimensional vehicles.
Exact and Monte carlo resampling procedures for the Wilcoxon-Mann-Whitney and Kruskal-Wallis tests.
Berry, K J; Mielke, P W
2000-12-01
Exact and Monte Carlo resampling FORTRAN programs are described for the Wilcoxon-Mann-Whitney rank sum test and the Kruskal-Wallis one-way analysis of variance for ranks test. The program algorithms compensate for tied values and do not depend on asymptotic approximations for probability values, unlike most algorithms contained in PC-based statistical software packages.
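A sketch of the Monte Carlo resampling logic these programs implement (in Python rather than FORTRAN) for the two-sample rank-sum case: midranks compensate for ties, and the p-value is the fraction of random relabelings at least as extreme as the observed rank sum, with no asymptotic approximation. The data values are invented.

```python
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(5)

def mc_rank_sum_pvalue(x, y, n_resamples=20_000):
    """Two-sided Monte Carlo resampling p-value for the WMW rank-sum test."""
    pooled = np.concatenate([x, y])
    ranks = rankdata(pooled)                  # midranks for tied values
    n_x = len(x)
    center = ranks.sum() * n_x / len(pooled)  # null expectation of the rank sum
    obs = abs(ranks[:n_x].sum() - center)
    hits = 0
    for _ in range(n_resamples):
        perm = rng.permutation(ranks)         # random relabeling of the groups
        hits += abs(perm[:n_x].sum() - center) >= obs - 1e-12
    return hits / n_resamples

x = np.array([1.2, 3.4, 2.2, 4.1, 2.2])
y = np.array([2.8, 5.0, 4.4, 3.9, 4.4, 5.2])
print("Monte Carlo p-value:", mc_rank_sum_pvalue(x, y))
```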
Stochastic Analysis of Orbital Lifetimes of Spacecraft
NASA Technical Reports Server (NTRS)
Sasamoto, Washito; Goodliff, Kandyce; Cornelius, David
2008-01-01
A document discusses (1) a Monte-Carlo-based methodology for probabilistic prediction and analysis of orbital lifetimes of spacecraft and (2) Orbital Lifetime Monte Carlo (OLMC)--a Fortran computer program, consisting of a previously developed long-term orbit-propagator integrated with a Monte Carlo engine. OLMC enables modeling of variances of key physical parameters that affect orbital lifetimes through the use of probability distributions. These parameters include altitude, speed, and flight-path angle at insertion into orbit; solar flux; and launch delays. The products of OLMC are predicted lifetimes (durations above specified minimum altitudes) for the number of user-specified cases. Histograms generated from such predictions can be used to determine the probabilities that spacecraft will satisfy lifetime requirements. The document discusses uncertainties that affect modeling of orbital lifetimes. Issues of repeatability, smoothness of distributions, and code run time are considered for the purpose of establishing values of code-specific parameters and number of Monte Carlo runs. Results from test cases are interpreted as demonstrating that solar-flux predictions are primary sources of variations in predicted lifetimes. Therefore, it is concluded, multiple sets of predictions should be utilized to fully characterize the lifetime range of a spacecraft.
On Muthen's Maximum Likelihood for Two-Level Covariance Structure Models
ERIC Educational Resources Information Center
Yuan, Ke-Hai; Hayashi, Kentaro
2005-01-01
Data in social and behavioral sciences are often hierarchically organized. Special statistical procedures that take into account the dependence of such observations have been developed. Among procedures for 2-level covariance structure analysis, Muthen's maximum likelihood (MUML) has the advantage of easier computation and faster convergence. When…
Mixture Rasch Models with Joint Maximum Likelihood Estimation
ERIC Educational Resources Information Center
Willse, John T.
2011-01-01
This research provides a demonstration of the utility of mixture Rasch models. Specifically, a model capable of estimating a mixture partial credit model using joint maximum likelihood is presented. Like the partial credit model, the mixture partial credit model has the beneficial feature of being appropriate for analysis of assessment data…
An EM Algorithm for Maximum Likelihood Estimation of Process Factor Analysis Models
ERIC Educational Resources Information Center
Lee, Taehun
2010-01-01
In this dissertation, an Expectation-Maximization (EM) algorithm is developed and implemented to obtain maximum likelihood estimates of the parameters and the associated standard error estimates characterizing temporal flows for the latent variable time series following stationary vector ARMA processes, as well as the parameters defining the…
Probabilistic power flow using improved Monte Carlo simulation method with correlated wind sources
NASA Astrophysics Data System (ADS)
Bie, Pei; Zhang, Buhan; Li, Hang; Deng, Weisi; Wu, Jiasi
2017-01-01
Probabilistic Power Flow (PPF) is a very useful tool for power system steady-state analysis. However, the correlation among different random injection powers (like wind power) brings great difficulties to calculating PPF. Monte Carlo simulation (MCS) and analytical methods are two commonly used methods to solve PPF. MCS has high accuracy but is very time consuming. Analytical methods like the cumulants method (CM) have high computing efficiency, but calculating the cumulants is not convenient when wind power output does not obey any typical distribution, especially when correlated wind sources are considered. In this paper, an Improved Monte Carlo simulation method (IMCS) is proposed. The joint empirical distribution is applied to model different wind power outputs. This method combines the advantages of both MCS and analytical methods. It not only has high computing efficiency, but also provides solutions with enough accuracy, which is very suitable for on-line analysis.
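One common way to realize joint-empirical-distribution sampling of correlated wind sources is a Gaussian copula over the historical margins: draw correlated normals, map them to uniforms, then to empirical quantiles. The sketch below uses fabricated stand-in history and is an assumption about the implementation, not the paper's exact IMCS procedure.

```python
import numpy as np
from scipy.special import ndtr

rng = np.random.default_rng(2)

# Fabricated "historical" output of two correlated wind farms, on [0, 1];
# real data would have no convenient parametric form, hence the empirical route
hist = np.clip(np.abs(rng.multivariate_normal(
    [0.4, 0.5], [[0.040, 0.028], [0.028, 0.050]], size=5000)), 0.0, 1.0)

# Gaussian copula: correlated normals -> uniforms -> empirical quantiles,
# preserving both the correlation structure and each empirical margin
rho = np.corrcoef(hist.T)
z = rng.multivariate_normal(np.zeros(2), rho, size=10_000)
u = ndtr(z)                                  # standard normal CDF, elementwise
samples = np.column_stack([np.quantile(hist[:, j], u[:, j]) for j in range(2)])

print("historical corr:", round(float(rho[0, 1]), 3),
      " sampled corr:", round(float(np.corrcoef(samples.T)[0, 1]), 3))
```

The resulting correlated samples can then be fed through the deterministic power-flow solver case by case, exactly as in a standard MCS loop.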
NASA Technical Reports Server (NTRS)
Thanedar, B. D.
1972-01-01
A simple repetitive calculation was used to investigate what happens to the field in terms of the signal paths of disturbances originating from the energy source. The computation allowed the field to be reconstructed as a function of space and time on a statistical basis. The suggested Monte Carlo method responds to the need for a numerical method, applicable to a bounded medium, that supplements analytical methods of solution, which are valid only when the boundaries have simple shapes. For the analysis, a suitable model was created, from which an algorithm was developed for the estimation of acoustic pressure variations in the region under investigation. The validity of the technique was demonstrated by analysis of simple physical models with the aid of a digital computer. The Monte Carlo method is applicable to a homogeneous medium enclosed by either rectangular or curved boundaries.
Electrosorption of a modified electrode in the vicinity of phase transition: A Monte Carlo study
NASA Astrophysics Data System (ADS)
Gavilán Arriazu, E. M.; Pinto, O. A.
2018-03-01
We present a Monte Carlo study of the electrosorption of an electroactive species on a modified electrode. The surface of the electrode is modified by the irreversible adsorption of a non-electroactive species that blocks a percentage of the adsorption sites, generating an electrode with variable site connectivity. A second, electroactive species is adsorbed at surface vacancies and can interact repulsively with itself. In particular, we are interested in the effect of the non-electroactive species near the critical regime, where the c(2 × 2) structure is formed. Lattice-gas models and Monte Carlo simulations in the grand canonical ensemble are used. The analysis is based on the study of voltammograms, order parameters, isotherms, and the configurational entropy per site at several interaction energies and coverage degrees of the non-electroactive species.
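A minimal grand canonical lattice-gas sketch of this setup, with randomly blocked sites and nearest-neighbour repulsion; the lattice size, interaction strength, chemical potential, and blocked fraction are illustrative choices, not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(1)
L, w, mu, beta = 32, 1.0, 0.0, 2.0    # lattice size, NN repulsion, chem. potential, 1/kT
theta_block = 0.10                    # assumed fraction of sites blocked
blocked = rng.random((L, L)) < theta_block   # by the non-electroactive species
occ = np.zeros((L, L), dtype=int)            # electroactive occupancy

def local_energy(i, j):
    """Repulsive interaction with the four nearest neighbours (periodic)."""
    return w * (occ[(i + 1) % L, j] + occ[(i - 1) % L, j]
                + occ[i, (j + 1) % L] + occ[i, (j - 1) % L])

for _ in range(200_000):
    i, j = rng.integers(0, L, 2)
    if blocked[i, j]:
        continue                      # blocked sites never adsorb
    s = 1 - 2 * occ[i, j]             # +1 insertion, -1 removal
    # Grand canonical Metropolis rule: accept with exp(-beta*(dE - mu*dN)).
    dE = s * (local_energy(i, j) - mu)
    if rng.random() < np.exp(-beta * dE):
        occ[i, j] += s

print("coverage of unblocked sites:", occ.sum() / (~blocked).sum())
```

Sweeping `mu` and differentiating coverage with respect to it is what produces simulated voltammograms in studies of this kind.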
MUSiC - A general search for deviations from monte carlo predictions in CMS
NASA Astrophysics Data System (ADS)
Biallass, Philipp A.; CMS Collaboration
2009-06-01
A model independent analysis approach in CMS is presented, systematically scanning the data for deviations from the Monte Carlo expectation. Such an analysis can contribute to the understanding of the detector and the tuning of the event generators. Furthermore, due to the minimal theoretical bias this approach is sensitive to a variety of models of new physics, including those not yet thought of. Events are classified into event classes according to their particle content (muons, electrons, photons, jets and missing transverse energy). A broad scan of various distributions is performed, identifying significant deviations from the Monte Carlo simulation. The importance of systematic uncertainties is outlined, which are taken into account rigorously within the algorithm. Possible detector effects and generator issues, as well as models involving Supersymmetry and new heavy gauge bosons are used as an input to the search algorithm.
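A toy version of such a deviation scan can be sketched in a few lines: compare a binned data distribution against the Monte Carlo expectation over all contiguous regions and report the most significant excess. The falling spectrum, the injected excess, and the use of a plain Poisson tail probability are illustrative simplifications of the full algorithm.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(0)

# Hypothetical binned kinematic distribution: steeply falling MC expectation.
mc_expect = 100.0 * np.exp(-0.5 * np.arange(20))
data = rng.poisson(mc_expect)
data[12:15] += np.array([4, 6, 3])            # injected excess ("new physics")

# Scan all contiguous bin regions for the most significant excess,
# using the Poisson tail probability as the local p-value.
best = (1.0, None)
for lo in range(20):
    for hi in range(lo + 1, 21):
        n_obs, n_exp = data[lo:hi].sum(), mc_expect[lo:hi].sum()
        p = poisson.sf(n_obs - 1, n_exp)      # P(N >= n_obs | expectation)
        if n_obs > n_exp and p < best[0]:
            best = (p, (lo, hi))

print("most deviant region (bins):", best[1], " local p-value:", best[0])
# A full analysis would add a trials ("look-elsewhere") correction via
# pseudo-experiments and fold in systematic uncertainties, as the
# abstract emphasizes.
```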
MUSiC - A Generic Search for Deviations from Monte Carlo Predictions in CMS
NASA Astrophysics Data System (ADS)
Hof, Carsten
2009-05-01
We present a model independent analysis approach, systematically scanning the data for deviations from the Standard Model Monte Carlo expectation. Such an analysis can contribute to the understanding of the CMS detector and the tuning of the event generators. Furthermore, due to the minimal theoretical bias this approach is sensitive to a variety of models of new physics, including those not yet thought of. Events are classified into event classes according to their particle content (muons, electrons, photons, jets and missing transverse energy). A broad scan of various distributions is performed, identifying significant deviations from the Monte Carlo simulation. We outline the importance of systematic uncertainties, which are taken into account rigorously within the algorithm. Possible detector effects and generator issues, as well as models involving supersymmetry and new heavy gauge bosons have been used as an input to the search algorithm.
Acoustic localization of triggered lightning
NASA Astrophysics Data System (ADS)
Arechiga, Rene O.; Johnson, Jeffrey B.; Edens, Harald E.; Thomas, Ronald J.; Rison, William
2011-05-01
We use acoustic (3.3-500 Hz) arrays to locate local (<20 km) thunder produced by triggered lightning in the Magdalena Mountains of central New Mexico. The locations of the thunder sources are determined by the array back azimuth and the elapsed time since discharge of the lightning flash. We compare the acoustic source locations with those obtained by the Lightning Mapping Array (LMA) from Langmuir Laboratory, which is capable of accurately locating the lightning channels. To estimate the location accuracy of the acoustic array we performed Monte Carlo simulations and measured the distance (nearest neighbors) between acoustic and LMA sources. For close sources (<5 km) the mean nearest-neighbors distance was 185 m compared to 100 m predicted by the Monte Carlo analysis. For far distances (>6 km) the error increases to 800 m for the nearest neighbors and 650 m for the Monte Carlo analysis. This work shows that thunder sources can be accurately located using acoustic signals.
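The Monte Carlo error estimate can be sketched by perturbing the two measured quantities, back azimuth and elapsed time, within assumed uncertainties and measuring the spread of the recovered source positions. The uncertainty values below are illustrative, not the study's instrument specifications.

```python
import numpy as np

rng = np.random.default_rng(9)

c = 343.0                                    # speed of sound, m/s
az_true, t_true = np.deg2rad(40.0), 10.0     # true azimuth and elapsed time
n = 10_000
az = az_true + rng.normal(0.0, np.deg2rad(2.0), n)   # assumed azimuth error
t = t_true + rng.normal(0.0, 0.2, n)                 # assumed timing error

# Source position from back azimuth and range (= c * elapsed time).
x = c * t * np.sin(az)
y = c * t * np.cos(az)
x0, y0 = c * t_true * np.sin(az_true), c * t_true * np.cos(az_true)

err = np.hypot(x - x0, y - y0)
print(f"mean location error: {err.mean():.0f} m")   # ~100 m at 3.4 km range
```

Repeating this at larger ranges shows why the error grows with distance: the azimuth uncertainty maps to a cross-range error proportional to range.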
Maximum likelihood solution for inclination-only data in paleomagnetism
NASA Astrophysics Data System (ADS)
Arason, P.; Levi, S.
2010-08-01
We have developed a new robust maximum likelihood method for estimating the unbiased mean inclination from inclination-only data. In paleomagnetic analysis, the arithmetic mean of inclination-only data is known to introduce a shallowing bias. Several methods have been introduced to estimate the unbiased mean inclination of inclination-only data together with measures of the dispersion. Some inclination-only methods were designed to maximize the likelihood function of the marginal Fisher distribution. However, the exact analytical form of the maximum likelihood function is fairly complicated, and all the methods require various assumptions and approximations that are often inappropriate. For some steep and dispersed data sets, these methods provide estimates that are significantly displaced from the peak of the likelihood function to systematically shallower inclination. The problem locating the maximum of the likelihood function is partly due to difficulties in accurately evaluating the function for all values of interest, because some elements of the likelihood function increase exponentially as precision parameters increase, leading to numerical instabilities. In this study, we succeeded in analytically cancelling exponential elements from the log-likelihood function, and we are now able to calculate its value anywhere in the parameter space and for any inclination-only data set. Furthermore, we can now calculate the partial derivatives of the log-likelihood function with desired accuracy, and locate the maximum likelihood without the assumptions required by previous methods. To assess the reliability and accuracy of our method, we generated large numbers of random Fisher-distributed data sets, for which we calculated mean inclinations and precision parameters. The comparisons show that our new robust Arason-Levi maximum likelihood method is the most reliable, and the mean inclination estimates are the least biased towards shallow values.
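The numerical hazard described above, exponentially growing terms inside a log-likelihood, is usually tamed in code with the log-sum-exp identity; the authors' analytic cancellation achieves the same effect exactly. Below is a generic illustration of the technique, not their marginal Fisher formula.

```python
import numpy as np

def log_sum_exp(log_terms):
    """Stable log(sum(exp(t_i))): factor out the largest term so the
    exponentials cannot overflow, mirroring the idea of cancelling
    exponential factors before evaluating the log-likelihood."""
    m = np.max(log_terms)
    return m + np.log(np.sum(np.exp(log_terms - m)))

# Terms like exp(kappa * x) overflow for a large precision parameter kappa.
kappa = 1000.0
log_terms = kappa * np.array([0.97, 0.99, 1.00])
naive = np.log(np.sum(np.exp(log_terms)))    # overflows -> inf
print(naive, log_sum_exp(log_terms))         # inf vs. finite ~1000.00005
```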
A Primer on Risks, Issues and Opportunities
2016-08-01
likelihood or consequence. A risk has three main parts: a future root cause, a likelihood and a consequence. The future root cause is determined…through root cause analysis, which is the most important part of any risk management effort. …Root cause analysis gets to the heart of the risk. Why does the risk exist? What is its nature? How will the risk occur? What should be…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sudhyadhom, A; McGuinness, C; Descovich, M
Purpose: To develop a methodology for validation of a Monte-Carlo dose calculation model for robotic small field SRS/SBRT deliveries. Methods: In a robotic treatment planning system, a Monte-Carlo model was iteratively optimized to match with beam data. A two-part analysis was developed to verify this model. 1) The Monte-Carlo model was validated in a simulated water phantom versus a Ray-Tracing calculation on a single beam collimator-by-collimator calculation. 2) The Monte-Carlo model was validated to be accurate in the most challenging situation, lung, by acquiring in-phantom measurements. A plan was created and delivered in a CIRS lung phantom with film insert. Separately, plans were delivered in an in-house created lung phantom with a PinPoint chamber insert within a lung simulating material. For medium to large collimator sizes, a single beam was delivered to the phantom. For small size collimators (10, 12.5, and 15 mm), a robotically delivered plan was created to generate a uniform dose field of irradiation over a 2×2 cm² area. Results: Dose differences in simulated water between Ray-Tracing and Monte-Carlo were all within 1% at dmax and deeper. Maximum dose differences occurred prior to dmax but were all within 3%. Film measurements in a lung phantom show high correspondence of over 95% gamma at the 2%/2mm level for Monte-Carlo. Ion chamber measurements for collimator sizes of 12.5 mm and above were within 3% of Monte-Carlo calculated values. Uniform irradiation involving the 10 mm collimator resulted in a dose difference of ∼8% for both Monte-Carlo and Ray-Tracing, indicating that there may be limitations with the dose calculation. Conclusion: We have developed a methodology to validate a Monte-Carlo model by verifying that it matches in water and, separately, that it corresponds well in lung simulating materials. The Monte-Carlo model and algorithm tested may have more limited accuracy for 10 mm fields and smaller.
Maximum likelihood resampling of noisy, spatially correlated data
NASA Astrophysics Data System (ADS)
Goff, J.; Jenkins, C.
2005-12-01
In any geologic application, noisy data are sources of consternation for researchers, inhibiting interpretability and marring images with unsightly and unrealistic artifacts. Filtering is the typical solution to dealing with noisy data. However, filtering commonly suffers from ad hoc (i.e., uncalibrated, ungoverned) application, which runs the risk of erasing high variability components of the field in addition to the noise components. We present here an alternative to filtering: a newly developed methodology for correcting noise in data by finding the "best" value given the data value, its uncertainty, and the data values and uncertainties at proximal locations. The motivating rationale is that data points that are close to each other in space cannot differ by "too much", where how much is "too much" is governed by the field correlation properties. Data with large uncertainties will frequently violate this condition, and in such cases need to be corrected, or "resampled." The best solution for resampling is determined by the maximum of the likelihood function defined by the intersection of two probability density functions (pdf): (1) the data pdf, with mean and variance determined by the data value and squared uncertainty, respectively, and (2) the geostatistical pdf, whose mean and variance are determined by the kriging algorithm applied to proximal data values. A Monte Carlo sampling of the data probability space eliminates non-uniqueness, and weights the solution toward data values with lower uncertainties. A test with a synthetic data set sampled from a known field demonstrates quantitatively and qualitatively the improvement provided by the maximum likelihood resampling algorithm. The method is also applied to three marine geology/geophysics data examples: (1) three generations of bathymetric data on the New Jersey shelf with disparate data uncertainties; (2) mean grain size data from the Adriatic Sea, which is a combination of both analytic (low uncertainty) and word-based (higher uncertainty) sources; and (3) sidescan backscatter data from the Martha's Vineyard Coastal Observatory which are, as is typical for such data, affected by speckly noise.
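If both pdfs are taken as Gaussian, the maximum of their product has a closed form, which makes the core of the resampling step easy to sketch; the bathymetry numbers in the example are invented.

```python
import numpy as np

def resample(d, sd2, k, sk2):
    """Maximum of the product of two Gaussian pdfs: the data pdf
    N(d, sd2) and the kriging (geostatistical) pdf N(k, sk2).
    The product is itself Gaussian, so the maximum likelihood value
    is the precision-weighted mean: noisy points are pulled toward
    the value implied by their neighbours."""
    w_d, w_k = 1.0 / sd2, 1.0 / sk2
    value = (w_d * d + w_k * k) / (w_d + w_k)
    var = 1.0 / (w_d + w_k)
    return value, var

# A depth sounding of -48 m with large uncertainty, while kriging of
# proximal soundings predicts -52 m rather precisely (values assumed):
print(resample(d=-48.0, sd2=9.0, k=-52.0, sk2=1.0))   # -> (-51.6, 0.9)
```

The Monte Carlo sampling mentioned in the abstract generalizes this beyond the Gaussian closed form, but the precision weighting is the essential behaviour.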
Iterative Monte Carlo analysis of spin-dependent parton distributions
Sato, Nobuo; Melnitchouk, Wally; Kuhn, Sebastian E.; ...
2016-04-05
We present a comprehensive new global QCD analysis of polarized inclusive deep-inelastic scattering, including the latest high-precision data on longitudinal and transverse polarization asymmetries from Jefferson Lab and elsewhere. The analysis is performed using a new iterative Monte Carlo fitting technique which generates stable fits to polarized parton distribution functions (PDFs) with statistically rigorous uncertainties. Inclusion of the Jefferson Lab data leads to a reduction in the PDF errors for the valence and sea quarks, as well as in the gluon polarization uncertainty at x ≳ 0.1. Furthermore, the study provides the first determination of the flavor-separated twist-3 PDFs and the d2 moment of the nucleon within a global PDF analysis.
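In the spirit of a Monte Carlo fitting technique, the sketch below repeatedly resamples synthetic data within its quoted uncertainties and refits, taking the spread of the refitted parameters as the uncertainty estimate. The two-parameter model, the data, and the errors are all invented for illustration; the actual analysis fits parametrized PDFs to asymmetry data.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical data: y = a * x**b with known point-by-point errors.
x = np.linspace(0.1, 0.9, 20)
err = np.full_like(x, 0.03)
y = 0.8 * x**1.5 + rng.normal(0.0, err)      # "measured" values

fits = []
for _ in range(500):
    # Resample the data within its quoted uncertainties and refit;
    # the ensemble of refitted parameters carries the uncertainties.
    y_mc = y + rng.normal(0.0, err)
    good = y_mc > 0                          # log fit needs positive values
    b, loga = np.polyfit(np.log(x[good]), np.log(y_mc[good]), 1)
    fits.append((np.exp(loga), b))

a_s, b_s = np.array(fits).T
print(f"a = {a_s.mean():.2f} +/- {a_s.std():.2f}")
print(f"b = {b_s.mean():.2f} +/- {b_s.std():.2f}")
```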
The structure of liquid water by polarized neutron diffraction and reverse Monte Carlo modelling.
Temleitner, László; Pusztai, László; Schweika, Werner
2007-08-22
The coherent static structure factor of water has been investigated by polarized neutron diffraction. Polarization analysis allows us to separate the huge incoherent scattering background from hydrogen and to obtain high quality data of the coherent scattering from four different mixtures of liquid H₂O and D₂O. The information obtained by the variation of the scattering contrast confines the configurational space of water and is used by the reverse Monte Carlo technique to model the total structure factors. Structural characteristics have been calculated directly from the resulting sets of particle coordinates. Consistency with existing partial pair correlation functions, derived without the application of polarized neutrons, was checked by incorporating them into our reverse Monte Carlo calculations. We also performed Monte Carlo simulations of a hard sphere system, which provides an accurate estimate of the information content of the measured data. It is shown that the present combination of polarized neutron scattering and reverse Monte Carlo structural modelling is a promising approach towards a detailed understanding of the microscopic structure of water.
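A reverse Monte Carlo kernel in miniature: move one particle at a time and accept moves with a Metropolis rule on the chi-squared misfit to a target histogram. The 2-D box, the flat target, and the weighting below are placeholders for the real structure-factor machinery.

```python
import numpy as np

rng = np.random.default_rng(3)
N, L_box = 60, 10.0
pos = rng.random((N, 2)) * L_box

bins = np.linspace(0.0, 5.0, 26)
# Hypothetical "measured" histogram of pair distances, standing in for
# the experimental structure factor / pair correlations.
target = np.full(bins.size - 1, (N * (N - 1) / 2) / (bins.size - 1))

def pair_hist(p):
    d = np.sqrt(((p[:, None, :] - p[None, :, :]) ** 2).sum(-1))
    return np.histogram(d[np.triu_indices(N, 1)], bins=bins)[0]

def chi2(p):
    return ((pair_hist(p) - target) ** 2).sum() / 100.0   # assumed variance

c_old = chi2(pos)
for _ in range(5000):
    k = rng.integers(N)
    trial = pos.copy()
    trial[k] = np.clip(trial[k] + rng.normal(0.0, 0.3, 2), 0.0, L_box)
    c_new = chi2(trial)
    # RMC acceptance: always keep improvements, occasionally keep
    # worsenings, i.e. Metropolis with chi^2 playing the role of energy.
    if c_new <= c_old or rng.random() < np.exp(-(c_new - c_old) / 2.0):
        pos, c_old = trial, c_new

print("final chi^2:", round(float(c_old), 1))
```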
NASA Astrophysics Data System (ADS)
Liu, Yiming; Shi, Yimin; Bai, Xuchao; Zhan, Pei
2018-01-01
In this paper, we study the estimation of the reliability of a multicomponent system, named the N-M-cold-standby redundancy system, based on a progressive Type-II censoring sample. In the system, there are N subsystems consisting of M statistically independent and identically distributed strength components, and only one of these subsystems works under the impact of stresses at a time while the others remain as standbys. Whenever the working subsystem fails, one of the standbys takes its place. The system fails when all of the subsystems have failed. It is supposed that the underlying distributions of random strength and stress both belong to the generalized half-logistic distribution with different shape parameters. The reliability of the system is estimated using both classical and Bayesian statistical inference. The uniformly minimum variance unbiased estimator and the maximum likelihood estimator for the reliability of the system are derived. Under a squared error loss function, the exact expression of the Bayes estimator for the reliability of the system is developed using the Gauss hypergeometric function. The asymptotic confidence interval and corresponding coverage probabilities are derived based on both the Fisher and the observed information matrices. The approximate highest probability density credible interval is constructed using the Monte Carlo method. Monte Carlo simulations are performed to compare the performances of the proposed reliability estimators. A real data set is also analyzed for an illustration of the findings.
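A Monte Carlo sketch of the reliability being estimated, with exponential stand-ins for the generalized half-logistic strength and stress distributions purely to keep the example short; N, M, and the rates are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(5)
N, M = 3, 4              # N standby subsystems, M strength components each
n_sims = 100_000

# Stand-in distributions (the paper uses the generalized half-logistic).
strength = rng.exponential(1.0, (n_sims, N, M))
stress = rng.exponential(0.6, (n_sims, N, 1))

# A subsystem survives if its strongest component exceeds the stress it
# sees; since failed subsystems are replaced from the cold standbys, the
# system fails only when all N subsystems have failed.
sub_ok = (strength.max(axis=2, keepdims=True) > stress).squeeze()
system_ok = sub_ok.any(axis=1)
print("estimated reliability R =", system_ok.mean().round(4))
```

The paper's estimators (UMVUE, MLE, Bayes) target this same probability analytically from censored samples rather than by brute-force simulation.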
NASA Astrophysics Data System (ADS)
Abbasi, R. U.; Abe, M.; Abu-Zayyad, T.; Allen, M.; Azuma, R.; Barcikowski, E.; Belz, J. W.; Bergman, D. R.; Blake, S. A.; Cady, R.; Cheon, B. G.; Chiba, J.; Chikawa, M.; di Matteo, A.; Fujii, T.; Fujita, K.; Fukushima, M.; Furlich, G.; Goto, T.; Hanlon, W.; Hayashi, M.; Hayashi, Y.; Hayashida, N.; Hibino, K.; Honda, K.; Ikeda, D.; Inoue, N.; Ishii, T.; Ishimori, R.; Ito, H.; Ivanov, D.; Jeong, H. M.; Jeong, S. M.; Jui, C. C. H.; Kadota, K.; Kakimoto, F.; Kalashev, O.; Kasahara, K.; Kawai, H.; Kawakami, S.; Kawana, S.; Kawata, K.; Kido, E.; Kim, H. B.; Kim, J. H.; Kim, J. H.; Kishigami, S.; Kitamura, S.; Kitamura, Y.; Kuzmin, V.; Kuznetsov, M.; Kwon, Y. J.; Lee, K. H.; Lubsandorzhiev, B.; Lundquist, J. P.; Machida, K.; Martens, K.; Matsuyama, T.; Matthews, J. N.; Mayta, R.; Minamino, M.; Mukai, K.; Myers, I.; Nagasawa, K.; Nagataki, S.; Nakamura, R.; Nakamura, T.; Nonaka, T.; Oda, H.; Ogio, S.; Ogura, J.; Ohnishi, M.; Ohoka, H.; Okuda, T.; Omura, Y.; Ono, M.; Onogi, R.; Oshima, A.; Ozawa, S.; Park, I. H.; Pshirkov, M. S.; Rodriguez, D. C.; Rubtsov, G.; Ryu, D.; Sagawa, H.; Sahara, R.; Saito, K.; Saito, Y.; Sakaki, N.; Sakurai, N.; Scott, L. M.; Seki, T.; Sekino, K.; Shah, P. D.; Shibata, F.; Shibata, T.; Shimodaira, H.; Shin, B. K.; Shin, H. S.; Smith, J. D.; Sokolsky, P.; Stokes, B. T.; Stratton, S. R.; Stroman, T. A.; Suzawa, T.; Takagi, Y.; Takahashi, Y.; Takamura, M.; Takeda, M.; Takeishi, R.; Taketa, A.; Takita, M.; Tameda, Y.; Tanaka, H.; Tanaka, K.; Tanaka, M.; Thomas, S. B.; Thomson, G. B.; Tinyakov, P.; Tkachev, I.; Tokuno, H.; Tomida, T.; Troitsky, S.; Tsunesada, Y.; Tsutsumi, K.; Uchihori, Y.; Udo, S.; Urban, F.; Wong, T.; Yamamoto, M.; Yamane, R.; Yamaoka, H.; Yamazaki, K.; Yang, J.; Yashiro, K.; Yoneda, Y.; Yoshida, S.; Yoshii, H.; Zhezher, Y.; Zundel, Z.; Telescope Array Collaboration
2018-05-01
The Telescope Array (TA) observatory utilizes fluorescence detectors and surface detectors (SDs) to observe air showers produced by ultra high energy cosmic rays in Earth's atmosphere. Cosmic-ray events observed in this way are termed hybrid data. The depth of air shower maximum is related to the mass of the primary particle that generates the shower. This paper reports on shower maxima data collected over 8.5 yr using the Black Rock Mesa and Long Ridge fluorescence detectors in conjunction with the array of SDs. We compare the means and standard deviations of the observed X_max distributions with Monte Carlo X_max distributions of unmixed protons, helium, nitrogen, and iron, all generated using the QGSJet II-04 hadronic model. We also perform an unbinned maximum likelihood test of the observed data, which is subjected to variable systematic shifting of the data X_max distributions to allow us to test the full distributions, and compare them to the Monte Carlo to see which elements are not compatible with the observed data. For all energy bins, QGSJet II-04 protons are found to be compatible with TA hybrid data at the 95% confidence level after some systematic X_max shifting of the data. Three other QGSJet II-04 elements are found to be compatible using the same test procedure in an energy range limited to the highest energies where data statistics are sparse.
Multilevel Sequential² Monte Carlo for Bayesian inverse problems
NASA Astrophysics Data System (ADS)
Latz, Jonas; Papaioannou, Iason; Ullmann, Elisabeth
2018-09-01
The identification of parameters in mathematical models using noisy observations is a common task in uncertainty quantification. We employ the framework of Bayesian inversion: we combine monitoring and observational data with prior information to estimate the posterior distribution of a parameter. Specifically, we are interested in the distribution of a diffusion coefficient of an elliptic PDE. In this setting, the sample space is high-dimensional, and each sample of the PDE solution is expensive. To address these issues we propose and analyse a novel Sequential Monte Carlo (SMC) sampler for the approximation of the posterior distribution. Classical, single-level SMC constructs a sequence of measures, starting with the prior distribution, and finishing with the posterior distribution. The intermediate measures arise from a tempering of the likelihood, or, equivalently, a rescaling of the noise. The resolution of the PDE discretisation is fixed. In contrast, our estimator employs a hierarchy of PDE discretisations to decrease the computational cost. We construct a sequence of intermediate measures by decreasing the temperature or by increasing the discretisation level at the same time. This idea builds on and generalises the multi-resolution sampler proposed in P.S. Koutsourelakis (2009) [33] where a bridging scheme is used to transfer samples from coarse to fine discretisation levels. Importantly, our choice between tempering and bridging is fully adaptive. We present numerical experiments in 2D space, comparing our estimator to single-level SMC and the multi-resolution sampler.
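A single-level version of the tempering idea on a toy one-dimensional problem, where the expensive PDE solve is replaced by a cheap Gaussian likelihood; the schedule, jitter scale, and resampling threshold are common but assumed choices.

```python
import numpy as np

rng = np.random.default_rng(11)

def log_lik(theta):
    """Toy likelihood: a noisy observation y = 1.2 of the parameter
    itself; in the paper this would be an expensive elliptic PDE solve."""
    return -0.5 * ((1.2 - theta) / 0.1) ** 2

n = 2000
theta = rng.normal(0.0, 1.0, n)        # particles drawn from the prior
log_w = np.zeros(n)
temps = np.linspace(0.0, 1.0, 11)      # tempering schedule beta_0..beta_K

for b0, b1 in zip(temps[:-1], temps[1:]):
    log_w += (b1 - b0) * log_lik(theta)          # incremental tempered weight
    w = np.exp(log_w - log_w.max()); w /= w.sum()
    if 1.0 / (w ** 2).sum() < n / 2:             # resample on low ESS
        theta = theta[rng.choice(n, n, p=w)]
        log_w = np.zeros(n)
    # One Metropolis jitter step targeting prior * likelihood^b1
    # keeps the particle cloud diverse.
    prop = theta + rng.normal(0.0, 0.2, n)
    log_alpha = b1 * (log_lik(prop) - log_lik(theta)) - 0.5 * (prop**2 - theta**2)
    theta = np.where(np.log(rng.random(n)) < log_alpha, prop, theta)

w = np.exp(log_w - log_w.max()); w /= w.sum()
print("posterior mean ~", round(float(w @ theta), 3))   # exact: ~1.188
```

The multilevel estimator in the paper additionally interleaves these tempering steps with bridging steps between PDE discretisation levels, choosing adaptively between the two.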
Mapping flood hazards under uncertainty through probabilistic flood inundation maps
NASA Astrophysics Data System (ADS)
Stephens, T.; Bledsoe, B. P.; Miller, A. J.; Lee, G.
2017-12-01
Changing precipitation, rapid urbanization, and population growth interact to create unprecedented challenges for flood mitigation and management. Standard methods for estimating risk from flood inundation maps generally involve simulations of floodplain hydraulics for an established regulatory discharge of specified frequency. Hydraulic model results are then geospatially mapped and depicted as a discrete boundary of flood extents and a binary representation of the probability of inundation (in or out) that is assumed constant over a project's lifetime. Consequently, existing methods utilized to define flood hazards and assess risk management are hindered by deterministic approaches that assume stationarity in a nonstationary world, failing to account for spatio-temporal variability of climate and land use as they translate to hydraulic models. This presentation outlines novel techniques for portraying flood hazards and the results of multiple flood inundation maps spanning hydroclimatic regions. Flood inundation maps generated through modeling of floodplain hydraulics are probabilistic, reflecting uncertainty quantified through Monte-Carlo analyses of model inputs and parameters under current and future scenarios. The likelihood of inundation and range of variability in flood extents resulting from Monte-Carlo simulations are then compared with deterministic evaluations of flood hazards from current regulatory flood hazard maps. By facilitating alternative approaches of portraying flood hazards, the novel techniques described in this presentation can contribute to a shifting paradigm in flood management that acknowledges the inherent uncertainty in model estimates and the nonstationary behavior of land use and climate.
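The probabilistic-map idea reduces to: sample uncertain inputs, run the hydraulic model once per sample, and report per-location inundation frequencies rather than a single boundary. In the sketch below a toy rating curve stands in for the hydraulic model, and all distributions are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 1-D floodplain cross-section: ground elevation (m) at 8 stations.
ground = np.array([2.0, 1.2, 0.6, 0.3, 0.5, 0.9, 1.5, 2.4])

n = 5000
# Uncertain inputs: flood discharge and Manning roughness (values assumed).
Q = rng.lognormal(np.log(150.0), 0.3, n)     # discharge, m^3/s
n_man = rng.normal(0.035, 0.005, n)          # Manning's n

# Toy rating curve standing in for the hydraulic model:
# stage rises with discharge and with roughness.
stage = 0.3 * (Q * n_man) ** 0.4

# Probability of inundation per station, replacing the binary in/out map.
p_inundated = (stage[:, None] > ground[None, :]).mean(axis=0)
print(np.round(p_inundated, 2))
```

Mapping `p_inundated` spatially yields the probabilistic isochrone-style product described in the abstract, rather than a single deterministic flood line.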
Monte Carlo Techniques for Nuclear Systems - Theory Lectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Forrest B.
These are lecture notes for a Monte Carlo class given at the University of New Mexico. The following topics are covered: course information; nuclear eng. review & MC; random numbers and sampling; computational geometry; collision physics; tallies and statistics; eigenvalue calculations I; eigenvalue calculations II; eigenvalue calculations III; variance reduction; parallel Monte Carlo; parameter studies; fission matrix and higher eigenmodes; doppler broadening; Monte Carlo depletion; HTGR modeling; coupled MC and T/H calculations; fission energy deposition. Solving particle transport problems with the Monte Carlo method is simple - just simulate the particle behavior. The devil is in the details, however. These lectures provide a balanced approach to the theory and practice of Monte Carlo simulation codes. The first lectures provide an overview of Monte Carlo simulation methods, covering the transport equation, random sampling, computational geometry, collision physics, and statistics. The next lectures focus on the state-of-the-art in Monte Carlo criticality simulations, covering the theory of eigenvalue calculations, convergence analysis, dominance ratio calculations, bias in Keff and tallies, bias in uncertainties, a case study of a realistic calculation, and Wielandt acceleration techniques. The remaining lectures cover advanced topics, including HTGR modeling and stochastic geometry, temperature dependence, fission energy deposition, depletion calculations, parallel calculations, and parameter studies. This portion of the class focuses on using MCNP to perform criticality calculations for reactor physics and criticality safety applications. It is an intermediate level class, intended for those with at least some familiarity with MCNP. Class examples provide hands-on experience at running the code, plotting both geometry and results, and understanding the code output. The class includes lectures & hands-on computer use for a variety of Monte Carlo calculations. Beginning MCNP users are encouraged to review LA-UR-09-00380, "Criticality Calculations with MCNP: A Primer (3rd Edition)" (available at http://mcnp.lanl.gov under "Reference Collection") prior to the class. No Monte Carlo class can be complete without having students write their own simple Monte Carlo routines for basic random sampling, use of the random number generator, and simplified particle transport simulation.
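In the spirit of that closing exercise, here is a minimal analog Monte Carlo routine for a 1-D slab: sample free flights, then absorb, scatter, or leak. The cross sections and slab thickness are arbitrary teaching values, not taken from the notes.

```python
import numpy as np

rng = np.random.default_rng(8)

sigma_t, sigma_a = 1.0, 0.4     # total / absorption macroscopic XS (1/cm)
thickness = 3.0                 # slab thickness, cm
n = 50_000
transmitted = 0

for _ in range(n):
    x, mu = 0.0, 1.0            # start at the front face, moving forward
    while True:
        x += mu * rng.exponential(1.0 / sigma_t)   # sample a free flight
        if x >= thickness:
            transmitted += 1    # leaked out the back face
            break
        if x < 0.0:
            break               # leaked back out the front face
        if rng.random() < sigma_a / sigma_t:
            break               # absorbed at the collision site
        mu = 2.0 * rng.random() - 1.0              # isotropic scatter

print("transmission probability:", transmitted / n)
```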
Outcome-Driven Cluster Analysis with Application to Microarray Data.
Hsu, Jessie J; Finkelstein, Dianne M; Schoenfeld, David A
2015-01-01
One goal of cluster analysis is to sort characteristics into groups (clusters) so that those in the same group are more highly correlated to each other than they are to those in other groups. An example is the search for groups of genes whose expression of RNA is correlated in a population of patients. These genes would be of greater interest if their common level of RNA expression were additionally predictive of the clinical outcome. This issue arose in the context of a study of trauma patients on whom RNA samples were available. The question of interest was whether there were groups of genes that were behaving similarly, and whether each gene in the cluster would have a similar effect on who would recover. For this, we develop an algorithm to simultaneously assign characteristics (genes) into groups of highly correlated genes that have the same effect on the outcome (recovery). We propose a random effects model where the genes within each group (cluster) equal the sum of a random effect, specific to the observation and cluster, and an independent error term. The outcome variable is a linear combination of the random effects of each cluster. To fit the model, we implement a Markov chain Monte Carlo algorithm based on the likelihood of the observed data. We evaluate the effect of including outcome in the model through simulation studies and describe a strategy for prediction. These methods are applied to trauma data from the Inflammation and Host Response to Injury research program, revealing a clustering of the genes that are informed by the recovery outcome.
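A short simulation of the stated random effects model helps make the clustering structure concrete: genes within a cluster share a patient-and-cluster-specific random effect, and the outcome is a linear combination of those effects. All sizes and coefficients below are invented, and this forward simulation is only the data-generating side, not the paper's MCMC fitting algorithm.

```python
import numpy as np

rng = np.random.default_rng(6)
n_pat, n_clusters, genes_per = 100, 3, 5

# Random effects: one per patient and cluster.
b = rng.normal(0.0, 1.0, (n_pat, n_clusters))

# Genes in cluster k = that cluster's random effect + independent error,
# so genes within a cluster are correlated and genes across clusters are not.
genes = (np.repeat(b, genes_per, axis=1)
         + rng.normal(0.0, 0.5, (n_pat, n_clusters * genes_per)))

# Outcome: linear combination of the cluster random effects plus noise.
beta = np.array([1.0, -0.5, 0.0])          # assumed cluster effects
outcome = b @ beta + rng.normal(0.0, 0.3, n_pat)

print("within-cluster gene corr:",
      np.corrcoef(genes[:, 0], genes[:, 1])[0, 1].round(2))   # ~0.8
print("between-cluster gene corr:",
      np.corrcoef(genes[:, 0], genes[:, 5])[0, 1].round(2))   # ~0.0
```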
2013-01-01
Background: Whereas the prognosis of second kidney transplant recipients (STR) compared to first recipients has been frequently analyzed, no study has addressed the issue of comparing the risk factor effects on graft failure between both groups. Methods: Here, we propose two alternative strategies to study the heterogeneity of risk factors between two groups of patients: (i) a multiplicative-regression model for relative survival (MRS) and (ii) a stratified Cox model (SCM) specifying the graft rank as strata and assuming subvectors of the explanatory variables. These developments were motivated by the analysis of factors associated with time to graft failure (return to dialysis or patient death) in STR compared to first recipients. Estimation of the parameters was based on partial likelihood maximization. Monte-Carlo simulations associated with bootstrap re-sampling were performed to calculate the standard deviations for the MRS. Results: We demonstrate, for the first time in renal transplantation, that: (i) male donor gender is a specific risk factor for STR, (ii) the adverse effect of recipient age is enhanced for STR and (iii) the graft failure risk related to donor age is attenuated for STR. Conclusion: While the traditional Cox model did not provide original results based on the renal transplantation literature, the proposed relative and stratified models revealed new findings that are useful for clinicians. These methodologies may be of interest in other medical fields when the principal objective is the comparison of risk factors between two populations. PMID:23915191
Zhang, Min; Jia, Dijing; Li, Hanping; Gui, Tao; Jia, Lei; Wang, Xiaolin; Li, Tianyi; Liu, Yongjian; Bao, Zuoyi; Liu, Siyang; Zhuang, Daomin; Li, Jingyun; Li, Lin
2017-10-01
CRF07_BC was originally formed in Yunnan province of China in the 1980s and spread quickly among injecting drug users (IDUs). In recent years, it has been introduced into men who have sex with men (MSM) and has become the most dominant strain in China. In this study, we performed a comprehensive phylodynamic analysis of CRF07_BC sequences from China. All CRF07_BC sequences identified in China were retrieved from the database, and more sequences obtained in our laboratory were added to make the dataset more representative. A maximum-likelihood (ML) tree was constructed with PhyML 3.0. The maximum clade credibility (MCC) tree and effective population size were estimated using Markov chain Monte Carlo sampling with the Beast software. A total of 610 CRF07_BC sequences covering 1,473 bp of the gag gene (positions 817 to 2,289 according to the HXB2 numbering) were included in the dataset. Three epidemic clusters were identified; two clusters comprised sequences from IDUs, while one cluster mainly contained sequences from MSM. The time of the most recent common ancestor of the cluster composed of sequences from MSM was estimated to be 2000. Two rapid spreading waves in the effective population size of CRF07_BC infections were identified in the skyline plot. The second wave coincided with the expansion of the MSM cluster. The results indicate that the control of CRF07_BC infections in MSM would help to decrease its epidemic in China.
Fission matrix-based Monte Carlo criticality analysis of fuel storage pools
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farlotti, M.; Ecole Polytechnique, Palaiseau, F 91128; Larsen, E. W.
2013-07-01
Standard Monte Carlo transport procedures experience difficulties in solving criticality problems in fuel storage pools. Because of the strong neutron absorption between fuel assemblies, source convergence can be very slow, leading to incorrect estimates of the eigenvalue and the eigenfunction. This study examines an alternative fission matrix-based Monte Carlo transport method that takes advantage of the geometry of a storage pool to overcome this difficulty. The method uses Monte Carlo transport to build (essentially) a fission matrix, which is then used to calculate the criticality and the critical flux. This method was tested using a test code on a simple problem containing 8 assemblies in a square pool. The standard Monte Carlo method gave the expected eigenfunction in 5 cases out of 10, while the fission matrix method gave the expected eigenfunction in all 10 cases. In addition, the fission matrix method provides an estimate of the error in the eigenvalue and the eigenfunction, and it allows the user to control this error by running an adequate number of cycles. Because of these advantages, the fission matrix method yields a higher confidence in the results than standard Monte Carlo. We also discuss potential improvements of the method, including the potential for variance reduction techniques. (authors)
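Once the fission matrix has been tallied, extracting the criticality is a small calculation: the dominant eigenvalue is k-eff and the eigenvector is the converged fission source shape. The 4-region matrix below is a made-up illustration, not tallied data.

```python
import numpy as np

# Hypothetical 4-region fission matrix F: F[i, j] = expected number of
# next-generation fission neutrons born in region i per fission neutron
# born in region j (in practice tallied during the Monte Carlo run).
F = np.array([[0.60, 0.15, 0.01, 0.00],
              [0.15, 0.55, 0.10, 0.01],
              [0.01, 0.10, 0.55, 0.15],
              [0.00, 0.01, 0.15, 0.60]])

# Power iteration: the dominant eigenvalue is k-eff and the eigenvector
# is the converged fission source shape across the regions/assemblies.
s = np.ones(4) / 4.0
for _ in range(200):
    s_new = F @ s
    k = s_new.sum() / s.sum()
    s = s_new / s_new.sum()

print("k-eff ~", round(float(k), 4))
print("fission source shape:", np.round(s, 3))
```

Because the eigenproblem is solved deterministically on the tallied matrix, the slow source convergence of assembly-to-assembly Monte Carlo is bypassed, which is the advantage the abstract describes.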
Ma, Chunming; Liu, Yue; Lu, Qiang; Lu, Na; Liu, Xiaoli; Tian, Yiming; Wang, Rui; Yin, Fuzai
2016-02-01
The blood pressure-to-height ratio (BPHR) has been shown to be an accurate index for screening hypertension in children and adolescents. The aim of the present study was to perform a meta-analysis to assess the performance of the BPHR for the assessment of hypertension. Electronic and manual searches were performed to identify studies of the BPHR. After methodological quality assessment and data extraction, pooled estimates of the sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, diagnostic odds ratio, area under the receiver operating characteristic curve and summary receiver operating characteristics were assessed systematically, and the extent of heterogeneity was evaluated. Six studies were identified for analysis. The pooled sensitivity, specificity, positive likelihood ratio, negative likelihood ratio and diagnostic odds ratio values of the BPHR, for assessment of hypertension, were 96% [95% confidence interval (CI)=0.95-0.97], 90% (95% CI=0.90-0.91), 10.68 (95% CI=8.03-14.21), 0.04 (95% CI=0.03-0.07) and 247.82 (95% CI=114.50-536.34), respectively. The area under the receiver operating characteristic curve was 0.9472. The BPHR had high diagnostic accuracy for identifying hypertension in children and adolescents.
Accurate Structural Correlations from Maximum Likelihood Superpositions
Theobald, Douglas L; Wuttke, Deborah S
2008-01-01
The cores of globular proteins are densely packed, resulting in complicated networks of structural interactions. These interactions in turn give rise to dynamic structural correlations over a wide range of time scales. Accurate analysis of these complex correlations is crucial for understanding biomolecular mechanisms and for relating structure to function. Here we report a highly accurate technique for inferring the major modes of structural correlation in macromolecules using likelihood-based statistical analysis of sets of structures. This method is generally applicable to any ensemble of related molecules, including families of nuclear magnetic resonance (NMR) models, different crystal forms of a protein, and structural alignments of homologous proteins, as well as molecular dynamics trajectories. Dominant modes of structural correlation are determined using principal components analysis (PCA) of the maximum likelihood estimate of the correlation matrix. The correlations we identify are inherently independent of the statistical uncertainty and dynamic heterogeneity associated with the structural coordinates. We additionally present an easily interpretable method (“PCA plots”) for displaying these positional correlations by color-coding them onto a macromolecular structure. Maximum likelihood PCA of structural superpositions, and the structural PCA plots that illustrate the results, will facilitate the accurate determination of dynamic structural correlations analyzed in diverse fields of structural biology. PMID:18282091
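The PCA step itself is compact: form per-atom displacements about the mean structure, estimate the correlation matrix, and take its leading eigenvectors as the dominant modes. The ensemble below is synthetic with one deliberately correlated block, and the paper's maximum likelihood covariance estimator is replaced here by the plain sample correlation for brevity.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical ensemble: 20 superposed structures x 50 atoms x 3 coords
# (in practice NMR models or an MD trajectory, already superposed).
n_models, n_atoms = 20, 50
coords = rng.normal(0.0, 1.0, (n_models, n_atoms, 3))
coords[:, :10, :] += rng.normal(0.0, 2.0, (n_models, 1, 3))  # correlated block

# Per-coordinate displacements about the mean structure.
disp = (coords - coords.mean(axis=0)).reshape(n_models, -1)
corr = np.corrcoef(disp.T)     # (3*n_atoms) x (3*n_atoms) correlations

# PCA: eigenvectors of the correlation matrix are the dominant modes of
# structural correlation; colour-coding their loadings onto the structure
# gives the "PCA plots" the abstract describes.
evals, evecs = np.linalg.eigh(corr)
print("top 3 eigenvalues:", np.round(evals[-1:-4:-1], 1))
```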
E Pluribus Analysis: Applying a Superforecasting Methodology to the Detection of Homegrown Violence
2018-03-01
actor violence and a set of predefined decision-making protocols. This research included running four simulations using the Monte Carlo technique, which… Using a "Runs Test" to Determine a Temporal Pattern in Lone…
Maximum Likelihood Analysis of Nonlinear Structural Equation Models with Dichotomous Variables
ERIC Educational Resources Information Center
Song, Xin-Yuan; Lee, Sik-Yum
2005-01-01
In this article, a maximum likelihood approach is developed to analyze structural equation models with dichotomous variables that are common in behavioral, psychological and social research. To assess nonlinear causal effects among the latent variables, the structural equation in the model is defined by a nonlinear function. The basic idea of the…
Contributions to the Underlying Bivariate Normal Method for Factor Analyzing Ordinal Data
ERIC Educational Resources Information Center
Xi, Nuo; Browne, Michael W.
2014-01-01
A promising "underlying bivariate normal" approach was proposed by Jöreskog and Moustaki for use in the factor analysis of ordinal data. This was a limited information approach that involved the maximization of a composite likelihood function. Its advantage over full-information maximum likelihood was that very much less computation was…
ERIC Educational Resources Information Center
Adank, Patti
2012-01-01
The role of speech production mechanisms in difficult speech comprehension is the subject of on-going debate in speech science. Two Activation Likelihood Estimation (ALE) analyses were conducted on neuroimaging studies investigating difficult speech comprehension or speech production. Meta-analysis 1 included 10 studies contrasting comprehension…
John Hogland; Nedret Billor; Nathaniel Anderson
2013-01-01
Discriminant analysis, referred to as maximum likelihood classification within popular remote sensing software packages, is a common supervised technique used by analysts. Polytomous logistic regression (PLR), also referred to as multinomial logistic regression, is an alternative classification approach that is less restrictive, more flexible, and easy to interpret. To...
A time series intervention analysis (TSIA) of dendrochronological data to infer the tree growth-climate-disturbance relations and forest disturbance history is described. Maximum likelihood is used to estimate the parameters of a structural time series model with components for ...